Format Will Shift To Video and French

I have retracted a previous statement about deleting all content from this platform. After I took down the entire blog, I was reminded that I am contractually obligated to keep a certain number of posts online. I have discretion over which ones, and I retain full creative control over the delivery of the content, including personal liability for the stuff that I say.

So, I will be setting up a new platform where posts are delivered as short videos. Meanwhile, I will drastically reduce the number of written posts by converting them into pages, so as to avoid undue focus on the “latest” material over the quality of the articles. All new content will be in video, to reflect the original vocation of the platform and the consumption habits of the majority of users.

When I re-uploaded the blog, unfortunately all content from contributing authors, as well as AI-generated content, became attributed to my own name. I don’t think there are more than ten very short LLM-generated posts, likely all from 2023. I will fix that. In the meantime, I apologize for the chaos.

Contract or no contract, I simply don’t see the point of continuing a blog right now. Many of the things I said and wrote six years ago (when I was still in law school) are basically still trending, and progress and reform around the world have stagnated, so clearly I don’t need to hold my breath for the latest developments. I will begin scripting after the holidays and will launch the new platform by May 2025.

Finally, to comply with Quebec’s language requirements, I will be emphasizing French in the future. I let it slip during the pandemic, but I realize I’ve been setting a bad example. Maintaining French is essential to Canada’s identity.

Happy Holidays!


Joyeux temps des Fêtes!

Je reviens sous un autre format.

Online Harms Bill Must Address Platform Liability And Provide For Swift Banning Of Platforms

Contrary to my previous objections to the Online Harms Bill, which I criticized as a “too little, too late nothingburger” and “disappointing” because age verification is missing, I am now finding new ways to work with this law to arrive precisely where we need to be regarding the corporate criminal liability of platforms. Given that we don’t have the sociopathic section 230 CDA here, all we need is to be bold and move fast, before the law is struck down on constitutional grounds at the urging of corporate lobbies.

The Online Harms Bill creates a very welcome tool to repress rampant tech-facilitated crimes by reversing the criminal-law onus: in other words, we can finally say that anyone who produces and disseminates harmful content is by definition guilty until proven otherwise.

Among many things, I see a clear possibility to raise criminal sentences for child pornographers from next to nothing to life imprisonment through the Online Harms Bill, simply by proving that juvenile porn is, according to United Nations reports, a most blatant instance of hate speech and antisocial behaviour. Interference with minors is absolutely encompassed in the current hate-speech definition. Moreover, we have decades of studies and reports on the societal decay and breakdown resulting from technology-facilitated violence (a.k.a. hate speech) against women and children.

My understanding is that we will be setting up administrative tribunals where you don’t need to be a member of a bar; you can be a social worker and hand out life sentences. To accelerate trials and sentencing, we can also implement AI decision-makers, as in the European Court. They seem to be doing pretty well so far.

We have extensive reports on the ways that platforms knowingly encourage and perpetuate hate speech, mainly in the form of tech-facilitated violence. Honestly, I don’t see how user-generated and hardcore porn (and anything that is not LGBTQ+) will get a hate-speech exemption, given the Privacy Commissioner’s report (which stayed hidden for as long as it possibly could) specifically on how the consent of unwitting “performers” is NEVER verified on Aylo. Even the new “safeguards” Aylo brought forward include the possibility of consenting on somebody else’s behalf by providing a release form. As if a user couldn’t produce a fake release. I had 9 remixes commercialized under my name, and someone gave a release signed by someone pretending to be me to a US publisher, so Aylo’s efforts are total bullshit in that regard. The rest is willful blindness by pro-Aylo officials. This is just one example of organized inefficiency.

The Online Harms Bill should also allow victims from outside of Canada to file complaints. We learned from parliamentary sessions on the status of women that intimate partner violence victims are fleeing Canada, because the criminal justice system here intentionally compromises their safety by protecting and releasing violent criminals. We saw in these sessions that reps from the current administration were antagonizing and harassing victims (survivors left in tears), which shows that officials’ political interests are aligned with the rise of technology-facilitated violence. It is our duty to take the Online Harms Bill and use it against all the corporations and users these officials try to protect. It is a small sacrifice to stop speech temporarily (voluntarily remain silent, or shut down or pause social media accounts) until we weed out the bad apples once and for all.

I am currently examining a report from five years ago, called Deplatforming Misogyny, on platform liability for technology-facilitated violence, and will compare it with the efforts brought forward in the Online Harms Bill. The report explains how digital platforms’ business models, design decisions, and technological features optimize them for abusive speech and behaviour (the current definition of hate speech) by users, and examines how tech violence always results in real-life violence and harm. It is funny how we’ve known all these years that tech platforms are destroying society by encouraging violence and murders, yet allowed them to stay in business.

As early as 2018, the Report of the Special Rapporteur on violence against women, UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018), noted that “Information and communications technology is used directly as a tool for making digital threats and inciting gender-based violence, including threats of physical and sexual violence, rape, killing, unwanted and harassing online communications or even the encouragement of others to harm women physically. It may also involve the dissemination of reputation-harming lies, electronic sabotage in the form of spam and malignant viruses, impersonation of the victim online and the sending of abusive emails or spam, blog posts, tweets or other online communications in the victim’s name. Technology-facilitated violence may also be committed in the workplace or in the form of so-called honour-based violence by intimate partners […]

It is therefore important to acknowledge that the Internet is being used in a broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the internet and other information and communications technology. Emerging forms of ICT have facilitated new types of gender-based violence and gender inequality in access to technologies, which hinder women’s and girls’ full enjoyment of their human rights and their ability to achieve gender equality. […] 

The consequences of harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of cultural inequality, discrimination, and patriarchy. Women subjected to online violence are often further victimized through harmful and negative gender stereotypes, which are prohibited by international law.”

If intentionally sexualizing individuals or a group of people in order to deprive them of the basic enjoyment of their human rights is not hate speech, good luck proving otherwise.

Tech-facilitated gender-based violence is further defined as being rooted in, arising from, and exacerbated by misogyny, sexist norms, and rape culture, all of which existed long before the internet. However, TFGBV in turn accelerates, amplifies, aggravates, and perpetuates the enactment of, and harm from, these same values, norms and institutions, in a vicious circle of technosocial oppression. (Source: Jessica West)

Deplatforming Misogyny gives several examples of hate speech:

  • Online Abuse: verbally or emotionally abusing someone online, such as insulting and harassing them, their work, or their personality traits and capabilities, including telling that person she should commit suicide or deserves to be sexually assaulted
  • Online Harassment: persistently engaging with someone online in a way that is unwanted, often but not necessarily with the intention of causing distress or inconvenience to that person. It can be perpetrated by one person or by several organized persons, as in gang stalking (source: Suzie Dunn)
  • Slut-shaming (100% hate speech) can be perpetrated across several platforms and may include references to the targeted person’s sexuality, sexualized insults, or shaming the person for their sexuality or for engaging in sexual activity. This type of hate speech aims to create an intimidating, hostile, degrading, humiliating or offensive environment (UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018))
    • Discussing someone else’s sexuality is kind of always a red flag, and criminal defense lawyers (among many other professionals) are totally engaging in hate speech with total impunity, just saying. Something needs to change, or the legal industry should be completely excluded from enforcing a clean internet. They should have zero immunity for perpetrating hate speech and thereby encouraging violence against women and children.
  • Non-consensual distribution of intimate images: (see Aylo’s business model) circulating intimate or sexual images or recordings of someone without their consent, such as where a person is nude, partially clothed, or engaged in sexual activity, often with the purpose of shaming, stigmatizing or harming the victim (also known as image-based abuse and image-based sexual exploitation). The UN warns against using the term “revenge porn” because it implies that the victim did something wrong deserving of revenge.
  • Sextortion: attempting to sexually extort another person by capturing sexual or intimate images or recordings of them and threatening to distribute them without consent unless the targeted person pays the perpetrator, follows their orders, or engages in sexual activity with or for them.
  • Voyeurism: criminal offense involving surreptitiously observing or recording someone while they are in a situation that gives rise to a reasonable expectation of privacy.
  • Doxing: publicly disclosing someone’s personal information online, such as their full name, home address, and social insurance number. Doxing is particularly concerning for individuals who are in or escaping situations of intimate partner violence, or who use pseudonyms due to living in repressive regimes or to avoid harmful discrimination for aspects of their identity, such as being transgender or a sex worker. (see: The Guardian: Facebook’s real name policy hurts people)
  • Impersonation: taking over a person’s social media accounts, or creating false social media accounts purporting to be the victim, usually to solicit sex or make compromising statements.
  • Identity and Image Manipulation, i.e. Deepfake videos: use of AI to produce videos of an individual saying something they did not say or did not do. In reality, video deepfakes are kind of fringe. The current AI applications are mainly focused on sexualizing and undressing women through unauthorized use of Instagram photos.
  • Online mobbing, or swarming: large numbers of people engaging in online harassment or online abuse against a single individual (Amber Heard comes to mind)
    • The Depp and Heard trial is an example of court-enabled hate speech. The way Heard was cross-examined on television falls within the definition of incitement of violence against victims of intimate partner violence. This trial harmed the reputation of the profession beyond repair and resulted in uncontrollable online mobbing.
  • Coordinated flagging and Brigading are cited in the report but I am not at all convinced that they are user-perpetrated. I believe that algorithmic conduct is 100% on the platforms. Users have zero control and liability in that regard. Nice try, but nope. If a survivor is taken down, I won’t let platforms get away with “users did it”. No way. Saying otherwise is pro-corporate propaganda.
  • Technology-aggravated sexual assault: group assault that is filmed and posted online. Here is where the Online Harms Bill can be used to sentence perpetrators to life in prison, something that can’t be achieved under the Criminal Code.
  • Luring for sexual exploitation: i.e. grooming through social media, or through fake online ads, in order to lure underage victims into offline forms of sexual exploitation, such as sex trafficking and child sexual abuse. Here is another instance of hate speech deserving a life sentence.

To be continued in another post: it is a long report (or, to be more precise, a bundle of legal and UN reports), and the bill is also a handful. I am only skimming the surface of the most prevalent forms of hate speech, which invariably equate to incitement of gender-based and intersectional genocide (see the report on missing and murdered Indigenous women and how it amounts to genocide). Just to say I can work with that bill. Bring it!


Law school messed too much with my head by convincing me that I care about human rights for violent criminals and procedural safeguards for perp corps. I never did. It feels good to be my dystopian self again.

Age Verification Bill Is Preferable to (too little too late) Online Harms Bill

Age verification to access adult content online is the only viable and sensible way to counter the irreparable damage pornographic platforms cause to society. The fact that Pornhub prefers to block access to its content in jurisdictions that enforce age verification is a sign that Pornhub is nothing less than a criminal platform. If all adult sites are truly “sketchy,” to cite our prime minister, and can’t be trusted to verify ID, then I don’t understand why they are allowed to legally operate. They should simply be blocked, and it would save the government a great deal of money.

Last time I checked, everyone in Canada (and many places in the US) needs to show their papers to buy alcohol, cigarettes, or government weed. Even nightclubs want to see your papers before letting you in. If you don’t want to show your papers, you don’t get in. If you’re too young, you don’t get in. Not once was I able to get into a club in our (extremely liberal) Quebec before the age of 18, or in the (more conservative) province of Ontario before the age of 19. We also hear stories from the time when porn content was only available in tangible formats (magazines, videotapes, DVDs), when people had to show ID to access such content. Yet online porn of the vilest kind has always been accessible to children in Canada. How does that make any sense?

I personally worked on a cannabis legalization brief during my second year of law school in 2016 (two years later, it was legalized), and age verification was always a sine qua non for legalization, given how harmful weed can be to the developing brain. In the same manner, I also recommended a system preventing the sale of cannabis to people experiencing mental health issues. It didn’t get implemented, but it should. You can hate me for it, but the science is clear: if you have a diagnosed mental health condition, weed will make you psychotic and likely a danger to yourself and others. To counter the overdose epidemic, I am also a proponent of the legalization of opiates, mainly pharmaceutical opiates, which should be available to all addicts (who are often patients in need of pain management let down by the health system), administered by certified nurses in every pharmacy of this country.

However, when it comes to porn, I believe the societal damage exceeds that of any drug. I believe that online porn (through the nonconsensual user-generated model that is being pushed and rewarded on popular platforms) is the main factor behind the mental health epidemic among minors. Many kids never fully get to understand how consent works. Those who believe they need to perform the violent acts depicted in porn videos become suicidal. For many people, it is their first introduction to heterosexual relations, and it makes kids hate society and their biological sex. It is not a coincidence that so many kids refuse to conform to their gender.

Given that online porn tends to obfuscate the notion of consent for profit, which in itself promotes content depicting self-harm and assault, studies show time and again that online porn is the main driver of nonconsensual content, antisocial behaviour, intimate partner violence, criminal harassment, and cyberbullying (to name a few), and now identity theft via deepfakes.

This is not an ideological or political issue. I don’t understand why online pornographers in Canada should be exempt from age checks. Even less do I understand why the federal government keeps giving these platforms a free pass to make their content available to everyone, for free (a paywall would fix a few issues). But this is the feeling I get when reading the Online Harms Bill, which was five years in the making, with its convoluted system of takedown enforcement, as if Canada ever enforced anything.

I myself spent four years in court to take down commercial nonconsensual material, and it only worked out when the adverse corporation declared bankruptcy and briefly went out of business, and their international distributor finally caved because even Google intervened before the courts reluctantly did. Canadian courts in general are mildly useless, as they seem to spend most of their efforts further sexualizing survivors and siding with the adverse parties’ commercial interests (just as the government consistently sides with Pornhub). Nobody can tell us how Canada, under the Online Harms Bill, will enforce “hefty” fines on platforms that operate in Sweden, South Korea, Morocco, or Iceland, for example. In my case, I had to take down over 5,400 pieces of online content spread over 50 countries, and an extraterritorial interlocutory injunction wasn’t enough. It was only the beginning.

But oh, age verification has nothing to do with digital ID (something that will happen anyway, don’t worry). It has to do with common sense.

Not once in my life have I heard an argument that parents should be the ones to enforce a ban on cigarettes or cannabis, rather than the state imposing age verification at the stores. Not once have I heard the argument that age verification to access cannabis infringes on the privacy of old farts who want to buy legal cannabis. And don’t get me started on the times we needed to disclose our health status AND show government ID to buy food at Costco or Walmart, a trauma that feels like yesterday… (I will neither forget nor forgive). Why is online porn so different and important to the federal government that it should be accessible to children, for free, at all times?


Update: Although Australia failed to follow up on introducing age checks last year, given its unique landscape of single-user sex workers and (not human-trafficked) entrepreneurs, the UK is already surprisingly advanced in determining “trusted and secure digital verification services,” with a focus on “layered” checks. It is encouraging to know that government ID alone won’t be enough to access adult sites in the UK, and that users will need to submit at least one instant selfie (timestamped at the moment of access) to prove they really are who they say they are. If the photo on the ID doesn’t match the selfie, the user’s access to the site will be blocked. This is easily enforceable through third-party facial recognition AI that will not store any personal information, face scans, or selfies, and will only assess age on a moment-to-moment basis. Contrary to banks, which regularly leak users’ personal information for the simple reason that they need to store such data, it won’t be possible for porn sites to leak anything, because they won’t have access to any personal information, and the third-party AI verifying it won’t be allowed to store it.

If we worry so much about porn sites handling sensitive information, then we should bar them from taking users’ credit cards for their premium content. As it is now, they hold large databases of credit cards. A credit card is sufficient to perform a full credit check on the holder, so it is pretty damn sufficient for identifying a user.

Canada should follow in the steps of the UK and rewrite the Online Harms Act: (1) remove the bizarre ideological sections regarding hate speech (we already have hate-speech offences in the Criminal Code and more than enough case law on the matter), as well as the bizarre life sentence for vague ideological thought crimes, since it has nothing to do with protecting children. I wouldn’t mind a life sentence for child porn producers and pedophiles, however, who currently get off with a slap on the wrist; (2) borrowing from the UK Online Safety Act, mandate the use of trusted and secure digital verification services, including real-time facial recognition, face scans, digital wallets, government ID, selfies, and combinations thereof. Of course, the cost will be passed on to platforms. This would put Bill S-210 and Bill C-63 on the same footing; (3) similar to the UK Act, exempt Twitter, Reddit, and other mainly text-based platforms; (4) keep the 24-hour takedown requirements, but create an expeditious appeal process for affected users to reinstate content that doesn’t fall under the purview of the Act, and impose dissuasive fines, including the payment of attorney fees, for frivolous takedown requests (à la DMCA, by analogy); (5) to err on the safe side, mandate all mobile providers to automatically block porn sites, so that only computer cameras would be used for real-time face scans and face video.

Another reason to block adult mobile apps is that all mobile apps are specifically designed to collect and store personal information even when you are not using them. Mobile operating systems also regularly take photos, videos and recordings of users for the purpose of improving their experience. It has been standard practice to collect extensive personal information on mobile users for as long as smartphones have existed. Cybersecurity experts are able to decrypt such data packets, while hackers (or law enforcement, with or without a warrant) are able to intercept and use them. If you access porn on your phone, you can safely expect that your most intimate and biometric details are stored in many, many places, and you would be even more surprised to learn that you automatically consented to all of it. Age verification would be the least of your problems. There are tons of applications capable of accurately guessing your age based on what you do with your phone.

Finally, we should never leave it to parents to protect children, because if you read the criminal jurisprudence, parents, and especially foster parents (and other family members), are frequent perpetrators of child abuse and child pornography in this country, for the reason that they have unfettered access to these children. Abusive parents also get off with a slap on the wrist. Since we don’t trust parents to respect children’s choice of gender, it would be a little hypocritical to trust them to safeguard their kids from porn. I wouldn’t.


Second update: after wasting a few hours on Online Harms Bill scenarios, I predict the bill has no future other than to target speech criticizing the bill (like this post) and to ban survivor speech (already going on without the bill’s help). So basically, if the bill ever comes to exist, it will achieve the exact opposite of its apparent intended purpose. As Australia has shown, nothing concrete will happen in the sphere of child protection anywhere. These bills are all for show, as corporate commercial interests will always trump child safety and consent. Even the UK will only apply age checks from 2025. Why 2025? Because the UK will likely also bail before the promised deadline and drop the checks altogether shortly before 2025. Comparative law should be renamed comparative inefficiency.

Just as electric-car promises are flopping all over the place, because you can’t tell people to choose between doing their laundry and charging their car to get to work, you also can’t authorize a mega-polluting, wetland-destroying Swedish project on unceded Mohawk territory and pretend to care about the environment or ancestral rights in the same sentence. And very obviously, you can’t make porn accessible to children for free at all times and pretend to be a good person just because you wrote another fake bill (which is not quite written yet).

The point is, do not wait for a bill or a court to save you. As I previously said, the only way to enforce anything in the realm of nonconsensual material is to arm yourself with patience and look for ways, in and out of court, to apply pressure on local courts via foreign legal mechanisms: file police reports and Interpol reports, seek injunctions, sue platforms, sue banks that continue to work with rogue platforms, use the takedown and delisting mechanisms of search engines, make videos, hit film festivals, write open letters to ministers, and whatever other grassroots ideas you may come up with. If you sue for damages, sue in the US, not Canada. The important thing is to take action every single day. I love how in the US people pick up the phone and call their state rep or senator. The only way out is to let the whole world know that you did not consent. Don’t stop until everything is taken down to the ground.

Shopping For a Non-Intelligent Phone Without GPS, Camera, and Wifi

Do flip phones emit less radiation? Yes: flip phones and dumb phones are objectively better for those looking to reduce radiation exposure, particularly models without Bluetooth and GPS capabilities. Flip phones without multiple apps running in the background that require constant internet access are also speculated to be safer. (Source) The SAR value, also known as …

Illinois Flood of Class Actions: Twitter, Snap, Tinder, Home Depot Collected, Stored, Used Biometric Data Without Users Consent

A proposed class action lawsuit claims X Corp., which owns and operates Twitter, has wrongfully captured, stored and used Illinois residents’ biometric data, including facial scans, without consent. The suit more specifically alleges that Twitter has run afoul of the Illinois Biometric Information Privacy Act by capturing and storing users’ biometric information without notice or express consent …

GDPR: Meta Hit With Dissuasive Fine For Illegal Data Transfers From Europe To US Servers

This decision comes in the wake of the willful ignorance of EU data protection law by Meta and 5,000 other companies, and their persistent illegal transfers of European users’ sensitive data, such as names, email and IP addresses, messages, viewing history, geolocation data and other information, to US servers. Meta will of course appeal the ruling, but if …

Andy Warhol Foundation For Visual Arts Loses Fair Use Case at SCOTUS

SCOTUS applied the fair use doctrine in a manner coherent with its previous rulings. The Orange Prince silkscreen licensed to Condé Nast constitutes copyright infringement, because a commercial license is not fair use. https://www.supremecourt.gov/opinions/22pdf/21-869_87ad.pdf To cite the court: the “purpose and character” of the Foundation’s use of Goldsmith’s photograph in commercially licensing Orange Prince to …

Sky v SkyKick: UK Court Finds Sky Filed Bad Faith Registrations

In the long-running dispute between Sky and SkyKick, the UK courts expanded on the notion of bad faith in trademark filings, which normally extends to seeking trade mark protection for a broad range of goods and services well beyond the classes covered by the core trademark. The Court of Appeal clarified that an applicant does …