Suno AI Lawsuit Breakdown

This complaint is very similar to the Udio complaint, so I will address different points. Suno was the first music AI platform I started testing last month; others, including Udio, followed through word of mouth. Prior to May, there were no viable music AI platforms by professional standards, but Suno’s latest version opened the floodgates of creativity – the industry mentions 10 new songs a second on Suno alone – and a few dozen platforms are already quickly catching on.

In a way, everything we can say about AI right now concerns a very early stage of training, building, debugging, and adjusting – one that is evolving as we speak through the invaluable input of millions of user pioneers. We are seeing progress unfold before our eyes at the speed of light. Everyone is learning: AI is learning, and countless users who never made music in their lives are also learning to make music, with each platform providing valuable tips and tricks. There is a process of demystification and breakdown of loops, beats, melodies, and vocal flows in different languages, as well as deconstruction and re-appropriation of the music production process. It brings tears to my eyes to see so many users become creators instead of passive consumers.

Many users across platforms mention that since AI came along, their favorite songs are the ones they made themselves. This is fantastic for humanity. Obviously, these users now have less time to listen to commercial songs. Until now, we had to listen to everything the industry imposed on us, because there was no alternative to learn from other than the public domain. It was time-consuming, frustrating, and depressing, due to violent, reductive, and misogynistic lyrics and the systemic, undue sexualization and dehumanization of artists by the industry. Now that AI listens to these commercial “hits”, we can protect our ears while focusing on more productive things that bring us joy. In a way, AI doesn’t do anything more than we would be doing without it, but it saves us time and protects our emotional well-being and integrity by ingesting and filtering the trash the industry throws at us, so that we can minimize our exposure to harmful content.

Can the music industry really stop progress and keep AI to itself?

In both complaints we see that the platforms refuse to disclose what data they trained their models on, claiming it is proprietary information. The reasoning behind refusing to disclose training particulars may be that anything related to training is a trade secret, and that training in itself is fair use.

Ideally, an LLM should have no restrictions regarding training, and developers shouldn’t pay for data that is publicly available. Copyright law specifically provides a training / education exemption under its fair use doctrine, which may differ from one country to another but essentially recognizes that non-commercial and transformative activity that benefits humans and society in general justifies limiting the ability of rights holders to derive profit from copyright. Without fair use exceptions, there would be no journalists, no standup comedians, no content creators, no YouTube or TikTok, no parodies, no criticism (e.g. pop art), etc.

I can certainly copy an entire song to break it down and learn how it was made, note by note. Why can’t AI? When I need to learn a music video choreography, I copy entire videos from the internet, break them down into sections, which I then further copy (several times per section, slow then normal speed) into a myriad of little video tutorials that I watch a million times until I get the moves right. While I learn the moves, I reproduce them with my own body, which I film (countless more times) and edit into new videos. This is a 100% fair use example (and by the way, it’s true – I do that every day). Why can’t AI do the same with music? What’s the difference? Why does it stop being fair use when AI does the copying for the purpose of training rather than a user trying to learn a song or a dance?

It seems that both complaints put much effort into proving that the LLMs copied entire songs for training. The platforms are not really denying it. Training is clearly a transformative process. I think the fuss revolves around whether there is such a thing as “excessive training” that should be excluded from fair use defenses.

In Para. 12, the plaintiffs suggest that music generated on AI platforms is NOT human-created work! This is a strange insult to millions of human users. I’m pretty sure this qualifies as hate speech. Last time I checked, I am human and I write my own lyrics. Yet another lowly and unfounded attack. Why do they think they are the only humans in the room? WTF!

Due to the dehumanizing characterization of human users as non-human, I am not going to read the rest of the complaint. Sorry, but I can’t deal with more hateful content. Not on Canada Day. I’ll let my bot finish the job but I won’t publish the result.

Udio Complaint Entirely Based On Industry Infringing Its Own Lyrics

I am reading the Udio complaint right now. It is little more than a “nothingburger”, as the majority of users and IP lawyers have overwhelmingly noted. It is also an example of how to make a mockery of the justice system, beginning with basing an entire claim on self-serving evidence – more precisely, all the evidence rests on intentional infringement of industry-owned lyrics. The only thing the plaintiffs manage to prove with this lawsuit is how they hypothetically infringed their own lyrics, forced the AI to further infringe their copyright through very precise instructions, and obtained a copyright-infringing result. Several times.

If copyright law has been clear about anything since the 18th century, it is that you don’t copy other people’s texts without their consent. If you feed an AI infringing lyrics, it will come up with an infringing output – how surprising is that?

This lawsuit is a coaxing manual. How about this: we copied the actual chorus from Michael Jackson’s Billie Jean, directed Udio to sound like Michael Jackson in as much detail and likeness as possible, and Udio made a song that resembles Billie Jean!!! The plaintiffs entered into the prompt the excerpt “Billie Jean is not my lover, she’s just a girl who claims I am the one”. One can’t make this up. This is monumental bad faith and a waste of judicial resources.

Moving on, the plaintiffs copied word-for-word lyrics excerpts from All I Want for Christmas Is You (disclaimer: I can’t stand this song), inserted the infringed lyrics into the prompt along with the name Mariah Carey and other personal and artistic characteristics of the artist, and again, the platform gave them exactly what they wanted: a copyright-infringing result.

The exact same thing happened with other older songs – My Girl, I Get Around (Beach Boys), Dancing Queen (solely based on “we can dance, we can jive”), American Idiot (interesting choice of song) – as well as other holiday songs.

On pages 27 and 28 we have an interesting “artist resemblance” table, which I deemed useful to reproduce as an example of exactly how NOT to make music with AI. I doubt that the great majority of AI users have the same desperate clinging to has-beens that the plaintiffs imagine. Don’t these overexposed artists already have thousands of copycats who have never heard of AI? The market was saturated with these styles before the advent of AI. Also, the table doesn’t specify what lyrics were used in the prompts, so it is safe to assume that, as in the previous examples, the lyrics were infringed from the outset.

I hope you read that. It was quite funny. I have a few favorites in there. You ask AI to recreate a famous song by a band that rhymes with “smeetles”, and OMG, the AI sounds like the Beatles. Do you seriously expect a music AI platform to have never heard of the Beatles, or did you force the AI to go out of its way to find out what “smeetles” means and which famous band rhymes with… Smeetles?!? I looked it up. It is not a word.

Words are the most important thing for LLMs. This is why you can’t simply ask ChatGPT or Claude to answer your emails: they treat each word in the email they need to answer as part of the prompt, and the result is guaranteed nonsense. Each word inside the prompt (even someone else’s email) is interpreted as part of an instruction. You must think like an algorithm for a minute and understand how a model interprets words.

Unless the model, like the latest Udio, is specifically programmed to ignore artists’ names and rhymes thereof (eyeroll, really), it will always try to reproduce as accurately as possible the instructions contained in the words a human provides. This is why it will always be human users who bear liability for the AI’s output.

The complaint goes on to say that Udio copied other people’s vocals. I agree that this is the case, and I agree it is not cool, but that’s the courts’ fault. There is little will to grant copyright protection to vocal performers, even in jurisdictions like Canada where vocal performances are specifically protected by the Copyright Act.

I spent 4 years in court trying to stop a label from remixing and selling my own vocal samples, and the only reason I won is that the contested vocals were attached to my own original lyrics in a distant Slavic language. It became eminently clear that the only way to enforce music copyright is to own the lyrics – something that continues to be true in the field of AI.

The rest of the complaint addresses the fair use test, so that’s for the jury to decide. At first sight, the main grievance appears to be the notion of “competition”. The industry is obviously diverting the fair use doctrine in order to enforce an anti-competitive monopoly on all the musical loops in the world, and trying to use the justice system to prevent any new music from being made unless they own the rights. That, in my opinion, is another sign that this is an abusive lawsuit.

One thing I’m hearing from everywhere on this issue is that if the courts side with the music industry, nothing is in place to stop Russia and China from continuing to infringe the industry’s IP with the same tools, fair use or not. They will flood us with their own commercial versions of AI-generated output and charge us for it, while our unsustainable music industry keeps dying anyway. There comes a moment when a court simply can’t afford to stifle innovation.

Disclosure of Conflict of Interest

Conflict of interest: I can no longer write on artificial intelligence for this blog because I started working with artificial intelligence myself and I’m already on the other side of the fence. From what I’ve seen so far, AI benefits humanity in a more productive and sustainable way than the outdated IP regimes that require …

FTC Bans Non-Competes To Boost Innovation And Fight Exploitation, Canada Must Follow

Non-compete agreements are a widespread and exploitative practice that prevents workers from taking a new job or starting a new business. Non-competes foster toxic work environments by often forcing workers to either stay in a job they want to leave or bear other significant harms and costs, such as being forced to switch to a …

Rent Cartels By Algorithm Deepen Housing Crisis, Tenants Pay Millions of Dollars Above Fair Market Prices

Dozens of class actions filed since 2022 against the Texas-based company RealPage, now consolidated into a single class action in Nashville, Tennessee, demonstrate the single most significant factor behind the last few years’ monumental rent increases and the lack of affordable housing across the continent: widespread and unchecked anti-competitive rent price-fixing directed by shady algorithms.

Since the ProPublica investigation in 2022 that put a spotlight on the issue, the situation has only worsened. Rent-fixing by algorithm has enabled, and continues to enable, landlords and real estate companies to do covertly and indirectly what they can’t do directly. As we speak, rents are being pushed to stratospheric heights, forcing many low earners into encampments.

RealPage’s software uses an algorithm that churns through a mountain of data overnight to suggest daily prices for available rental units. The software uses not only information about the apartment being priced and the property where it is located, but also private data on what nearby competitors are charging in rent. The software considers actual rents paid to those rivals – not just what they are advertising, the company told ProPublica.
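To make the mechanics concrete, here is a minimal sketch of how a revenue-management engine of the kind described above might combine a unit’s own data with competitors’ actual rents. Every function name, weight, and threshold here is invented for illustration – RealPage’s real algorithm is proprietary and undisclosed, which is precisely part of the problem.

```python
# Hypothetical sketch of an algorithmic rent-pricing engine.
# All names and weights are invented; the real algorithm is secret.

from statistics import mean

def suggest_daily_rent(unit_base_rent, competitor_actual_rents, occupancy_rate):
    """Suggest a daily asking rent from the unit's own data plus
    competitors' *actual* (not merely advertised) rents."""
    # Pooling rivals' non-public transaction data is the core of the
    # price-fixing allegation: it replaces competition with coordination.
    competitor_benchmark = mean(competitor_actual_rents)
    # Anchor on whichever is higher: the unit's own rent or the market pool.
    anchor = max(unit_base_rent, competitor_benchmark)
    # Scarcity premium: occupancy above 90% pushes the suggestion upward.
    scarcity = 1.0 + max(0.0, occupancy_rate - 0.90)
    return round(anchor * scarcity, 2)

suggestion = suggest_daily_rent(
    unit_base_rent=1500.0,
    competitor_actual_rents=[1550.0, 1600.0, 1580.0],
    occupancy_rate=0.95,
)
print(suggestion)
```

Notice that even when the unit’s own base rent is the lowest in the pool, the suggestion never falls below the competitors’ average – in a sketch like this, shared data acts as a price floor, which is exactly the dynamic the plaintiffs describe.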

Two attorneys general (Washington, Arizona) are suing RealPage and more than a dozen of the largest apartment building landlords, accusing them of a scheme to artificially fix rental prices in violation of U.S. antitrust law, all while concealing their conspiracy from the public. RealPage denied any wrongdoing in the earlier cases and said it would contest both of these as well.

Washington

Washington alleges that 14 landlords conspired to keep rental prices high using RealPage’s revenue management platform, and seeks triple damages and other relief to restore competitive conditions. The landlords allegedly conspired to share information, limit supply, and drive up rents via RealPage’s software, forcing tenants to pay millions of dollars above fair market prices.

“In a truly competitive market, one would expect competitors to keep their pricing strategies confidential — especially if they believe those strategies provide a competitive edge,” the lawsuit says.

In response, RealPage declared that there is no causal connection between revenue management software and increases in market-wide rents. The problem with denying a causal connection, however, is the flagrant lack of algorithmic transparency and the intentional concealment from the public. You can’t both keep the algorithm secret and deny causation between its conduct and the obvious, widespread result – artificial rent increases and illegal price-fixing. That defense will fail.

Arizona

Arizona alleges that by sharing highly detailed, sensitive, non-public leasing data with RealPage, the defendant landlords departed from normal competitive behavior and engaged in a price-fixing conspiracy. RealPage then used its revenue management algorithm to illegally set prices for all participants.

Moreover, RealPage’s conspiracy with the landlord co-defendants violates both the Arizona Uniform State Antitrust Act and the Arizona Consumer Fraud Act.

Arizona’s antitrust law prohibits conspiracies in restraint of trade and attempts to establish monopolies to control or fix prices. The State’s consumer fraud statute makes it unlawful for companies to engage in deceptive or unfair acts or practices, or to conceal or suppress material facts in connection with a sale – in this case, apartment leases.

The illegal practices of the defendants led to artificially inflated rental prices and caused Phoenix- and Tucson-area residents to pay millions of dollars more in rent.

Defendants conspired to enrich themselves during a period when inflation was at historic highs and Arizona renters struggled to keep up with massive rent increases.

The Class Actions

The private lawsuits by renter-plaintiffs accuse RealPage of colluding with landlords to artificially inflate rents and limit the supply of housing. They allege that owners, operators, and managers of large residential multifamily complexes used RealPage software to keep rental prices in many major U.S. cities above market rates, and shared non-public, commercially sensitive information with RealPage as part of the conspiracy.

Two landlords have settled so far.

Age Verification Bill Is Preferable to (too little too late) Online Harms Bill

Age verification to access adult content online is the only viable and sensible way to counter the irreparable damage pornographic platforms cause to society. The fact that Pornhub prefers to block access to its content in jurisdictions that enforce age verification is a sign that Pornhub is nothing less than a criminal platform. If all adult sites are truly “sketchy”, to quote our prime minister, and can’t be trusted to verify ID, then I don’t understand why they are allowed to operate legally. They should simply be blocked, and it would save the government a great deal of money.

Last time I checked, everyone in Canada (and many places in the US) needs to show their papers to buy alcohol, cigarettes, or government weed. Even nightclubs want to see your papers before letting you in. If you don’t want to show your papers, you don’t get in. If you’re too young, you don’t get in. Not once was I able to get into a club in our (extremely liberal) Quebec before the age of 18, or in the (more conservative) province of Ontario before the age of 19. We also hear stories of the time when porn was only available in tangible formats (magazines, videotapes, DVDs) and people had to show ID to access such content. Yet online porn of the vilest kind has always been accessible to children in Canada. How does that make any sense?

I personally worked on a cannabis legalization brief during my second year of law school in 2016 (two years later, it was legalized), and age verification was always a sine qua non for legalization, given how harmful weed can be to the developing brain. In the same manner, I also recommended a system preventing the sale of cannabis to people experiencing mental health issues. It didn’t get implemented, but it should be. You can hate me for it, but the science is clear: if you have a diagnosed mental health condition, weed will make you psychotic and likely a danger to yourself and others. In order to counter the overdose epidemic, I am also a proponent of the legalization of opiates – mainly pharmaceutical opiates, which should be available to all addicts, who are often pain-management patients let down by the health system, to be administered by certified nurses in every pharmacy of this country.

However, when it comes to porn, I believe the societal damage exceeds that of any drug. I believe that online porn (through the nonconsensual user-generated model that is being pushed and rewarded on popular platforms) is the main factor behind the mental health epidemic among minors. Many kids never fully get to understand how consent works. Those who believe they need to perform the violent acts depicted in porn videos become suicidal. For many people, it is their first introduction to heterosexual relations, and it makes kids hate society and their biological sex. It is not a coincidence that so many kids refuse to conform to their gender.

Given that online porn tends to obfuscate the notion of consent for profit, which in itself promotes content depicting self-harm and assault, studies prove time and again that online porn is the main driver of nonconsensual content, antisocial behaviour, intimate partner violence, criminal harassment, and cyberbullying (to name a few), and now identity theft via deepfakes.

This is not an ideological or political issue. I don’t understand why online pornographers in Canada should be exempt from age checks. Even less do I understand why the federal government keeps giving these platforms a free pass to make their content available to everyone, for free (a paywall would fix a few issues). But this is the feeling I get when reading the Online Harms Bill that was 5 years in the making, with its convoluted system of takedown enforcement – as if Canada ever enforced anything.

I myself spent 4 years in court to take down commercial nonconsensual material, and it only worked out when the adverse party corporation declared bankruptcy and briefly went out of business, and their international distributor finally caved, because even Google intervened before the courts reluctantly did. Canadian courts in general are mildly useless, as they seem to spend most of their efforts further sexualizing survivors and siding with the adverse parties’ commercial interests (just as the government consistently sides with Pornhub). Nobody can tell us how Canada, under the Online Harms Bill, will enforce “hefty” fines on platforms that operate in Sweden, South Korea, Morocco, or Iceland, for example. In my case, I had to take down over 5,400 pieces of online content spread over 50 countries, and an extraterritorial interlocutory injunction wasn’t enough. It was only the beginning.

But oh, age verification has nothing to do with Digital ID (something that will happen anyway, don’t worry). It has to do with common sense.

Not once in my life have I heard an argument saying that parents should be the ones to enforce a ban on cigarettes or cannabis rather than the state imposing age verification at the stores. Not once have I heard the argument that age verification to access cannabis infringes on the privacy of old farts who want to buy legal cannabis. And don’t get me started on the times we needed to disclose our health status AND show government ID to buy food at Costco or Walmart, a trauma that feels like yesterday… (I will not forget, nor forgive). Why is online porn so different and important to the federal government that it should be accessible to children, for free, at all times?


Update: Although Australia failed to follow up on introducing age checks last year, given its unique ecosystem of single-user sex workers and (non-trafficked) entrepreneurs, the UK is already surprisingly advanced in determining “trusted and secure digital verification services” with a focus on “layered” checks. It is encouraging to know that government ID alone won’t be enough to access adult sites in the UK, and that users will need to submit at least one instant selfie (timestamped at the moment of access) to prove they really are who they say they are. If the photos on the ID don’t match the selfies, users’ access to the sites will be blocked. This is easily enforceable through third-party facial recognition AI that will not store any personal information, face scans, or selfies, and will only assess age on a moment-to-moment basis. Unlike banks, which regularly leak users’ personal information for the simple reason that they need to store such data, it won’t be possible for porn sites to leak anything, because they won’t have access to any personal information, and the third-party AI verifying it won’t be allowed to store it.
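The privacy argument above boils down to a data-flow design: the site never touches ID documents or selfies, it only receives a short-lived pass/fail token from an independent verifier. Here is a minimal sketch of that model; all class, function, and field names are invented for illustration and do not describe any real verification provider’s API.

```python
# Hypothetical sketch of the "third-party verification" model: the adult
# site only ever sees an unlinkable, expiring yes/no token.

import secrets
import time

class ThirdPartyVerifier:
    """Checks an ID against a live selfie, returns a token, stores nothing."""

    def verify(self, id_document_age: int, selfie_matches_id: bool) -> dict:
        approved = selfie_matches_id and id_document_age >= 18
        # The token deliberately contains no name, ID number, or face scan:
        # nothing in it can be linked back to the person being verified.
        return {
            "approved": approved,
            "token_id": secrets.token_hex(16),  # random, unlinkable
            "expires_at": time.time() + 300,    # valid for this session only
        }

def site_grants_access(token: dict) -> bool:
    """The site inspects only the token, never the underlying documents."""
    return token["approved"] and token["expires_at"] > time.time()

verifier = ThirdPartyVerifier()
adult_token = verifier.verify(id_document_age=21, selfie_matches_id=True)
minor_token = verifier.verify(id_document_age=16, selfie_matches_id=True)
print(site_grants_access(adult_token))  # True: of age, selfie matches
print(site_grants_access(minor_token))  # False: under 18
```

The design choice doing the work here is separation of knowledge: the verifier sees identity but keeps nothing, while the site sees only a random token – so a breach of the site’s database leaks no biometric or identity data at all.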

If we worry so much about porn sites handling sensitive information, then we should bar them from taking users’ credit cards for their premium content. As it is now, they hold large databases of credit cards. A credit card is sufficient to perform a full credit check on the holder, so it is pretty damn sufficient for identifying a user.

Canada should follow in the steps of the UK and rewrite the Online Harms Act: (1) remove the bizarre ideological sections regarding hate speech (we already have hate speech offenses in the Criminal Code and more than enough caselaw on the matter), as well as the bizarre life sentence for vague ideological thought crimes, since it has nothing to do with protecting children – I wouldn’t mind a life sentence for child porn producers and pedophiles, however, who currently get off with a slap on the wrist; (2) borrowing from the UK Online Safety Act, mandate the use of trusted and secure digital verification services, including real-time facial recognition, face scans, digital wallets, government ID, selfies, and combinations thereof – the cost, of course, to be passed on to platforms; this would put Bill S-210 and Bill C-63 on the same footing; (3) as in the UK Act, exempt Twitter, Reddit, and other mainly text-based platforms; (4) keep the 24-hour takedown requirements, but create an expeditious appeal process for affected users to reinstate content that doesn’t fall under the purview of the act, and impose dissuasive fines, including the payment of attorney fees, for frivolous takedown requests (à la DMCA, by analogy); and (5) to err on the safe side, mandate all mobile providers to automatically block porn sites, so that only computer cameras would be used for real-time face scans and face video.

Another reason to block adult mobile apps is that all mobile apps are specifically designed to collect and store personal information even when you are not using them. Mobile operating systems also regularly take photos, videos, and recordings of users for the purpose of improving their experience. Collecting extensive personal information on mobile users has been standard practice for as long as smartphones have existed. Cybersecurity experts are able to decrypt such data packets, while hackers (or law enforcement, with or without a warrant) are able to intercept and use them. If you access porn on your phone, you can safely expect that your most intimate and biometric details are stored in many, many places – and you would be even more surprised to learn that you automatically consented to all of it. Age verification would be the least of your problems. There are tons of applications capable of accurately guessing your age based on what you do with your phone.

Finally, we should never leave it to parents to protect children, because if you read the criminal jurisprudence, parents – and especially foster parents (and other family members) – are often implicated in child abuse and child pornography in this country, for the reason that they have unfettered access to these children. Abusive parents also get away with a slap on the wrist. Since we don’t trust parents to respect children’s choice of gender, it would be a little hypocritical to trust them to safeguard their kids from porn. I wouldn’t.


Second update: after wasting a few hours on Online Harms Bill scenarios, I predict the bill has no future other than to target speech criticizing the bill (like this post) and to ban survivor speech (already going on without the help of the bill). So basically, if the bill ever comes to exist, it will achieve the exact opposite of its apparent intended purpose. As Australia has shown, nothing concrete will happen in the sphere of child protection anywhere. These bills are all for show, as corporate commercial interests will always trump child safety and consent. Even the UK will only apply age checks from 2025. Why 2025? Because the UK will likely also bail before the promised deadline and drop the checks altogether shortly before 2025. Comparative law should be renamed comparative inefficiency.

Just as electric car promises are flopping all over the place – because you can’t tell people to choose between doing their laundry and charging their car to go to work – you also can’t authorize a mega-polluting, wetland-destroying Swedish project on unceded Mohawk territory and pretend to care about the environment or ancestral rights in the same sentence. And very obviously, you can’t make porn accessible to children for free at all times and pretend to be a good person just because you wrote another fake bill (which is not quite written yet).

The point is: do not wait for a bill or a court to save you. As I previously said, the only way to enforce anything in the realm of nonconsensual material is to arm yourself with patience and look for ways, in and out of court, to apply pressure on local courts via foreign legal mechanisms: file police reports and Interpol reports, seek injunctions, sue platforms, sue banks that continue to work with rogue platforms, use the takedown and delisting mechanisms of search engines, make videos, hit film festivals, write open letters to ministers… and whatever other grassroots ideas you may come up with. If you sue for damages, sue in the US, not Canada. The important thing is to take action every single day. I love how in the US people pick up the phone and call their state rep or senator. The only way out is to let the whole world know that you did not consent. Don’t stop until everything is taken down to the ground.

A List of Generative Patent Drafting Software

Patent drafting can be very technical, cumbersome, and time-consuming, yet there is no guarantee that your patent will be approved. One may need to file in several countries, further increasing expenses. Patent-drafting AI is filling a real and urgent need to slash patent filing costs to a minimum. Removing prohibitive monetary barriers to patent filing is poised to empower inventors to file applications for as many patents as they can possibly think of, on an ongoing basis. This is good for innovation.

Here is a non-exhaustive list of patent-drafting bots, in alphabetical order. We haven’t had the opportunity to test them, so this is not an endorsement. The list excludes software that only offers search and research features:


Useful reading: https://www.americanbar.org/groups/intellectual_property_law/publications/landslide/2018-19/january-february/drafting-patent-applications-covering-artificial-intelligence-systems/

China Internet Court Attributes AI Generated Image Copyright To Human Prompt Creator

On Monday, the Beijing Internet Court held that a human plaintiff’s prompt is sufficient to invoke copyright protection in a Stable Diffusion-generated image, so long as the output qualifies as an “original” work. Copyright is determined on a case-by-case basis, so this decision is not entirely inconsistent with other AI jurisprudence trends …