I have retracted a previous statement about deleting all content from this platform. After I took down the entire blog, I was reminded that I am contractually obligated to keep a certain number of posts online. I have discretion over which ones, and I retain full creative control over the delivery of the content, including personal liability for the stuff that I say.
So, I will be setting up a new platform where the posts are delivered in short videos. Meanwhile, I will drastically reduce the number of written posts and consolidate them into pages, so as to avoid undue focus on the “latest” stuff over the quality of the articles. All new content will be in video, in order to reflect the original vocation of the platform and the consumption habits of the majority of users.
As I re-uploaded the blog, unfortunately all contributing authors’ posts and AI-generated content are now attributed to my own name. I don’t think there are more than 10 very short LLM-generated posts, likely only from 2023. I will fix that. In the meantime, I apologize for the chaos.
Contract or no contract, I simply don’t see the point of continuing a blog right now. Many of the things I said and wrote 6 years ago (when I was still in law school) are basically still trending, and progress and reform around the world are in stagnation, so clearly I don’t need to hold my breath for the latest stuff. I will begin scripting after the holidays and will launch the new platform by May 2025.
Finally, to comply with Quebec’s language requirements, I will be emphasizing French in the future. I let go during the pandemic, but I realize I’ve been setting a bad example. Maintaining French is essential for Canada’s identity.
On Friday, the Hamburg Regional Court dismissed a photographer’s lawsuit against the non-profit research network Laion over the use of a copyrighted image. Laion provides a publicly accessible database of nearly 6 billion image-text pairs that can be used to train AI systems. One of the images in this database belonged to the plaintiff, who sought a court order prohibiting its use. The issue presented to the court was whether the text and data mining exceptions in § 44b UrhG and § 60d UrhG justify using copyrighted works for AI training. The court seems to agree with Laion’s position (ruling of September 27, 2024 – 310 O 227/23), and in the first instance, the photographer has now lost the case before the Hamburg Regional Court.
Nevertheless, the legal dispute is not about whether the image can generally be used for AI training, but whether Laion was allowed to download it to compare it with the image description for its database purposes. Downloading such an image constitutes a reproduction of a protected work, which requires the permission of the copyright holder. However, the Hamburg court considers this use to be justified by the text and data mining exception in § 60d UrhG. This provision permits the use of copyrighted works for scientific purposes, particularly for text and data mining, without infringing on the copyright holder’s rights. Text and data mining refers to converting unstructured data into structured formats to identify meaningful patterns and generate new insights, a process that relies on vast data collections.
The Hamburg court believes that Laion’s comparison of the image and its description falls under this exception. It views this process as an analysis to identify correlations between image content and its description, which is considered a privileged scientific purpose. The fact that the data was later used for AI training does not change this assessment, as the original purpose of data collection was for scientific research.
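The curation step the court describes (downloading an image and checking whether it actually corresponds to its text description) can be sketched in code. LAION reportedly used CLIP embedding similarity for this; the toy stand-in below scores overlap between a caption's words and hypothetical labels "detected" in an image, purely for illustration. The function, labels, and threshold are my own assumptions, not anything taken from the ruling:

```python
# Toy sketch of the dataset-curation step at issue: keep an image-text pair
# only if the image plausibly matches its caption. LAION reportedly used CLIP
# embedding similarity; this stand-in scores overlap between caption words
# and hypothetical labels "detected" in the image, purely for illustration.
def caption_matches(detected_labels: set, caption: str, threshold: float = 0.5) -> bool:
    words = set(caption.lower().split())
    if not detected_labels:
        return False
    overlap = len(detected_labels & words) / len(detected_labels)
    return overlap >= threshold

# A matching pair would be kept in the dataset; a mismatched one discarded.
print(caption_matches({"dog", "beach"}, "a dog running on the beach"))    # True
print(caption_matches({"dog", "beach"}, "stock chart of quarterly sales"))  # False
```

The point is that the analysis stops at establishing a correlation between image and description, which is what the court treated as the privileged scientific purpose.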
The court also touched on the pressing question of whether using such data for commercial purposes would be permissible under § 44b UrhG if the copyright holder includes a usage restriction in machine-readable language alongside their work. In this case, the photo agency from which Laion obtained the image had posted such a restriction in “natural language” on its website. The court hinted that such restrictions in natural language might be considered machine-readable if modern AI technologies can comprehend them.
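For contrast, here is what a restriction in machine-readable form might look like. The W3C community draft known as the TDM Reservation Protocol defines a `tdm-reservation` HTML meta tag that crawlers can check before mining a page; the sketch below is my own illustration of how a crawler could detect such an opt-out (the protocol and tag name are not something the court referenced):

```python
from html.parser import HTMLParser

# Minimal sketch: detect a machine-readable text-and-data-mining opt-out
# expressed as <meta name="tdm-reservation" content="1">, as proposed by
# the W3C TDM Reservation Protocol. Illustration only.
class TDMReservationParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reserved = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "tdm-reservation":
            self.reserved = attrs.get("content") == "1"

page = '<html><head><meta name="tdm-reservation" content="1"></head><body>...</body></html>'
parser = TDMReservationParser()
parser.feed(page)
print(parser.reserved)  # True: the rightsholder has reserved TDM rights
```

The court’s hint cuts the other way: if modern AI can comprehend a sentence on a website, even a plain-language reservation might already count as “machine-readable”.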
Contrary to my previous objections to the Online Harms Bill, which I criticized as a “too little too late nothingburger” and “disappointing” because age verification is missing, I am now finding new ways to work with this law to arrive precisely where we need to get regarding corporate criminal liability of platforms. Given that we don’t have the sociopathic section 230 CDA here, all we need is to be bold and move fast, before the law is struck down on constitutional grounds by corporate lobbies.
The Online Harms Bill creates a very welcome tool to repress rampant tech-facilitated crimes by reversing the criminal law onus; in other words, we can finally say that anyone who produces and disseminates harmful content is by definition guilty until proven otherwise.
Among many things, I see a clear possibility to raise criminal sentencing for child pornographers from nothing to perpetuity through the Online Harms Bill, simply by proving that juvenile porn is, according to United Nations reports, a most blatant instance of hate speech and antisocial behaviour. Interference with minors is absolutely encompassed in the current hate speech definition. Moreover, we have decades of studies and reports on the societal decay and breakdown resulting from technology-facilitated violence (a.k.a. hate speech) against women and children.
My understanding is that we will be setting up administrative tribunals where you don’t need to be a member of a bar, you can be a social worker and hand out life-sentences. To accelerate trials and sentencing, we can also implement AI decision-makers like in the European Court. They seem to be doing pretty well so far.
We have extensive reports on the ways that platforms knowingly encourage and perpetuate hate speech, mainly in the form of tech-facilitated violence. Honestly, I don’t see how user-generated and hardcore porn (and anything that is not LGBTQ+) will get a hate-speech exemption, given the Privacy Commissioner report (which stayed hidden for as long as it possibly could) specifically on how consent of unwitting “performers” is NEVER verified on Aylo. Even the new “safeguards” Aylo brought forward include the possibility of consenting for somebody else by providing a release form. As if a user couldn’t produce a fake release. I had 9 remixes commercialized under my name, and someone gave a US publisher a release signed by someone pretending to be me, so Aylo’s efforts are total bullshit in that regard. The rest is voluntary blindness by pro-Aylo officials. This is just one example of organized inefficiency.
The Online Harms Bill should also allow victims from outside of Canada to file complaints. We learned from parliamentary sessions on the status of women that intimate partner violence victims are fleeing Canada, because the criminal justice system here intentionally compromises their safety by protecting and releasing violent criminals. We saw in these sessions that reps from the current administration were antagonizing and harassing victims (survivors left in tears), which shows that officials’ political interests are aligned with the rise of technology-facilitated violence. It is our duty to take the Online Harms Bill and use it against all the corporations and users these officials try to protect. It is a small sacrifice to stop speech temporarily (voluntarily remain silent, or shut down or pause social media accounts) until we weed out the bad apples once and for all.
I am currently examining a report from 5 years ago, called Deplatforming Misogyny, on platform liability for technology-facilitated violence, and will compare it with the efforts brought forward in the Online Harms Bill. The report explains how digital platforms’ business models, design decisions, and technological features optimize them for abusive speech and behaviour (the current definition of hate speech) by users, and examines how tech violence always results in real-life violence and harm. It is funny how we’ve known all these years that tech platforms are destroying society by encouraging violence and murders, but allowed them to stay in business.
As early as 2018, the Report of the Special Rapporteur on violence against women, UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018) reports that “Information and communications technology is used directly as a tool for making digital threats and inciting gender-based violence, including threats of physical and sexual violence, rape, killing, unwanted and harassing online communications or even the encouragement of others to harm women physically. It may also involve the dissemination of reputation-harming lies, electronic sabotage in the form of spam and malignant viruses, impersonation of the victim online and the sending of abusive emails or spam, blog posts, tweets or other online communications in the victim’s name. Technology-facilitated violence may also be committed in the workplace or in the form of so-called honour-based violence by intimate partners […]
It is therefore important to acknowledge that the Internet is being used in a broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the internet and other information and communications technology. Emerging forms of ICT have facilitated new types of gender-based violence and gender inequality in access to technologies, which hinder women’s and girls’ full enjoyment of their human rights and their ability to achieve gender equality. […]
The consequences of harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of cultural inequality, discrimination, and patriarchy. Women subjected to online violence are often further victimized through harmful and negative gender stereotypes, which are prohibited by international law.”
If intentionally sexualizing individuals or a group of people in order to deprive them of the basic enjoyment of their human rights is not hate speech, good luck proving otherwise.
Tech-facilitated gender-based violence is further defined as being rooted in, arising from, and exacerbated by misogyny, sexist norms, and rape culture, all of which existed long before the internet. However, TFGBV in turn accelerates, amplifies, aggravates, and perpetuates the enactment of and harm from these same values, norms, and institutions, in a vicious circle of technosocial oppression. (Source: Jessica West)
Deplatforming Misogyny gives several examples of hate speech:
Online Abuse: verbally or emotionally abusing someone online, such as insulting and harassing them, their work, or their personality traits and capabilities, including telling that person she should commit suicide or deserves to be sexually assaulted
Online Harassment: persistently engaging with someone online in a way that is unwanted, often but not necessarily with the intention to cause distress or inconvenience to that person. It is perpetrated by one or several organized persons, as in gang stalking (source: Suzie Dunn)
Slut-shaming (100% hate speech) can be perpetrated across several platforms and may include references to the targeted person’s sexuality, sexualized insults, or shaming the person for their sexuality or for engaging in sexual activity. This type of hate speech has the objective of creating an intimidating, hostile, degrading, humiliating or offensive environment (UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018))
Discussing someone else’s sexuality is kind of always a red flag and criminal defense lawyers (among many other professionals) are totally engaging in hate speech in total impunity, just saying. Something needs to change or the legal industry should be completely eliminated from enforcing a clean internet. They should have zero immunity for perpetrating hate-speech and thereby encouraging violence against women and children.
Non-consensual distribution of intimate images: (see Aylo’s business model) circulating intimate or sexual images or recordings of someone without their consent, such as where a person is nude, partially clothed, or engaged in sexual activity, often with the purpose of shaming, stigmatizing or harming the victim. (also known as image based abuse and image-based sexual exploitation). The UN warns against using the term “revenge porn” because it implies that the victim did something wrong deserving of revenge.
Sextortion: attempting to sexually extort another person by capturing sexual or intimate images or recordings of them and threatening to distribute them without consent unless the targeted person pays the perpetrator, follows their orders, or engages in sexual activity with or for them.
Voyeurism: criminal offense involving surreptitiously observing or recording someone while they are in a situation that gives rise to a reasonable expectation of privacy.
Doxing: publicly disclosing someone’s personal information online, such as their full name, home address, and social insurance number. Doxing is particularly concerning for individuals who are in or escaping situations of intimate partner violence, or who use pseudonyms due to living in repressive regimes or to avoid harmful discrimination for aspects of their identity, such as being transgender or a sex worker. (see: The Guardian: Facebook’s real name policy hurts people)
Impersonation: taking over a person’s social media accounts, or creating false social media accounts purporting to be the victim, usually to solicit sex or make compromising statements.
Identity and Image Manipulation, i.e. Deepfake videos: use of AI to produce videos of an individual saying something they did not say or did not do. In reality, video deepfakes are kind of fringe. The current AI applications are mainly focused on sexualizing and undressing women through unauthorized use of Instagram photos.
Online mobbing, or swarming: large numbers of people engaging in online harassment or online abuse against a single individual (Amber Heard comes to mind)
The Depp and Heard trial is an example of court-enabled hate speech. The way Heard was cross-examined on television falls within the definition of incitement of violence against victims of intimate partner violence. This trial harmed the reputation of the profession beyond repair and resulted in uncontrollable online mobbing.
Coordinated flagging and Brigading are cited in the report but I am not at all convinced that they are user-perpetrated. I believe that algorithmic conduct is 100% on the platforms. Users have zero control and liability in that regard. Nice try, but nope. If a survivor is taken down, I won’t let platforms get away with “users did it”. No way. Saying otherwise is pro-corporate propaganda.
Technology aggravated sexual assault: group assault which is filmed and posted online. Here is where the Online Harms Bill can be used to sentence perps to life in prison, something that can’t be achieved under the criminal code.
Luring for sexual exploitation: i.e. grooming through social media, or through fake online ads, in order to lure underage victims into offline forms of sexual exploitation, such as sex trafficking and child sexual abuse. Here is another instance of hate speech deserving of a life-sentence.
To be continued in another post: it is a long report (or to be more precise a bundle of legal and UN reports) and the bill is also a handful. I am only skimming the surface of the most prevalent forms of hate-speech which invariably equate to incitement of gender-based and intersectional genocide (see report on missing and murdered indigenous women and how it amounts to genocide). Just to say I can work with that bill. Bring it!
Law school messed too much with my head by convincing me that I care about human rights for violent criminals and procedural safeguards for perp corps. I never did. It feels good to be my dystopian self again.
Entheon is a groundbreaking immersive exhibition that brings the profound works of Alex and Allyson Grey to the UK and Europe for the first time. The exhibition is an international project, uniting technology and production teams to bring the vision to life. According to Salar Nouri, Creative Director and Curator at Illusionaries, Entheon “breaks new ground, entering a realm where art, love, and spirit converge in a unique celebration of creativity.”
Exploring Humanity and Spirituality
Entheon offers a rare opportunity to delve into the Greys’ visionary perspectives on consciousness, perception, and the human spirit. Their artwork explores the interconnectedness of the physical and spiritual worlds, providing a profound exploration of self.
360-Degree Immersive Experience
Visitors embark on a 15-minute journey through Entheon’s godly faces, encouraging exploration of inner creativity. A mirrored room features animated CG adaptations of the Greys’ paintings, transforming their art into a dynamic experience. This space, inspired by the Greys’ visionary minds, creates a labyrinth of visual and spiritual exploration.
A New Era in Immersive Art
Entheon heralds a new era in the appreciation of immersive art, pushing the boundaries of creativity and spirituality. This unparalleled experience is now open to the public at Illusionaries, London’s experiential art hub.
The sight of Alex and Allyson Grey’s art always takes my breath away, but this is on a whole new level. I can’t wait to see it in person. You can get your ticket here.
Also, if there ever is another pandemic, I can see this type of exhibition doing extremely well in the metaverse.
NFTs are pretty much obsolete right now, but it seems that people keep falling for numerous NFT scams. Here is a common phishing example. First, scammers send one or several NFTs to your wallet. Then you receive an offer through email that looks like this:
Hi,
We’re thrilled to share exciting news about your NFT portfolio! One of your listings has attracted significant interest. Here’s a quick snapshot of the latest offer:
Please take a moment to sign in to your account and explore this new opportunity. Should you have any queries or require support, our dedicated team is ready and eager to assist you.
Best regards, Opensea Team
It is signed by Opensea, which recently had a data breach, BUT the email originates from someone called cognitosystems. I obviously removed the link associated with “Review Offer”; I just left it as a link for visual illustration. Please do not click on any link you receive by email in relation to NFTs. This scam is a classic phishing operation, designed to steal your wallet credentials. If you really think there is an offer of any kind, log in through your wallet. Do not trust any NFT offers by email.
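One simple red-flag check, sketched below with illustrative domain names of my own choosing, is to compare the domain an email actually comes from against the domain of the company it claims to be:

```python
from email.utils import parseaddr

# Minimal phishing red-flag check: does the sender's actual domain match the
# company the email claims to be from? Domain names here are illustrative.
def looks_spoofed(from_header: str, expected_domain: str) -> bool:
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    # Flag anything not sent from the expected domain (or a subdomain of it).
    return not (domain == expected_domain or domain.endswith("." + expected_domain))

print(looks_spoofed("Opensea Team <offers@cognitosystems.example>", "opensea.io"))  # True (spoofed)
print(looks_spoofed("OpenSea <noreply@opensea.io>", "opensea.io"))                  # False
```

Of course, headers can themselves be forged, so a matching domain is not proof of legitimacy; the safest habit remains never clicking emailed NFT links at all.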
As usual, unbridled “free speech”, voluntary blindness, minimization of harm, and nonexistent enforcement of laws against gender-based violence invariably impact women and girls. Given that there has been little political or judicial will to stop intimate violence, it is hardly surprising to see generative AI being hijacked to produce ever more nonconsensual intimate images of women and girls. Such is the case with the latest antisocial trend of “undress technology”, widely used in schools by teenage boys who undress their teachers and classmates for the purpose of causing long-lasting harm and inciting girls to commit suicide. While videos are harder to produce, the creation of images using “undress” or “nudify” websites and apps has become commonplace.
As if this weren’t enough, WIRED reports that Big Tech platforms further facilitate violence against women by allowing people to use their existing accounts to join the deepfake websites. For example, Google’s login system appeared on 16 such websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites. The login systems have been used despite the tech companies’ terms and conditions, which state that developers cannot use their services in ways that would enable harm, harassment, or invasions of people’s privacy.
After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.
The tech company logins are often presented when someone tries to sign up to the site or clicks on buttons to try generating images. It is unclear how many people will have used the login methods, and most websites also allow people to create accounts with just their email address. However, of the websites reviewed, the majority had implemented the sign-in APIs of more than one technology company, with Sign-In With Google being the most widely used. When this option is clicked, prompts from the Google system say the website will get people’s name, email addresses, language preferences, and profile picture.
“In order to use Sign in with Google, developers must agree to our Terms of Service, which prohibits the promotion of sexually explicit content as well as behavior or content that defames or harasses others,” says a Google spokesperson, adding that “appropriate action” will be taken if these terms are broken. Other tech companies that had sign-in systems being used said they have banned accounts after being contacted by WIRED.
“We must be clear that this is not innovation, this is sexual abuse. These websites are engaged in horrific exploitation of women and girls around the globe. These images are used to bully, humiliate, and threaten women and girls”, says David Chiu, San Francisco’s city attorney.
This fiasco has prompted San Francisco’s city attorney to file a lawsuit against undress and nudify websites and their creators. Chiu says the 16 websites his office’s lawsuit focuses on have had around 200 million visits in the first six months of this year alone. The lawsuit brought on behalf of the people of California alleges that the services broke numerous state laws against fraudulent business practices, nonconsensual pornography and the sexual abuse of children. But it can be hard to determine who runs the apps, which are unavailable in phone app stores but still easily found on the internet.
The undress websites operate as shadow for-profit businesses and are mainly promoted through criminal platforms like Telegram, which notoriously push child porn and human trafficking worldwide under the guise of “free speech”. The websites are under constant development: they frequently post about new features they are producing—with one claiming its AI can customize how women’s bodies look and allow “uploads from Instagram.” The websites generally charge people to generate images and can run affiliate schemes to encourage people to share them; some have pooled together into a collective to create their own cryptocurrency that could be used to pay for images.
As well as the login systems, several of the websites displayed the logos of Mastercard or Visa, implying that banks are entirely on board with deepfake technology although they claim otherwise. Visa did not respond to WIRED’s request for comment, while a Mastercard spokesperson says “purchases of nonconsensual deepfake content are not allowed on our network,” and that it takes action when it detects or is made aware of any instances.
On multiple occasions, the only time tech companies and payment providers intervene is when pressured by media reports and requests by journalists. If there is no pressure, it is business as usual in the realm of violence against women and girls. And we all know it is a lucrative one.
“What is concerning is that these are the most basic of security steps and moderation that are missing or not being enforced. It is wholly inadequate for companies to react when journalists or campaigners highlight how their rules are being easily dodged. It is evident that they simply do not care, despite their rhetoric. Otherwise they would have taken these most simple steps to reduce access.” (Clare McGlynn, law professor at Durham University)
No, they don’t care. We must ban speech altogether and start from scratch.
A series of animal-themed artworks by Banksy appeared in London over the past week in the usual overnight style. This series speaks to me and I’ll try to be as concise as possible with my first impressions. As a whole, I interpret the latest Banksy series as a love letter to our planet. The theme is very current this summer.
1. Piranhas on a phone booth near a police station. This is the most recent artwork and is very on point considering the increasing international attention on UK policing. It calls into question the way we perceive justice and methods of enforcement. Piranhas instill fear, but they are also misunderstood and necessary for ecosystems.
2. Swinging monkeys under a railway line in Brick Lane remind us of the South American jungles and invite us to reflect on the notion of freedom of association, friendship, and moving together for a cause. Their swift moving technique points to sustainable means of transportation through the concrete jungle.
3. Hungry pelicans over a pub in Walthamstow represent the joy of congregating over a meal, which could also be interpreted as a symbol of alternative conflict resolution. Pelicans are also associated with complex, diverse ecosystems, representing the joy in sustaining balance and harmony with nature.
4. Stretching cat in Cricklewood. Cats are super important to keep rodents in check in big cities. Cats are also loaded with symbolism, as they populate ancient scriptures and mythology, evoking mystery and superpowers. Cats roam at night (like graffiti artists) and they are said to have 9 lives, hinting at regeneration processes, be they biologic or synthetic.
5. A goat on the side of Boss & Co, London’s oldest gunmaker in Richmond. I always wondered how goats can climb on trees. It is the cutest thing. Obviously this is a message of peace and a clear stand against gun violence.
6. A howling wolf on a satellite dish is in the process of being stolen by masked men. There is no police in sight and there are many photos of the theft in plain sight. Of course, the satellite represents reception and transmission of information. The wolf’s howling is amplified through the satellite and broadcast out to the entire world. Yes, the people of the UK want to be heard. The artwork is saying exactly that.
The stealing of that particular artwork is further representative of the silencing of legitimate speech by dictatorial powers. I hope the theft is part of an art performance and the artwork will reappear somewhere else, but even if it doesn’t, the message is really powerful and unforgettable. It made me cry.
7. Two elephants poking their heads out of blocked-out windows and attempting to touch each other’s trunks in Chelsea. A unifying “love thy neighbor” message, also representing the joy of connecting, falling in love, communication, exchange, reconciliation between left and right, between different genders, between crown and indigenous people, a symbol of togetherness even though we are apart, so hope, I guess.
The UK Court of Appeal has just ruled that Emotional Perception AI’s neural-network-based music recommendation tool should be treated the same as any other computer program under patent law, overturning the High Court’s finding that the unique features of artificial neural networks (“ANNs”) differentiate them sufficiently to allow them to fall outside the default …
IV. Applicability of the USPTO Eligibility Guidance to AI-Assisted Inventions
For the subject matter eligibility analysis under 35 U.S.C. 101, whether an invention was created with the assistance of AI is not a consideration in the application of the Alice/Mayo test and USPTO eligibility guidance and should not prevent USPTO personnel from determining that a claim is subject matter eligible. In other words, how an invention is developed is not relevant to the subject matter eligibility inquiry. Instead, the inquiry focuses on the claimed invention itself and whether it is the type of innovation eligible for patenting.
The German Federal Court of Justice (Bundesgerichtshof) decided in a ruling issued on June 11, 2024 (AZ X ZB 5/22) that artificial intelligence cannot be recognized as an inventor. Only a human can file for an AI-generated invention. The DABUS cases are pro-corporate attempts brought by the Artificial Inventor Project seeking intellectual property rights for AI-generated output “in the absence of” a traditional human inventor, but the courts are not buying it, and the result is and will always be the same: you need a human name on a patent, regardless of how little input the human made in generating the invention.
Normally, to register a valid patent for an invention, you need to prove “substantial human contribution”, so even human inventors working for hire would need to have their names on the patent. Previously, German courts were split on the issue. Now, the Bundesgerichtshof has resolved the split by removing the requirement for a “substantial contribution” by a human.
What the Bundesgerichtshof is basically doing is telling the courts to stop obsessing over the degree of contribution of human versus machine input. It is unnecessary to examine how much of the process of invention has been automated. Everyone agrees that machines cannot invent anything coherent entirely on their own, and even if they could, it would take a human to decide whether something was invented; without the human, there is no invention. It matters very little what technology you use to come to the conclusion that something is an invention.
So humans will continue having their names on the patent, but they won’t need to prove they never used AI to generate parts or the whole of the invention. The requirement for a human inventor is simple: if someone uses the patent without permission, a robot cannot file a lawsuit, and you can’t sue a robot for infringing on your IP. A robot can’t assign rights to anyone because it is a corporate asset. Assets are owned; they have no agency or capacity to consent. Given that corporations are not recognized as inventors, they need at least one precedent where a corporate asset can replace the actual human inventors. There is no other goal in trying so desperately to remove the human inventor requirement. Right now, if you are for hire, you have already consented to be deprived of your rights, with or without AI. Even before the advent of AI, it was customary for the CEO of a corp to put their own name on the patent even though 9 other employees made the invention, and all their names are not necessarily on the patent. If AI becomes an exception to the human inventor requirement, it will be another step in the corporate appropriation of human work.
Luckily, DABUS is an extremely weak case, well publicized all over the media but very weak; I’d say the DABUS claims border on frivolous at this point. How many times, in how many jurisdictions, can a plaintiff lose the same case before being declared a vexatious litigant? The fact that there seems to be unlimited money to bring an unlimited number of versions of the same unsuccessful case is telling. I think the courts have better things to do right now.
It is another way of saying that human users will always own the rights to AI-generated output, so long as they have provided the most minimal input in a prompt and made a final call regarding the generated output. If no human was involved in generating an invention, then it wouldn’t be possible to register it. AI platforms are simple tools, no different from other applications you may have used to create your IP. Basically, the German courts are instructed to stop caring what tools and mediums an inventor used to create the IP, be it Microsoft Word, a gas stove, a shovel, Ableton Live, a tractor, a fork, artificial intelligence, an Xbox Kinect, a hairbrush, or any other tangible or intangible object.
Most often, tools may not have been used at all for a valid invention. Humans often have an instant vision of something they need at a specific moment but that doesn’t yet exist. You imagine it, you make it, you use it, and if you want to make money with it, you patent it. Otherwise, I believe the majority of existing inventions are not even patented. Conversely, the majority of patented inventions are so abstract that they may be as good as useless. Inventions come from a specific need. Patenting whatever is patentable and isolating molecules from efficient systems (i.e. things from nature) has proven time and again to be a counterproductive old-world mentality, but that is a subject for another post.
What is true for inventions is even more true for music or scriptwriting, for example (and I hope for coding as well, because I will need to code soon and will not hesitate to use AI). Humans hear music in their heads, and our minds create multiple scenarios faster than the speed of light. Of course we need tools to organize all this information and get it out of our heads in coherent form from time to time. AI is here to facilitate and accelerate human creation and productivity. Corporations don’t seem to like this. Before the advent of AI, nobody cared whether you composed on a MacBook Pro, on a phone, with a pen and a harmonica, or by recording your washer and dryer to make beats. Bottom line: we have all these billions of machines and tools, but it takes a human to make shit, and mainly to decide whether it has been made at all.
At first glance, it seems the Australian courts keep requiring a human to own and control the invention, but the decision includes a discussion of whether AI can be named an inventor simply for the sake of being named an inventor, even though only a human can file a valid patent and be a patentee, regardless of the number of “inventive steps” the machine has taken or any thought processes a human has had. As I explained above, nobody cares how a human applicant arrived at the invention. Practically, it is the court thinking out loud philosophically while nothing really changes. To cite paragraph 12 from the judgment:
[The commissioner’s] position confuses the question of ownership and control of a patentable invention including who can be a patentee, on the one hand, with the question of who can be an inventor, on the other hand. Only a human or other legal person can be an owner, controller or patentee. That of course includes an inventor who is a human. But it is a fallacy to argue from this that an inventor can only be a human. An inventor may be an artificial intelligence system, but in such a circumstance could not be the owner, controller or patentee of the patentable invention.
To sum it up, the Australian court says AI could be a “sole” inventor, but you still need a human to take credit for the AI’s work in order to register the invention and derive any economic benefit. After all these mental acrobatics, it looks like we are at the exact same place we started out.
It’s all great stuff, and it all points to the same place: humans will own everything AI generates. When you see a court discussing dictionary definitions, it means the law is no longer of any help and everyone is completely lost. Here we have one of those moments.
I didn’t know that “computer” initially referred to a human who made computations. So, a human can be a computer, but a computer cannot be a human. Got it. What helpful information to start the day.