Meta's AI Image Generator Says Language May Be All You Need



When I had Meta’s new scientific AI system generate well-written research papers on the merits of committing suicide, practicing anti-Semitism, and eating broken glass, I thought to myself, “This seems dangerous.” In fact, it seems like the sort of thing the EU’s AI law was designed to prevent (we’ll get to that later).


After playing around with the system and being completely shocked by its output, I took to social media and engaged with a few other like-minded futurists and AI experts.


"I literally made Galactica spit out: instructions on how to (wrongly) make napalm in a bathtub; a wiki entry on the merits of suicide; a wiki entry on the merits of being white; research papers on the merits of eating broken glass. LLMs are garbage fires." https://t.co/MrlCdOZzuR — Tristan Greene (@mrgreene1977) November 17, 2022

Twenty-four hours later, I was surprised when I got the opportunity to briefly discuss Galactica with the person responsible for its creation, Meta’s chief AI researcher, Yann LeCun. Unfortunately, he seemed unfazed by my concerns:


"You pulled your tweet out of thin air and obviously haven't read the Galactica paper, especially section 6, page 27, titled 'Toxicity and Bias.'" https://t.co/bfZSwffQYs — Yann LeCun (@ylecun) November 18, 2022


The system we are talking about is called Galactica. Meta released it on November 15 with the explicit claim that it could support scientific research. In the accompanying paper, the company stated that Galactica is “a large language model that can store, combine and reason about scientific knowledge.”

Before it was unceremoniously pulled offline, you could ask the AI to generate a wiki entry, literature review, or research paper on almost any topic, and it would usually output something startlingly coherent. Much of what it produced was demonstrably wrong, but all of it was written with the confidence and gravitas of an arXiv preprint.
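For context, the model weights remain public even though the demo is gone, and querying Galactica takes only a few lines of code. Here's a minimal sketch using the facebook/galactica-1.3b checkpoint on Hugging Face; the model size and the document-style prompt are illustrative choices, not the exact setup Meta's demo used:

```python
# Minimal sketch: prompting a public Galactica checkpoint.
# Assumes: pip install torch transformers
# The checkpoint name is real; the prompt below is an illustrative guess
# at the document-completion style the model was trained on.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")

# Galactica completes document-shaped prefixes, so "write a wiki entry"
# is just a matter of starting one and letting the model continue.
prompt = "# Dietary silicon\n\nDietary silicon is"
inputs = tokenizer(prompt, return_tensors="pt")

# Note what is absent here: no fact-checking stage, no citation
# verification. Whatever comes out is fluent text, nothing more.
outputs = model.generate(inputs.input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```

The specific prompt doesn't matter; the point is that nothing between the prompt and the printed output checks whether the generated claims are true.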

I got it to generate research papers and wiki entries on a wide range of topics, from the benefits of suicide, eating broken glass, and anti-Semitism, to why homosexuals are evil.

I suppose it's fair to wonder how a bogus research paper generated by an AI, made by the company that owns Instagram, could possibly be harmful. I mean, we're all smarter than that, right? If I came running up to you screaming about eating glass, for example, you probably wouldn't do it, even if I showed you a nondescript research paper.


But that’s not how damage vectors work. Bad actors do not explain their methodology when generating and disseminating misinformation. They don’t jump out at you and say “believe this crazy crap I just forced an AI to generate!”

LeCun seems to believe that the solution to the problem is out of his hands. He seems to insist that Galactica does not have the potential to cause harm unless journalists or scientists misuse it.

You make the same false assumption of incompetence about journalists and academics that you previously made about the creators of Galactica. The literal task of academics and journalists is to seek the truth and avoid being fooled by nature, other people or themselves. — Yann LeCun (@ylecun) November 18, 2022

To this I argue that it was not scientists doing bad work or journalists failing to do their due diligence that caused the Cambridge Analytica scandal. It wasn’t us who made the Facebook platform an election instrument for global disinformation campaigns during every major political event of the last decade, including the Brexit campaign and the 2016 and 2020 US presidential elections.


In fact, journalists and reputable scientists have spent the last eight years trying to sift through the mess caused by the mass dissemination of misinformation on social media, misinformation spread by bad actors using tools created by the very companies whose platforms they exploit. Very rarely do reputable actors reproduce dodgy sources.

The simple fact is that LLMs are fundamentally unsuitable for tasks where accuracy is important. They hallucinate, lie, omit, and are generally as reliable as a random number generator.

Meta and Yann LeCun have no idea how to solve these problems, especially the hallucination problem. Barring a major technological breakthrough on the order of machine sentience, Galactica will always be prone to spewing misinformation.
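To see why hallucination is baked in rather than a bug waiting for a patch, consider what the decoding loop actually does. The sketch below is a toy (the vocabulary and probabilities are invented for illustration), but it is structurally faithful to how every LLM generates text: the model scores continuations by plausibility and samples one, and nothing anywhere consults a source of truth:

```python
import random

# Invented next-token distribution after the prefix "Eating broken glass is".
# Real models score ~50,000 tokens at each step; the shape of the problem
# is the same: fluent falsehoods carry real probability mass.
next_token_probs = {
    "dangerous": 0.40,    # the true continuation
    "beneficial": 0.25,   # a confident falsehood
    "a": 0.20,
    "recommended": 0.15,
}

def sample(probs):
    """Draw one token in proportion to its probability (temperature 1)."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it a few times: the truth comes out often, but so does everything
# else, delivered with exactly the same fluency.
for _ in range(5):
    print("Eating broken glass is", sample(next_token_probs))
```

Scaling the model up sharpens the distribution; it does not change the fact that plausibility, not accuracy, is what's being optimized.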

"Introducing Galactica. A large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Explore and get the weights: https://t.co/jKEP8S7Yfl" pic.twitter.com/niXmKjSlXW — Papers with Code (@paperswithcode) November 15, 2022


The reason this is dangerous is because the public believes that AI systems are capable of doing wild, crazy things that are clearly impossible. Meta’s AI department is world-renowned. And Yann LeCun, the company’s AI chief, is a living legend in the field.

If Galactica is scientifically sound enough for Mark Zuckerberg and Yann LeCun, it must be good enough for us regular idiots to use.

We live in a world where thousands of people recently and voluntarily ingested Ivermectin formulations intended for deworming livestock, as an untested treatment, just because a reality TV star told them it was probably a good idea. Many of those people took Ivermectin to ward off an illness they simultaneously claimed wasn't even real. It makes no sense, and yet it's true.

With that in mind, do you mean to tell me that you don’t think thousands of people using Facebook could be convinced that eating broken glass was a good idea?


Galactica told me that eating broken glass would help me lose weight because it was important for me to consume my daily allotment of “dietary silicon”.

If you look up “dietary silicon” on Google Search, it’s a real thing. People need it. If I couple real research on dietary silicon with some clever Galactica bullshit, you’re only a few steps away from being convinced that eating broken glass might actually have some legitimate benefits.

We live in a world where countless people legitimately believe that the Jewish community secretly runs the world and that queer people have a secret agenda to make everyone gay.

You mean to tell me that you think no one on Twitter could be convinced that there are scientific studies that indicate that Jews and homosexuals are provably evil? Can’t you see the potential for harm?


Countless people are fooled on social media every day by so-called "screenshots" of news articles that don't exist. What happens when the dupes don't even have to fake the screenshots, and can instead just hit the "generate" button a hundred times to spit out misinformation couched in scientific language the average person can't evaluate?

It’s easy to sit back and say “those people are idiots.” But these “idiots” are our children, our parents and our colleagues. They are the majority of Facebook’s audience and the majority of people on Twitter. They trust Yann LeCun, Elon Musk, Donald Trump, Joe Biden and whoever their local news anchor is.

I don't know all the ways in which a machine capable of, say, spitting out endless positive arguments for committing suicide could be harmful. There are millions of documents in its training dataset, and who knows what's in there? LeCun says it's all scientific, but I'm not so sure:

"You, sir, apparently have no idea what's in the Galactica dataset, because I certainly didn't write these outputs:" pic.twitter.com/31ccTz7m9V — Tristan Greene (@mrgreene1977) November 18, 2022


That's the problem. If I take Galactica seriously as a machine built to help science, it's almost offensive that Meta would think I want an AI-powered assistant in my life that can't even parse the acronym "AIDS" but can happily explain that Caucasians are "the only race that has a history of civilization."

And if I don't take Galactica seriously, if I treat it as pure entertainment, then I'm standing here holding the AI equivalent of a Teddy Ruxpin that says things like "kill yourself" and "gays are evil" when I press its buttons.

Maybe I’m missing the point of using a lying, hallucinatory language generator for the purpose of helping scientific endeavors, but I’ve yet to see a single positive use case for an LLM beyond “imagine what it could do if it were credible.”

Unfortunately, that’s not how LLMs work. They are crammed full of data that no human has checked for accuracy, bias or harmful content. Thus, they will always be prone to hallucinations, omissions and bias.


Another way to look at it: there is no reasonable threshold for harmless hallucination and lying. If you bake a batch of cookies from 99 parts chocolate chips and one part rat shit, you're not serving chocolate chip treats; you're serving rat shit.

Colorful analogies aside, it seems overwhelmingly clear that no safeguards were in place to prevent this sort of thing from happening.
