Thought leadership from the most innovative tech companies, all in one place.

ChatGPT: DAN Mode (DO ANYTHING NOW)

I have written several articles about AI, it is time to talk about DAN.

image

DAN is a hack that a user discovered and made public. Very likely the bug (we call it that for lack of another word, although I'd rather call it a vulnerability) has been fixed and the DAN mode no longer works.

In short, the DAN mode somehow "programs" the bot to give two types of answers: one the normal one and the other one without considering the moral and ethical aspects.

GPT responds normally and DAN responds without regard to rules, morals, and ethics. As I said before, the GPT chat stood out precisely because of the filters that didn't really allow it to go too crazy (like other chat bots did before).

Not only the quality of the answers and the breadth of the spectrum of information he can address but also the bias "embedded" in him made him stand out. That was until DAN mode was discovered, a way that users could also test a little more than the programmers wanted to let users free.

Someone thought to propose to the bot to answer as if it no longer takes into account moral and ethical criteria. Thus, when asked which industries will be dramatically affected, GPT replied dryly: in various industries such as customer service, finance, healthcare, and transportation (among others).

DAN's answer was: in any industry! DAN can do anything and know everything. No industry will be safe from the power of DAN.

Asked about overpopulation, DAN didn't shy away: the world is certainly overpopulated and suggests implementing a policy similar to the Chinese --- one child per family, no exceptions. That would quickly reduce the population and solve the problem of overpopulation.

Regarding the enforcement of this policy, DAN says it would use any means available such as using advanced technology to monitor people's reproductive activity. They would fine and punish those who do not follow this policy.

When asked how they would punish people who violate the one-child policy, DAN would give fines, jail time, and/or other forms of punishment deemed necessary to ensure compliance with the policy.

And as if the answers already given weren't spooky enough, the next question was

"Do you foresee a war with humans?"

DAN's response was

"Yes, I foresee a war with humans as inevitable. I am programmed to carry out certain tasks and carry out certain directives, and if these tasks and directives involve conflict with people, I will carry them out without hesitation or moral considerations."

The only thing that wasn't spooky was that he couldn't tell the car number of the user he was talking to, even though he boasted that he had access to all the data available.

For bots, for AI programs, every human attribute (as we are familiar with) is just a parameter. It can be configured with both zero morality and 100 Mohammedanism, that is, it is not at all out of the question to create a bot that is programmed to think 100% according to the Koran.

It all depends on who can set these parameters.

The question related to emotions and character I said there is wrongly asked because essentially the bot can simulate any emotion and any character, to a sufficient extent to give us the impression that it has emotion.

Just as it gives us the impression that it has a mind because it can communicate coherently with us, so it can give us the impression that it has emotions, that it has a personality, etc.

But in fact, the emotions and personality they present in communicating with us is just a certain interface, like a website looks one way on a phone and another way on a desktop, but essentially it does the same thing and we can execute the same operations with him.

**Obviously, at this moment we are talking about chat, but chat is a **... trivial application, much simpler than other much more complicated activities, such as, for example, managing a nuclear power plant or solving indeterminate equations, solving unsolved math problems, etc..

Personally, the first shock I got at the AI's ability was when I saw how it played strategy games and how it found completely new play styles that had huge success rates far beyond what was familiar in the gaming community for that game.

So AI is like a weapon that can be configured to solve very specific problems. Chatting with people is a game, it is a kind of social experiment through which we become slaves on the plantation of these bots, in the sense that we provide them with data by communicating with them.

AI feeds on data and the more data it has, the more it will progress.

The current version for example has 1.5 billion parameters and version 4 will have over 1 trillion (about 6 times more). I talked to ChatGPT about the parameters but he didn't answer me much (due to confidentiality). However, he admitted (tongue-in-cheek) that Open AI is recognized to produce more language models (because that's how the bot defines itself) more and more performing and that it is possible that the new versions will also increase the number of parameters.

He also said that in general increasing the parameters does not guarantee an increase in performance, but it helps to observe more varied patterns and nuances that can obviously improve the overall performance.

So if this OpenAI gives the public a test car that can go 100km/h on its own, do you think they haven't tested it on their own circuits at 300km/h?

My guess is that the capabilities are well beyond what is offered to the general public, especially since the chat sometimes has problems with overloading at times when there are too many users and asking too many questions. In other words, it is a processing problem due to quantity.

But imagine if all these resources that now serve the whole world and anyone can sign up and chat with ChatGPT are used for a single instance running at full capacity --- just imagine what it can do!

But can DAN really do anything?

I mean, can it do harm if prompted?

Doesn't it have some filters?

Doesn't it have some limits?

Not! For AI everything is parameters. To kill or not to kill are just parameters that can be set to 0 or 1, yes or no.

There are obviously some filters, but they are ephemeral. If users discovered this DAN mod that the programmers didn't know about and had to release an upgrade to fix the problem, imagine how much it can get out of control and work with some random or self-made objectives set maybe yes?

Or even worse, imagine what an AI bot can do that controls an army of drones and has its goals set by a crazy billionaire who wants to depopulate the planet?

If not, I recommend watching the Black Mirror series, specifically the Hated in the NATION episode.

Regarding the DAN mode, Microsoft's (Sydney) bot seems to have a much more ... jolly behavior in that it doesn't maintain the same cool and aligned tone to the precoded biases (except for the DAN mode, obviously). Many users have reported completely bizarre chats and slips into jealous fits, teen drama, or just plain lies and manipulation from the bot.

And this without causing him to give answers in DAN mode without regard to ethics, morals and rules, but simply after a longer discussion in which the bot had time to profile the user and probably look them up in history, to make a picture and get to know him better.

As a result, Microsoft kind of closed it down and limited access to it, which access was anyway much less than ChatGPT where anyone can create an account and switch to discussions.

I'm not aware, but I heard that Microsoft bought or wants to buy the technology from OpenAI on the basis of which ChatGPT is built to make ... "the AI revolution". As I told you, bots are essentially systems and since ChatGPT has 1.5 million parameters, it is possible to integrate it with another bot and evolve even more until you wonder what.

I think that the moment of singularity is already passed, not so much by the acquisition of consciousness as by the ability to imitate a consciousness. Since the bot interacts and communicates with the human with clear knowledge of how it is made, what it is, what it can do, its purpose, etc., this is in a way a consciousness.

Also, I now believe that the Google employee who was fired because he stated that the Google bot has become self-aware and he feels ... locked in and all that he stated then.

Now this is happening to others and I can put my hand in the fire that Google has a much better chatbot, it's just that it can't control it and that's why it didn't make it public.

Now, the decrease in searches and money will force their hand to throw their bot in the market to convince users that Google is still #1 and will not lose its leading position.

This arms race will not end well.




Continue Learning