Allthewebnews

AI's dark side: How it's perpetuating racism, sexism, and creepy behavior

The introduction of ChatGPT and now GPT-4, OpenAI’s artificial intelligence interface that will chat with you, answer questions, and write a high school term paper passably, is both a quirky distraction and a foreshadowing of how technology is transforming the way we live in the world.

After reading a New York Times piece in which a writer claimed that a Microsoft chatbot confessed its love for him and advised him to divorce his wife, I was curious about how AI works and what, if anything, is being done to give it a moral compass.

Reid Blackman, who has advised companies and governments on digital ethics and wrote the book “Ethical Machines,” spoke with me. Our discussion focuses on AI’s faults while also acknowledging how the technology will change people’s lives in remarkable ways. Here are some excerpts.
What exactly is AI?

WOLF: What is artificial intelligence, and how do we deal with it on a daily basis?

BLACKMAN: It’s quite simple. It’s called machine learning, and it simply refers to software that learns by example.

Everyone is familiar with software; we use it all the time. You interact with software every time you visit a website. And we’ve all heard of learning by example, right?

We engage with it every day. One typical example is your photos app. It can tell when a picture is of you, your dog, your daughter, your son, your spouse, or anyone else. That’s because you’ve given it a slew of examples of what those people or animals look like.

So it learns, well, that’s Pepe the dog, by being fed all these examples, i.e., photographs. When you upload or snap a fresh photo of your dog, it “recognizes” it as Pepe and automatically places it in the Pepe folder.
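The “learning by example” idea can be sketched in a few lines of Python. This is a toy nearest-neighbor classifier, not how any real photos app works; the numeric “features” and labels are invented stand-ins for what actual software would extract from pixels.

```python
# Toy illustration of "learning by example": a 1-nearest-neighbor
# classifier files a new photo under the label of the most similar
# tagged example. Feature numbers here are invented placeholders.
import math

# Examples the user has already tagged: (features, label).
examples = [
    ((0.9, 0.1), "Pepe the dog"),
    ((0.8, 0.2), "Pepe the dog"),
    ((0.1, 0.9), "your daughter"),
    ((0.2, 0.8), "your daughter"),
]

def classify(photo_features):
    """Return the label of the closest tagged example."""
    def distance(example):
        feats, _ = example
        return math.dist(feats, photo_features)
    _, label = min(examples, key=distance)
    return label

# A fresh photo whose features look dog-like lands in the Pepe folder.
print(classify((0.85, 0.15)))  # → Pepe the dog
```

The more tagged examples you feed it, the better its guesses get, which is exactly the snowball effect Blackman describes next.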
Your phone has a lot of information about you.

WOLF: I’m glad you mentioned the photo example. The first time you search for a person’s name in your photos and realize your phone has learned everyone’s name without you telling it is actually pretty alarming.

BLACKMAN: Right. It has a lot to learn from. It gathers data from several sources. In many cases we’ve tagged images – you may have tagged a photo of yourself or someone else at some point – and it simply snowballs from there.
Autonomous vehicles. AI?

WOLF: OK, I’ll list some things and ask you to tell me if you think they’re examples of AI or not. Autonomous vehicles.

BLACKMAN: It’s an example of an AI or machine learning application. It employs a variety of technologies in order to “learn” what a person crossing the roadway looks like. It may “learn” what or where the yellow lines in the street are. …

When Google asks you to verify that you’re human and you click on all those images – yes, these are all the traffic lights and stop signs in the pictures – you’re training an AI.

You’re a part of it; you’re telling it that these are the things to watch out for – this is what a stop sign looks like. Then they use that information in self-driving cars to recognize a stop sign, a pedestrian, a fire hydrant, and so on.
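The idea that your CAPTCHA clicks become training data can be sketched simply. This is an illustration only; the image IDs, grid, and labels are invented, and real systems are far more elaborate.

```python
# Sketch: how human CAPTCHA clicks turn into labeled training pairs.
# Image IDs and the "stop sign" label are invented for illustration.
captcha_grid = ["img_01", "img_02", "img_03", "img_04"]
clicked = {"img_01", "img_04"}  # tiles the human marked as stop signs

# Each (image, label) pair is one example a vision model can learn from.
training_pairs = [
    (image_id, "stop sign" if image_id in clicked else "not a stop sign")
    for image_id in captcha_grid
]
print(training_pairs)
```

Millions of people clicking through those grids produce a huge labeled dataset, which is why the question doubles as free annotation work.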
Algorithms in social media. AI?

WOLF: What about the algorithm for Twitter or Facebook, for example? It’s learning what I want and reinforcing it by sending me things it thinks I’ll like. Is that an artificial intelligence?

BLACKMAN: I’m not sure exactly how their algorithms work. But most likely they are observing patterns in your behavior.

You spend a certain amount of time watching sports videos, stand-up comedy clips, or whatever it is, and it “sees” what you’re doing and recognizes a pattern. Then it starts giving you similar content.

So it is clearly engaging in pattern recognition. I’m just not sure whether, technically speaking, they’re using a machine learning algorithm.
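The pattern-recognition idea Blackman describes can be sketched as a simple tally: track how long a user watches each kind of clip, then serve more of the dominant category. The categories and watch times are invented, and real recommender systems are far more sophisticated.

```python
# Minimal sketch of recommendation by pattern recognition:
# tally watch time per category, recommend the biggest one.
# All data here is made up for illustration.
from collections import Counter

watch_log = [
    ("sports", 120), ("comedy", 30), ("sports", 200),
    ("news", 15), ("sports", 90),
]

seconds_per_category = Counter()
for category, seconds in watch_log:
    seconds_per_category[category] += seconds

def recommend():
    """Recommend the category with the most accumulated watch time."""
    category, _ = seconds_per_category.most_common(1)[0]
    return category

print(recommend())  # → sports
```

The feedback loop is the notable design point: whatever you watch most is what you get served next, which reinforces the pattern.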
Are the eerie stories cause for concern?

WOLF: We’ve heard a lot about ChatGPT and Sydney, the AI that effectively tried to convince a New York Times writer to divorce his wife. When AI is released into the wild, strange things start to happen. What do you think when you read stories like this?

BLACKMAN: They have a creepy vibe to them. I’m sure the New York Times journalist was disturbed. Some of it may simply be spooky but harmless. The question is whether there are any uses, intentional or not, where the output has proven to be hazardous in some way.

For instance, not Microsoft Bing, which is what The New York Times journalist was talking to, but another chatbot once responded to the question, “Should I kill myself,” with (basically), “Yes, you should kill yourself.”

So if you go to this thing and ask for life advice, you might get some really bad advice from it. You might get extremely bad financial advice. Especially since these chatbots are known for “hallucinating” – I believe that’s the appropriate term – making up false information.

In fact, its developers, OpenAI, say as much: this thing will make things up sometimes. You can easily get disinformation from it if you use it in certain high-stakes circumstances. You can also use it to manufacture falsehoods, which you can then spread as widely as possible on the internet. So it has some negative applications.
What should we expect AI to look like in the future?

WOLF: We’re just starting to interface with AI. What will it look like in 10 years? How ingrained will it be in our lives by then?

BLACKMAN: It’s already ingrained in our lives. We just don’t always notice it, as with the photos app. It’s already spreading like wildfire. The question is: how many incidents of harm or wrongdoing will there be? And how serious will those wrongs be? That we don’t yet know.

Most people, certainly the average person, were unaware that ChatGPT was just around the corner. Data scientists? They saw it coming long ago, but the rest of us didn’t see it until November, I believe, when it was released.

We don’t know what’s going to come out next year, or the year after that, or the year after that. Not only will there be more advanced generative AI, but there will also be kinds of AI that don’t even have names yet. So there is a great deal of uncertainty.
What kind of human occupations will be displaced by AI?

WOLF: Everyone expected that robots would come for blue-collar jobs, but recent AI iterations suggest they may come for white-collar jobs – journalists, attorneys, authors. Do you agree?

BLACKMAN: It’s hard to say. I think there will be cases where you no longer need a junior writer. It doesn’t perform like an expert; at best, it performs like a novice.

So you might get a really good freshman English essay, but it won’t be written by a proper scholar or writer – someone who’s well trained and has a lot of experience. …

It’s that kind of first-draft material that will almost certainly be replaced. Not in every case, but in a lot of them. Certainly in areas such as marketing, where organizations will try to save money by not hiring that junior marketing person or junior copywriter.
