The end of 2022 marks a turning point in the world of tech and new technologies. The release of ChatGPT signaled, worldwide, the beginning of a new era filled with opportunities, uncertainties and potential abuses...
The future of our businesses and our security has never been so uncertain! While experts agree that we are only at the beginning of this technological revolution, we can already see the changes and risks it will bring.
At Abyssale, we are passionate about AI, but we are also aware of the risks involved. That's why in this article, we're going to focus on the pitfalls of AI and the dangers to our business, jobs and safety.
Is AI a danger to jobs?
With the arrival of ChatGPT, Dall-E, Midjourney and other AI tools, the "Wow" effect quickly gave way to concerns. On the one hand, employees fear for their jobs. On the other hand, business leaders are afraid of being overtaken by this new competition.
OpenAI and its flagship tool ChatGPT quickly made it clear that some jobs would be partially or totally replaced. According to a report published by Goldman Sachs, more than 300 million full-time jobs worldwide could be affected by artificial intelligence.
Among these jobs, the administrative and legal sectors seem to be on the front line. GPT-4 was tested on the multiple-choice questions of an American bar exam and answered 76% of them correctly. That's better than the average human!
Broadly speaking, the most vulnerable jobs are those that process information in a factual way, without requiring judgment.
Nevertheless, while some jobs are destined to disappear, others should emerge. According to a study published by Dell and a think tank, 85% of the jobs we will hold in 2030 do not exist yet...
This is enough to reassure us that there will always be work. It remains to be seen under what conditions...
AI is not infallible
Many doubts still hang over the reliability of AI's data processing and of the information it provides. Faced with this, some governments have decided to act. On March 30, 2023, Italy banned the use of ChatGPT on its territory, citing several GDPR compliance failures:
- Users are not informed about the processing of their data
- There is no legal basis for the way ChatGPT's learning system collects and processes personal data.
- Children under the age of 13 can access the tool without any age verification.
- The personal data provided by ChatGPT is sometimes inaccurate.
This last point echoes a study conducted by the University of Hong Kong, which found that 64% of the information provided by ChatGPT is inaccurate.
Another area where AI is not infallible is its safety filters, which govern what it will and will not do.
ChatGPT has a filter that blocks illegal and unethical requests. Nevertheless, some users have managed to beat the tool at its own game: they asked ChatGPT to explain, step by step, how to obtain a gun, drugs or anything else on the internet, while lying about how they intended to use the information. Through some sleight of hand that we won't disclose here, they got some very relevant answers.
This foreshadows potentially dangerous pitfalls for users.
Abuses that can put us in danger
Seeing the possibilities that artificial intelligence tools offer to users, it is legitimate to wonder about global security.
One of the first problems with AI is that it is accessible to anyone. A 10-year-old child, a terrorist or any other malicious person can use it to achieve their goals.
You may have seen the AI-generated image of the Pope wearing a white puffer jacket worthy of Puff Daddy's wardrobe. So far, so good. It's even pretty funny! But given the power of misinformation today, you have to wonder: how can AI generate content without regard for the user's intentions? Tomorrow, anyone could be shown doing anything, in a highly realistic way.
Speaking of realism, it has been pushed so far that it could well change the way some humans interact. This is already happening in an unexpected industry: pornography. Platforms such as OnlyFans could soon see AI-generated avatars arrive en masse.
While this may make you smile, ethical and moral questions are being raised.
A final point worth raising is addiction to social networks. It is not a disease recognized by medical authorities, but that does not prevent 93% of Generation Z from saying that social networks affect their happiness, according to a study conducted by ExpressVPN. The algorithms that push us to consume ever more content are surely a factor.
So the question is: can AI improve algorithms to the point of making humans totally dependent on social networks?
Ethical and responsible AI: whose responsibility is it?
The risks accompanying the evolution of AI are very real, so we must use it with caution. The key to limiting these pitfalls probably lies in prevention and information, which is why we wrote this article.
Nevertheless, it is worth asking who is responsible for preventing these abuses. Who should protect us from AI-related risks?
The measures taken by Italy prove that governments can considerably reduce the pitfalls of artificial intelligence when they take responsibility. Without commenting on the form, we can salute the Italian government for doing so.
What about the responsibility of the developers of these tools? This is a subject that divides the technological community.
On the one hand, there are those who believe that AI should continue to develop in order to facilitate the work of humans. They believe that it is the role of governments to regulate artificial intelligence.
On the other hand, some experts are calling for caution. More than 2,600 tech industry executives and researchers have signed a petition calling for a halt to AI development for at least six months.
But there's a catch! Behind this petition is Elon Musk. The billionaire head of Tesla, Twitter and SpaceX was a shareholder in OpenAI until 2018. He sold his shares well before the release of GPT-3 and before the company's market valuation doubled.
He launched the petition on March 29, 2023. Less than a month later (April 17), he announced on Fox News that he would be launching "TruthGPT." He describes his project as an "AI in search of truth at the highest level and trying to understand the nature of the universe".
So, genuine awareness, or just a strategy to jump on the bandwagon?
Only time will tell. In the meantime, the best we can do is keep a close eye on a technology that has not finished making headlines.