OpenAI CEO Sam Altman expects AGI, or artificial general intelligence (AI that outperforms humans at most tasks), to arrive around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the fear of the AI threat.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots will not lead to AGI.
However, in 2025, AI will still pose a major risk: not from artificial superintelligence, but from human misuse.
Some misuses are unintentional, such as lawyers relying excessively on AI. Since the release of ChatGPT, several lawyers have been sanctioned for using AI to generate erroneous court briefs, apparently unaware of chatbots' propensity to make things up. In British Columbia, lawyer Chong Ke was ordered to pay costs for opposing counsel after including fake AI-generated cases in legal filings. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious court cases generated with ChatGPT and blaming a “legal intern” for the errors. The list is growing rapidly.
Other misuses are deliberate. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's “Designer” AI tool. While the company had guardrails to avoid generating images of real people, a misspelling of Swift's name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are spreading widely, partly because open-source tools for creating them are publicly available. Legislation underway around the world attempts to combat deepfakes in hopes of curbing the damage. Whether it will be effective remains to be seen.
In 2025, it will become even harder to distinguish what is real from what is made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar's dividend”: those in power repudiating evidence of their own misbehavior by claiming that it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla Autopilot, leading to a crash. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of his clips was verified as genuine by a press outlet). And two defendants in the January 6 riots claimed that videos in which they appeared were deepfakes. Both pleaded guilty.
Meanwhile, companies are exploiting the public's confusion to sell fundamentally dubious products labeled “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' job suitability based on video interviews, but one study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.
There are also dozens of applications in health care, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify child welfare fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.
In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where AI seems to work well and is over-relied upon (lawyers using ChatGPT); cases where it works well but is misused (non-consensual deepfakes and the liar's dividend); and cases where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi concerns.