In the rapidly evolving field of artificial intelligence, generative AI tools show incredible potential. Nevertheless, awareness of their potential for harm is undeniably growing.
VERSES has outlined a set of concerns associated with generative AI tools, drawing on multiple sources. These concerns fall mainly into three categories: quality control and accuracy, ethical and legal considerations, and technical complexity, and they are often interconnected.
1) Bias in, bias out
Topic: Quality Control and Accuracy
A significant challenge of generative AI is its tendency to replicate biases inherent in its training data. Rather than mitigating bias, these tools often perpetuate or even amplify it, calling into question the accuracy of their outputs and potentially leading to more serious ethical dilemmas.
2) Black box problem
Topic: Ethical and Legal Considerations
Another significant obstacle to embracing generative AI is the lack of transparency in its decision-making. Because their internal reasoning is often uninterpretable, these AI systems struggle to explain their decisions, which is especially problematic when errors occur in critical matters.
It is worth noting that this is a broader problem affecting all AI systems, not just generative tools.
3) Costs are high to build and maintain
Topic: Complexity and Technical Obstacles
Training generative AI models, such as the large language models (LLMs) that power ChatGPT, comes with a hefty price tag, often in the millions of dollars, owing to the significant computing power and infrastructure required. For example, OpenAI CEO Sam Altman has said that training GPT-4 cost a whopping $100 million.
4) Stochastic parrot
Topic: Quality Control and Accuracy
Generative AI has advanced capabilities, but it is limited to the data and patterns it was trained on. These limitations mean it cannot fully capture the depth of human knowledge or effectively handle a wide variety of scenarios.
AI models are also notoriously poor at humor, typically producing jokes that are meaningless and unfunny. (Grok, xAI’s X-based chatbot, aims to change that.)
5) (Mis)alignment with human values
Topic: Ethical and Legal Considerations
Unlike humans, generative AI lacks the ability to weigh the implications of its actions against human values.
While AI-generated images like the “Balenciaga Pope” may seem innocuous, deepfakes can be used for far more malicious purposes, such as spreading disinformation during a public health crisis.
This highlights the need for additional frameworks to ensure that these systems operate within ethical boundaries.
6) Power hungry
Topic: Complexity and Technical Obstacles
The environmental impact of generative AI cannot be ignored. Because the processing units involved consume significant power, running a model like ChatGPT could consume the equivalent of the electricity used by 33,000 American homes, and a single query can use 10 to 100 times more power than sending an email.
7) Hallucinations
Topic: Quality Control and Accuracy
Generative AI models tend to produce incorrect statements or images when faced with data gaps, raising concerns about the accuracy and potential impact of their output.
For example, in Google Bard’s promotional video, the chatbot incorrectly claimed that the James Webb Space Telescope had captured the first image of a planet outside our solar system.
8) Copyright and intellectual property rights infringement
Topic: Ethical and Legal Considerations
The ethical handling of data is in the spotlight, especially since many generative AI tools infringe on the rights of artists and creators by using copyrighted works without consent, credit, or compensation.
OpenAI recently introduced a program called Copyright Shield, under which it covers the legal costs of copyright infringement lawsuits for certain customer segments, opting for this approach instead of removing copyrighted material from ChatGPT’s training datasets.
9) Static information
Topic: Complexity and Technical Obstacles
Keeping generative AI models up to date requires significant computing resources and time, which presents an enormous technical challenge. Nonetheless, certain models are structured to support incremental updates, offering a potential solution to this complex problem.