The human hand behind AI: about politically correct answers

Between hype and reality

There was one really interesting talk at the OMR conference – the discussion between Larissa Holzki (Handelsblatt), Jonas Andrulis (Aleph Alpha) and Jürgen Schmidhuber (“father of modern AI”) about the status and future of artificial intelligence (AI) in Germany. The statements themselves were not very surprising, but I will briefly summarize them here before turning to what was actually remarkable.

Current AI status in Germany

Potential meets risk aversion

Germany stands out for its excellent basic research, but the translation into marketable products and services is faltering. The reasons are the risk aversion of the German economy, a lack of investment and a lack of industrial policy support. Compared with countries such as China or the USA, which are investing massively in AI, Germany seems hesitant.

However, there are positive signs: the planned educational campus in Heilbronn, funded by the Dieter Schwarz Foundation, is to become a center for research and development. Start-ups such as Aleph Alpha also show that Germany can produce innovative AI solutions.

Areas of application: From office jobs to art

AI will change many areas of work and society in the future. We are also examining this in our AI Compass project. Office and knowledge work, legal tasks, customer service, photo generation for e-commerce, music and art production – the possible applications are many and varied.

An example from the talk: About You uses AI to generate photo galleries, which saves costs and “protects” the environment (which sounds a bit ridiculous given the ruthless fashion industry, but anyway). Snocks relies on AI-supported chatbots in customer service, which already process almost half of all inquiries. (Which brings us to the critical part of the OMR conference with its partly unreflected marketing focus …)

AGI: The future of AI?

The development of Artificial General Intelligence (AGI), an AI with human-like intelligence and consciousness, is a long-term goal. Experts disagree as to whether this can be achieved with current AI models and architectures or whether fundamentally new approaches are needed.

Jürgen Schmidhuber sees a future in which super-intelligent AI systems will explore and colonize the universe. That fits in well with Elon Musk’s SpaceX vision. Jonas Andrulis emphasizes that the current AI revolution is primarily automating standard tasks, but this is already having a huge impact on the world of work. So far, so good.

The influence of people on data quality

But now to what was, for me, really remarkable – something you should be aware of given the current state of AI:

Exactly what Jürgen said earlier: there is a reason why Scale makes a billion in revenue with people who write and label texts, correct answers and craft perfect ones.

Scale is a company that employs people who train AI models.

Yes, exactly. And above all, people who write the training data manually. They look at all the questions that go to ChatGPT, Claude and all these models, and write the “right” answers according to what Jürgen calls the ideological specifications of the manufacturers. Billions are spent on this only because it is necessary: the AI systems cannot yet do it themselves.

So the common claim that AI merely generates answers from the collective content of humanity along patterns and probabilities is not true in this absolute form. There is currently a considerable human bias – on the part of the companies that control the AI models. This is also noticeable in chat conversations about educational topics, which are very much shaped by old, outdated mainstream models of thinking.

All in all, it can make you a little dizzy when you know that the Axel Springer publishing house has entered into a partnership with OpenAI …

Here is the corresponding 8-minute excerpt from the talk, which I can only recommend watching:

This is just for information. Incidentally, various GenAI chatbots did not give me an answer to my question about extracting this aspect from the transcript. But in the end I was able to persuade You.com to compile a blog post on this aspect from the transcript. It reads as follows:

The fascination with artificial intelligence (AI) continues unabated. Language models such as ChatGPT impress with their capabilities, but behind the scenes the human hand plays a crucial role. Companies like Scale AI employ thousands of people who train AI models by writing texts, labeling and correcting answers. This process is not only technical, but also ideological.

Scale AI: billions in sales with human expertise

Scale AI generates billions in revenue by providing the human expertise that is essential for training and improving AI models. AI systems cannot yet learn independently and need human guidance to achieve the desired results.

In the OMR Talk, it became clear that even advanced models such as ChatGPT rely on human intervention in order to function. Without labeling and corrections by humans, the AI systems would provide incorrect or nonsensical answers.

Ideological imprint: Who determines the “right” answers?

However, human intervention in AI training also raises ethical and social questions. Who determines what the “right” answers are? What values and ideologies are incorporated into the training of AI models?

During the talk, it became clear that the companies that develop AI models have a significant influence on the training data and therefore on the results of the AI. This can lead to distortions and discrimination if the training data is not representative or disadvantages certain groups.

The future of AI: humans and machines working together

The development of AI is a dynamic process. It is important to recognize the importance of human intervention in AI training and to discuss the ethical and societal implications.

The future of AI probably lies in the interaction between humans and machines, in which human expertise complements and controls the AI systems. Companies like Scale AI play an important role in this by providing the human expertise that is essential for responsible and ethical AI development.
