
AI STARTUPS USING HUMANS TO POSE AS BOTS IN DISHONEST SUBSTITUTION STRATEGIES

In a world increasingly enamored with the power and potential of artificial intelligence (AI), a disturbing trend has emerged: some tech companies employ human intelligence to perform tasks they claim are done by AI. This unsettling practice, highlighted by Gregory Koberger, CEO of ReadMe, and documented widely across the sector, carries significant implications for transparency, accuracy, and trust between these companies and their users.

Google found itself in the spotlight when it was revealed that hundreds of third-party app developers had access to users' Gmail inboxes. Similarly, the San Jose-based company Edison Software was found to have had its AI engineers reading personal emails, a fact conveniently omitted from its privacy policy.

The trend, however, dates back to 2008, when SpinVox was accused of using humans instead of machines to convert voicemails into text messages. Expensify admitted nearly a decade later that it had used human labor to transcribe receipts, contradicting its claims of relying on its SmartScan technology.

The reliance on human labor isn't limited to smaller companies. Social media giant Facebook, despite its substantial investments in AI technology, depended on human intervention for M, the virtual assistant in its Messenger app.

For some, the use of human labor is a legitimate part of the development process. Firms like Scale employ human workers to produce the training data that AI systems learn from, notably in the realm of self-driving cars. Concerns arise, however, when companies claim scalable AI technology but secretly rely on human intelligence to deliver their services.
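
For readers unfamiliar with what that training work looks like, the sketch below shows a deliberately minimal human-annotation loop in Python. Everything in it, from the label set to the filenames and function names, is invented for illustration and does not describe Scale's or any other firm's actual pipeline.

    # Illustrative only: a bare-bones human-annotation loop of the kind
    # data-labeling firms run at scale to produce training data. The label
    # set, filenames, and function names are invented for this sketch.

    LABELS = ["pedestrian", "vehicle", "traffic_light", "none"]

    def collect_labels(image_paths):
        """Ask a human annotator to tag each frame; return training pairs."""
        dataset = []
        for path in image_paths:
            print(f"Frame: {path}")
            print("Options: " + ", ".join(LABELS))
            label = input("Label> ").strip()
            if label in LABELS:
                dataset.append((path, label))  # (input, human-supplied target)
        return dataset

    if __name__ == "__main__":
        pairs = collect_labels(["frame_001.jpg", "frame_002.jpg"])
        print(f"Collected {len(pairs)} labeled examples for model training.")

The division of labor is the point: humans supply the ground truth during training, and the deployed system is then expected to operate without them.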

One such example is the "Wizard of Oz design technique," in which people simulate the results that a human-like AI should produce. Alison Darcy, founder of the mental health chatbot Woebot, argued that while such a technique might be acceptable in some applications, it should never be used for psychological support services.
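
The sketch below, again in Python, shows the technique in its simplest possible form: a console chat in which the "assistant" reply is typed by a hidden human operator. All names and prompts are invented for this example and are not drawn from Woebot or any other product mentioned in this article.

    # A minimal Wizard-of-Oz chat loop: the user talks to an "assistant"
    # whose replies are actually typed by a hidden human operator. All
    # names and prompts here are hypothetical.

    def wizard_of_oz_session():
        """Run a console chat where a human plays the role of the AI."""
        print("Assistant online. Type 'quit' to end the session.")
        while True:
            user_message = input("User> ").strip()
            if user_message.lower() == "quit":
                break
            # In a real prototype this message would be forwarded to a
            # separate operator console; here the operator shares the terminal.
            operator_reply = input("[hidden operator]> ")
            # The user-facing transcript attributes the reply to the "AI".
            print(f"Assistant: {operator_reply}")

    if __name__ == "__main__":
        wizard_of_oz_session()

The line Darcy draws is one of disclosure: a human behind the curtain is a recognized prototyping method, but passing that human off as a finished AI, especially to vulnerable users, is something else entirely.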

This misuse of labor, cloaked in claims of AI technology, has been described by Rochelle LaPlante, an advocate for gig economy workers, as "dishonest and deceptive."

Transparency remains a vexing issue, particularly when AI systems mimic human interactions. Google Duplex, a robot assistant that makes eerily lifelike phone calls to book appointments, underscores this point. When AI systems that sound and act increasingly like humans are actually powered by hidden human labor, it muddies the waters of credibility and trust.

Given these revelations, it is imperative that tech firms prioritize transparency and honesty about their use of AI. As the technology continues to evolve and improve, using humans to falsely prop up these systems will become not only unnecessary but also a hindrance to the development of genuinely helpful AI.

Companies that cling to these deceitful practices will eventually lose consumer trust and fail to survive in the long term. Conversely, firms that embrace transparency will foster stronger relationships with their users and pave the way for AI's future growth.

In a profound sense, the future of AI doesn't just lie in machine learning or deep neural networks, but also in the integrity and openness of the companies that build them. Trust, it turns out, might be the most critical component for the successful integration of AI into our daily lives.