AI Companies Sidestep Serious Problems in White House Meeting
A number of major AI firms met with U.S. President Biden at the White House this past weekend and promised to “manage” potential AI dangers.
Among the pledges made by these AI firms is a promise to watermark all imagery generated by their artificial intelligence systems.
According to the Biden administration, these commitments are voluntary and rely on self-policing by the companies making them.
The companies have stated that they’re working on technical means of watermarking any visual material generated through their platforms, so that AI-created imagery can be distinguished from real photographs.
As for the identities of the firms themselves, they represent some of today’s largest tech giants along with the current crop of best-known AI firms.
Among those attending were representatives of Google, OpenAI, Microsoft, Amazon, Anthropic and Inflection along with a few others.
OpenAI, quite possibly the most advanced among these organizations, is the owner of both ChatGPT and the DALL-E image-rendering platform.
The company also claims to have already developed a watermark for its platform’s AI-generated imagery. This consists of a small multi-color tab that appears in the bottom right corner of DALL-E-rendered images.
Midjourney, Inc., another leader in AI visual technology, hasn’t yet developed such a watermark, though one is doubtless on the way.
Google also claims to be developing a watermark for its own AI image generator, even though the image-rendering platform itself isn’t yet available to the public.
Among politicians and policymakers in general, the big worry around AI at the moment is its capacity for producing very realistic deepfake visuals.
Recent examples have included images of the Pope walking around dressed like a rapper and Donald Trump being violently arrested by the FBI.
Another particularly amusing example was a recent set of rendered images showing politicians like Trump, Biden, and even Obama meeting others for sexual liaisons in a hotel room.
This latter deepfake imagery was created by an artist specifically to draw attention to the potential power of this technology.
With the 2024 U.S. presidential election looming and sure to be extremely contentious, both lawmakers and some corporate executives have grown notably worried.
One of the obvious concerns has been over how deepfakes might be used to create a mess of “disinformation” fed to the public on a massive and easily viral scale.
Even more amusingly, some politicians themselves have been caught out using AI deepfakes for their own ends.
One example of this is 2024 presidential candidate Ron DeSantis, whose campaign released a video against rival Trump containing AI-generated images depicting a supposedly cozy relationship between Trump and Anthony Fauci.
Others, on the other hand, have pointed out that if AI deepfakes can be convincingly deployed against high-profile figures, their potential danger to ordinary individuals without superb lawyers and PR teams is far greater.
With all of these hot potatoes in the air, the main AI tech companies attending the White House meeting have pledged to invest in better cybersecurity, third-party inspections for their systems, and information sharing on AI “risks”.
All of these pledges are voluntary, and the White House itself has reiterated that they don’t yet amount to a regulatory regime.
Furthermore, some of the declarations from both the White House itself and the AI startups are just plain hand-wavy in their vagueness.
One example from the AI startup side is a promise to report “capabilities, limitations, and areas of appropriate and inappropriate use.” Indeed…
On the White House end, there is a request for more research on social risks, privacy problems, systemic bias, and using this technology to fight climate change and cancer; quite a package of media-friendly topics to cover.
The pledges from the AI startups presumably imply a juicy carrot in the form of gentle regulatory treatment and openness to tech firm lobbying.
The White House has however also vaguely hinted at a stick: One administration official recently stated, “The White House is actively developing executive action to govern the use of AI for the president’s consideration. This is a high priority for the president.”
Biden himself stated after the meeting that “We must be clear-eyed and vigilant about the threats emerging technologies can pose.” He also mentioned a “fundamental obligation” by these companies to keep their products safe (whatever that ends up meaning).
The U.S. President also added that “Social media has shown us the harm that powerful technology can do without the right safeguards in place. These commitments are a promising step, but we have a lot more work to do together.”
This vaguely phrased addition could easily open the door to collusion between tech firms and the government in suppressing certain types of digital free expression under the guise of fighting disinformation.
The AI firms’ motives for this meeting also have a clearly self-serving side.
For one thing, there was almost no mention of how these companies have harvested enormous quantities of others’ copyrighted visuals and text to build their profitable generative models.
Nor was there much discussion of their potential to damage certain job markets and other areas of artistic creativity.
Furthermore, the major AI firms have themselves called for AI regulation by the government. They’re supposedly requesting this in the name of protecting against AI dangers, but very likely also for the sake of stifling future competition from newer startups without deep lobbying pockets.
In other words, in exchange for fairly vague promises about the areas of their technology that are the least difficult to monitor, the major AI firms have for now sidestepped many heavier potential problems behind AI and its enormous data harvesting.