Danny Province's position:

Artificial Intelligence

The economic urgency of AI

The US invested over $100 billion in AI this year, roughly 40% of global AI investment, and that spending accounted for 92% of US economic growth for the year. I'll credit Kyla Scanlon with the phrase "America is now one giant bet on AI" and plenty more analysis of why here. If AI does not pay off on the current investments, the economy will experience a downturn similar to the dot-com bust of 2000-2001: a setback, not a disaster. But if we keep investing faster than revenue grows and inflate the bubble further, the potential economic harm keeps compounding. Revenue for AI companies already needs to increase 100x over the next 5 years to justify the investment, and that gap is still widening.

In the recent past we simply waited for bubbles to pop before intervening. We don't have to do that; we can take precautionary measures right now. The academic literature suggests broad, sweeping tools like interest rates or tax rates that target investors rather than the industry they are investing in. Instead, we should require the industry itself to be prepared for potential losses. We already do this with banks through capital requirements, because of the structural risk banks can pose to the economy, as 2008 demonstrated. If AI poses a structural risk to the US economy, we should impose capital requirements on the excess debt AI companies are accumulating as well: make them pay a percentage of their excess debt-financed investment into an insurance fund to be used in the event of a crisis. As these companies deleverage, they get the insurance money back.

Right now there is bipartisan support for the unregulated state of the AI industry: Republicans don't support regulations on anything, and California (Silicon Valley) and New York (Wall Street) hold outsized sway over the Democratic Party. The push for AI regulation is going to come from the left in the other 48 states that are not riding the AI financial rocket until it explodes. So it's important for Democrats running in Midwest seats to lead that charge.

The bad use case of AI is to obscure ownership and responsibility

The common theme in how AI has been abused across our society is to create something ownerless when ownership gets in the way of what people want to do. For example, AI models are trained by copying the articles and artwork of writers and artists without payment or attribution. But there are far worse abuses that have not become well known to the American public. These are the areas AI regulation should address.

  • AI used in contexts with a history of persistent racial or gender discrimination, like hiring decisions or parole decisions. Because the human data the AI learns from carries these biases, the AI reproduces the same patterns even when it's supposedly programmed not to consider race. In one instance, an AI inferred race from subjects' addresses and imposed a racial bias anyway. Yet when the lawsuits come, the business or local government and the AI company each try to pin the fault on the other. No statute says who bears responsibility, and the Trump administration issued an executive order not to enforce the relevant statutes on the basis of disparate impact. Legislation is needed to codify this responsibility.

  • During the stop-killer-robots debate over an international ban on AI-based weapons, the United States government led the opposition to such a treaty. The US argued that AI weapons pose no extra risk compared to traditional weapons because there was no use case for relying on AI targeting alone, without human oversight. In other words, America thought no one would be crazy enough to let AI decide who to kill instead of people deciding. But Israel did exactly that with Project Lavender. It let AI select kill targets with practically no human intervention, so that no individual soldier would be guilty of the widespread targeting of civilians. Making it so no one bears responsibility for the targeting of civilians in Gaza is the modern-day equivalent of the Nazi gas chambers, built when German soldiers could no longer stomach the mass shootings. America bears a terrible moral stain for letting that happen. We should reopen talks on banning fully autonomous weapons.

  • Young people have begun using AI for pornographic purposes we should not allow: having AI use the likeness of people who never agreed to appear in pornography. This happens to celebrities, but even more damagingly to friends and classmates at school. Kids don't yet understand that people own not just their bodies but the right to control access to their bodies and the use of their likeness. AI lets them bypass that ownership and consent to create something most victims would be horrified to discover. Using AI to recreate a real person's likeness in this context should simply be banned.

  • Conservatives are now using authorless AI videos to create fake political content, for example the racist TikToks about SNAP. Far-right parties across the globe now use this communication strategy (known as lying) with regularity. Meanwhile, fake-news purveyors have far more difficulty selling slop to the left. People in the US have been punished for making deepfakes of political figures themselves, but no one has been stopped from creating deepfakes of random people to serve as villains for the outrage machine. These are effectively scam artists, and they rely on anonymity in producing these fakes to avoid accountability. Since we are already requiring age-gating and ID verification of AI users because AI is being opened up to pornography, we should also hold people legally liable for this kind of fake political content and develop a universal watermark for all AI video. Professionals could register the AI-generated videos they create, such as AI footage in movies, instead of using the watermark.

AI in Education

This is addressed on my Education page.