Civitai routinely tags bounties requesting deepfakes and lists a way for the person featured in the content to manually request its takedown. This approach implies that Civitai has a fairly successful way of identifying which bounties are for deepfakes, but it's still leaving moderation to the general public rather than carrying it out proactively.
A company's legal liability for what its users do isn't entirely clear. Generally, tech companies have broad legal protections against such liability for their content under Section 230 of the Communications Decency Act, but those protections aren't limitless. For example, "you can't knowingly facilitate illegal transactions on your website," says Ryan Calo, a professor specializing in technology and AI at the University of Washington's law school. (Calo wasn't involved in this new study.)
Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to guard against the creation and spread of AI-generated child sexual abuse material. This move followed a 2023 report from the Stanford Internet Observatory, which found that the vast majority of AI models named in child sexual abuse communities were Stable Diffusion–based models "predominantly obtained via Civitai."
But adult deepfakes haven't gotten the same level of attention from content platforms or the venture capital firms that fund them. "They are not afraid enough of it. They're overly tolerant of it," Calo says. "Neither law enforcement nor civil courts adequately protect against it. It's night and day."
Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the primary place where people find and share AI models for their own individual purposes. "We've aimed to make this space that's been very, I guess, niche and engineering-heavy more and more approachable to more and more people," he said.
Civitai is not the only company with a deepfake problem in a16z's investment portfolio; in February, MIT Technology Review first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered "hot photos," and in some cases described age-of-consent laws as "arbitrary" and "meant to be broken."
