Cathy Gellis's Techdirt Profile

Posted on Techdirt - 8 April 2024 @ 01:35pm

Meta’s Dumb Deletion Of Links To Journalism Shows Why Attempts To Tax Platforms That Link To Journalism Are Even Dumber

As Mike has already chronicled, Meta has managed to alienate itself from reasonable people by first suppressing links to an independent Kansas journalism outlet, then links to others reporting on the suppression, and eventually entire accounts discussing the episode. I tend to think that what happened was an error in a system with some design flaws, an error allowed to snowball to enormous effect without adequate checks, rather than a deliberate choice by Meta. At the same time, large platform providers like Meta do need powerful systems in order to take any sort of meaningful stand against actual abuse. And even if the suppression was a conscious editorial decision by Meta rather than an error, it would have been, and should have been, a perfectly legal choice for it to make, albeit a really stupid one.

But it sort of doesn’t matter whether the suppression was deliberate or accidental: Meta suppressed voices, including voices practicing journalism, and, as a result, public discourse took a hit. Which is what prompts this post, because with things like the JCPA and link taxes and other such programs proposed in the US and abroad, what regulators are demanding is that this sort of thing happen all the time. These are laws that are all designed to force platforms to suppress links to journalistic expression because they essentially impose a penalty when the platforms do not.

Now, that may not be what regulators have in mind. They simply want platforms to share their money with any linked-to sites. But forcing anyone to share their money when they do something is a pretty significant deterrent against doing that something. And here that something is having platforms be vibrant forums for sharing links to journalistic voices. The outrage resulting from this particular link suppression episode is the outrage that results when platforms are NOT vibrant forums for sharing links to journalistic voices. We obviously want them to continue to be those forums, so how could we possibly support laws that would deter them from providing us that service?

We have argued over and over again that these laws will only harm something we actually want social media to be good at, and in particular harm the independent journalistic voices that depend on social media being good at it in order to be widely heard. And here is evidence for why we are right: when Meta stopped being good at it, those voices got hurt. It is therefore dumb for anyone to support any sort of law that would only make platforms hurt those voices more.

Posted on Techdirt - 28 February 2024 @ 10:57am

Alito Wants To Weigh YouTube, And The Rest Of SCOTUS Wants To Make An Easy Case Hard

As Mike already noted, the weirdest moment of the nearly four-hour, double-case hearing at the Supreme Court on Monday in the NetChoice and CCIA legal challenges of Florida’s and Texas’s social media laws came maybe two thirds into the oral argument, when Justice Alito openly wondered, “If YouTube were a newspaper, how much would it weigh?” I was in the courtroom when he said it, but I have no more insight into what analytical issue he was wrestling with that could have prompted this inquiry to counsel than anyone who listened to the hearing remotely or read it in the transcript.

It should therefore not come as much of a shock to suggest that Justice Alito seemed to have had the least amount of sympathy for, or understanding of, NetChoice’s and CCIA’s arguments. It might, however, be a surprise that Justice Kavanaugh had the most. Perhaps not, as Mike observed, given that he was the author of the Halleck decision, where he displayed some significant interest in protective First Amendment doctrine. On the other hand, the politics of this case do not follow a traditional red-blue breakdown. If they did, one might expect a conservative justice to side with conservative government officials. But, as we noted with the 303 Creative case, the principle of First Amendment protection transcends politics. A lot of people read that case as conservative justices favoring conservative views because they preferred those views. But the reality is that the constitutional rule the Court announced there benefits everyone, no matter what views they have to express, because it tells the government that it doesn’t get to trump them when it doesn’t like them. Which is basically what these cases are about: governments trying to trump expression when they don’t like the views expressed.

And Justice Kavanaugh in particular appeared most able to see that this was the issue at the heart of the case. The argument that the states kept making, that they passed these laws in response to “censorship,” fell flat before him, because over and over he kept reminding everyone that “censorship” requires state action. Which destroyed any justification Florida and Texas claimed in defense of their laws. Ultimately Florida and Texas were complaining about the expressive decisions of a private actor, and using their laws to take away the ability of this private actor to continue to make them. In other words, it was their state action that was now determining what expression could or could not appear online, which is the very essence of what is complained about when one complains of censorship, and what the First Amendment most definitely forbids.

The big question raised by these cases is whether the Court would recognize that it does offend a First Amendment right of the platforms when governments try to take away their ability to make those choices. Would the Court see that, just as it recognized that newspapers had the right to choose what op-eds to run, which no law could interfere with, so, too, do the platforms have the freedom to choose what user expression to either facilitate or moderate away?

Or at least it should have been the big question. Because it did seem that there were at least five justices who understood the implications of platforms not having that freedom, and who found the states’ arguments referencing the Court’s earlier rulings in Pruneyard and Turner – where the Court had limited an intermediary’s expressive discretion – to be inapplicable analogies. But it was not quite clear that NetChoice and CCIA will be able to walk away with the win they should get, with these laws remaining enjoined, because there seemed to be at least two issues bogging down the Court’s overall thinking.

One was that the procedural posture of the case seemed to displease them. The justices did not seem to like that it was a “facial challenge,” as opposed to an “as applied challenge.” With the latter, the plaintiffs would complain about how a law hurt them, whereas with the former the argument is that the law is a fundamentally unconstitutional effort that needs to be stopped before it can hurt anyone. The problem with this sort of challenge, though, is that a law might be unconstitutional in some ways it would be applied, but fine in other contexts, and the facial challenge paints the whole thing with the same broad “unconstitutional” brush, which might not be a fair assessment of the whole law.

Of course, let’s remember what was going on when these particular laws were passed. Governors DeSantis of Florida and Abbott of Texas were very unhappy that some speakers and speech had been removed from certain large social media sites. These laws both seemed to be very transparent efforts to punish those sites for having made those expressive moderation choices and make sure they could not make them again. In fact, remember that Florida’s law originally had the “theme park” exemption, where, back when DeSantis still liked Disney, he made sure that the law wouldn’t reach any site owned by Disney and impinge on its moderation choices. And then, when he got mad at Disney, he got the law changed to make sure they were subject to it too.

So when presented with these rather baldfaced attempts to interfere with platforms’ First Amendment rights to moderate their sites as they saw fit, NetChoice and CCIA did not hesitate to sue on behalf of the platforms that would be affected. And as part of the lawsuit they asked for the laws to be enjoined, because one should not have to wait to be injured by an unconstitutional law before being able to show the courts that it would cause an unconstitutional injury. Instead that injury should be headed off at the pass, which is what preliminary injunctions are for. Which doesn’t mean that if there is a redeemable part of the law it can’t later be upheld, but it does mean that when an injury is shown to be likely we keep the status quo in place, with no injury risked, while we fully explore the question of just how unconstitutional the law is.

Furthermore, as NetChoice and CCIA pointed out, it wasn’t like the states defended their laws by saying they also had constitutional applications. Both Texas and Florida overtly wanted to do what NetChoice and CCIA feared: usurp platforms’ editorial discretion. Either the First Amendment lets Florida and Texas do this, or it doesn’t, and that’s why both parties centered that question in their litigation strategy, which made it very strange for the Court to now second-guess. NetChoice further noted that when it comes to a law that violates the First Amendment, it would also be a problem if facial challenges could be stymied by lawmakers simply slipping in a provision that might sometimes be legitimate, because it would mean that lawmakers could get away with causing an unconstitutional injury so long as that pretextual provision made the law untouchable by the courts until the injury had accrued.

And then there was a second major point of confusion that arose for the justices on Monday, Justice Gorsuch in particular, who wondered what the effect would be on Section 230 if they ruled in NetChoice and CCIA’s favor. The answer: there is no effect, but it betrays a pretty significant misunderstanding of Section 230 to think there would be.

What seems to confuse is that when it comes to Section 230 platforms basically argue, “It is not our speech at issue,” while in the context of these cases the platforms are arguing that it is their speech at issue. How could both be true? Both can be true because when it comes to online speech there is more than one expressive act at issue. One of the major ways Section 230 operates is to make clear that the expressive message of the user is the user’s alone, and if there’s an issue with that message responsibility for it lies exclusively with the user who expressed it. Which is why platforms argue, when raising a Section 230 defense, that it is not their speech. Whereas what is at issue in the litigation here is the separate message platforms convey when they allow users to use their sites to spread their messages, or otherwise deny certain speakers or speech. Allowing (or denying) speech is how platforms express their own, separate message about what speech they welcome. But the speech they welcome is still not their speech; it remains that of the user.

I wish this point had been emphasized more during the argument, but NetChoice/CCIA did drive home the separate point that Section 230 is obviously not in conflict with platforms having First Amendment rights preserving editorial discretion, because part of its protection is designed to protect platforms when they exercise that discretion. The other major way Section 230 operates is to insulate platforms from liability arising from the acts they take to disallow speech. Congress wanted platforms to take steps to remove objectionable content, NetChoice/CCIA reminded the Court, and wrote the statute to make sure they could. So at minimum, even if platforms did not have the constitutional right to moderate content, Section 230 would still give them the statutory right, and preempt states like Florida and Texas from messing with that protection, as these laws do. But in reality platforms have both rights: the First Amendment right to do this moderation and the statutory right to make sure that no one can take issue with how they’ve done so. These rights complement each other rather than conflict, and hopefully the Court will not be distracted by misunderstandings that might suggest otherwise.

Posted on Techdirt - 16 February 2024 @ 10:54am

The Copia Institute Tells The Ninth Circuit That The District Court Got It Basically Right Enjoining California’s Age Design Law

States keep trying to make the Internet a teenager-free zone. Which means that lawsuits keep needing to be filed because these laws are ridiculously unconstitutional. And courts are noticing: just this week a court enjoined the law in Ohio, and a different court had already enjoined the California AB 2273 AADC law a few months ago.

Unhappy at having its unconstitutional law put on ice, California appealed the injunction to the Ninth Circuit, and this week the Copia Institute filed an amicus brief urging the appeals court to uphold it.

There’s a lot wrong with these bills, not the least of which is how they offend kids’ own First Amendment rights. But in our brief we talked about how the law also offended our own speech interests. Publishing on the web really shouldn’t be more involved than setting up a website and posting content, even if you want to do what Techdirt does and also support reader discussion in the comments. But this law sets up a number of obstacles that an expressive entity like Techdirt would have to overcome before it could speak. If it didn’t, it could potentially be liable if it spoke and teenagers were somehow harmed by exposure to the ideas (this is a mild paraphrase of the statutory text, but only barely – the law really is that dumb).

In particular, it would require investment in technology – and dubious technology that hoovers up significant amounts of personal information – to make sure Techdirt knows exactly how old its readers are so that it can somehow quarantine the “harmful” ideas. But that sort of verification inherently requires identifying every reader, which is something that Techdirt currently doesn’t do and doesn’t want to do. Occasionally it’s necessary to do some light identification, like to process payments, but ordinarily readers can read, and even participate in the comments, without having to identify themselves, because allowing them to participate anonymously is most consistent with Techdirt’s expressive interests. The Copia Institute has even filed amicus briefs in courts before, defending the right to speak (and read) anonymously. But this law would put an end to anonymity when it comes to Techdirt’s readership because it would force it to verify everyone’s age (after all, it’s not just teenagers this law would affect; the grown-ups who could still be readers would still have to show that they are grown-ups).

So in this brief we talked about how the Copia Institute’s speech is burdened, which is a sign that the bill is unconstitutional. We also discussed with the courts how the focus of the constitutional inquiry needs to be on those burdens, not on whatever non-expressive pretext legislatures wrapped their awful bills up in. The California bill was ostensibly a “privacy” bill and the Ohio one focused on minors entering contracts, but those descriptions were really just for show. Where the rubber hit the road legislatively all these bills were really about the government trying to control what expression can appear online.

Which is why we also told the Ninth Circuit to not just uphold the injunction but make it even stronger by pointing out how strict scrutiny applied. The district court found that the law was unconstitutional under the lesser intermediate scrutiny standard, which in a way is good, because if the law can’t even clear that lower hurdle it’s a sign that it’s really, really bad. But we are concerned that the reason it applied the lesser standard was that the law targeted sites that make money, and that cannot be a reason for the First Amendment ever to be found less protective of free expression than it is supposed to be.

Posted on Techdirt - 10 January 2024 @ 11:56am

Wherein The Copia Institute Asks The Second Circuit To Stand Up For Fair Use, The Internet Archive, And Why We Bother To Have Copyright Law At All

December was not just busy with Supreme Court briefs. The Copia Institute also joined many others, including copyright scholars and public interest organizations, in filing an amicus brief to support the Internet Archive’s appeal at the Second Circuit, seeking to overturn the troubling ruling holding its Open Library to be copyright infringement.

We’ve written about this case several times before, including about the original decision. At issue is how the Internet Archive has figured out how to be a library in a way where geography doesn’t matter. Instead of lending out physical copies of books it lends out scanned copies, which means it doesn’t matter how far away a reader is from a book – they can still get to read it. Just like a physical library, the Internet Archive lends out books one at a time, even in digital form, except during a brief period at the beginning of the pandemic when the exigency of the sudden lockdown, which isolated people from the physical books they otherwise were entitled to access, appeared to justify allowing unlimited loans in order to functionally restore the access readers otherwise would have had.

Publishers whose books were being scanned and lent, however, took issue with this lending and so sued, not just over the brief period of unlimited lending but over all of the Internet Archive’s digital lending, arguing that only they were entitled to get digital copies of books into readers’ hands by virtue of their copyrights. The judge at the district court agreed and thus found the Internet Archive to be infringing, even though such a finding required a fair use analysis so truncated as to effectively obviate the doctrine and the public interests, as well as constitutional interests, it is designed to serve.

The Internet Archive’s own brief does a good job explaining how the district court got the fair use analysis wrong. Our amicus brief discussed the bigger picture of what it would mean if fair use couldn’t apply here, including constitutionally. Once again we reminded the courts that copyright law is subject to two important constitutional limitations.

First, that copyright law promote the progress of science and the useful arts. Congress is only constitutionally entitled to legislate in this area when the legislation it produces meets that goal. Legislation that does not meet this goal, or, worse, undermines it, is beyond the scope of its authority to pass and thus unconstitutional. But we weren’t arguing that copyright law was per se unconstitutional on this basis – after all, the statute does include the doctrine of fair use to help ensure that this legislative goal is met. Instead we argued that the courts had to give that part of the statute meaning, or else they would be the ones rendering the statute unconstitutional by interpreting it in a way that did not let it have that knowledge-enhancing effect.

Second, Congress is also limited in its legislative abilities by the First Amendment. Congress shall make no law that interferes, for instance, with freedom of expression. And, as we’ve noted a lot lately in our comments to the Copyright Office about AI, the freedom of expression inherently includes the right to read. So for copyright law to be constitutional it also can’t interfere with that right. Here the district court’s decision would interfere with it directly, effectively allowing copyright law to stand between books and the readers entitled to read them by privileging copyright owners with a preclusive power the statute does not actually give them – nor could give them, given these constitutional limitations constraining how Congress could write its statute.

Finally we argued that these concerns were not just academic. If the district court’s decision is upheld, fewer people will get to read books – even books that the Internet Archive lawfully owns, and that readers would otherwise be entitled to read (and often not otherwise get to read). Keeping people from reading seems like the last thing copyright law should be doing, especially when the whole point of it is to make sure the public actually has things to read. Hopefully the Second Circuit will recognize how destructively counterproductive the district court’s decision was and reverse it.

Posted on Techdirt - 9 January 2024 @ 10:44am

Because The Fifth Circuit Again Did Something Ridiculous, The Copia Institute Filed Yet Another Amicus Brief At SCOTUS

It was a busy December for the Copia Institute (and me), even just at the U.S. Supreme Court. In addition to filing (along with Bluesky and Mastodon admin Chris Riley) an amicus brief supporting NetChoice and CCIA in their combined cases, we also filed another one challenging the bizarre injunction imposed by the Fifth Circuit preventing the Biden Administration from communicating with technology companies.

Unlike in the NetChoice cases, where we supported their position, in this case, now captioned as Murthy v. Missouri, we filed in support of neither party. As we noted in our brief, we agree with the Biden Administration that the injunction is invalid and needs to be dissolved. But the interests that the Administration is seeking to vindicate – its own – are not the same as the interests we were trying to advance – namely everyone else’s, which this injunction threatens, even though no platform was ever a party to the litigation. It is also theoretically possible that the executive branch of the government could at some point exceed its constitutional bounds in pressuring how others exercise their expressive rights. We disagree with the plaintiffs in this case that the executive branch so overstepped here, but would agree that if it did happen there should indeed be some remedy. But we filed this brief because no suitable remedy could ever look anything like what the Fifth Circuit came up with. Far from protecting anyone’s First Amendment rights, the Fifth Circuit instead itself became the state actor attacking them.

This case is separate from the NetChoice cases, but the issues raised in all of them are similar. The NetChoice cases address whether those who run Internet platforms have their own First Amendment rights in how they run them. We argued in those cases, and have argued all along, that the answer must be yes, and that just like a newspaper can choose what articles to run a platform operator must be free to choose what user expression to facilitate or moderate away. And just because some platforms are run by entire companies shouldn’t change that analysis; the same freedom that someone like Chris Riley as an individual has to run his platform as he personally wishes shouldn’t be extinguished just because lots of individuals have gotten together to decide how to run their platform together.

But that expressive freedom is violated by the Fifth Circuit’s injunction in at least two big ways. One way is similar to how the states of Florida and Texas have tried to attack that editorial freedom at issue in the NetChoice cases. In all these cases, how platforms operate their sites is ending up subject to government control. In the NetChoice cases it is by the states themselves, seeking to override the platforms’ discretion via statutes, whereas in this case it is by the courts, through the use of the injunction that inherently shapes how platforms can do their moderation. The effect in all these cases is the same: platforms are no longer free to run their sites as they see fit; instead their choices are being constrained by government interference.

Because here the upshot of the injunction is that platforms can no longer make moderation decisions if those decisions happen to agree with anything ever expressed to them by someone in the executive branch of the federal government. Platforms must therefore either make their decisions in an information vacuum, without any input from agencies that may have expertise in the subject the platforms might have wanted to consult, or, in the wake of any consultation, they can only choose to do the opposite of what the agency might have suggested. Per the Fifth Circuit, any consultation would otherwise inherently taint the decision and make it something the platforms can no longer freely choose to act in accordance with.

But the injunction doesn’t just violate platforms’ expressive rights to operate their sites as they see fit; it also chills their petitioning rights. The petitioning right exists in large part because democracy depends on the people being able to communicate their will to those who represent them. But this injunction interferes with the ability of the public to talk to their government by inhibiting government officials from engaging in those conversations.

And they are so inhibited even if the platforms want to have those conversations. As we pointed out in the brief, the Fifth Circuit had an infantilizing view of platforms, as if it could not imagine any reason that a platform would have for engaging with executive branch agency expertise except in order to receive instructions for how to moderate in accordance with executive branch wishes. It could not conceive that a platform might want to, say, inquire with an agency with expertise in vaccines as it sought to develop a good moderation policy on medical disinformation, or one with expertise in election security when trying to develop a moderation policy addressing disinformation in that area. In the Fifth Circuit’s view all such conversations were inherently corrupt and for no other purpose than to immediately conscript the platform to do the executive agency’s bidding. And so, thanks to the injunction, platforms no longer get to have those conversations, no matter how much they would want to have them.

But if all the above wasn’t bad enough, there was another problem with the Fifth Circuit decision that we highlighted in our brief, relating to the plaintiffs and the court finding standing to even entertain their claims, let alone grant an injunction based on them. This case was weird because it was brought by an unholy alliance of both private plaintiffs and state plaintiffs. As explained above, the private plaintiffs should not have been entitled to injunctive relief by the courts: even if their rights had been violated – and as we explained in the brief, they had not been – the court shouldn’t be able to remedy a rights violation by violating the rights of someone else. But the court granting the state plaintiffs, Louisiana and Missouri, standing to bring their claims against the platforms represented its own constitutional horror. After all, as states, these plaintiffs are themselves state actors. And these state actors wanted to be able to force platforms to exercise their expressive rights as they preferred. Unlike Texas and Florida in the NetChoice cases, which tried to do it themselves, here Louisiana and Missouri tried to use the courts to do it. And, bizarrely, the courts let them.

Worse, by crediting the idea that these states had their own First Amendment rights (as states!) to be vindicated in this litigation, the Fifth Circuit validated the proposition that the states were somehow entitled to co-opt platforms to advance their own speech interests. But such co-opting is not what the First Amendment allows. As we reminded the Supreme Court, its own decision in 303 Creative made clear that states did not have the power to force platforms to favor certain speech. But by allowing Missouri and Louisiana to advance claims challenging how platforms exercised their speech rights, the Fifth Circuit handed these states the very power the Supreme Court had reminded them just last year that they did not have.

Posted on Techdirt - 8 December 2023 @ 01:45pm

The Copia Institute Tells The Copyright Office Again That Copyright Law Has No Business Obstructing AI Training

A little over a month ago we told the Copyright Office in a comment that there was no role for copyright law to play when it comes to training AI systems. In fact, on the whole there’s little for copyright law to do to address the externalities of AI at all. No matter how one might feel about some of AI’s more dubious applications, copyright law is no remedy. Instead, as we reminded in this follow-up reply comment, trying to use copyright to obstruct development of the technology creates its own harms, especially when applied to the training aspect.

One of those harms, as we reiterated here, is that it impinges on the First Amendment right to read that human intelligence needs to have protected, and that right must inherently include the right to use technological tools to do that “reading,” or consumption in general, of copyrighted works. After all, we need record players to play records – it would do no one any good if their right to listen to one stopped short of being able to use the tool needed to do it. We also pointed out that this First Amendment right does not diminish even if people consume a lot of media (we don’t, for instance, punish voracious readers for reading more than others) or consume it at speed (copyright law does not give anyone the right to forbid listening to an LP at 45 rpm, or watching a movie on fast forward). So if we were to let copyright law stand in the way of using software to quickly read a lot of material, it would represent a deviation from how copyright law has up to now operated, and one that would undermine the rights to consume works that we’ve so far been able to enjoy.

Which is why we also pointed out that using copyright to deter AI training distorted copyright law itself, a distortion that would be felt in other contexts where copyright law legitimately applies. And we highlighted a disturbing trend emerging in copyright law from other quarters as well, this idea that whether a use of a work is legitimate somehow depends on whether the copyright holder approves of it. Copyright law was not intended, or written, to give copyright owners an implicit veto over any or all uses of works – the power of a copyright is limited to what its exclusive rights allow control over and what fair use does not otherwise excuse.

A variant of this emerging trend also getting undue oxygen is the idea that profiting from the use of a copyrighted work one got to use for free is somehow inherently objectionable and therefore ripe for the copyright holder to veto. But, again, it would represent a significant change for copyright law to work that way. Copyright holders are not guaranteed every penny that could potentially result from the use of a copyrighted work, and it has been independently problematic when courts have found otherwise.

Furthermore, to the extent that this later profiting may represent an actual problem in the AI space, which is far from certain, a better solution is to instead keep copyright law away from AI outputs as well. Some of the objection to AI makers later profiting seems to be based on the concern that certain enterprises might use works for free to develop their systems and then lock up the outputs with their own copyrights. But it isn’t necessary for copyright to apply to everything that is ever created, and certainly not to everything created by an artificial intelligence, so we should also look hard at whether it is even appropriate for copyright to apply to AI outputs at all. Not everything needs to be owned; having works immediately enter the public domain after their creation is an option, and a good one that vindicates copyright’s goals of promoting the exchange of knowledge.

Which brings us back to an earlier point to echo again now, that using copyright law as a means of constraining AI is also an ineffective way of addressing any of its potential harms. If, for instance, AI is used in hiring decisions and leads to discriminatory results, such is not a harm recognized by copyright law, and copyright law is not designed to address it. In fact, trying to use copyright law to fix it will actually be counterproductive: bias is exacerbated when the training data is too limited, and limiting it further will only make worse the problem we’re trying to address.

Posted on Techdirt - 30 November 2023 @ 01:34pm

An Appeals Court Broke Media Advertising, So The Copia Institute Asked The California Supreme Court To Fix It

A few months ago a California court of appeals issued a really terrible decision in Liapes v. Facebook. Liapes, a Facebook user, was unhappy that the ads delivered to her correlated with some of her characteristics, like her age. As a result there were certain ads, like one provided by an insurer offering a particular policy for men of a different age, that didn’t get delivered to her.

Of course, it didn’t get delivered to her because the advertiser likely had little interest in spending money to place an ad to reach a customer who would not and could not turn into a sale, since she would not have been eligible for the promotion. And historically advertisers in all forms of media – newspapers, television, radio, etc. – have preferred to spend their marketing budgets on media likely to reach the same sorts of people as would purchase their products and services. Which is why, as we explained to the California Supreme Court, one tends to see different ads in Seventeen Magazine than, say, AARP’s.

Because we also tend to see different expression in each one, as the publishing company chooses what content to deliver to which people. There’s no law that says media companies have to deliver content that would appeal to all people in all media channels, nor could there be constitutionally, because those choices of what expression to deliver to whom are protected by the First Amendment.

Or at least they were up until the court of appeals got its hands on the lawsuit Liapes brought against Facebook, arguing that letting advertisers choose which users would get which ads based on characteristics like age violated the state’s Unruh Act. The Unruh Act basically prevents a company from unlawfully discriminating against people based on protected characteristics – if it offers a product or service to one customer it can’t refuse to offer it to another because of things like their age.

But Facebook isn’t a business that sells tangible products or non-expressive services; it is a media business, just like TV stations are, newspapers are, magazine publishers are, etc. Like these other businesses, it is in the business of delivering expression to audiences. True, it is primarily in the business of delivering other users’ expression rather than its own, and it is more likely to have the ability to deliver editorially-tailored expression on an individual level, but then again, increasingly so can traditional media. In any case, there is nothing about the First Amendment that keys it only to the characteristics of traditional media businesses producing media for the masses. After all, they themselves often choose which demographic to target with their own media. Conde Nast, for instance, publishes both GQ and Vogue, as well as Teen Vogue, and it is surely using demographics of the targeted audience to decide what expression to provide them in each publication.

But the upshot of the appeals court decision, finding Unruh Act liability when a media business uses demographic information to target an audience with certain content (including advertising content), is that either no media business will be able to make any sort of editorial decision based on the demographic characteristics of its intended audience – and as a result, there goes the American advertising model that has sustained American media businesses for generations – or, even if those businesses somehow are left beyond the Unruh Act’s reach, it will introduce an artificial exception to the First Amendment to carve out a business like Facebook because… well, just because. There really is no sound rationale for treating a company like Meta differently than any other media business, but if it could be uniquely targeted by the Unruh Act, unlike its more traditional media brethren, the decision would still gravely impact every Internet business, especially those that monetize the expression they provide with ads.

Which would be particularly troubling because not only are businesses like Facebook supposed to be protected by the First Amendment but they are supposed to be EVEN MORE PROTECTED by Section 230, which insulates them from liability arising from the expression others provide, as well as from the moderation decisions that platforms like Facebook make to choose what expression to serve audiences. The court of appeals decision impinges upon both these forms of protection, and in contravention of Section 230’s pre-emption provision, which prevents states from messing with this basic statutory scheme with their own laws, of which the Unruh Act is one. After all, if there was anything actually wrong with the ad, it was the advertiser who produced it who imbued it with its wrongful quality, not Facebook. And the decision to serve it or not is an editorially-protected moderation decision, which Facebook also should have been entitled to make without liability, per Section 230.

In sum, this California appeals court decision stands to make an enormous mess of at least online businesses, if not every media business, and not even just those who take advertising, because simply weakening Section 230 and the First Amendment itself will lead to its own dire consequences. And so the Copia Institute filed this amicus letter supporting Facebook’s petition for further review by the California Supreme Court in order to clean up this looming mess.

Posted on Techdirt - 27 November 2023 @ 01:30pm

Dear Marin County Board of Supervisors: Reject The Sheriff’s Proposal To Install License Plate Cameras In The County

With almost zero public notice, the Board of Supervisors of Marin County, California (just to the north of San Francisco over the Golden Gate Bridge) is on the verge of approving tomorrow a demand by the county sheriff’s department to install license plate cameras throughout the county. As a county resident, I object. My comment submitted to the board is below.

Dear Marin County Supervisors:

In the last 30 days I have entered the Gateway Shopping Center in Marin City on at least 11/6, 11/21, and 11/24 to get groceries, dine, and purchase other household goods.

None of this information is your business, and it is certainly not the business of the Marin County Sheriff’s Department. But if you authorize their proposal to allow automatic license plate reader cameras to be installed throughout Marin County this location information is exactly the sort they will be able to know about each and every person driving in Marin County, be they residents or their guests.

I have also gone to Strawberry on at least 10/31, 11/7, 11/8, 11/10, 11/15, 11/16, and 11/21, to go grocery shopping, dine, and seek medical care.

As a resident in unincorporated Marin, these places are in my neighborhood and where I need to go to shop, dine, and do the business life requires. It is also the activity businesses in Marin depend on people doing. But if you let the Marin County Sheriff Department hang these cameras, it will be impossible to go to any of these places without them knowing.

I have also regularly driven on Highway 1 to enter Mill Valley. I do not have complete records of these travels, but if you let the Sheriff’s Department hang the cameras where they propose, they will.

And it is not just residents of unincorporated Marin who will have the details of their personal life documented by the police; it will be every single person with any reason to be here in the county, including every lawful one. The proposal preys on fear, such as with the included “crime heat map.” But it is a “heat map” that happens to directly correlate to where people live and conduct business in the county and thus happens to reflect where most activity occurs, including lawful activity, which would all be caught by this camera dragnet too.

The sheriff further proposes to hang cameras on Sir Francis Drake, a major artery through Marin County, providing access to much of central Marin, including countless medical establishments in Greenbrae itself. Do you wish to also know about when I’ve visited doctors there? Soon the sheriff will be able to tell you.

None of this information is something the police are entitled to know. The privacy the United States Constitution affords us, to be secure in our papers and effects, restricts this sort of incursion into the public’s private lives absent probable cause that a crime has already been committed, so that people can be free to go about their lives, unchilled by the prospect of agents of the state knowing their business without any justification. The sheriff’s department alleges in its paperwork that county counsel has reviewed the proposal, but nothing submitted reflects any coherent practical or legal argument that it is constitutionally appropriate or possible for you to allow the sheriff’s department to invade every resident’s privacy as it proposes. In fact, all of the paperwork submitted is entirely self-serving and supplied by the very government agency that seeks to have this additional power over civilian lives. Nothing more neutral or independent has been provided to the board by any other state or county agency, or by any civil society organization, that could provide you with the information you need to recognize the immense cost of the proposal in forms other than purely financial.

Granted, I may have little to fear from the cameras the sheriff wants to install in the Oak Manor neighborhood, as I’m rarely there. But the people living in the neighborhood surely go out and about, so soon you will have information about their comings and goings.

However, the sheriff also proposes to have these cameras on the streets approaching the Marin County Civic Center, surrounding the heart of local county government with a moat of surveillance, which means that the sheriff will be able to track every single person who approaches the building for any reason, including to attend public hearings (such as this one), to petition their local government, to seek assistance from their local government for any reason a resident might need, or to register to vote. Personally I think it has been more than 30 days since my last visit to this famous Frank Lloyd Wright-designed building (which also contains a public library), but when I make my next visit, the sheriff will know.

The sheriff’s proposal says it is to help the department police against property crime. And no one likes crime. But crime is not the only harm the public can experience. The cameras themselves pose their own, and it is incumbent on this board to recognize how damaging the oversight the police are demanding over our lives itself is. The reason people worry about equity impact is that there is a very real harm done to the public when they cannot live lives free from police scrutiny. But that effect reaches everyone in the public, not just those the police have a known habit of unduly targeting. With these ubiquitous cameras, every single person in Marin County will have the details of their lives available for the police to scrutinize. No pallor can protect anyone from the harm that follows from having their lives recorded in police-controlled ledgers, because it is that recording itself that is a harm everyone must now incur.

It will be incurred by everyone traveling to central and western Marin on Lucas Valley Road. I last was there more than 30 days ago, on October 22, but the next time I try to attend a concert in Nicasio (or go biking, or go buy cheese) you will have record of it.

And for no good reason. The deterrence effect of these cameras that the police tout is overstated. License plate cameras do not magically prevent crime. Crime still happens. Sometimes serious crimes. But instead of prompting a hard look at how ineffective the cameras are, the lesson we’ve learned from the local towns that have already inflicted cameras on us is that the cameras’ inherent inability to prevent crime tends to just lead to calls for more cameras, because the police’s appetite to know the details of people’s lives is insatiable. They won’t stop here, asking for just these cameras. When crime inevitably happens they will want more: more cameras, in more places, and maybe even other tools that will help them know more about the private details of the lives of the people in this county. After all, if one invests in the fallacy that these cameras will help anything, then there is no limiting principle to think that more such tools won’t similarly be warranted, until there is no place anywhere in Marin where people can go about their lives without being watched by the government.

At least I won’t personally have to worry much about the cameras proposed for the Atherton area near Highway 37, because now that I’ve relocated to southern Marin I’m seldom there. But I used to be there often, and if you’d had the cameras hung then, you’d know.

And none of the hand-waving phrases contained within the proposal should convince you that there are no real concerns raised. For instance, it uses words like “encryption,” which is indeed important, but also not itself a magic solution for every problem, and which is useless as a defense of the public’s interests when the police still have the key to all the data. The proposal also includes language saying that the sheriff will own the data, as if that provides any sort of assurance for the public when it is their data that the police want to own. Don’t be fooled by the platitudes; instead recognize them as the smoke and mirrors being deployed to distract from the serious issues license plate cameras raise (and the profit motive of the vendor, who has no reason to care as long as they are paid).

We all will feel the effects, even for cameras hung in places where we visit less frequently. We are still a community, and people come to us as much as we go to them. For instance, I still have friends in the Novato area, and I’m sure you’d be interested to know that I visited one in the Indian Valley area where you plan to have cameras on 11/11, as well as 10/28.

This board should stand up for the rights of its constituents and vote to reject the sheriff’s proposal to install cameras anywhere in the county. But at minimum it should delay any action until there can be greater public input with ample notice. This proposal has been treated like a ministerial budgetary item few in the county would care about evaluating. Indeed the fiscal impact may be relatively minor, although if the sheriff’s department really believes it has money to burn on cameras perhaps that money could be reclaimed for the general budget and better spent on, say, a guidance counselor or other public resources that might actually deter criminality.

But its overall impact is enormous, affecting the lives of every single person in the county. That requires that everyone be able to carefully scrutinize what this board plans to do to them if it were to approve the proposal. Yet we can’t; this proposal is getting slipped past us without any meaningful effort to call attention to it commensurate with its impact. The “staff report” item in the agenda, which was written not by county staff but by the sheriff’s department, is itself dated as of tomorrow, which calls into question whether approval could even comply with SB 34, which requires the agency to provide adequate notice to the public before installing these cameras, since the report does not even legally exist until the day it appears on the agenda, after the deadline for written comments at 3:30pm on November 27.

The county is certainly capable of providing more conspicuous notice, as it does every time it wants the public to vote on one of its propositions. And for something this serious, similar advertising efforts are warranted. After all, if this board is inclined to allow the police so much oversight of our lives, then it should do everything possible to ensure that the public is able to provide meaningful oversight of its choices so that we can hold those who make them accountable.

I urge you to vote no on the proposal.

Posted on Techdirt - 3 November 2023 @ 01:45pm

Wherein The Copia Institute Tells The Copyright Office There’s No Place For Copyright Law In AI Training

These days everyone seems to be talking about AI, and the Copyright Office is no exception, although it may make sense for it to speak here because people keep trying to invoke copyright as a concept implicated by various aspects of AI, including, and perhaps especially, with regard to “training” AI systems. So the Copyright Office recently launched a study to get feedback on the role copyright has, or should be changed to have, in shaping any law that bears on AI, and earlier this week the Copia Institute filed an initial comment in that study.

In our comment we made several points, but the main one was that, at least when it comes to AI training, copyright law needs to butt out. It has no role to play now, nor could it constitutionally be changed to have one. And regardless of the legitimacy of any concerns about how AI may be used, allowing copyright to be an obstructing force preventing AI systems from being developed will only have damaging effects, not just deterring any benefits the innovation might be able to provide but also undermining the expressive freedoms we depend on.

In explaining our conclusion we first observed that one overarching problem poisoning any policy discussion on AI is that “artificial intelligence” is a terrible term that obscures what we are actually talking about. Not only do we tend to conflate the ways we develop it (or “train” it) with the ways we use it, each of which presents its own promises and potential perils, but in general we all too often regard it as some new form of powerful magic that can either miraculously solve all sorts of previously intractable problems or threaten the survival of humanity. “AI” can certainly inspire both naïve enthusiasm prone to deploying it in damaging ways and equally unfounded moral panics preventing it from being used beneficially. It also can prompt genuine concerns as well as genuine excitement. Any policy discussion addressing it must therefore be able to cut through the emotion and tease out exactly what aspect of AI we are talking about when we are addressing those effects. We cannot afford to take analytical shortcuts, especially if doing so would lead us to inject copyright into an area of policy where it does not belong and where its presence would instead cause its own harm.

Because AI is not in fact magic; in reality it is simply a sophisticated software tool that helps us process the information and ideas around us. And copyright law exists to make sure that there are information and ideas for the public to engage with. It does so by bestowing on the copyright owner certain exclusive rights in the hope that this exclusivity makes it economically viable for them to create the works containing those ideas and information. But these exclusive rights necessarily all focus on the creation and performance of their works. None of the rights limit how the public can then consume those works once they exist, because, indeed, the whole point of helping ensure they could exist is so that the public can consume them. Copyright law wouldn’t make sense, and probably wouldn’t be constitutional per the Progress Clause, if the way it worked constrained that consumption and thus the public’s engagement with those ideas and information.

It also would offend the First Amendment, because the right of free expression inherently includes what is often referred to as the right to read (or, more broadly, the right to receive information and ideas). Which is a big reason why book bans are so constitutionally odious: they explicitly and deliberately attack that right. But people don’t just have the right to consume information and ideas directly through their own eyes and ears. They have the right to use tools to help them do it, including technological ones. As we explained in our comment, the ability to use tools to receive and perceive created works is often integral to facilitating that consumption – after all, how could the public listen to a record without a record player, or consume digital media without a computer? No law could prevent the use of tools without seriously impinging upon the inherent right to consume the works entirely. The United States is also a signatory to the Marrakesh Treaty, which addresses the unique need of those with visual and audio impairments to use tools such as screen readers to help them consume the works they would otherwise be entitled to perceive. Of course, it is not only those with such impairments who may need to use such tools, and the right to format shift should allow anyone to use a screen reader to help them consume works if such tools will help them glean those ideas effectively.

What too often gets lost in the discussion of AI is that because we are not talking about some exceptional form of magic but rather just fancy software, AI training must be understood as simply being an extension of these same principles that allow the public to use tools, including software tools, to help them consume works. After all, if people can direct their screen reader to read one work, they should be able to direct their screen reader to read many works. Conversely, if they cannot use a tool to read many works, then it undermines their ability to use a tool to help them read any. Thus it is critically important that copyright law not interfere with AI training in order not to interfere with the public’s right to consume works as they currently should be able to do.

So at minimum such AI training needs to be considered a fair use, but the better practice is to recognize that there is no role for copyright to play when it comes to AI training at all. To say it is allowed as a fair use is to inflate the power of a copyright holder beyond what the statute or Constitution should allow, because it suggests that using tools to consume works could ever potentially be an infringement that only happens to be excused in this context. But copyright law is not supposed to give copyright owners such power over the consumption of their works, which we would then need fair use to temper. It should never apply to limit the consumption of works in any context, and we should not let concerns about AI generally, or its uses or outputs specifically, open the door to copyright law ever becoming an obstacle to that consumption.

Posted on Techdirt - 10 October 2023 @ 01:38pm

Apples And Oranges: The US Patent And Trademark Office Combined Copyright And Trademark And Nothing Good Will Come Of That

Last week I found myself assigned to speak on a “streaming piracy” panel that had gotten bolted onto an event otherwise focused on trademark counterfeiting, despite the latter being a completely separate legal issue connected with a completely separate legal doctrine.

It was all part of the USPTO’s roundtable on “future strategies in anti-counterfeiting and anti-piracy,” which was, by its very design, unlikely to be of much use in shedding light on the complexities surrounding either issue. Not just because it erroneously treated the two as two sides of the same coin, but even logistically, because to call each a “panel,” as they did, is to use the term loosely. While it is good that they (apparently?) accommodated everyone who applied to speak, having five or six people, plus a moderator, slated to speak in a session lasting less than an hour is not a good way to foster illuminating discussion. Furthermore, when there is only one person speaking for the public interest surrounded by five others speaking for rightsholders, letting the rightsholders bloviate for as long as they wanted while only policing the public interest speaker for time is not a great way to build a careful record, which is usually thought to be the goal of events like these, especially if the USPTO is planning to use what it learned as a basis to advocate for any policy changes on either front.

(And it’s even worse when the sole person policed is the single woman on a six-person panel. Obviously the time constraints meant that comments from everyone needed to be brief. And perhaps my experience was simply a byproduct of my window to speak occurring next to last. But I’m calling out being the only participant shushed because it is not the first time I’ve been the only woman in a professional discussion otherwise populated entirely by men, and the only one whose opportunity to have the floor was policed while no such limits were placed on the men. These sorts of “coincidences” seem to happen a little too often to be just coincidence, and they need to not be happening at all. Police everyone, or no one, but do not just leave it to women to fix a meeting’s timing problems.)

Anyway, with the little time allotted to me, I tried to make the following interrelated points. First, that questions of piracy had no business being discussed in the context of counterfeiting. Second, that the entire discussion was being skewed by a lot of unchallenged assumptions, which all needed closer scrutiny. And, third, that the public interest was largely being ignored.

On the first point, dragging “piracy” into an anti-counterfeiting discussion creates a significant apples and oranges problem. “Counterfeiting,” first of all, is a very specific technical legal term, and it applies only to things that are trademarked. Trademark itself is a very specific legal doctrine, born of Commerce Clause statutory authority, with its own doctrinal purpose: to protect consumers by making sure they are not misled in their purchasing decisions into getting something different from what they expected to buy, and thus possibly harmed by it. If, for instance, a consumer buys toothpaste, they need a way to know that it is really the toothpaste they expected to buy and not something counterfeit, especially if it may be something dangerous being passed off as toothpaste. Trademark law exists to help eliminate source confusion within the market.

But “piracy,” as it is generally discussed, is something entirely different, and generally rooted in copyright, which is an entirely separate law, rooted in an entirely separate constitutional authority, and with entirely separate goals and purposes from trademark law. While the latter functions as a consumer protection law, the purpose of copyright is to promote the spread of knowledge. While both doctrines are intended to serve the public, the benefits each is supposed to deliver are different, and so are the concerns each monopoly raises.

The only real commonality between the two legal doctrines of copyright and trademark is that they are monopolies we let law hand out. But every monopoly we grant comes with a cost, because monopolies are inherently harmful to a market economy. If we are going to say that it is nonetheless, on balance, worth it to have these monopolies, then we need to make sure they are adequately limited and tuned to the very specific problems they are intended to solve, and affect no more, lest they start to cause their own harms. Especially when the alleged problems are so fundamentally different. For instance, unlike what sometimes happens with counterfeiting, no one is going to get hurt by accessing, say, an unauthorized stream, largely because there is nothing fake about the content people are accessing – indeed, the objection is that people are accessing the real thing.

Conflating the two loses all the nuance between the two doctrines, as well as any nuance we should consider within each. Worse, it tends to supplant a credible evidence-based inquiry – one with precisely defined terms that adequately tests every assumption underpinning it – with one that, from the outset, tends to presume a benefit to having stronger monopolies while simultaneously ignoring their dangers. But when doling out monopoly power we need to be more careful than this sort of glib approach will allow, lest we harm the public that is supposed to benefit from the exercise. Ultimately, whether and under what circumstances it is ok for consumers to have potentially unauthorized access to real products is a fundamentally different policy question from whether and under what circumstances it is ok for consumers to have potentially unauthorized access to fake ones (for instance, should we care as much about counterfeit handbags as we do about counterfeit toothpaste, and what are the consequences if we do?), and conflating the two will not lead to coherent answers consistent with the overall reasons why we have either law.

Meanwhile, one of the other apples and oranges problems that arises from blending the two legal doctrines together in one administrative inquiry is that copyright and trademark law are administered by different agencies. Issues arise when the two agencies overlap and start to play in the same doctrinal space. Some of these complications are practical, because having multiple agencies trying to shape the same policy has the effect of doubling the workload for any member of the public wanting to influence how this policy gets shaped. And that’s a problem for everyone, including rightsholders.

It also creates a too-many-cooks problem when more than one agency tries to affect policy for the same domain, and their efforts can easily work at cross-purposes, especially when either speaks to a domain outside its focus. It is not necessary to endorse everything the Copyright Office does to observe that one would expect it to be more qualified to speak on copyright than, say, trademark or patent, where it might have something to say, but not as much. Conversely, the name US Patent and Trademark Office would seem to be a hint that its expertise lies in trademark and patent law, rather than copyright, where the Copyright Office is likely to be the more expert authority. When either agency strays from its lane it creates the danger that whatever policy it recommends will be at best inapt, if not altogether conflicting or even harmful, leaving the other agency to deal with the effects.

Which leads to the second point: one recurrent issue in all of these policy conversations surrounding any of these legal doctrines is how many unchallenged assumptions keep getting treated like incontrovertible facts, when they are not. Even some of the complaints raised during panels earlier in the day addressing pure trademark issues tended to presume problems where there might not even be any, and throughout the day many panelists were heard calling to prohibit things that were in fact entirely legal, for good reason, like fair use and legitimate competition. Many putative rightsholder complaints also tended to presume impingements on rights that were anything but certain, including in the piracy context, where it was not at all clear whether the allegedly illicit streams even implicated a valid copyright in the underlying material. (This is a particular issue for sports streams, where it is difficult to claim that there’s enough original authorship in the streamed match without simultaneously admitting the match is fixed.) Before we start creating penalties and sanctions that strengthen any right’s monopoly power, we need to make sure those who would be favored by these changes are even entitled to a remedy.

We also need to make sure that there is actually something to remediate. Throughout the day, in both the copyright and trademark contexts, we heard rightsholders complain about a loss of exclusivity in whatever right they claimed to have. But loss of exclusivity does not necessarily translate to economic loss, particularly if the competing access helps stimulate markets for ancillary sources of income for a rightsholder. It also is a big “if” whether each unauthorized consumer purchase or other access would necessarily equate to the loss of a paid-for licensed one. Furthermore, even to the extent there might be losses, it isn’t necessarily clear what caused them, because if counterfeiting or piracy is happening, there may be reasons consumers are looking to channels other than the authorized ones, and those reasons may stem from choices the rightsholders themselves have made in the way they foster, or discourage, legitimate access to what they offer.

In addition, neither copyright nor trademark was ever intended to be an exhaustive monopoly; the public was still supposed to retain certain abilities to use or access trademarked or copyrighted material, either because the First Amendment required it or because the goals and purpose of the legal doctrine required the public to retain them in furtherance of those goals. Many of the complaints raised by rightsholders throughout the day were really laments that things such as due process inhibited their monopoly power, paired with demands for an entirely different legal regime, one that would no longer serve the purpose that currently imposes limits on their power. It is important to recognize that’s what many of these complaints were and not just take the claims of harm at face value.

Especially because the third point is that the public interest needs to be better considered in addressing any of these policy areas, whether copyright or trademark or any other doctrine that sometimes gets thrown into the “intellectual property” bucket (like rights of publicity, which was also referenced at various points throughout the day). And at this roundtable we were largely missing voices who could speak for it.

Which is a problem for several reasons, including that discussions like these tend to force a false either-or binary, dividing interests between suppliers and consumers, when the truth is that the public is itself a population of actual and potential suppliers as much as it is of consumers. The same people are often both, especially in the copyright context, where members of the public both create and consume expression, and not all creators necessarily want the increased monopolistic power that many incumbent rightsholders keep asking for, given how much it can deter their own market participation by censoring their expression, if not also their ability to monetize it.

When hearings like these are too heavily weighted toward those who want greater ability to say no to the public interacting with their claimed material, rather than the members of the public who need to interact with it, the result tends to be censorship, the loss of due process rights, and the loss of the public benefits these laws were supposed to impart. In the case of copyright, for example, the idea, dating as far back as the Statute of Anne (a “statute for learning”) and enshrined in the Progress Clause of the Constitution, is that we grant a limited monopoly for limited times so that the public can get the benefit of the works that the monopoly makes it possible to create. When we instead focus overly on creator entitlements, and not on the public’s interest in why we bother to create these monopolistic entitlements at all, we are not acting consistently with the goals and purpose of the law and are instead causing harm to its intended beneficiary, which was always the public.
