What the ruling does is let stand a 3rd Circuit Court of Appeals ruling that blocks implementation of the law until the matter is brought to trial.
After their initial failure a few years ago, the morality police went back and wrote a more narrowly focused law and, in an inspired burst of PR, called it the Child Online Protection Act, claiming that their only, their sole, their single solitary thought was protecting our poor, innocent children from being exposed to - gasp - dirty pictures.
Gee, who could be against that? Uh, but wait, that's not actually what the law says. Instead, it provides fines of up to $50,000 for posting material "harmful to minors" where children could supposedly gain access to it. First question: What constitutes a "minor"? If it's felt that a 17-year-old could cope with such material, does that make it alright even if it's also held that a seven-year-old couldn't? Or does it mean it must be acceptable for a seven-year-old, thus deeming a 17-year-old to be the same as a pre-teen?
Second, more important question: What constitutes "harmful"? What does that mean in any legally definable sense? (Especially since there is little evidence that viewing pornography, supposedly the real target of the law, is actually harmful to children. It may be confusing, even disturbing in the same way a scary movie can be, but actual harm? There isn't much to go on there.) Indeed, one of the arguments against the law was that "harmful to minors" is an impermissibly vague term.
Another problem was that the law put the burden on website operators to provide some means of restricting access to "objectionable" material to adults, but it offered no guidelines on what means to that end would be considered adequate. Passwords and registrations would require some sort of verification, which is difficult and expensive. The proposal to use credit card numbers runs into the problem that some minors have their own. And any sort of verification process raises privacy questions and the potentially chilling impact of requiring people to reveal their identity in order to obtain what is in fact legal material.
But the Court dodged those more fundamental questions by clearly leaning toward the use of filtering software as a "less intrusive" means of obtaining the same end. Last year, the Court approved a law requiring libraries to install just such software in order to receive federal assistance. It left open the possibility that the now-blocked law would ultimately prove constitutional, but the "least intrusive" standard will be a hard one for it to meet - which means that this latest attempt at "in your own best interest" censorship will fail as the earlier effort did.
Footnote one: Blocking software has its own serious problems, primary among them the fact that the producers of the software regard the lists of sites they block, and the means by which they judge that a site should be blocked, as proprietary information, so you have no way of knowing in advance what criteria are applied and what sites are no longer available. In one case, a filter blocked a health site where gay men could get information on how to avoid sexually transmitted diseases. In another notorious case, the website of the National Organization for Women (NOW) was blocked. In yet another, the software blocked access to any site that had the letters "sex" appearing consecutively anywhere in its name, no matter what the topic.
Footnote two: Not surprisingly, the Justice Department denounced the ruling.
"Our society has reached a broad consensus that child obscenity is harmful to our youngest generation and must be stopped," [Mark] Corallo said. "Congress has repeatedly attempted to address this serious need and the court yet again opposed these common-sense measures to protect America's children."So now it's "common sense" to adopt unconstitutional restrictions on free speech. I guess 9/11 did change everything.