Child sexual abuse photos and videos are among the most toxic materials online. It is against the law even to view the imagery, and tech companies that find it on their platforms are required to report it to the federal authorities.
So how can tech companies, under pressure to remove the material, identify newly shared photos and videos without breaking the law? They use software — but first they have to train it, running repeated tests to help it accurately recognize illegal content.
Google has made progress, according to company officials, but its methods have not been made public. Facebook has, too, but there are still questions about whether it follows the letter of the law. Microsoft, which has struggled to keep known imagery off its search engine, Bing, is frustrated by the legal hurdles in identifying new imagery, a spokesman said.
The three tech giants are among the few companies with the resources to develop artificial intelligence systems to take on the challenge.
One route for the companies is greater cooperation with the federal authorities, including seeking permission to keep new photos and videos for the purposes of developing the detection software.
But that approach runs into a larger privacy debate involving the sexual abuse material: How closely should tech companies and the federal government work to shut it down? And what would prevent their cooperation from extending to other online activity?
Paul Ohm, a former prosecutor in the Justice Department’s computer crime and intellectual property section, said the laws governing child sexual abuse imagery were among the “fiercest criminal laws” on the books.
“Just the simple act of shipping the images from one A.I. researcher to another is going to implicate you in all kinds of federal crimes,” he said.
The industry’s privacy considerations have contributed to a patchwork of policies, leaving platforms rife with gaps that criminals regularly exploit, The New York Times reported on Saturday. At the same time, efforts by companies to scan users’ files, including photos and videos, for child sexual abuse material are seen by many customers as intrusive and overreaching.
Last year, more than 45 million photos and videos were reported by tech companies to the federal clearinghouse that collects reports of such activity. Many of those images were previously known, and copies could be detected by comparing uploaded or shared photos and videos against databases of the imagery.
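The companies do not disclose the details of their matching systems, but checking uploads against a database of known imagery is typically done by fingerprinting each file. Below is a minimal sketch in Python, with hypothetical names, of the simplest version of that idea, using a plain cryptographic digest; production systems such as Microsoft’s PhotoDNA instead use perceptual fingerprints, so that resized or re-encoded copies of a known image still match.

```python
import hashlib

# Hypothetical set of digests of previously identified illegal images, of the
# kind a clearinghouse might distribute to platforms. A plain SHA-256 digest,
# used here only for illustration, catches exact byte-for-byte copies;
# perceptual fingerprints are needed to catch altered copies.
KNOWN_DIGESTS = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_imagery(path: str) -> bool:
    """Flag an uploaded file if its digest appears in the known-imagery list."""
    return sha256_of_file(path) in KNOWN_DIGESTS
```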
But detecting previously unknown photos and videos is far more difficult, and often requires testing thousands or millions of images to ensure accuracy.
Google has begun using new technologies to help identify the material, according to testimony from Kristie Canegallo, vice president for trust and safety. Speaking at a hearing on online child sexual abuse material in London in May, Ms. Canegallo said Google was able to act on 700 percent more images than in the past.
Google was not immediately able to provide more information about how it trains its detection software.
Facebook is also using new technology to detect previously unidentified illegal imagery. In a blog post last year, Antigone Davis, the company’s global head of safety, wrote that Facebook was “using artificial intelligence and machine learning” to detect the imagery when it was uploaded.
But during testimony at the hearing in London, Julie de Bailliencourt, a senior manager at Facebook, said there were legal challenges and technical hurdles in building the system. “As you know, we have to delete any such content as soon as we’re made aware of it,” she said.
A Facebook spokeswoman said the company believed it was working within the law because it processed the imagery only during a 90-day period after discovering it. Tech companies are required to retain illegal imagery during those 90 days to assist in law enforcement investigations.
The new technologies could also intensify privacy concerns: More images may come under review as human moderators are called on to help ensure accuracy and weed out false positives.
Chad M. S. Steel, a forensic specialist, said that while artificial intelligence and machine learning were constantly improving at image recognition, abusive imagery of children was particularly difficult to detect because automated systems struggled to judge the age of the people depicted. Mr. Steel added that the new systems were unlikely to be effective “any time in the near future.”
Falsely identifying material as child abuse imagery is also fraught with risk. Because the companies are legally obligated to report such material to the authorities, people could be mistakenly ensnared in a federal investigation and suffer great reputational harm.
“Applying A.I. to the determination of what is a child or what is a person of a particular age is very risky,” Hugh Milward, Microsoft’s director of legal affairs in Britain, said at the hearing in London. “It’s not a precise art.”
At the same time, Microsoft is embroiled in a legal challenge in New Mexico that highlights the difficulty companies face when they are perceived as working too closely with government authorities.
A pediatric surgeon in Albuquerque was arrested in 2016 on charges of sharing child sexual abuse material on the live-video platform Chatstep, which used Microsoft’s image-scanning software.
The surgeon, Guy Rosenschein, argued in court that Microsoft was acting as a government agent because it used data from the National Center for Missing and Exploited Children, the federally designated nonprofit organization that serves as a national clearinghouse for child sexual abuse imagery. The surgeon’s lawyers cited a 2016 federal appeals court opinion that held that the national center was a government agency. The case is pending.
Mr. Ohm, the former federal prosecutor who is now a professor at Georgetown Law, said coordination between tech companies and the federal authorities could open the companies to constitutional challenges based on improper searches.
“The more companies seem to be doing what the cops themselves would need a warrant to do, the more companies essentially are deputized by the federal government,” said Mr. Ohm.
This is not the first time tech companies have run into legal roadblocks in their efforts to combat child sexual abuse imagery. A law passed by Congress in 1998 required companies to report child sexual abuse imagery when it was discovered on their platforms.
But the law had no safe-harbor clause that would explicitly allow the companies to send the material to the national clearinghouse. It took nine years for Congress to remedy this, and by then many companies had already decided to take the risk of transmitting the images.
Companies in other countries are facing similar hurdles. Two Hat Security in Canada, for instance, spent years working with the authorities there to develop a system that detects child sexual abuse imagery. Because the company couldn’t view or possess the imagery itself, it had to send its software to Canadian officials, who would run the training system on the illegal images and report back the results. The company would then fine-tune the software and send it back for another round of training.
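That arrangement amounts to an arms-length training loop: software and aggregate results cross the boundary, but the imagery never does. A minimal sketch in Python, with hypothetical names and a simulated government-side step, of what one version of such a loop could look like:

```python
import random

def authorities_run_training(config: dict) -> dict:
    """Stand-in for the government-side step: officials run the shipped
    training software on the restricted dataset and report back only
    aggregate metrics. Simulated here, since the vendor never sees the data."""
    random.seed(config["epochs"])  # deterministic fake results, for illustration
    return {
        "precision": round(random.uniform(0.70, 0.95), 2),
        "recall": round(random.uniform(0.70, 0.95), 2),
    }

def vendor_tune(config: dict, metrics: dict) -> dict:
    """Vendor-side step: adjust hyperparameters from the reported metrics,
    then ship the revised package back for another round."""
    if metrics["recall"] < 0.90:
        config["epochs"] += 5
    if metrics["precision"] < 0.90:
        config["threshold"] = min(0.9, config["threshold"] + 0.05)
    return config

def development_loop(rounds: int = 4) -> dict:
    """Iterate: send a package out, get metrics back, fine-tune, repeat."""
    config = {"epochs": 10, "threshold": 0.5}
    for i in range(rounds):
        metrics = authorities_run_training(config)
        print(f"round {i}: {metrics} -> config {config}")
        config = vendor_tune(dict(config), metrics)
    return config

if __name__ == "__main__":
    development_loop()
```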
The system has been in development for three to four years, said the company’s chief executive, Chris Priebe.
“It’s a slow process,” he said.
"can" - Google News
November 12, 2019 at 03:00PM
https://ift.tt/2NFbIjE
How Laws Against Child Sexual Abuse Imagery Can Make It Harder to Detect - The New York Times
"can" - Google News
https://ift.tt/2NE2i6G
Shoes Man Tutorial
Pos News Update
Meme Update
Korean Entertainment News
Japan News Update
No comments:
Post a Comment