Monday, July 29, 2013

“Binging” Child Porn Searches


Technology is beginning to be used to advance the protection of children. Bing is the first search engine to display pop-up notices when pedophiles search for lewd pictures of children. The warning notifies the searcher that the query is illegal and offers a link to counseling services. The UK also plans to block child pornography entirely, with Internet users required to opt in to view legal pornography. This delivers a strong message: the person accessing this material is being tracked. Google is not participating in this warning tool, yet its reach could be instrumental in blocking access to sites that endanger children.

So my question is: why isn't the United States adopting this tool? Why is only Microsoft's UK search engine, Bing, participating? How much would child pornography decrease if all search engines prevented these searches? I believe the implications of using this tool are mostly positive.

I do understand the freedom-of-speech concerns this raises in the United States. There are also issues with filters and with the wrong searches being targeted. Tech-savvy people can probably bypass the blocks, but how many others will it deter? Isn't this plan worth it to protect children who cannot protect themselves? And what about prevention? Interest in child pornography starts somewhere; making it less accessible may deter some voyeurs.

I hope there will be studies in the UK to evaluate the effectiveness of such measures.

Sunday, July 7, 2013

Mood Sensing Software vs. Big Brother

As I write this, I must admit I am undecided about the positive and negative uses of data mining with clients. Two articles brought my thoughts to the forefront. The first is about a new dimension in data mining from Microsoft Research in Asia called MoodScope. MoodScope infers whether a user is happy, tense, calm, upset, excited, stressed, or bored; in a study of 32 volunteers, it was accurate 93% of the time once adjusted to individual users' smartphone habits. Immediately, I thought of how this could be shifted into a therapeutic setting, with client and therapist working as a team on mood identification and regulation. This is not much different from the second article's subject, The Durkheim Project, which uses artificial intelligence to analyze Facebook and smartphone data to statistically monitor users for harmful behavior. Veterans are the first study participants, and the project addresses data security and confidentiality among users, therapists, and the information received.
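
For readers curious what "adjusted to individual users" might mean in practice, here is a minimal, hypothetical sketch of training a per-user mood classifier on smartphone-usage features. This is not MoodScope's actual code; the features, data, and labels are all invented for illustration.

    # Hypothetical sketch, not MoodScope's code: train a separate mood
    # classifier per user on that user's own logged smartphone usage.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented daily features for one user:
    # [calls_made, sms_sent, minutes_browsing, app_switches]
    X = rng.random((60, 4))
    # Invented self-reported labels for the same days: 0 = calm, 1 = stressed
    y = (X[:, 3] + rng.normal(0, 0.1, 60) > 0.5).astype(int)

    # "Adjusting to individual users" = fitting on this user's history alone,
    # rather than using one global model for everyone.
    model = LogisticRegression().fit(X[:50], y[:50])
    print("held-out accuracy:", model.score(X[50:], y[50:]))

The point of the per-user split is that the same usage pattern can signal different moods for different people, so each person's own self-reports anchor the model.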
What is the difference between these two projects? The difference lies in how the information gathered is used. My eye twitches when I read this paragraph from the MoodScope article:

“The researchers suggest third party hooks could be added to the software to allow for automatically transmitting user moods to applications like Facebook. They also acknowledge that privacy concerns could arise if the software were to be delivered to the public, but suggest the benefits of such software would likely outweigh such concerns. They note that sites like Netflix or Spotify could use data from MoodScope to offer movies or other content based on specific users' moods.”


Advertisers will decide what content we see on smartphones, tablets, or computers based upon our moods? Will a client be offered Prozac Nation if they research antidepressants, or Leaving Las Vegas if their searches center on where to find an AA meeting? We can only hope there are no directions to nearby vineyards (with a coupon, no less). How will the effects of these choices be cataloged as constructive or destructive to a person's behavior? As technology progresses, there needs to be some overall body regulating the innovations of the digital age.

Advertising already manipulates consumers through commercialism (Van Tuinen, 2011); now the manipulation will be even more personal. We know about the influence of commercials in children's programming, media targeting for smoking and drinking, and other forms of product bias. How are we going to advocate for vulnerable populations in the context of manipulation by digital algorithms? Will clients believe the technology is reading their minds? In a sense, this Orwellian, 1984-style program is intelligent and watching every stroke made on a smartphone or computer. The big question is: how will you, as a social worker, stay aware of these tools and advocate for best practices ensuring the ethical and confidential use of data mining? If you have any suggestions, please let us know!

References

Van Tuinen, H. K. (2011). The ignored manipulation of the market: Commercial advertising and consumerism require new economic theories and policies. Review of Political Economy, 23(2), 213-231. doi:10.1080/09538259.2011.561558