As I write this, I must admit I am undecided about the positive
and negative uses of data mining with clients. Two articles brought my
thoughts to the forefront. The first is about a new dimension of data mining
from Microsoft Research Asia called MoodScope. Identifying moods such as
happy, tense, calm, upset, excited, stressed, and bored from smartphone usage,
MoodScope was accurate 93% of the time in a study of 32 volunteers once it was
adjusted to individual users. Immediately, I thought of how this technology
could be shifted into a therapeutic setting, with client and therapist working
as a team on mood identification and regulation. This is not much different
from The Durkheim Project's use of artificial intelligence to analyze Facebook
and smartphone data and statistically monitor users for harmful behavior.
Veterans are the first study participants. That project addresses data
security and the confidentiality of the information shared among users and
therapists.
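For readers curious about the mechanics, here is a minimal sketch of what a
personalized mood classifier might look like. To be clear, this is not
MoodScope's published method: the usage features, the synthetic data, and the
logistic-regression model are all illustrative assumptions standing in for
"smartphone usage patterns adjusted to an individual user."

    # Illustrative sketch only; not MoodScope's actual features or model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    MOODS = ["happy", "tense", "calm", "upset", "excited", "stressed", "bored"]

    # Hypothetical daily usage features for ONE user: e.g. calls made,
    # texts sent, minutes in social apps, minutes in email, late-night screen time.
    rng = np.random.default_rng(0)
    X_train = rng.random((200, 5))               # 200 days of usage data
    y_train = rng.integers(0, len(MOODS), 200)   # that user's self-reported moods

    # "Adjusted to individual users" suggests one model per person,
    # trained on that person's own self-reports.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    today = rng.random((1, 5))                   # today's usage features
    print("Inferred mood:", MOODS[model.predict(today)[0]])

Even a toy like this makes the stakes concrete: whoever holds the model also
holds a running log of a person's inferred emotional state.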
What is the difference between these two projects? The
difference is in how they use the information gathered. I get a twitch
in my eye when I read the following paragraph of the MoodScope article:
“The researchers suggest third party
hooks could be added to the software to allow for automatically transmitting
user moods to applications like Facebook. They also acknowledge that privacy
concerns could arise if the software were to be delivered to the public, but
suggest the benefits of such software would likely outweigh such concerns. They
note that sites like Netflix or Spotify could use data from MoodScope to offer
movies or other content based on specific users' moods.”
Advertisers will decide what content we see on smartphones,
tablets, or computers based upon our moods? Will a client get an option to see
Prozac Nation if they research antidepressants, or Leaving Las Vegas if their
searches center on where to find an AA meeting? We can only hope there are no
directions to nearby vineyards (with a coupon, no less). How will the effect
of these choices be cataloged as constructive or destructive to a person's
behavior? As technology progresses, there needs to be some overall body
regulating the innovations of the digital age.
Advertising already manipulates consumers through commercialism (Van
Tuinen, 2011). Now the manipulation will become even more personal. We know
about the influence of commercials during children's programming, media
targeting for smoking and drinking, and other forms of bias built around
products. How are we going to advocate for vulnerable populations in the
context of manipulation by digital algorithms? Will clients believe the
technology is reading their minds? In a sense, this Orwellian program straight
out of 1984 is intelligent and watches every stroke made on a smartphone or
computer. The big question is: how will you, as a social worker, stay aware of
these tools and advocate for best practices that ensure the ethical and
confidential use of data mining?
If you have any suggestions, please let us know!
References
Van Tuinen, H. K. (2011). The ignored manipulation of the market: Commercial
advertising and consumerism require new economic theories and policies.
Review of Political Economy, 23(2), 213-231. doi:10.1080/09538259.2011.561558