As Concerns About Social Media Advertising Escalate, A Potential Solution

August 7, 2007 by Lisa Oshima | Advertising, Social Media
(5) Comments

The Financial Times reports today that the Central Office of Information (COI), the UK Government’s “center of excellence for marketing and communications,” has put a moratorium on advertising on social media sites like Facebook. COI organizes marketing campaigns to promote issues of public importance (education, health, welfare, etc.) for various UK Government departments. The organization announced that it is reviewing how it handles advertising on social networking sites, fearing that its ads could appear alongside inappropriate user-generated content. Alan Bishop, chief executive of the COI, explained the decision to the FT, saying:

“We always have to keep a very close eye on the context. People are still getting to grips with this. We don’t want to exclude the use of any of the new social media but we do have to have a very clear idea of what the context is going to be like.”

COI’s decision comes one week after New Media Age reported that Vodafone, The AA, First Direct, and others were pulling their ads on Facebook because they appeared on the Facebook page of the British National Party, a highly controversial political organization.  Last week, Vodafone released a statement saying:

“We advertise our products and services across a wide range of on and offline publications… In the case of online, bundles of space are purchased across a number of sites including the social networking sites. As a result we were not aware that a Vodafone ad would appear next to a British National Party group on Facebook.

Our Public Policy Principles state that we do not make political donations or support particular party political interests and therefore to avoid misunderstandings we immediately withdrew our advertising as soon as this was brought to our attention.

We are working with our media buyer OMD to ensure that more robust controls are in place before we agree to any potential re-investment.”

The concerns raised by organizations like COI and Vodafone are understandable and highlight the need for advertisers to have greater control over when and where their paid ads appear. As far as I’m aware, website optimization solutions and content delivery platforms so far only help advertisers and marketers understand visitor behavior, segment visitors into groups, and deliver targeted messages that are relevant to specific segments. I’m not aware of any optimization solution or content delivery platform that helps advertisers optimize ads and website content so that they’re not only relevant to various segments of website visitors but also blocked from appearing on pages that promote or discuss controversial topics. I’m interested to see who will be the first to make this happen.

Marketers can already test and optimize ads and web content so that relevant messages are delivered to different audiences (e.g., audience segment A, “high-value customers,” sees Ad #1; audience segment B, “first-time visitors,” sees Ad #2; etc.). Similarly, search technology makes it easy to identify controversial keywords on web pages (e.g., “BNP,” “political party,” etc.). I can’t imagine it would be too difficult to combine these two technologies to create an ad optimization and delivery network that lets advertisers serve blank ads on social media pages with potentially dubious content, or ‘sublease’ that ad space on controversial social media pages to less discerning advertisers.

Instead of simply segmenting users, the ad publishing optimization solution I’d like to see would also segment content. The ad delivery platform would scan social media pages at regular intervals for controversial words. If dubious words or phrases that go against a given advertiser’s rules of engagement appear, the ad slot could display nothing at all, or an ad from another, less discriminating advertiser who subleases the ad space when the primary advertiser chooses to bow out. A solution like this would allow social media platforms like Facebook to offer a two-tiered advertising model: ultimate control for Tier 1 advertisers willing to pay for it, and exposure for Tier 2 advertisers with smaller budgets.
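
For illustration only, here is a minimal sketch (in Python, with hypothetical function and ad names) of how a delivery platform might pair a keyword scan with that two-tier fallback. A real system would need a far richer content model than a simple word list, but the basic logic is straightforward:

    import re

    def page_is_acceptable(page_text, blocked_terms):
        """Return False if any blocked term appears as a whole word in the page text."""
        text = page_text.lower()
        return not any(
            re.search(r"\b" + re.escape(term.lower()) + r"\b", text)
            for term in blocked_terms
        )

    def choose_ad(page_text, tier1_ad, tier1_blocklist, tier2_ad=None):
        """Serve the Tier 1 ad on acceptable pages; otherwise fall back to Tier 2 or leave the slot blank."""
        if page_is_acceptable(page_text, tier1_blocklist):
            return tier1_ad
        return tier2_ad  # None means the slot stays blank

    # Hypothetical example: a Tier 1 advertiser that avoids political-party pages.
    ad = choose_ad(
        page_text="Join the discussion on the British National Party group...",
        tier1_ad="tier1_banner.gif",
        tier1_blocklist=["BNP", "British National Party", "political party"],
        tier2_ad="tier2_banner.gif",
    )
    print(ad or "(blank ad slot)")

In practice, the blocklist would come from each Tier 1 advertiser’s rules of engagement, and the scan would re-run on a regular crawl schedule or whenever the page content changes, as described above.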

Could this work?  Post a comment with your opinion.

Dada.net launches Friend$, Partners with Google AdSense to Pay Social Media Users

March 2, 2007 by Lisa Oshima | Advertising, Social Media
(3) Comments

Italy-based social networking company Dada S.p.A. is partnering with Google AdSense to pay users for allowing ads on their spaces. Dada’s new Friend$ is an opt-in revenue-sharing program that rewards users for adding friends and updating the content of their Dada space. According to Dada, Friend$ is “the only program that rewards you both for keeping your personal space updated (blog, video, profile, etc.) and for spreading the word by inviting friends to do the same.” The idea is that users keep their Dada space updated and invite friends to participate in Friend$. In exchange, Google posts ads on their and their friends’ spaces and pays users and their friends a percentage of the money generated by click-throughs on those ads.

I wonder how advertisers feel about this. It seems like an easy system for Dada users and their friends to exploit for revenue. If I were an advertiser, I’m not sure how excited I’d be about people clicking on my ad with the express purpose of extracting money from me for their own or their friends’ financial well-being.

Similarly, I find the whole concept a little disconcerting in that it encourages social networkers to talk about specific topics for the express purpose of generating revenue. I feel perfectly okay about the idea of hiring paid spokespeople to talk about companies, so long as the public knows they’re being paid. However, I take issue with situations like this, where there are blurred lines between people talking about what they’re genuinely interested in and talking about things they’re being paid to discuss. It’s not as though Dada and Google are talking about sponsoring corporate blogs… In a way, they’re steering kids (and adults) towards discussing specific topics in their conversations, blogs, profiles, etc. If a social networker wants to make money through Friend$ and knows which companies use Google AdSense, I suspect it will be very easy for them to exploit the system.

What do you think?

YouTube’s Monetization Strategy

January 29, 2007 by Lisa Oshima | Advertising, Monetization, Social Media
(4) Comments

The World Economic Forum’s annual meeting in Davos, Switzerland took place January 25-29. Today, I discovered some great footage from that meeting on YouTube, which dovetails nicely with my post from Friday, in which I made several predictions for mobile in 2008. In the video, Chad Hurley, co-founder of YouTube, talks about some of the exciting things that lie ahead for YouTube:

For those of you who don’t want to watch the video, the key points are that YouTube is planning to monetize video submissions for users, and it’s creating an audio engine that will recognize songs users have overlaid on top of their videos. Once a song is recognized, YouTube will enable viewers/listeners to purchase it through legal means and give a commission on the sale to the person who posted the video that uses the song.

My post on Friday talked about the rising popularity of monetizing video submissions of things like news events from mobile phones in 2008. I think it will be really interesting to see if and how YouTube does this. Will it be like Revver, monetizing videos based on the number of hits they receive and the ad revenue generated, or will it go a slightly different route and charge networks and news agencies to re-purpose YouTube videos in other formats, paying those who submit videos a portion of the proceeds?

I also wonder how closely YouTube’s audio cross-selling/commission-based approach to music will mirror what social networking and mobile OS company Glide Mobile announced with The Orchard in March 2006. Stay tuned…

Using Social Media to Sell Products to Kids…Interesting but Potentially Dangerous

January 23, 2007 by Lisa Oshima | Advertising, Social Media
(8) Comments

I’ve talked a lot in this blog about how companies are using social media to capture new customers and engage existing ones. Today, Advertising Age wrote a fascinating article on the success of Canadian toy manufacturer Ganz, which has used social media and the Internet to spark massive sales of its Webkinz stuffed animals. I’ve got mixed feelings about Webkinz’s marketing model and success. On the one hand, I admire Ganz’s creativity. On the other, I question whether Webkinz takes marketing to children one step too far. Before I explain this paradox in more detail, here’s some background…

Webkinz, which launched last year, are proving exceptionally popular among American children aged 6-11. The success of Webkinz is so impressive that Advertising Age refers to them as “Beanie Babies on steroids.” By November 2005, Ganz had sold one million Webkinz without doing any formal advertising, and Ganz reports that this number was pushed “significantly higher” during the holiday season. Instead of advertising, Ganz made Webkinz successful by engaging a strong network of sales reps and retailers, as well as through innovative PR and social media strategies. Bloggers and YouTubers started talking about Webkinz en masse, which attracted the attention of the media and resulted in publicity on “Good Morning America,” “Regis & Kelly,” and “Rachael Ray.” Social media combined with the power of the traditional press accelerated the sales of Webkinz.

Webkinz’s word-of-mouth success via social media has a great deal to do with its web-savvy product strategy. Each Webkinz stuffed animal comes with a printed tag bearing a secret code and the address of what Advertising Age refers to as a “safe” social-media-enabled website for kids. Once registered, kids can dress and feed their Webkinz avatar using “KinzCash” earned by playing games and winning quizzes. Kids can also engage their avatars with other Webkinz avatars by inviting them to be friends and sending messages from a pre-selected list of options (Advertising Age uses the example “You are” and “very nice”). So, in effect, the Webkinz site becomes a mini MySpace for very young kids, without the threat of sexual predators. Imagine the success of Cabbage Patch Kids in the 1980s, add to the “adoption process” the power of the Internet and talking cartoons, and it’s not hard to see why kids can’t get enough of Webkinz.

The concerning part of Webkinz and similar products is the way they engage with and solicit information from children. When a child goes to the Webkinz site, s/he is greeted by vivid cartoon images and written instructions. When the child clicks on the text “My First Adoption,” a cartoon named “Ms. Birdy” appears, welcoming the child to the “Adoption Center.” Ms. Birdy asks the child to read and complete the end user license agreement (EULA). Webkinz’s EULA is a typical legal masterpiece. It contains text well beyond the reading comprehension level of a 6-11 year old, and yet, without suggesting that the child ask for parental assistance, “Ms. Birdy” asks the child to read and agree to the terms contained within it. Included in the terms is a paragraph saying that any feedback provided to Ganz on the site becomes Ganz’s intellectual property. I understand why Ganz has this clause in the EULA, but I don’t feel it is appropriate to expect a child to read or understand a legal document intended for adults. I take issue with any website that expects a minor-aged child to click through and agree to a legal agreement without parental involvement – especially one that claims ownership of any intellectual property the child submits in the form of feedback for the site.

After the child clicks “I agree” to the EULA (which s/he couldn’t possibly understand), Ms. Birdy speaks, telling the child that if s/he is under 9 years old, her/his parents should help with registration. The site then asks the child to submit personal information: first name, date of birth, country of residence, and state. Although it is not considered personally identifiable, this information does not appear to be transmitted securely, which is concerning if anyone is illicitly monitoring a family’s internet activity or a child predator is stalking kids at the local library.

The child is then asked to create a username and password and submit the secret code on the tag of their Webkinz animal. This code allows the child to play in “Webkinz World” for one year from the “date of adoption,” with the option to renew after that year for a fee. All of this is, of course, explained in the EULA, which is too complex for a child to understand.

While I am excited to see social media being used as an effective marketing tool, and I am pleased that Ganz complies with the Children’s Online Privacy Protection Act (COPPA), the Webkinz registration issues I mentioned highlight a larger concern. Companies are marketing to children, soliciting information from them on-line, and asking them to read legal agreements that are beyond their level of comprehension. It is difficult for parents to watch out for their kids in situations like this. If a kid thinks it is okay to enter their information on, say, the Webkinz site without parental permission, what is to say that same child won’t think it is just as okay to give that information to a stranger via another website? Nothing, unless their parents are involved.

One of the things that should be of growing concern to social media enthusiasts and child advocates alike is that there is currently no safe way to identify whether someone is a minor on-line. Having a “second life” full of social media and networking on-line is becoming more and more common, and in many ways anonymity is an accepted part of the Internet. This may hurt kids. In real life, a child can’t go into a 7-Eleven to purchase porn, cigarettes, or booze without showing appropriate age identification. On-line, however, there is no such thing as age identification; the Internet is largely anonymous. As a result, there is no way to protect kids from seeing or interacting with inappropriate material, as there is in the non-anonymous “first life” – unless that material costs money and requires a credit card to purchase. A scary thought.





