Missing Data Can Be Meaningful

No matter how big the Big Data gets, we will never know everything about everything. Well, according to the super-duper computer called “Deep Thought” in the movie “The Hitchhiker’s Guide to the Galaxy” (don’t bother to watch it if you don’t care for the British sense of humour), the answer to “The Ultimate Question of Life, the Universe, and Everything” is “42.” Coincidentally, that is also my favorite number to bet on (I have my reasons), but I highly doubt that even that huge fictitious computer with unlimited access to “everything” provided that numeric answer with conviction after 7½ million years of computing and checking. At best, that “42” is an estimated figure of a sort, based on some fancy algorithm. And in the movie, even Deep Thought pointed out that “the answer is meaningless, because the beings who instructed it never actually knew what the Question was.” Ha! Isn’t that what I have been saying all along? For any type of analytics to be meaningful, one must properly define the question first. And what to do with the answer that comes out of an algorithm is entirely up to us humans, or in the business world, the decision-makers. (Who are probably human.)

Analytics is about making the best of what we know. Good analysts do not wait for a perfect dataset (it will never come by, anyway). And businesspeople have no patience to wait for anything. Big Data is big because we digitize everything, and everything that is digitized is stored somewhere in forms of data. For example, even if we collect mobile device usage data from just pockets of the population with certain brands of mobile services in a particular area, the sheer size of the resultant dataset becomes really big, really fast. And most unstructured databases are designed to collect and store what is known. If you flip that around to see if you know every little behavior through mobile devices for “everyone,” you will be shocked to see how small the size of the population associated with meaningful data really is. Let’s imagine that we can describe human beings with 1,000 variables coming from all sorts of sources, out of 200 million people. How many would have even 10 percent of the 1,000 variables filled with some useful information? Not many, and definitely not 100 percent. Well, we have more data than ever in the history of mankind, but still not for every case for everyone.

In my previous columns, I pointed out that decision-making is about ranking different options, and that to rank anything properly, we must employ predictive analytics (refer to “It’s All About Ranking”). And for ranking based on the scores resulting from predictive models to be effective, the datasets must be summarized to the level that is to be ranked (e.g., individuals, households, companies, emails, etc.). That is why transaction- or event-level datasets must be transformed into “buyer-centric” portraits before any modeling activity begins. Again, it is not about the transactions or the products; it is about the buyers, if you are doing all this to do business with people.

The trouble with buyer- or individual-centric databases is that such transformation of data structure creates lots of holes. Even if you have meticulously collected every transaction record that matters (and that will be the day), if someone did not buy a certain item, any variable that is created based on the purchase record of that particular item will have nothing to report for that person. Likewise, if you have a whole series of variables to differentiate online and offline channel behaviors, what would the online portion contain if the consumer in question never bought anything through the Web? Absolutely nothing. But in the business of predictive analytics, what did not happen is as important as what happened. Even a simple concept of “response” is only meaningful when compared to “non-response,” and the difference between the two groups becomes the basis for the “response” model algorithm.

Capturing the Meanings Behind Missing Data
Missing data are all around us. And there are many reasons why they are missing, too. It could be that there is nothing to report, as in aforementioned examples. Or, there could be errors in data collection—and there are lots of those, too. Maybe you don’t have access to certain pockets of data due to corporate, legal, confidentiality or privacy reasons. Or, maybe records did not match properly when you tried to merge disparate datasets or append external data. These things happen all the time. And, in fact, I have never seen any dataset without a missing value since I left school (and that was a long time ago). In school, the professors just made up fictitious datasets to emphasize certain phenomena as examples. In real life, databases have more holes than Swiss cheese. In marketing databases? Forget about it. We all make do with what we know, even in this day and age.

Then, let’s ask a few philosophical questions here:

  • If missing data are inevitable, what do we do about it?
  • How would we record them in databases?
  • Should we just leave them alone?
  • Or should we try to fill in the gaps?
  • If so, how?

The answer to all this is definitely not 42, but I’ll tell you this: Even missing data have meanings, and not all missing data are created equal, either.

Furthermore, missing data often contain interesting stories behind them. For example, certain demographic variables may be missing only for extremely wealthy people and very poor people, as their residency data are generally not exposed (for different reasons, of course). And that, in itself, is a story. Likewise, some data may be missing in certain geographic areas or for certain age groups. Collection of certain types of data may be illegal in some states. “Not” having any data on online shopping behavior or mobile activity may mean something interesting for your business, if we dig deeper into it without falling into the trap of predicting legal or corporate boundaries, instead of predicting consumer behaviors.

In terms of how to deal with missing data, let’s start with numeric data, such as dollars, days, counters, etc. Some numeric data simply may not be there, if there is no associated transaction to report. Now, if they are about “total dollar spending” and “number of transactions” in a certain category, for example, they can be initialized as zero and remain zero in cases like this. The counter simply did not start clicking, and it can be reported as zero if nothing happened.

Some numbers are incalculable, though. If you are calculating “Average Amount per Online Transaction,” and if there is no online transaction for a particular customer, that is a situation of mathematical singularity—as we can’t divide anything by zero. In such cases, the average amount should be recorded as “.”, blank, or any value that represents a pure missing value. But it should never be recorded as zero. And that is the key in dealing with missing numeric information: zero should be reserved for real zeros, and nothing else.
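
As a quick sketch of this rule in pandas (the column and variable names here are hypothetical), true zeros stay zero while the incalculable average becomes a proper missing value:

```python
import numpy as np
import pandas as pd

# Hypothetical buyer-centric summary rows.
df = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "online_dollars": [120.0, 0.0, 0.0],  # true zeros: nothing was spent
    "online_transactions": [3, 0, 0],     # true zeros: counter never clicked
})

# "Average Amount per Online Transaction" is undefined when the count is
# zero -- record it as NaN (a real missing value), never as zero.
df["avg_per_online_txn"] = (
    df["online_dollars"] / df["online_transactions"].replace(0, np.nan)
)
```

For customer 1 the average is 40.0; for customers 2 and 3 it is NaN, while their dollar and count fields legitimately remain zero.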

I have seen too many cases where missing numeric values are filled with zeros, and I must say that such a practice is definitely frowned upon. If you have to pick just one takeaway from this article, that’s it. As I emphasized, not all missing values are the same, and zero is not the way to record them. Zeros should never represent a lack of information.

Take the example of a popular demographic variable, “Number of Children in the Household.” This is a very predictive variable—not just for the purchase behavior of children’s products, but for many other things. Now, it is a simple number, but it should never be treated as a simple variable—as, in this case, lack of information is not evidence of non-existence. Let’s say that you are purchasing this data from a third-party data compiler (or a data broker). If you don’t see a positive number in that field, it could be because:

  1. The household in question really does not have a child;
  2. Even the data-collector doesn’t have the information; or
  3. The data collector has the information, but the household record did not match to the vendor’s record, for some reason.

If that field contains a number like 1, 2 or 3, that’s easy, as it will represent the number of children in that household. But zero should be reserved for cases where the data collector has a positive confirmation that the household in question indeed does not have any children. If it is unknown, it should be marked as blank or “.” (many statistical software packages, such as SAS, record missing values this way), or as “U” (though an alpha character should not sit in a numeric field).

If it is a case of a non-match to the external data source, then there should be a separate indicator for it. The fact that the record did not match a professional data compiler’s list may mean something. And I’ve seen cases where such non-match indicators are fed into model algorithms along with other valid data, as in the case where missing indicators for income display the same directional tendency as high-income households.
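
A minimal sketch of keeping these three states separate when appending vendor data, using pandas (the field names and values are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical internal file and third-party append. In the vendor file,
# 0.0 means a *confirmed* childless household; NaN means the vendor has
# the household but does not know.
internal = pd.DataFrame({"household_id": [10, 11, 12, 13]})
vendor = pd.DataFrame({
    "household_id": [10, 11, 12],
    "num_children": [2.0, 0.0, np.nan],
})

merged = internal.merge(vendor, on="household_id", how="left", indicator=True)

# A separate non-match flag: household 13 never matched the vendor's list,
# which is a different (and potentially predictive) state from "unknown".
merged["vendor_nonmatch"] = (merged["_merge"] == "left_only").astype(int)
merged = merged.drop(columns="_merge")
```

Household 11 carries a confirmed zero, household 12 an unknown (NaN), and household 13 a non-match flag: three different stories, none of them collapsed into a zero.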

Now, what if the data compiler in question boldly inputs zeros for unknowns? Take a deep breath, fire the vendor, and don’t deal with the company again, as it is a sign that its representatives do not know what they are doing in the data business. I have done so in the past, and you can do it, too. (More on how to shop for external data in future articles.)

For non-numeric categorical data, similar rules apply. Some values could be truly “blank,” and those should be treated separately from “Unknown” or “Not Available.” As a practice, let’s list all kinds of possible missing values in codes, texts or other character fields:

  • “ ”—blank or “null”
  • “N/A,” “Not Available,” or “Not Applicable”
  • “Unknown”
  • “Other”—If it originates from some type of multiple-choice survey or pull-down menu
  • “Not Answered” or “Not Provided”—This indicates that the subjects were asked, but they refused to answer. Very different from “Unknown.”
  • “0”—In this case, the answer can be expressed in numbers. Again, only for known zeros.
  • “Non-match”—Not matched to other internal or external data sources
  • Etc.

It is entirely possible that all these values may be highly correlated to each other and move along the same predictive direction. However, there are many cases where they do not. And if they are combined into just one value, such as zero or blank, we will never be able to detect such nuances. In fact, I’ve seen many cases where one or more of these missing indicators move together with other “known” values in models. Again, missing data have meanings, too.
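
One common way to preserve those distinctions for modeling is to encode each value, including every flavor of missing, as its own category. A sketch in pandas (the field and its values are hypothetical):

```python
import pandas as pd

# Hypothetical "preferred channel" field mixing real answers with
# several flavors of missing.
channel = pd.Series(
    ["Online", "Retail", "Not Answered", "Unknown", "Online", "Non-match"],
    name="channel",
)

# One-hot encoding keeps "Not Answered", "Unknown" and "Non-match" as
# separate model inputs instead of collapsing them into one blank,
# so a model can detect if they move in different directions.
dummies = pd.get_dummies(channel, prefix="channel")
```

The six records yield five indicator columns, one per distinct value, so “Not Answered” and “Unknown” can earn different model weights if the data warrant it.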

Filling in the Gaps
Nonetheless, missing data do not have to be left as missing, blank or unknown all the time. With statistical modeling techniques, we can fill in the gaps with projected values. You didn’t think that all those data compilers really knew the income level of every household in the country, did you? It is not a big secret that many of those figures are modeled from other available data.

Such inferred statistics are everywhere. Popular variables, such as householder age, home owner/renter indicator, housing value, household income or—in the case of business data—the number of employees and sales volume contain modeled values. And there is nothing wrong with that, in the world where no one really knows everything about everything. If you understand the limitations of modeling techniques, it is quite alright to employ modeled values—which are much better alternatives to highly educated guesses—in decision-making processes. We just need to be a little careful, as models often fail to predict extreme values, such as household incomes over $500,000/year, or specific figures, such as incomes of $87,500. But “ranges” of household income, for example, can be predicted at a high confidence level, though it technically requires many separate algorithms and carefully constructed input variables in various phases. But such technicality is an issue that professional number crunchers should deal with, like in any other predictive businesses. Decision-makers should just be aware of the reality of real and inferred data.

Such imputation practices can be applied to any data source, not just databases compiled by professional data brokers. Statisticians often impute values when they encounter missing values, and there are many different methods of imputation. I haven’t met two statisticians who completely agree with each other when it comes to imputation methodologies, though. That is why it is important for an organization to have a unified rule for each variable regarding its imputation method (or lack thereof). When multiple analysts employ different methods, it often becomes the very source of inconsistent or erroneous results at the application stage. It is always more prudent to have the calculation done upfront, and store the inferred values in a consistent manner in the main database.

In terms of how that is done, there could be a long debate among the mathematical geeks. Will it be a simple average of non-missing values? If such a method is to be employed, what is the minimum required fill-rate of the variable in question? Surely, you do not want to project 95 percent of the population with 5 percent known values? Or will the missing values be replaced with modeled values, as in previous examples? If so, what would be the source of target data? What about potential biases that may exist because of data collection practices and their limitations? What should be the target definition? In what kind of ranges? Or should the target definition remain as a continuous figure? How would you differentiate modeled and real values in the database? Would you embed indicators for inferred values? Or would you forego such flags in the name of speed and convenience for users?
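
However those questions get answered, the winning rules can be captured in one shared routine. Below is a minimal sketch assuming a mean-impute rule with a minimum fill-rate and an indicator flag; both choices are illustrative, not recommendations:

```python
import numpy as np
import pandas as pd

def impute_with_flag(values: pd.Series, min_fill_rate: float = 0.5):
    """Mean-impute a numeric variable only when enough real values exist,
    and return an indicator so inferred values remain distinguishable.
    The 50 percent minimum fill-rate is an illustrative threshold."""
    missing_flag = values.isna().astype(int)
    fill_rate = 1.0 - missing_flag.mean()
    if fill_rate < min_fill_rate:
        return values, missing_flag  # too sparse to project: leave the gaps
    return values.fillna(values.mean()), missing_flag

# Hypothetical income variable with one unknown value.
income = pd.Series([50_000.0, np.nan, 70_000.0, 90_000.0])
income_imputed, income_was_missing = impute_with_flag(income)
```

The indicator column answers the “how would you differentiate modeled and real values” question above: the inferred 70,000 is flagged, so downstream users always know which figures are real.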

The important matter is not the rules or methodologies, but the consistency of them throughout the organization and the databases. That way, all users and analysts will have the same starting point, no matter what the analytical purposes are. There could be a long debate in terms of what methodology should be employed and deployed. But once the dust settles, all data fields should be treated by pre-determined rules during the database update processes, avoiding costly errors downstream. All too often, inconsistent imputation methods lead to inconsistent results.

If, by some chance, individual statisticians end up with the freedom to come up with their own ways to fill in the blanks, then the model-scoring code in question must include missing-value imputation algorithms without exception, granted that such a practice will lengthen the model application process and significantly increase the chances for errors. It is also important that non-statistical users be educated about the basics of missing data and the associated imputation methods, so that everyone who has access to the database shares a common understanding of what they are dealing with. That list includes external data providers and partners, and it is strongly recommended that data dictionaries include the imputation rules employed, wherever applicable.

Keep an Eye on the Missing Rate
Often, we only find out that the missing rate of certain variables has gone out of control when models become ineffective and campaigns start to yield disappointing results. Conversely, it can be stated that fluctuations in missing data ratios greatly affect the predictive power of models or any related statistical work. It goes without saying that a consistent influx of fresh data matters more than the construction and the quality of models and algorithms. It is a classic case of a garbage-in-garbage-out scenario, and that is why good data governance practices must include a time-series comparison of the missing rate of every critical variable in the database. If, all of a sudden, an important predictor’s fill-rate drops below a certain point, no analyst in this world can sustain the predictive power of the model algorithm, unless it is rebuilt with a whole new set of variables. The shelf life of models is definitely finite, but nothing deteriorates the effectiveness of models faster than inconsistent data. And a fluctuating missing rate is a good indicator of such inconsistency.
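
That governance check can be as simple as comparing per-variable missing rates between database updates. A sketch (the snapshot layout and the 10-point tolerance are assumptions for illustration):

```python
import numpy as np
import pandas as pd

def missing_rate_jumps(previous: pd.DataFrame, current: pd.DataFrame,
                       tolerance: float = 0.10):
    """Flag variables whose missing rate rose by more than `tolerance`
    between two database updates. The tolerance is illustrative."""
    jump = current.isna().mean() - previous.isna().mean()
    return jump[jump > tolerance].index.tolist()

# Two hypothetical monthly snapshots of the same critical variables.
jan = pd.DataFrame({"income": [50e3, np.nan, 70e3, 80e3],
                    "age": [34, 45, 51, 29]})
feb = pd.DataFrame({"income": [np.nan, np.nan, np.nan, 60e3],
                    "age": [30, 52, 48, 41]})

# Income's fill-rate collapsed from 75% to 25%; age held steady.
flagged = missing_rate_jumps(jan, feb)
```

Catching the income variable here, before the next scoring run, is far cheaper than diagnosing a mysteriously flat campaign afterward.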

Likewise, if the model score distribution starts to deviate from the original model curve from the development and validation samples, it is prudent to check the missing rate of every variable used in the model. Any sudden changes in model score distribution are a good indicator that something undesirable is going on in the database (more on model quality control in future columns).

These few guidelines regarding the treatment of missing data will add more flavors to statistical models and analytics in general. In turn, proper handling of missing data will prolong the predictive power of models, as well. Missing data have hidden meanings, but they are revealed only when they are treated properly. And we need to do that until the day we get to know everything about everything. Unless you are just happy with that answer of “42.”

LinkedIn Prospecting: How Much Time Should You Spend?

“How much time should you spend on LinkedIn each week?” It’s a noble question. I understand why you ask it. But worrying about time is a dangerous place to start. True, we live in a world where we have limited time for new ideas. But saying, “You should spend X hours per week on LinkedIn” would be disingenuous.

Because there is no credible answer to the question. Instead, the best starting point is simple: Get more leads, faster, by creating a LinkedIn prospecting system.

You will be effective—regardless of how much time you invest on LinkedIn!

Where to Start With LinkedIn
Here’s the skinny: The more success you have with LinkedIn prospecting, the more time you’ll want to invest in it.

So invest time, first, in making sure you experience a little bit of success. Start by making the most out of every minute you commit.

Learn a systematic approach to:

  • Attract potential leads to connect with you
  • Spark questions about what you sell in buyers’ minds
  • Help prospects self-qualify faster

Let’s start today. Pick one of the above as a goal. Let’s commit to taking the first step toward a better LinkedIn prospecting system.

The Problem: Lack of a Good System
Most sales reps struggle with LinkedIn prospecting because they don’t use a system. Or the process they’re committed to doesn’t work.

For example, we’ve been told (by “experts”) to invest time on LinkedIn by:

  • publishing (blogging on the LinkedIn platform)
  • polishing your profile with new features
  • sharing knowledge with Connections and in Groups

Publishing on LinkedIn’s platform, sharing knowledge and polishing your profile might be effective—if they’re part of a system. These tactics, alone, are not enough. If you’ve already tried them you know what I mean!

A Direct Response Copywriting System
“How can I get customers to view content on my profile and be so excited they contact me?”

That’s a better question! One that leads us toward a proven system. A system to get customers curious about you. A better way to provoke response from buyers.

Content that makes customers respond does one thing really well: It uses direct response copywriting to make potential buyers think, “Yes, yes, YES … I can take action on that. In fact, I’ll probably get results from taking this advice.”

Most importantly, buyers must conclude their thoughts with an urge.

“How can I get my hands on more of those kinds of insights/tips?”

This simple idea (using a direct or subtle call to action) is the difference between wasting time on LinkedIn and effectively prospecting with it.

Response is what drives success. It’s what gets you paid. Invest time on LinkedIn with a system that grabs customers’ attention and gets them to respond.

Remember, publishing content on LinkedIn’s blogging platform or posting interesting updates will not work. Not without the direct response element.

More Success = More Time
The more success you have with LinkedIn prospecting, the more time you’ll want to invest in it. It only makes sense to invest time, first, in making sure you experience a little bit of success. Success that you can increase, systematically.

Start your LinkedIn prospecting journey by making the most out of every minute. Commit to LinkedIn, but resist worrying about how many hours per week to invest.

Instead, invest time for a few months in optimizing a prospecting approach. Use this proven system to get started. I guarantee you won’t worry so much about how much time you’re investing. In fact, you will probably want to invest more time in LinkedIn prospecting!

Good luck. Let me know how it goes for you.

How Great Marketers Can Inspire Action

We, as direct marketers, often consider the people we’re selling to as our target market. But we’re selling to people, not targets. To generate response, it’s essential to understand underlying demographics and interests about your customer. While this is a starting point, it’s not likely the tipping point that leads to a prospect becoming a customer. Breaking through requires that you think deeply about your customer and lead them to the answer of “why.” Today we offer a new perspective on defining why customers respond, along with recommended action steps.

A thought-provoking TED Talk by author Simon Sinek, titled “How Great Leaders Inspire Action,” elegantly speaks to the importance of the “why.” The title of this video could just as well have been “How Great Marketers Inspire Action.” Sinek describes a golden circle of “what,” “how” and “why.” The outside ring of the circle, where most marketers approach customers and prospects, is the “what.” The middle ring is the “how.” Direct marketers usually excel at filling in the “what” and “how,” as we translate features into benefits for the logical part of the brain.

But at the core of the golden circle, where decisions are often made in the brain, is the “why.” It’s the emotional response. If your messaging isn’t working, here’s a challenge for you to think more deeply about the “why” of your organization and the product or service you’re selling—to tap the emotions of the prospect.

Here are a couple of critical steps you should take so you can reposition your message in order to tap the golden “why” button.

1. Profile your customers. Most profiles are a treasure trove of demographic, purchase-behavior, interest and other fascinating data points. Profiles can be created for you by several data companies, and it’s affordable to do. But the profile itself is merely the starting point. We’ve used the insights that a profile yields many times to successfully reposition messaging copy and increase response.

2. Interpret the data. Looking at the reports and charts you’ll get from a profile isn’t enough. You must interpret the data. You have to think deeply about what it reveals about your customers. One example of how this works comes from an insurance offer we created. The insight from the profile was that the buyer was usually a woman with an interest in her grandchildren and devotional reading. The approach to selling this product was the usual features and benefits of having life insurance. But we repositioned the message to reveal the “why.” The “why” message led the prospect to realize that the proceeds from a life insurance policy could be a wonderful legacy left for her grandchildren or a favorite charity. The result for the marketer was a double-digit response increase.

So what can you do to improve your results? Here are some action items:

  • Profile your buyers to better understand the “what”
  • Interpret the data and align it with the “how”
  • Transform your message and reveal the “why”

Then test it.

(As an aside, if you plan to attend the DMA Conference next week in Chicago, I’d enjoy the opportunity to meet with readers. You can email me using the link to the left, or just show up at the Target Marketing booth #633 Monday afternoon between 2:00 and 3:00 p.m. Or feel free to introduce yourself if you see me at any time).

The Most Powerful Content Marketing Lesson Learned (That Nobody Is Talking About)

In the last few years, of what’s being called online content marketing, what have we learned? When all the blogs, whitepapers, ebooks, podcasts and YouTube videos have been produced, what can we say we learned, took action on and improved? The single most important lesson learned for me, and in my research, has been how engaging customers should never be the goal. Instead, engagement is the starting point. It’s an open door to get customers to respond to you, your brand.

Engagement has so many of us so wrapped up that we’re failing to realize a key point: Engaging is merely a chance to enter into a journey with a prospect; a trip toward whatever it is they need, desire, hope for or need to avoid.

Engagement should be (if it’s to be effective) the start of a series of “fair exchanges” that guides prospective buyers toward, or away from, what you’re selling.

If it’s not? You’re just engaging for the sake of engaging. You’re not generating, nurturing and closing leads.

Be Provocative to Generate Response
Most marketers and sellers using blogs, video, YouTube, LinkedIn and such in their online content marketing strategy think of customer engagement as a goal. The finish line. But that’s not going to help you get customers to buy online.

Successfully engaging with customers is an opportunity to generate response from them. Actually selling on social media means being a thought provoker, not just a thought leader.

You’ve got to get customers to do something: to begin a journey toward converting to a lead.

Think of it this way:

  • Are you giving customers a reason to talk to you on LinkedIn? A real, compelling reason.
  • Are your blogs so bold they provoke readers to call or email you, or sign up for an offer?

Is whatever you’re doing on social media provoking customers to contact you, so your sales team can help them more clearly understand the thought you just provoked?

Don’t Settle for Engagement
Do you honestly have new knowledge or a new product that can benefit customers in exciting, new ways? Then why would you settle for floating your thoughts out there and hoping to be dubbed a leader?

Does getting your content shared mean that much to you … more than getting leads does?

I realize for some of you the answer will be yes! To those readers I say this: Instead, why not give customers a reason to act on the impulse your thoughts can create? This way, prospects take action on something they really need and want to do … AND you create a lead for yourself.

Tempt Prospects to Act
If you watch what I do in my online training business, you’ll see I’m always tempting prospects to trade their contact information for a better way … or tips on avoiding risks.

I’m teasing prospects into taking an action that I know they want to take.

For example, I like to reveal a small part of a hidden opportunity to them on my YouTube videos.

So … you can engage customers and hope that a focused conversation gets going, or you can cause it directly.

It all starts with realizing engagement is not the goal and knowing how to nurture a lead who isn’t ready to talk about your product or services yet.

So think of engagement as the first step to creating response. If you do, you’ll start making social media sell for you more often.

Good luck and see you in comments below! Feel free to disagree with me, share your successes with this technique, etc.

Creepy Marketing and Social Media: How to Scare Away Your Customers for Good

Halloween is around the corner, so for this week’s post I wanted to turn to a topic that is most definitely apropos: creepy marketing. No, we’re not talking about marketing for Halloween.

What’s creepy marketing, you might ask? Creepy marketing is what happens when personalization goes horribly wrong—when good intentions morph into, well, disturbing communication that has the opposite of its intended effect and, instead of helping a brand push a product or service, sends recipients running for the hills. With the rise of social media and its nearly universal adoption by marketers, it’s high time that marketers learn what not to do when they engage with their customers and prospects.

Fact is, marketers use personalization because it works extremely well. How well? Generally, the more you personalize a message the better it will perform. In a landmark study by Banta Corp. on multichannel marketing, it was reported that incorporating three or four personalized elements in an email boosted its clickthrough rate by 63 percent, and seven or more elements lifted it by an amazing 318 percent!

Wow! With stats like these, you can see why marketers of all stripes have been jumping on the personalization bandwagon like it’s going out of style. During the past few years, we’ve witnessed an explosion of personalized content across the marketing spectrum—direct mail, email, SMS, landing pages … all spiced up by including personalized content or messaging. Out of all of this personalized communication, some has been good, some has been great … and some has been downright creepy.

Last year, I put out a post titled “Creepy Marketing—When Database Marketing Goes Awry,” in which I defined creepy marketing as “if it looks creepy and feels creepy, then it probably is creepy and you shouldn’t do it.” I then went on to point out that actual examples of creepy marketing include writing out a customer’s name along with other personally identifiable information anywhere visible to the general public, and displaying a customer’s age, marital status or medical condition in marketing messaging.

On social media, avoiding creepy marketing takes on a new urgency, as the stakes have been raised considerably. The reason is twofold: First, because social media involve networks of individuals with public exposure, it’s way easier to creep people out. Second, if you do offend someone on social media, then good luck handling the ensuing social media disaster. Offended parties now have the ability to let everyone on their social networks know right away just how unhappy they are—and they usually don’t hesitate to do so.

So how do you avoid creeping people out in social media? On a strategic level, a thoughtful post by Laura Horton that appeared on VentureBeat.com offers five pointers:

1. Be helpful but not pushy;

2. Be a thought leader, if you can;

3. Be careful what you say, even if you know a lot;

4. Reach out if you see active interest in your brand; and

5. Stay on top of social marketing best practices and trends.

I think this list is a good place to start. More tactically speaking, in her blog Kristen Lamb gives us five examples of social media marketing tactics that not only creep individuals out, but probably don’t work very well, either. Her list includes automatically adding people to your firm’s Facebook fan list, and sending out annoying automated promotional messages on Twitter to random people who might have tweeted about topics you think are relevant to whatever product you’re trying to push. Yuck.

Again, I think this list is a good starting point. Though of course, the possibilities for abuse by marketers are probably endless. Have you ever been creeped out by a company on social media? If so, I’d love to hear about it. Please let me know in your comments.

Happy Halloween and happy marketing!

—Rio