Posted August 7, 2012, 7:10 am


Optimized social publishing is an emerging trend of the past year or two, and it has a lot of hype surrounding it. Essentially, businesses are promised that, using software of one kind or another, their updates will be published at the moment they will achieve maximum reach or engagement.

Statistics teaches us that people behave in measurable, predictable patterns once you have enough of them. We should be able to exploit those patterns to achieve optimal results. But what does that look like in practice? There are tons of conflicting recommendations out there, and it can get confusing.

There are three basic approaches to social publishing optimization. The first is to gather up a lot of different social profiles and analyze their collective data in a broad study. The second is to take the historical data of one profile and make recommendations based on past interaction. The third is to dynamically publish using algorithmic recommendations.

Broad studies try to make recommendations by looking at a high volume of social data. They lump a large number of pages and interactions into one big data set and look for correlations. Are they a good jumping-off point? You bet. But they don’t age well, since patterns of usage change rapidly. For instance, between 2009 and this year, Facebook has tripled in size. Mobile users simply didn’t exist in 2009 – now they’re over half of the user base, and almost 20% use Facebook exclusively on their mobile device. Broad studies also ignore biases that might affect your page, vertical, or company, including things like cultural bias, time zone differences, and other less obvious confounding variables.

The obvious solution to the problem of irrelevant data is to use a Page’s existing numbers instead. Although this solves the problem of potential biases that larger studies face, the method isn’t without fault. Since it looks only at your data, it can only tell you the best of the times you’ve already posted. If you never venture out from Monday and Tuesday at 11 am, it’s going to recommend Monday and Tuesday at 11 am. This can also create a feedback loop, since each time you publish at an algorithmically recommended time, you add more weight to that time.
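To make the feedback loop concrete, here’s a toy sketch of how a historical-data recommender might work. Everything here is hypothetical – the function, the data format, and the numbers are illustrations, not any vendor’s actual algorithm:

```python
from collections import defaultdict

def best_posting_slots(posts, top_n=3):
    """Rank (weekday, hour) slots by average engagement of past posts.

    `posts` is a list of (weekday, hour, engagement) tuples --
    a hypothetical stand-in for a Page's historical data.
    """
    totals = defaultdict(lambda: [0, 0])  # slot -> [engagement_sum, post_count]
    for weekday, hour, engagement in posts:
        totals[(weekday, hour)][0] += engagement
        totals[(weekday, hour)][1] += 1
    averages = {slot: s / n for slot, (s, n) in totals.items()}
    return sorted(averages, key=averages.get, reverse=True)[:top_n]

# A Page that has only ever posted Monday and Tuesday at 11 am:
history = [
    ("Mon", 11, 40), ("Mon", 11, 50),
    ("Tue", 11, 30), ("Tue", 11, 35),
]
print(best_posting_slots(history, top_n=2))
# → [('Mon', 11), ('Tue', 11)] -- the only slots ever tried,
# so the recommendation reinforces itself: the feedback loop above.
```

No matter how good the averaging is, the recommender can never suggest a slot you haven’t tried, which is why deliberately experimenting with new times matters.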

The last approach to social publishing optimization is dynamic publishing. There are several ways to do this. Some tools try to estimate which followers are currently online by examining when they post. Others comb the social space, looking for conversations similar to the yet-to-be-published content. But even this method is flawed. People don’t necessarily read their feeds when they’re posting, and vice versa. Algorithms aren’t always great at determining intent. And social conversations can be heavily skewed by bots, spam, and other hazards. This makes conversational monitoring a dubious basis for relevance.
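The first dynamic technique – inferring when followers are online from when they post – can be sketched in a few lines. This is a hypothetical illustration of the idea, with the caveat from above baked in as a comment:

```python
from collections import Counter

def active_hours(follower_post_hours, top_n=3):
    """Guess when followers are likely online by counting the
    hours of day (0-23) at which they have historically posted.

    `follower_post_hours` is a hypothetical list of hour-of-day
    integers pulled from followers' past post timestamps.

    Caveat: posting time is a proxy for reading time, and people
    don't necessarily read their feeds when they post.
    """
    counts = Counter(follower_post_hours)
    return [hour for hour, _ in counts.most_common(top_n)]

print(active_hours([9, 9, 12, 21, 21, 21]))
# → [21, 9, 12] -- 9 pm is the most common posting hour in this sample
```

Even this simple version shows the assumption the whole approach rests on: that posting activity and feed-reading activity line up.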

Am I saying that optimized publishing is wrong or bad? Not at all. The data and methods these companies apply do show statistical evidence that you can benefit from optimizing your posting times. They can also be a good jumping-off point for creating a schedule for your page. However, the best source for recommendations on when to post what is your own data.

Take a look at your insights or use our helpful and free Google Spreadsheet to begin making hypotheses about the optimal times for your Page to post. Combine this data with your content schedule, and, lastly, make sure you test your theories thoroughly!
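One crude way to test such a theory is to compare average engagement between two candidate posting times. The numbers and function below are hypothetical, not real Insights data – in practice you’d also want enough posts per slot for the comparison to mean anything:

```python
def average(xs):
    return sum(xs) / len(xs)

def compare_posting_times(engagement_a, engagement_b):
    """Compare two posting-time hypotheses by mean engagement.

    Each argument is a list of per-post engagement counts for posts
    published in that slot (hypothetical numbers). Returns the lift
    of slot A over slot B as a ratio; > 1.0 favors slot A.
    """
    return average(engagement_a) / average(engagement_b)

# e.g. Tuesday 9 am posts vs. Thursday 4 pm posts
lift = compare_posting_times([42, 38, 51], [25, 30, 29])
print(f"Slot A lift over slot B: {lift:.2f}x")
```

With only a handful of posts per slot the result is noisy, so keep testing over time rather than committing to a schedule after one comparison.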

Dan Wilkerson is a social media project manager at LunaMetrics, a Google Analytics certified partner that also specializes in social media, search engine optimization, and PPC. You can follow him on Twitter @notdanwilkerson or at @LunaMetrics.