Full disclosure: I’m a partner at an ad agency I founded.
There are a kajillion ways to measure the impact of an ad campaign. It all depends on what you want to measure. That, in turn, depends on what business goal(s) the ad was designed to address.
I’m oversimplifying here, but there are generally two schools of advertising: brand and direct response. Brand campaigns are run to increase awareness, improve attitudes toward the brand, increase purchase intent and address a bunch of other goals that are somewhat fuzzy and comparatively difficult to measure. Direct response campaigns are geared toward compelling an immediate action.
DR campaigns are generally more easily measured. Think about Billy Mays and his shitty Mighty Putty ads for a second. Again, I’m oversimplifying, but generally, the company managing that ad buy will determine its effectiveness by how many orders they get for Mighty Putty within a few minutes after the commercial runs. They’ll look at things like where the ad ran, who called the 1-800 number, who came to the website, who placed orders and how many orders resulted in successful credit card transactions. Whether or not the ad was successful depends largely on whether they got enough orders to justify the cost of the ad. There’s a whole wrinkle of DR I’d rather not get into right now that deals with managing your media and production costs to a specific volume of orders at an acceptable Cost Per Order - but you didn’t ask about that.
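The DR math above boils down to simple arithmetic: spend divided by completed orders, compared against what an order is worth to you. Here’s a minimal sketch of that calculation; all the dollar figures and order counts are hypothetical, not from any real buy.

```python
# Rough sketch of DR campaign math; every number here is made up.

def cost_per_order(media_cost: float, successful_orders: int) -> float:
    """Media spend divided by orders that cleared the credit card transaction."""
    return media_cost / successful_orders

# Hypothetical airing: a $5,000 media buy drives 400 calls/visits,
# of which 350 turn into successful credit card transactions.
cpo = cost_per_order(5_000, 350)
print(f"Cost per order: ${cpo:.2f}")

# The buy "works" if the CPO comes in under the advertiser's
# acceptable cost per order (a hypothetical $20 target here).
target_cpo = 20.00
print("Buy justified its cost" if cpo <= target_cpo else "Buy lost money")
```

The real-world version layers in where the ad ran and which phone numbers or URLs were used, so each airing can be attributed and measured separately.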
Branding campaigns have much fuzzier success criteria. Typically, an ad agency sets campaign goals with a client that are derived from their overall business goals. If you were sitting in the meeting, you’d hear something like “We want to increase market share a full point, and we think we can do that if we move baseline awareness up 7 percent.” The ad agency does its thing - designing ads, figuring out where they’re going to run, figuring out how to measure success, etc. In a situation like this, the ad agency might split people into two groups - one made up of people who saw the ad (the “exposed” group) and the other made up of people who didn’t see it (the “control” group). They’ll give both groups a survey, asking all sorts of goofy questions about whether they’re aware of the product or the brand, whether they intend to buy it or not, how they feel about it, etc. Comparing the survey results from the exposed group back to the control group will yield some insights. If the lift in awareness from the control group to the exposed group is 7% or more, the ad would be deemed a success.
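The lift comparison in that example is just the exposed group’s awareness rate minus the control group’s. Here’s a quick sketch of it; the survey tallies are invented, and I’m reading the “7 percent” goal as percentage points for simplicity.

```python
# Sketch of the brand-lift math from the survey example;
# all counts and the 7-point goal are hypothetical.

def lift_in_points(exposed_aware: int, control_aware: int, group_size: int) -> float:
    """Absolute awareness lift in percentage points, assuming equal group sizes."""
    return (exposed_aware - control_aware) * 100 / group_size

# Hypothetical tallies: 310 of 1,000 exposed respondents said they were
# aware of the brand, versus 240 of 1,000 in the control group.
lift = lift_in_points(310, 240, 1_000)
print(f"Awareness lift: {lift:.1f} points")

goal = 7.0  # the client's target from the meeting
print("Deemed a success" if lift >= goal else "Fell short")
```

A real study would also check whether that lift is statistically significant given the sample sizes, but the basic comparison is this simple.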
There are a kajillion different wrinkles here, but that’s the gist of it. Entire businesses have been built on measuring ad effectiveness, the proper mix of media spending across channels, and just about anything else you’d want to know about an ad campaign - one could write an entire book about it.
Incidentally, if you’re surfing the web one day and you see an ad for taking a quick survey, odds are that ad came from one of three companies - Dynamic Logic, Insight Express or FactorTG. These three companies are ad effectiveness measurement companies that are trying to gauge whether your impression of a specific brand has been affected by some of the online ads you’ve seen. If you elect to click on the ad and take the survey, you’ll be asked all sorts of questions about awareness, favorability, purchase intent and whatnot. The survey results will be tallied, and the research company will classify you as “exposed” or “control” based on the information planted in your browser cookies by ad servers. They report this information back to the advertiser so that the advertiser (and its agency) can figure out whether the ads are working or not.
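That cookie-based bucketing amounts to a simple lookup: if the ad server’s cookie shows you were served the campaign, you’re “exposed”; otherwise you’re “control.” A toy sketch of the idea follows; the cookie structure and campaign ID are entirely invented for illustration, not any vendor’s actual format.

```python
# Toy sketch of exposed/control bucketing from ad-server cookies.
# The "ads_seen" cookie field and "campaign_123" ID are made up.

def classify_respondent(cookies: dict) -> str:
    """Bucket a survey-taker by whether the campaign appears in their cookies."""
    return "exposed" if "campaign_123" in cookies.get("ads_seen", []) else "control"

print(classify_respondent({"ads_seen": ["campaign_123", "campaign_456"]}))
print(classify_respondent({"ads_seen": ["campaign_456"]}))
print(classify_respondent({}))
```

Once every respondent is bucketed this way, the survey answers feed the same exposed-versus-control lift comparison used in any brand study.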