“That sounds like a great idea, but what does the data tell us?” In recent years, the principle of evidence-based advertising has taken hold of the industry, bringing the tension between advertising as a science and an art to the foreground. For some, like Professor of Marketing Science Byron Sharp, the answer is clear: in the world of Big Data, evidence must take precedence over conventional wisdom. But what exactly is evidence, and how is it best used?
Broadly speaking, evidence is information that provides a foundation for some belief. As such, raw data can certainly be used as evidence, but so can intuition. After all, I don’t need to consult a spreadsheet to be confident that the sun will rise each morning; my belief is founded on years of experience stored in a mental “database.” Advertising is no different.
This is not to say that hard numbers are superfluous, or that industry expertise holds all the answers. We must recognize, however, that there are different kinds of evidence, and each serves a distinct function in the analytic and decision-making process. To tease out these distinctions, we can turn to the academic world, where evidence is precisely classified.
In academic work, evidence is divided into three categories: primary, secondary, and tertiary. These are distinguished by 1) where the information comes from, and 2) how much interpretation has been layered over it. Primary sources are typically original accounts or physical evidence, while secondary and tertiary sources layer interpretation and analysis over primary information.
Raw consumer data is primary evidence: it expresses documented consumer behavior or literal consumer responses to some prompt. The results of ad effectiveness research, like breakthrough, branding, and brand impact, are all primary evidence; assuming participants responded honestly, they reflect the ad’s impact on consumer perspectives.
Unlike most primary evidence, ad data is typically modeled to make interpretation clear: likeability goes up when more respondents like the ad, brand, or product. But knowing how an ad affects research metrics isn’t in itself an insight, much less a strategy. Consequently, primary evidence must always be subjected to analysis, whose product is secondary evidence.
Analysis synthesizes research results into insights – actionable data-driven learnings – through deductive or inductive logic. The resulting secondary evidence is an interpretation of primary data: it uses facts to support conclusions about why consumers responded as they did, and what that suggests about the ad’s performance.
Secondary sources represent what analysts think primary evidence (consumer responses) meant; they can guide our interpretation thereof, but their validity is contingent on whether the analysis was correct. A quarterly report, for example, may draw on primary evidence to more accurately represent consumer sentiment, but only by taking the risk of misinterpreting the data.
This article is a primary source, as it expresses my perspective on evidence-based ads, but my article on the science of storytelling is a tertiary source: it brings together different analyses to make a broader point about the intersection of neuroscience and advertising. A year-end report that synthesizes quarterly analysis into an overarching narrative is also tertiary evidence.
Tertiary sources are an aggregation of primary and secondary data, forming a curated body of knowledge; they allow us to compare analyses, and to develop a more authoritative interpretation. Our experience in the ad industry can be thought of as tertiary evidence: all the ads and research we have been exposed to nuance our interpretation of new data.
Using primary evidence
Despite its name, primary evidence is almost never the right place to start an investigation. Secondary and tertiary evidence can help us define our questions and develop hypotheses before we begin collecting primary data. When we finally do sit down with a dataset, we should again look to secondary and tertiary sources for context.
In fact, this is a process that most of us undertake without thinking. We design research with the aid of past successes and failures, and when we analyze results, we look for patterns and flags that we’ve seen before. Unfortunately, when we do so implicitly, we may draw on personal assumptions or misguided conventional wisdom instead of evidence-based learnings.
It is here that the balancing act between evidence and experience begins. Disregarding experience squanders years of information we have collected in our mental database; taking its veracity and logical coherence for granted, however, can lead us to baseless conclusions. Secondary documents draw on primary evidence to help substantiate and vet our intuition.
Using secondary evidence
Secondary evidence is both the product of analysis, and a vital tool in the analytical process itself. When we interpret primary data, we create secondary evidence. As primary evidence is rarely contextualized, and often does not lend itself immediately to interpretation, it is usually necessary to draw on existing secondary sources to guide or corroborate our analysis.
When we see consistent consumer behavior, for example, we understand that it likely represents a trend. We’ve seen this type of pattern before: regular observations of some phenomenon have led us to draw a reliable conclusion. In this case, we can rely on evidence-based intuition to analyze the data, but more complex conclusions may require additional evidence.
As noted, our intuition itself is often implicitly secondary evidence. It is important to remember, though, that all secondary evidence must point back to primary sources. It may seem obvious, but this distinction is what separates analysis from assumption. Tertiary evidence is a good way to evaluate secondary sources and the analysis they rely on.
Using tertiary evidence
Tertiary evidence is often a review of secondary sources that evaluates their merit and posits a higher-order conclusion. A collection of case studies in some research methodology or advertising practice is a prime example. Consistency in their results may suggest the existence of an advertising principle, while discrepancies could indicate methodological errors.
The movement from primary to tertiary evidence is one of synthesis and abstraction. This can be extremely useful, but it also distances us from the raw data of primary evidence. Tertiary sources are typically arranged to support a specific argument, necessarily to the exclusion of other insights that might be gleaned from the underlying data.
In other words, tertiary sources result from the analysis of analysis. They can be useful both in evaluating completed research and identifying new research questions. Above all, they unite historical learnings to substantiate industry principles. By evidencing or challenging conventional wisdom, they advance our understanding of consumer behavior and advertising techniques.
Putting it all together
When we commission a new piece of research, we should always begin with extant secondary and tertiary evidence. Whether it is stored in slide decks or our memory, this historical research can inform what questions we ask, and how. That way, we avoid collecting data that isn’t useful, and the data we do collect will be well suited to the questions we need answered.
Through analysis, we then convert the primary evidence we collect into secondary evidence. Again, our interpretation of the raw data will be nuanced by our experiences with similar ads or research. The insights we develop will thus yield usable and reliable conclusions about the ad, brand, or product in question. To present them, we should group them into a piece of tertiary evidence.
Much like the Pyramid Principle, different kinds of evidence build on one another to add meaning and mitigate misinterpretation. Hard data will always be the foundation of our evidence, but without drawing on the secondary and tertiary evidence of experience we are left without a reliable way to structure and interpret this raw information.
Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.