Know When to Use Top-Down and When to Use Bottom-Up Approaches
Market research is grounded in the branch of philosophy known as logic. Two logical reasoning approaches are basic to research design. These approaches are known as deduction and induction.
Deductive reasoning is a top-down approach that works from the general to the specific. In empirical research, that means a market researcher begins a study by considering theories that have been developed around a topic of interest. This approach lets a market researcher review research that has already been conducted and develop an idea about extending or adding to that theoretical foundation. From the topical idea, the market researcher works to develop a hypothesis, which will then be tested in the process of conducting a new study. Specific data collected and analyzed in the new study form the basis of the test: the data will either confirm the hypothesis or fail to confirm it. [It is important to note that a hypothesis that is not confirmed has not been proven false.]
Deductive Research Steps
Inductive reasoning is a bottom-up approach that moves from the specific to the general. In this case, specific refers to an observation made by the market researcher that eventually leads to broad generalization and theory. [It might be important to note – for discussions with colleagues or in public – that the term is bottom-up and not bottoms-up. Bottoms-up is a sort of toast for drinking, something that may seem entirely appropriate once the research study is completed.]
An inductive research methods approach begins with specific observations made by a market researcher who begins a study with an idea or a topic of interest, just as in a deductive approach to research. However, in an inductive approach, the researcher does not consider related theories until much later in the research. From the observations or measures that the market researcher conducts – generally in the field and not in a laboratory setting – clusters of data or patterns begin to emerge. From these regularities or patterns, the market researcher generates themes that come from analysis of the data.
Inductive Research Steps
If the market researcher is conducting quantitative research, theories can be considered at this point. However, if the market researcher is conducting qualitative research, formal hypothesis testing does not take place. Rather, the market researcher may form generalizations based on the strength of the data and the themes that have emerged.
Data collection and data analysis in qualitative research are iterative. That is to say, data collection doesn’t happen all at once and then — as though the market researcher has thrown a switch — data analysis begins. Rather, some data is collected and considered by the researcher, then some more data is collected and considered, and so on. At a certain point, when sufficient data clusters or patterns have emerged, the market researcher will decide that the data collection can slow, stop, or change direction.
Data collection and data analysis in quantitative research are distinct stages. To mingle them in the manner of qualitative research would compromise the integrity of the data. Some scientists would say that a lack of boundaries between the data collection and data analysis processes causes the data to become contaminated and the research to lack rigor. Findings from such compromised research would not be viewed as robust.
Bottom-up research methods feel less structured, but they are no less scientific than structured top-down research methods. Because each type of research approach has its own advantages and disadvantages, it is not uncommon for a study to employ mixed methods. A market researcher who uses mixed methods applies a deductive research approach to the components of the study that show strong theoretical ties, and an inductive research approach to the components of the study that seem to require a more exploratory inquiry.
It’s a misrepresentation to picture deductive approaches and inductive approaches as two sides of the same coin. In practice, they are two ends of a continuum. Deductive research is associated with linearity and a search for causal relationships; inductive research is associated with depth of inquiry and description of phenomena. Mixed methods sit at about the midpoint of that continuum, with an emphasis on research breadth.
This article contains a much-simplified explanation of deductive and inductive inquiry. There are many layers to market research, and the content in this article just begins to scratch the surface. For instance, if we consider the philosophical grounding of deductive and inductive reasoning, we might refer to the approaches as positivistic and naturalistic.
Deductive reasoning happens when a researcher works from the more general information to the more specific. This is sometimes called the “top-down” approach because the researcher starts at the top with a very broad spectrum of information and works down to a specific conclusion. For instance, a researcher might begin with a theory about his or her topic of interest. From there, he or she would narrow that down into more specific hypotheses that can be tested. The hypotheses are then narrowed down even further when observations are collected to test them. This ultimately leads the researcher to test the hypotheses with specific data, confirming (or not) the original theory and arriving at a conclusion.
An example of deductive reasoning can be seen in this set of statements: Every day, I leave for work in my car at eight o’clock. Every day, the drive to work takes 45 minutes, and I arrive at work on time. Therefore, if I leave for work at eight o’clock today, I will be on time.
The deductive statement above is logically valid, but it relies on its premises being correct. Perhaps today there is construction on the way to work, and you will end up being late. This is why a hypothesis can never be completely proved: there is always the possibility that an initial premise is wrong.
Inductive reasoning works the opposite way, moving from specific observations to broader generalizations and theories. This is sometimes called a “bottom-up” approach. The researcher begins with specific observations and measures, then detects patterns and regularities, formulates some tentative hypotheses to explore, and finally develops some general conclusions or theories.
An example of inductive reasoning can be seen in this set of statements: Today, I left for work at eight o’clock and arrived on time. Therefore, every day that I leave the house at eight o’clock, I will arrive at work on time.
While inductive reasoning is commonly used in science, it is not always logically valid because it is not always accurate to assume that a general principle is correct. In the example above, perhaps ‘today’ is a weekend with less traffic, so if you left the house at eight o’clock on a Monday, it would take longer and you would be late for work. It is illogical to assume an entire premise just because one specific data set seems to suggest it.
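The commute examples above can be sketched in a few lines of code. This is only a toy illustration (all names and numbers are mine, chosen for the example): deduction applies a general premise to a specific case, so its conclusion is guaranteed if the premises hold, while induction infers a general rule from a handful of observations, and that rule remains vulnerable to the next observation.

```python
# Toy sketch of deduction vs. induction, using the commute example.
# All values are illustrative.

COMMUTE_MINUTES = 45          # general premise: the drive takes 45 minutes
DEPARTURE = 8 * 60            # leave at 8:00 (minutes past midnight)

# Deduction: apply the general premise to a specific case.
# If the premises hold, the conclusion must hold.
def deduce_arrival(departure=DEPARTURE, drive=COMMUTE_MINUTES):
    return departure + drive  # arrive at 8:45, *given* the premises

# Induction: generalize a rule from specific observations.
# The rule is only as strong as the sample it rests on; one Monday
# with construction breaks it, which is why induction yields
# hypotheses to test, not proofs.
observed_drives = [45, 44, 46, 45]                    # a few specific mornings
inferred_rule = sum(observed_drives) / len(observed_drives)
```

The asymmetry the articles describe lives in those two comments: the deductive function can only be wrong if a premise is wrong, whereas the inductive average can be wrong even when every observation in it is accurate.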
By nature, inductive reasoning is more open-ended and exploratory, especially during the early stages. Deductive reasoning is more narrow and is generally used to test or confirm hypotheses. Most social research, however, involves both inductive and deductive reasoning throughout the research process. The scientific norm of logical reasoning provides a two-way bridge between theory and research. In practice, this typically involves alternating between deduction and induction.
A good example of this is the classic work of Emile Durkheim on suicide. When Durkheim pored over tables of official statistics on suicide rates in different areas, he noticed that Protestant countries consistently had higher suicide rates than Catholic ones. His initial observations led him to inductively create a theory of religion, social integration, anomie, and suicide. His theoretical interpretations in turn led him to deductively create more hypotheses and collect more observations.
Babbie, E. (2001). The Practice of Social Research (9th ed.). Belmont, CA: Wadsworth Thomson.
Shuttleworth, M. (2008). Deductive Reasoning. Retrieved November 2011 from Experiment Resources: http://www.experiment-resources.com/deductive-reasoning.html
Brand experiences should reach us through all the senses. Starbucks already knew this, and just when many might have wondered what more it could possibly improve in its stores, it touched one of the most obvious and at the same time most important points: the cup you drink from.
Starbucks Reinvents The Coffee Cup. fastcodesign.com
The coffee brand abandons its ubiquitous paper cup for a new highly styled version aimed at tea drinkers.
Starbucks may not be the best coffee, but add up all the touchpoints, from the exterior appearance of the place to the signage, the decor, the furniture, the music, the counter, the arrangement of the products, the reading material, the service, the receipt, and so on. The sum of everything does make it the best coffee.
Can you say the same of other products? The truth is that few measure up.
Can you imagine what the experience of Bachoco or Pilgrim’s chicken should be like? Bachoco’s billboards are famous, but they are no longer enough. Not to mention gelatin, baking powder, or brooms. Each and every category or family of products is full of opportunities. The difference lies in how they are seen by the owners, directors, and marketers, by the stores, and by the people who live with them.
Of course, I can imagine many will tell me it’s not possible to go that far. Really? I see multiple possibilities and options. Having worked with companies both large and small, I am sure it is possible to go much further and do much better, with details as simple as... the Starbucks to-go cup.
There is much talk of innovation; I invite you to innovate. To live with me in this world so full of talk about ideas and new solutions. That world is here, and I am here with you.
Have a good weekend.
Federico Hernández Ruiz
Founding consultant at asimetagraf
One of the endearing characteristics of poker is how unapologetically it embraces the power and value of lying. Unlike every other game in a casino, understanding the laws of probability is only part of the game. To be a really good poker player, you have to be both a really convincing bluffer and an astute enough observer of your opponents to know when they are bluffing.
Of course, all poker players know this, so they do their best to avoid showing their “tells”: those unconscious behavioral tics that might communicate to the rest of the players how good or bad their cards really are. They might drum their fingers, wrinkle their brow, play with their chips, change their breathing; the list goes on. The specific cards they’re holding are the key data their opponents want to access, so there’s real value in hiding the exact nature of that data for as long as possible.
If all you do in a poker game is stare at the back of your opponent’s hand and add up the cards visible on the table and in your own hand, you’re never going to notice that your opponent puts his hand in front of his mouth when he’s holding good cards. Once you figure that out, you’re going to look for that first, and then use the other, more obvious data as support.
Fortunately, other complex systems of data have their own unintended, but revealing, “tells” themselves. Just as in poker, the key is being observant and clever enough to know what to look for, and where to look for it. As you may have expected by now, these tells usually come wrapped in heavy doses of noise. At first glance, that noise might seem completely irrelevant to the topic of interest, but it is in fact tied so tightly to an underlying truth that it can provide a valuable shortcut to the nature of the systems we’re trying to understand. That’s why I’m so interested in them. I’ve collected a few of these here, but would love to know about any others that you use yourself, or have heard of.
In a globally integrated economy, the true relative value of currencies is a crucial piece of data, and an incredibly complex one. Currency traders looking for arbitrage opportunities want even the most fleeting edge in understanding this complexity in real time, and investors looking for longer-term trends can crunch all sorts of government and international trade data to figure it out. Or, you can just track the price of a McDonald’s Big Mac around the world. That’s what The Economist magazine did back in 1986, and they’ve kept it up ever since. Once the chuckling subsided, many financial analysts recognized the wisdom and power behind the metric, and it has since earned genuine respect for its accuracy.
The insight behind it is in recognizing how unique a product a Big Mac is, and how its price is tied to so many important and complex metrics. Unlike most other globally available consumer products, it must be made locally. With the exception of beef-averse India, it is made with the same recipe and ingredients, according to a strict protocol, using local labor, on-site in the restaurant. That one sentence necessarily combines variables such as agricultural production and commodity prices, the efficiency of transportation and supply networks, wage rates, real estate values, energy costs, marketing and media costs, local and national taxation policies and rates, inflation and competitive pressures, and on and on. And, it’s a consumer product that can be purchased often, so it needs to adapt to changing conditions far more frequently than durable goods like cars or housing. All of these factors are then represented, completely unintentionally but out of necessity, in the price of that humble Big Mac.
The Chinese government might come up with all kinds of official rebuttals to international demands to more accurately value its currency, but the Big Mac Index is telling a different story. You just can’t deny the authority of what the world is willing to pay for two all beef patties special sauce lettuce cheese pickles onions on a sesame seed bun.
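The arithmetic behind the index is simple enough to sketch. Here is a minimal, hypothetical illustration (the prices and exchange rate below are made-up placeholders, not The Economist’s data): dividing a Big Mac’s local price by its U.S. price gives an implied purchasing-power-parity exchange rate, and comparing that to the market rate suggests how over- or under-valued a currency may be.

```python
# Hedged sketch of the Big Mac Index arithmetic.
# All numbers are illustrative placeholders, not real survey data.

def big_mac_valuation(local_price, us_price, market_rate):
    """Return the implied PPP rate and % over/under-valuation vs. the dollar."""
    implied_ppp = local_price / us_price                    # local units per USD
    valuation_pct = (implied_ppp - market_rate) / market_rate * 100
    return implied_ppp, valuation_pct

# Suppose a Big Mac costs 16.00 yuan locally and 4.00 USD in the US,
# while the market exchange rate is 6.40 yuan per dollar.
ppp, pct = big_mac_valuation(16.00, 4.00, 6.40)
print(f"Implied PPP: {ppp:.2f} yuan/USD, valuation: {pct:+.1f}%")
# → Implied PPP: 4.00 yuan/USD, valuation: -37.5%
```

A negative valuation means the burger is cheaper locally than the market exchange rate would predict, hinting that the currency is undervalued against the dollar, which is exactly the kind of story the index tells about a managed currency.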
Closer to home, there are all sorts of attempts made to figure out how the economy is going, or not going, in a given year. These are usually a combination of extrapolations of previous quarterly performance, and a tangle of expert expectations, predictions, and surveys of intangibles like consumer confidence. Of course, these work to some extent, but many are based on perceptions and public statements that are not always truthful, for the sake of either political spin, or to influence investor confidence. It all gets complicated and contradictory very quickly.
One simple metric that has withstood the test of time is the humble cardboard box order. Tracking sales of corrugated boxes turns out to be an elegant indicator of a wide range of economic factors, incorporating both the precision needed for running intricate supply chains, and the intangibles of sentiments like confidence, fear, and risk management.
Its accuracy is traced to both the ubiquity of corrugated boxes (80% of all non-durable goods are shipped in them) across manufacturers and retailers, and in their position as a cost center for these businesses. You cannot ship merchandise, and therefore book sales, without boxes, no matter what your product inventory is like. So, running out of boxes would be a horrible mistake to make. Yet, it costs money to buy and store excess boxes, so companies are reluctant to order more than they really need. So, orders for boxes neatly represent the market for, well, almost everything, as well as the intangibles of forward-looking expectations in exactly the time frame that producers and retailers are willing to commit money towards them.
I’ve been running market research projects for 27 years, but I can’t think of a survey question that would deliver that same combination of information so succinctly and truthfully.
Customer demographics are an important metric for any business to understand, but even when you can afford to generate or buy that kind of data, it often offers little true insight into what those actual people are really like. And many times, small and medium-sized businesses really can’t afford to spend money on that data.
If your target audience is visible at a specific place, such as shopping malls, event venues, or other public spaces, you can learn a great deal about them by simply parking yourself in a central location and watching nothing but their shoes go by. Shoes are another one of those data-intensive “tells” that can speak volumes to an astute observer. Again, it’s because of all the noise-like data that is unintentionally revealed through a person’s choice of shoe.
Shoes are an accessory so common that their absence in many situations is itself a powerful indicator: if you’re not at a beach or pool, not having shoes on in public is either a willful statement of independence or an unfortunate indicator of genuine poverty. It’s one reason why having to take your shoes off for airport security feels like such an intrusion, beyond its inconvenience.
But for the most part, the attention people do or do not pay to their shoes is a reliable reflection of their attitudes towards fashion vs comfort, their income level, their evaluation of the importance of the location or event they are attending, even their relationship status (for instance, tall women wearing spike heels are usually already in a relationship; they’re not worried about being taller than the men they might run into). Like many of these kinds of metrics, they are best interpreted by those with a really deep understanding of the metric itself; the more you know about shoes, the more you can know about people by looking only at their shoes.
In each of these examples, the common element is the counter-intuitive step of deliberately moving your focus away from what you really want to see. It’s like those often maddening 3D posters, where the only way to see the hidden image is to un-focus your eyes while continuing to stare right at the image (like poor Mr. Pitt was trying to do in an episode of Seinfeld). The picture is in there, both hidden and defined by the noise that surrounds it.
If you do develop your own metrics like this, you need to consider the risk of sharing them. As with figuring out a “tell” in your poker opponent, revealing the metric could destroy its effectiveness. The operative skill in this approach is the ability to notice, and that ability depends on a willingness to pay attention to noise in a broader context, on the mindfulness required to recognize the patterns, and on the freedom to fail for a long time before that expertise develops.
So, consider what you are already an expert about, even if it might seem to have nothing to do with the topic you want to understand. How does that expertise allow you to recognize and interpret what someone else would completely miss? That’s what we’d love to know. Please post your own Noisy Dirty Shortcuts in the comment area below.
Original Link: http://method.com/ideas/10×10/the-art-of-noise
If you were forced to rely on only two target audiences to guide all your future design work, I’d strongly recommend using astronauts and toddlers. Fortunately, the connection between them goes beyond the design of their underwear to the nature of perception and expertise, and in what we treat as valid data, and what we choose to ignore as “noise”–the extraneous details, out-of-category input, the anecdotal tidbits. As it turns out, noise is much more valuable for useful design insights than you might think.
First, the astronauts. One little-known quirk of the Apollo moon landings was the difficulty the astronauts had judging distances on the Moon. The most dramatic example of this problem occurred in 1971 during Apollo 14, when Alan Shepard and Edgar Mitchell were tasked with examining the 1,000-foot-wide Cone Crater after landing their spacecraft less than a mile away. After a long, exhausting uphill walk in their awkward space suits, they just couldn’t identify the rim of the crater. Finally, perplexed, frustrated, and with the oxygen levels in their suits running low, they were forced to turn back. Forty years later, high-resolution images from new lunar satellites showed they had indeed come close: the trail of their footprints, still perfectly preserved in the soil, stops less than 100 feet from the rim of the crater. A huge, 1,000-foot-wide crater, and they couldn’t tell they were practically right on top of it. Why?
It should have been easy for them, right? These guys were trained as Navy test pilots; landing jets on aircraft carriers requires some expertise in distance judgment. They also had detailed plans and maps for their mission and had the support of an entire team of engineers on Earth. But their expertise was actually part of the core problem. The data their minds were trying to process was too good. All of the “noise” essential to creating the patterns their minds needed to process the data accurately was missing. And patterns are the key to human perception, especially for experts.
Consider everything that was missing up there. First, there’s no air on the Moon, so there’s no atmospheric haze, either. Eyes that grew up on Earth expect more distant objects to appear lighter in color and have softer edges than closer things. Yet everything on the Moon looks tack-sharp, regardless of distance. Second, the lack of trees, telephone poles, and other familiar objects left no reference points for comparison. Third, since the Moon is much smaller than the Earth, the horizon is closer, thus ruining another reliable benchmark. Finally, the odd combination of harsh, brilliant sunshine with a pitch-black sky created cognitive dissonance, causing the brain to doubt the validity of everything it saw.
Ironically, that kind of truthful, distortion-free data is usually what experience designers want to have as input for their decision-making, no matter what they’re trying to do. We tend to believe that complex systems are the tidy, linear sum of the individual variables that create them. But despite the pristine environment of the Moon, the Apollo astronauts were repeatedly baffled when it came to simple distance and size perceptions, even after each team came back from the Moon and told the next team to be aware of it.
Meanwhile, the toddlers I mentioned earlier provide a corresponding example of the power of patterns in perception. When my first child was about 4, we came across a wonderful series of picture books called Look-Alikes, created by the late Joan Steiner. Each book has a collection of staged photographs of miniature everyday scenes like railway stations, city skylines, and amusement parks created entirely from common, found objects (see some examples here). Without any special adornment, a drink thermos masquerades as a locomotive, scissors become a ferris wheel, and even a hand grenade makes for a very convincing pot-belly stove. The entire game is to un-see the familiarity of the scene, and identify all the common objects ludicrously pretending to be something other than what they are. There’s no trick photography involved, but you can look at each picture for hours and not “see” everything that’s right there in front of you. You know it’s a trick, but you keep falling for it over and over.
The really amazing part is that the toddler, a true novice with only a few years’ experience in seeing, completely understands the scenes she’s looking at, even though every individual piece of “data” she’s looking at is a deliberate lie. Yet the pattern of data that creates the scene is “perfect.” We already know what those scenes are supposed to look like before we even see the book’s version of them, so we unconsciously project that pattern onto what we’re looking at, even to the point of constantly rejecting the contrary data our eyes are showing us. There is in fact no amusement park in the photograph I called an amusement park. But I see it anyway.
In data-processing parlance, the signal-to-noise ratio of the moonscape was perfect (actually, infinitely high), and zero for Look-Alikes pages (the whole joke is that there really was no signal there in the first place). Yet a toddler can read the noisy scene perfectly, and the seasoned test pilots were baffled by the noiseless scene. How can this be?
The lesson is that patterns drive perception more so than the integrity of the data that create the patterns. We perceive our way through life; we don’t think our way through it. Thinking is what we do after we realize that our perception has failed us somehow. But because pattern recognition is so powerfully efficient, it’s our default state. The thinking part? Not so much.
This just might be why online grocery shopping has yet to really take off. The average large U.S. supermarket offers about 50,000 SKUs, yet a weekly grocery shopper can easily get a complete trip done in about 30 minutes. We certainly don’t feel like we’re making 50,000 yes/no decisions to make that trip, but in effect we actually do. Put that same huge selection online, and all of those decisions are indeed conscious. Even though grocery shopping is a repetitive, list-based task, the in-store noise of all those products that aren’t on your list give you essential cues to finding the ones that are, and in reminding you of those that were not on your list but you still need. That’s even before you get to the detail level, where all the other sensory cues tell you which bunch of bananas is just right for you. So despite all the extra effort and hassle involved in going to the store in person, it still works better because of, not in spite of, the patterns of extraneous noise you have to process to get the job done.
To account for the role of noise within the essential skill of pattern recognition, we need to remind ourselves how complex seemingly simple tasks really are. Visually reading a scene, whether it’s a moonscape, a children’s book illustration, a grocery store, or a redesigned website, is an inherently complex task. Whenever people are faced with complexity (i.e., all day, every day), they use pattern recognition to identify, decipher, and understand what’s going on instantly, instead of examining each component individually. The catch is that all of the valuable consumer thought processes we want to address–understanding, passion, persuasion, the decision to act–are complex.
However, the research we use to help us design for these situations usually tries to dismantle this complexity. It also assumes a user who is actually paying attention, undistracted, in a clean and quiet environment (such as a market research facility), and cares deeply about the topic. Then we “clean” the data we collect, in an attempt to remove the noise. And getting rid of noise destroys the patterns that enable people to navigate those complex functions. So we wind up relying on an approach that does a poor job of modeling the system we’re trying to influence.
The challenge is to overcome the seemingly paradoxical notion that paying attention to factors completely outside our topic of interest actually improves our understanding of that topic. Doing so requires acknowledging that our target audience may not care as much about something as we do, even if that topic represents our entire livelihood. It requires a broader definition of the boundaries of what that topic is, and including the often chaotic context that surrounds it in the real world. It also requires a more than casual comfort level with ambiguity: Truly understanding complex systems involves recognizing how unpredictable, and often counterintuitive, they really are.
This is why ethnographic research is so popular with all kinds of designers. The rich context ethnographies offer is full of useful noise: the improvising people do to actually use a product, the ancillary details that surround it, and the unexpected motivations a consumer might bring to its use. These are all easier to access via a qualitative, on-location approach than via a set of quantitative crosstabs or from behind a mirror watching a focus group. It’s also a powerful human-to-human interface, in which the designer uses his innate pattern-recognition capability to analyze patterns in user behavior.
What often gets overlooked is the role noise can and should play in quantitative research. Most designers avoid quantitative research because of the clinically dry nature of the charts it produces, and the often false sense of authority that statistically projectable data can wield. However, only quantitative research can reveal the kind of perceptual patterns that are invisible to qualitative methods, and the results needn’t be dry at all. The solution is to introduce the right kind of noise into quantitative research: to deliberately drop in the necessary telephone poles, trees, and haze that allow those higher-level perceptual patterns to be seen and interpreted.
Fortunately, there’s already a model for this. When analog music is digitally recorded, some of the higher highs and lower lows are lost in the conversion. Through a process called dithering, audio engineers add a small amount of randomized noise to the signal during the conversion. Strangely enough, even though the added noise has nothing to do with the original music, adding it actually improves the perceived quality of the digital audio file. The noise fills in the gaps left by the analog-to-digital conversion, essentially tricking your ear into hearing a more natural sound. The dithered audio really isn’t more accurate; it just sounds better, which is more important than accuracy. Returning to our opening examples, the moonscape was in dire need of dithering, while the Look-Alikes scenes were already heavily dithered. And the real world in general is heavily dithered.
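As a concrete illustration of why adding noise can help, here is a small sketch of dithered quantization in the audio sense. It is a deliberately simplified model (a 4-bit quantizer and a plain sine wave, both arbitrary choices for the example): without dither, the quantization error is a deterministic function of the signal, which the ear hears as distortion; with triangular (TPDF) dither added before quantizing, the error behaves instead like benign broadband noise.

```python
import math
import random

random.seed(0)
STEP = 2 / 2**4                         # 4-bit quantizer step size over [-1, 1]

def quantize(x):
    """Snap a sample to the nearest quantizer level."""
    return round(x / STEP) * STEP

# A plain 5 Hz sine wave, sampled 1000 times over one second.
signal = [0.5 * math.sin(2 * math.pi * 5 * i / 1000) for i in range(1000)]

# Undithered quantization: the error pattern tracks the signal (distortion).
plain = [quantize(x) for x in signal]

# TPDF dither: add triangular noise (sum of two uniforms) before quantizing,
# decorrelating the error from the music at the cost of a higher noise floor.
def tpdf():
    return (random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)) * STEP

dithered = [quantize(x + tpdf()) for x in signal]
```

The dithered samples carry slightly more total error (up to 1.5 quantizer steps instead of 0.5), yet listeners prefer the result because the error no longer correlates with the signal. That is exactly the trade described above: giving up a little accuracy to gain meaning.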
So, for quantitative research aimed at guiding the design process, the trick is to value meaning above accuracy. Meaning can be gleaned via the noise you can add to the quantitative research process by including metrics outside the direct realm of your topic area. It means considering what else is adjacent to that topic area, acknowledging the importance of respondent indifference as well as their preferences, and recognizing what kind of potentially irrational motivations are behind the respondents’ approach to the topic, or the research itself.
At Method, we’ve developed a technique for observing these perceptual patterns in quantitative data by using perceptions of brands far afield of the category we’re designing for. Essentially, it’s a dithering technique for brand perceptions. This technique often displays an uncanny knack for generating those hiding-in-plain-sight aha moments that drive really useful insights. There are doubtless many other approaches you can employ once you make the leap that acknowledges the usefulness of noise in your analysis.
But no matter what format of research you use in your design development process (including no formal research at all), there are some guidelines you can follow to allow the right amount of useful noise to seep into your field of view, so that your final product does not wind up being missed on the moonscape of the marketplace:
Recognizing that you’re not the center of your target audience’s universe allows you to understand how you fit in. Be sure to take honest stock of just where your target audience places your topic area on their list of priorities.
No matter what metrics you’re using, consider looking several levels above them, or next to them, to identify patterns that are impossible to see when you’re too close to the subject.
How familiar is your target audience with your subject? Are they experts or novices, and how are you defining that? Generally, the higher the level of expertise, the higher the dependence on pattern recognition. Novices carefully and slowly compare details; experts read patterns quickly and act decisively.
No matter where your data comes from, think about what has been omitted. Was that distracting noise that was tossed, or crucial context?
By taking in the entire picture, instead of isolating a single data point, you open up opportunities for understanding the motivations, reasons, and outlying factors that shape the data. Contrary to the popular practice of stripping it out, noise is in fact critical to generating the deep insights that allow us to design better and more effective brands, products, and services.
[Image: Supermarket via Shutterstock]
June 11, 2012
The burgeoning Latin American digital media market represents an amazing opportunity for content creators. Accounting for more than 7% of global Internet users, Latin America is home to the emerging markets of Brazil and Argentina, where 79% and 28% of the population, respectively, consume content on the Web, a combined population of more than 100 million. If you add Mexico, where 30% of the country’s 112 million people use the Internet, the total grows to 130 million Internet users.
In Latin America, Facebook accounted for 25% of all time spent online, and social networking in general accounted for nearly 30% of online minutes at the end of 2011, an increase of 9.5% over the previous year. In addition to social media usage, online video consumption increased more than 10% across Brazil, Mexico, Argentina, and Chile, and online retail visits increased 30%. The number of searches in 2011 increased 38% to more than 21 billion, and with an average of 173 searches per searcher, Latin America leads the globe in search frequency.
The U.S. Hispanic market represents an equally important demographic. More than 33 million Hispanics were online in September of last year, representing 15% of the U.S. online market, a demographic that is growing online three times faster than the general market. Eighty percent of online Hispanics use a search engine each month, and the same share visits Facebook each month.
Content creators must focus on the Latin American and U.S. Hispanic markets in order to maximize overall content viewership and engagement. Reaching English-speaking content consumers in the U.S. and south of the border has never been more important, and that importance will only grow in the coming years. Moreover, creating and distributing Spanish-language content in the U.S. and Latin America is an equally important objective.
Understanding and working within these communities will enable brands and publishers to attract a portion of the world that will dominate digital content consumption in the coming years. The creation of relevant content and finding partners to help distribute that content must be among your top priorities.
With all of this in mind, Outbrain is honored to have been named the Top Digital Media Innovator in the Latin World at the 2012 Latin American Advertising and Media Awards at the Portada Hispanic Media Conference. The award honors companies in Latin America, the United States Hispanic market and Spain for excellence in media and digital advertising. We are particularly humbled to have been nominated alongside the following innovators:
“So much of the Portada Conference focused on the power of storytelling and producing great content,” said Erik Cima, VP of Hispanic Markets at Outbrain. “Winning this award is satisfying because we’re playing a part in helping the Hispanic and Latin American markets surface and distribute that great content.”
Image via Captura Group
Link to original article: http://www.outbrain.com/blog/2012/06/the-latin-american-and-hispanic-digital-opportunity-are-you-prepared.html