The Impact of Big Data

The New York Times recently published an interesting article, The Age of Big Data, on the Big Data phenomenon and the opportunities that accompany the surge of information from new sources. In last week’s blog, we noted that a McKinsey Global Institute study found that the demand for skilled data analysts will increase significantly, and the article discusses how this is just the beginning of the shift toward a deeper analytical understanding of Big Data.

“It’s a revolution,” says Gary King, director of Harvard’s Institute for Quantitative Social Science. “We’re really just getting under way. But the march of quantification, made possible by enormous new sources of data, will sweep through academia, business and government. There is no area that is going to be untouched.”

The article examines, with plenty of examples, why Big Data influences fields such as business, government, and economics, and, more importantly, why decision-making will increasingly be based on data analysis. It presents a compelling comparison between the impact of Big Data and the invention of the microscope. Erik Brynjolfsson, an economist at the Massachusetts Institute of Technology’s Sloan School of Management, explains that the microscope dramatically changed measurement by making it possible to see things at the cellular level.

Data measurement, Professor Brynjolfsson explains, is the modern equivalent of the microscope. Google searches, Facebook posts and Twitter messages, for example, make it possible to measure behavior and sentiment in fine detail and as it happens.

Along with data measurement, the ability to integrate, visualize, and analyze data will be the key to accurate and timely decision-making for organizations tackling Big Data. As we’ve previously discussed on this blog, Big Data has a significant impact on social network behaviors. The article offers a good example of the opportunity Big Data creates: the ability to understand key patterns and relationships in ever-growing volumes of data from varied sources and of different types.

Researchers can see patterns of influence and peaks in communication on a subject — by following trending hashtags on Twitter, for example. The online fishbowl is a window into the real-time behavior of huge numbers of people. “I look for hot spots in the data, an outbreak of activity that I need to understand,” says Jon Kleinberg, a professor at Cornell. “It’s something you can only do with Big Data.”
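As a rough illustration of what looking for hot spots in communication data can mean in practice, the short Python sketch below counts messages per hour for a hashtag and flags hours whose volume jumps well above the recent baseline. The timestamps and the threshold rule are illustrative assumptions, not anything described in the article.

    import pandas as pd

    # Illustrative message timestamps for a single hashtag (assumed data).
    timestamps = pd.to_datetime([
        "2012-02-12 09:05", "2012-02-12 09:40", "2012-02-12 10:15",
        "2012-02-12 10:20", "2012-02-12 10:25", "2012-02-12 10:40",
        "2012-02-12 10:50", "2012-02-12 10:55", "2012-02-12 11:30",
    ])

    # Count messages per hour.
    counts = pd.Series(1, index=timestamps).resample("1H").sum()

    # Flag "hot spots": hours whose volume is well above the trailing average.
    baseline = counts.rolling(window=3, min_periods=1).mean().shift(1).fillna(counts)
    hot_spots = counts[counts > 2 * baseline + 1]
    print(hot_spots)

A real pipeline would operate at far larger scale and on streaming data, but the idea is the same: establish a baseline and surface the outbreaks of activity that are worth a closer look.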

Overall, the article is a good discussion of the impact of Big Data, with real-world examples of how Big Data is influencing society and business and shaping critical decision-making.


“Big Data” and “Next-Generation Analytics” on Gartner’s Top 10 Strategic Technology Trends for 2012 List

On Monday at the Gartner Symposium, David Cearley presented Gartner’s annual list of the Top 10 Strategic Technology Trends for 2012. Among those that made the list are big data and next-generation analytics. Although listed separately, the two technologies go hand in hand and, together, can provide the most compelling and effective experience for exploring and navigating complex data.

[Figure: Gartner’s Top 10 Strategic Technology Trends for 2012. Source: PC Magazine]

It comes as no surprise that the two items were listed, as organizations are besieged every day by explosive growth in information coming from disparate data sources. This data can be structured, semi-structured, or unstructured. With annual data growth rates well over 100%, analyzing and understanding big data has become a top priority.

For far too long, organizations have spent too much money and too many resources on collecting data, scrambling to ensure data integrity, and, finally, wondering how to make use of it. More and more, organizations are starting to realize the need for technologies that can help them maximize the value of their data assets.

Traditional analytics, which rely on computers and algorithms to do the work, are increasingly seen as limiting, especially with the growth of unstructured data. As Gartner continues to push big data and next-generation analytics, organizations need to adopt technologies that can drive business decision-making in ways not possible before. This new wave of analytic techniques, advanced visual analysis, harnesses the most powerful pattern-recognition system available: the human brain. Advanced visual analysis will help organizations discover key insights that were hidden in the past and turn the challenges of big data into opportunities.

Big Data = Big Opportunities through Advanced Visual Analytics

We live in a fascinating data-driven world full of challenges and opportunities – the world of Big Data. With the continuing, explosive growth of data in terms of quantity, diversity of sources and types, and speed, there are tremendous opportunities to explore this “Brave New World of Big Data” to discover new insights that were hard to uncover in the past.

In recognition of these new opportunities, more and more organizations are adding data visualization applications powered by advanced analytics, such as Social Network Analysis (SNA), to their analytics portfolios, an approach often called advanced visual analysis. The growing recognition of the role of advanced visual analysis in analytics toolkits is highlighted by IBM’s recent acquisition of i2, the growth of visualization-based business intelligence products from QlikView, TIBCO Spotfire, and Tableau, and Facebook’s recent move to provide Open Graph as part of its makeover. This is indeed an exciting time for advanced visual analysis.

The growing use of advanced visual analysis applications, especially graph-based systems powered by SNA techniques, has produced a remarkable recent track record of success in combating global terrorism, cyber warfare, and criminal fraud across various industries. New technologies, including tablet computing devices, GPS- and biometric-based systems, next-generation network management systems for integrated communication of audio, data, and video, the continuing growth of online social communities, and integrated demand-supply chain systems, have increased the demand for using these applications to discover insights not just from highly structured data in traditional enterprise repositories but from all data types across all types of information repositories. The continued success of these applications in different industries will fuel the growth of advanced visual analysis applications of all types in the coming years.
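To make the SNA idea a little more concrete, here is a minimal Python sketch using the networkx library; the people and links are invented for illustration and are not drawn from any of the systems mentioned above. Centrality scores of this kind are one of the basic building blocks graph-based visual analysis tools use to surface the actors who connect otherwise separate parts of a network.

    import networkx as nx

    # A toy communication network; the edges are illustrative, not real data.
    G = nx.Graph()
    G.add_edges_from([
        ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
        ("carol", "dave"), ("dave", "erin"), ("dave", "frank"),
    ])

    # Betweenness centrality highlights nodes that sit on the paths between
    # otherwise separate groups, often the most interesting "key actors".
    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{node}: {score:.2f}")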

The ongoing industry recognition of the power of advanced visual analysis will yield better-integrated analytic systems in many cases, better-defined visual analysis infrastructure in others, and, overall, a superior experience and results for organizations focused on maximizing the value of their Big Data. This is a fascinating and exciting time for those of us involved in visualization, advanced visual analytics, and advanced data analytics of all types. We are entering an era in which our ability to discover actionable insights in the right context is driving a fundamental paradigm shift in how we leverage our data assets to create more agile, efficient, and intelligent organizations everywhere.

Challenge #3: It’s Hard to Visualize Data when It Comes from Different Sources

Contributed by Derrick Snyder | Austin, TX

This is the final installment of the blog series, “The Top Challenges of Visualizing Time-Based Measurement Data”.

Engineering data collection is often a repetitive or iterative process. Typically, data is acquired during some sort of test that may be run over and over again while incremental changes are made. This results in the acquisition and storage of dozens, hundreds, or even thousands of data sets over time. Additionally, data acquired about today’s model of “widget” may need to be compared to data from last year’s model, so the management of legacy data is also a concern.

Compounding this challenge is the fact that engineering data is often acquired and saved in a variety of formats. In any given engineering test system, each instrument may have its own way of saving data that differs from every other component of the system. Data may be stored in files or databases, and being able to consume and visualize data from these different sources is a real pain point.

Look for intelligent data visualization software that can handle a variety of input sources. Rather than being restricted to a fixed set of data sources, scalable data visualization software must be modular in its interfaces so that additional data sources can be added to the system in the future. Ideally, once data is loaded into the software environment, visualization is agnostic to the type of source from which the data was loaded.
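As a hedged sketch of what a modular input interface might look like (written in Python; the class names, file names, and data layout are hypothetical and not tied to any particular product), each data source implements one common reader interface that returns the same in-memory representation, so the visualization layer stays agnostic to where the data came from and new source types can be added later without touching it.

    import csv
    import sqlite3
    from abc import ABC, abstractmethod

    class DataSource(ABC):
        """Common interface that every data source plugs into."""
        @abstractmethod
        def load(self):
            """Return a list of (timestamp, value) pairs."""

    class CsvSource(DataSource):
        """Reads measurement data from a CSV file."""
        def __init__(self, path):
            self.path = path

        def load(self):
            with open(self.path, newline="") as f:
                return [(row["timestamp"], float(row["value"]))
                        for row in csv.DictReader(f)]

    class SqliteSource(DataSource):
        """Reads measurement data from a database query."""
        def __init__(self, path, query):
            self.path, self.query = path, query

        def load(self):
            with sqlite3.connect(self.path) as conn:
                return list(conn.execute(self.query))

    def visualize(source):
        # The visualization layer sees only (timestamp, value) pairs and
        # is agnostic to the type of source the data was loaded from.
        data = source.load()
        print(f"plotting {len(data)} samples")

    # Hypothetical usage; the file and table names are made up for illustration.
    visualize(CsvSource("widget_test_run_042.csv"))
    visualize(SqliteSource("legacy_tests.db", "SELECT timestamp, value FROM widget_2011"))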

– Derrick Snyder derrick.snyder@ni.com

Connect with Derrick on LinkedIn

Derrick Snyder is a product manager for NI DIAdem measurement data processing and visualization software at National Instruments. He received a bachelor’s degree in Computer Engineering from Vanderbilt University.

Challenge #2: It’s Hard to Visualize Data When You Need to Correlate It

Contributed by Derrick Snyder | Austin, TX

This is the second installment of the blog series, “The Top Challenges of Visualizing Time-Based Measurement Data.”

Often, data is collected about a number of different phenomena at the same time. Visualizing one data set alone is certainly valuable, but incredible results come from the ability to discover relationships between different types of data. For example, when engineers test an airplane wing’s ability to withstand the violent forces that can occur during turbulence, they may simultaneously measure the force applied to the wing, the strain (flex) of the wing material, and the vibration of the wing, while also recording the noises the components make when they finally reach their breaking point and high-definition video of the entire test. Looking at a simple graph of the strain data over time is beneficial, but the ideal goal for engineers performing this test is to characterize how the strain and vibration of the wing respond to different levels of force over time for a given design, all while watching and listening to the event played back. For this, simple static graphs or images are insufficient.

Today’s advanced data processing software must be dynamic. Not only should graphs depend on and interact with one another (for example, a cursor that coordinates one graph’s data values in time with those of the others), but graphs alone are no longer adequate visualization tools. To properly visualize engineering measurement data, it must be possible to correlate it with other information such as video, GPS position or timestamps, sound, and more; you never know when the interrelationship of information may be the key to the next breakthrough in understanding.
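A minimal sketch of one way to correlate such channels, using Python with pandas and matplotlib (the channel names, sampling rates, and simulated values are assumptions for illustration only): resample every channel onto a common time base, then plot them on stacked graphs that share a single time axis, so zooming or placing a cursor on one graph stays coordinated in time with the others.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Simulated force, strain, and vibration channels sampled at different rates.
    t_slow = pd.date_range("2012-02-12 09:00", periods=600, freq="100ms")   # 10 Hz
    t_fast = pd.date_range("2012-02-12 09:00", periods=6000, freq="10ms")   # 100 Hz
    force = pd.Series(np.linspace(0, 50, 600), index=t_slow)
    strain = pd.Series(0.02 * force.values + np.random.normal(0, 0.05, 600), index=t_slow)
    vibration = pd.Series(np.random.normal(0, 1, 6000), index=t_fast)

    # Resample everything onto one 100 ms time base so the channels line up.
    common = pd.DataFrame({
        "force": force,
        "strain": strain,
        "vibration": vibration.resample("100ms").mean(),
    }).interpolate()

    # Stacked graphs sharing one time axis: a cursor or zoom on one graph
    # keeps the other channels coordinated in time.
    fig, axes = plt.subplots(3, 1, sharex=True)
    for ax, name in zip(axes, ["force", "strain", "vibration"]):
        ax.plot(common.index, common[name])
        ax.set_ylabel(name)
    axes[-1].set_xlabel("time")
    plt.show()

In a full system, video and audio playback would be driven from the same time base, so that scrubbing the timeline moves the cursor on every graph and the playback position together.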

– Derrick Snyder derrick.snyder@ni.com

*~*~*~*~*

About Derrick Snyder

Derrick Snyder is a product manager for NI DIAdem measurement data processing and visualization software at National Instruments. He received a bachelor’s degree in Computer Engineering from Vanderbilt University.