How Tech Giants Manage and Manipulate the Big Data Problem

Ajmal Muhammed
8 min read · Sep 17, 2020

Welcome to this introduction to big data and the problems that come with it.

Let’s talk about big data.

Big data can be defined as: “high-volume and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”

The digital era has created an overwhelming amount of information, with the total amount of data projected to rise to 44 zettabytes in 2020. This massive amount of data has proven immensely valuable to large enterprises: for the first time, they can integrate disparate data into meaningful sources for AI algorithms to manipulate and use to understand behavior. By the end of 2017, the adoption of big data in companies had reached 53%.

By one estimate, around 90% of the world’s data has been created in the last two years alone. Moreover, 80% of that data is unstructured or available in widely varying structures, which makes it difficult to analyze.
Now you know how much data is being produced. Such a large amount of data brings a big challenge, and a more significant challenge arises from the fact that this data has no fixed format: it includes images, live streaming records, videos, sensor records, and GPS tracking details. In short, it is unstructured. Traditional systems are useful for working with structured data (and even then only to a point), but they can’t manage such a large amount of unstructured data.

Why is big data so important?

Data plays a huge role in understanding valuable insights about target demographics and customer preferences. From every interaction with technology, regardless of whether it’s active or passive, we are creating new data that can describe us. With data being captured through products, video cameras, credit cards, cell phones, and other touchpoints, our data profile is growing exponentially. If analyzed correctly, these data points can explain a lot about our behavior, personalities, and life events. Companies can leverage these insights for product improvements, business strategy, and marketing campaigns to cater to the target customers.

The concept of big data and its importance has been around for years, but only recently has technology enabled the speed and efficiency at which large sets of data can be analyzed. As data — both structured and unstructured — grow substantially in the coming years, it will be collected and examined to reveal unexpected insights and even help predict the future.

Big data will change how even the smallest companies do business as data collection and interpretation become more accessible. New, innovative, and cost-effective technologies are constantly emerging and improving, making it incredibly easy for any organization to seamlessly implement big data solutions.

One may ask: why do we even need to care about storing and processing all this data? For what purpose? The answer is that we need this data to make smarter, more calculated decisions in whatever field we work in. Business forecasting is not a new thing; it has been done in the past as well, just with far less data. To stay ahead of the competition, businesses MUST use this data to make more intelligent decisions. These decisions range from predicting consumer preferences to preventing fraud well in advance. Professionals in every field can find their own reasons to analyze this data.

The four V’s of Big Data

Characteristics of Big Data

When you are deciding whether your next project needs a big data system, look at the data your application will generate and watch for the following characteristics. These are known as the four V’s in the big data industry.

Volume

Volume is the most obvious slice of the big data pie. The internet-and-mobile era, bringing with it a torrent of social media updates, sensor data from devices, and an outburst of e-commerce, means that every industry is swamped with data, which can be amazingly valuable if you understand how to work with it.

Variety

Structured data neatly collected in SQL tables is a thing of the past. Today, 90% of the data produced is ‘unstructured,’ arriving in all shapes and forms: from geospatial data, to tweets that can be analyzed for content and sentiment, to visual data such as photos and videos.
So, will your data always arrive structured, or will it come in unstructured or semi-structured forms?

Velocity

Each minute of every day, users around the globe upload 200 hours of video to YouTube, send 300,000 tweets, and send over 200 million emails. And these numbers keep growing as internet speeds get faster.
So, how fast does your data arrive, and how fast will that velocity grow?

Veracity

Veracity refers to the uncertainty of the data available to marketers. It may also be described as the variability of streaming data, which can change rapidly, making it tough for organizations to respond quickly and appropriately.

Why were companies like Google and Amazon among the first to address the Big Data problem?

Mainly because of the sheer scale of problems they were trying to solve and the business models they chose:

  • Google was trying to index (roughly: read, download, and categorize) the entire Internet, helping us, the consumers, find information on the web efficiently while selling us advertisements.
  • Amazon was building the world’s largest “retail store”, with hundreds of logistics locations, thousands of suppliers, and millions of customers, all together generating billions of data points, while collecting mostly razor-thin profit margins across a substantial range of merchandise.

Challenges of Big Data

The challenge of collecting and making sense of such vast volumes of data goes far beyond the capabilities of traditional centralized database systems and applications.

Attempting to solve such problems with traditional tools would raise issues of cost-effectiveness (particularly software licensing with traditional database vendors), the limited storage and computational capacity of traditional enterprise server hardware, systems’ resilience, and more.

Other challenges compounded the issue: globally dispersed data points to connect, energy consumption, global network latency, and customers’ hunger for an immediate response. All of this called for innovative, reliable, cost-effective, incredibly fast, secure, and truly “Big”-scale data solutions.

How the Big Data Problem Is Solved by Tech Giants

This problem hit Google first because of its search engine data, which exploded with the rise of the internet industry. Google smartly resolved the difficulty using the idea of parallel processing: it designed a programming model called MapReduce, which distributes a task into small pieces, assigns those pieces to many computers joined over a network, and then assembles all of the partial results to form the final result dataset.

This looks logical once you understand that I/O is the most costly operation in data processing. Traditionally, database systems stored data on a single machine, and when you needed data, you sent commands in the form of SQL queries. The system fetched data from storage, put it in local memory, processed it, and sent the result back to you. That is workable when the data is limited and only modest processing capability is required.

But with big data, you cannot keep all the data on a single machine; you MUST spread it across multiple computers (maybe thousands of devices). And when you need to run a query, you cannot pull the data together into a single place because of the high I/O cost. So what the MapReduce model does is run your query individually on every node where the data is present, and then aggregate the partial results into the final answer returned to you.

This brings two significant improvements: very low I/O cost, because data movement is minimal; and much less time, because your job runs in parallel across multiple machines on smaller chunks of data.
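
To make this concrete, here is a minimal sketch of the classic word-count job written against Hadoop’s Java MapReduce API, following the standard Hadoop example. The input and output paths are placeholders you would supply on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs on each node where an input split lives,
  // emitting (word, 1) for every word it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: receives every count for one word and sums them,
  // aggregating the partial results into the final answer.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation minimizes data movement
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. an HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Notice how the design mirrors the idea above: the map code travels to the data rather than the data traveling to the code, and the combiner shrinks what little must move across the network.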

How Hadoop comes into play in solving the big data problem

Ever wondered how LinkedIn figures out whom to recommend in your People You May Know section? Like, what are the chances! You probably met this person long ago, and now only a faint memory remains. If you think about it, it’s an intriguing question, but a data scientist knows better. Curious? The answer is Hadoop!

Hadoop helps organizations leverage the opportunities provided by big data and overcome the challenges that come with it.

So, what is Hadoop?

Hadoop is an open-source, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is based on the Google File System (GFS).

Hadoop runs applications on distributed systems with thousands of nodes, involving petabytes of information. It has a distributed file system, called the Hadoop Distributed File System (HDFS), which enables fast data transfer among the nodes.
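
To give a feel for how an application talks to HDFS, here is a minimal, illustrative sketch using Hadoop’s Java FileSystem API. The NameNode address and file path below are made-up placeholders, not values from a real cluster; in practice the address usually comes from core-site.xml rather than code.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsHello {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical NameNode address, for illustration only.
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:9000");
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/demo/hello.txt"); // placeholder path

    // Write: HDFS splits the file into blocks and replicates them across DataNodes.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello, HDFS".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back: the client streams blocks from whichever nodes hold them.
    try (FSDataInputStream in = fs.open(file)) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }
  }
}
```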

How can it be used to solve the problem?

Big data problems are everywhere in this digital world; there is too much data to make sense of and consume. Companies big and small are now investing in Hadoop to solve these problems. Data analysis that helps business operations arrive at decisions, or that supports a business process, is a common Hadoop scenario.

Pattern recognition: Since Hadoop can manage structured, semi-structured, and unstructured data, pattern recognition is a common big data problem that Hadoop can successfully tackle. The key is a Hadoop cluster that breaks a large dataset down into smaller pieces for analysis and processing. A common use case for pattern recognition is real-time analytics: Hadoop can continuously capture, store, and process data from sensors and connected devices, and with this live data it can proactively help surface possible safety risks and assist in damage control.

Index building: Indexing large amounts of data is another big data challenge. Data creation has become such a norm that it is nearly impossible to cap it. Creating indexes on those millions of records can be accomplished with Hadoop, which essentially simplifies search queries over high-volume data, as the sketch below illustrates. A common use case would be an app that has to announce correct scores for the leading players on its leaderboard.
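
For a flavor of how index building maps onto MapReduce, below is a hedged sketch of a simple inverted index: the mapper emits (word, document) pairs and the reducer collects the distinct documents in which each word appears. Using the input file name as a document ID is an assumption made for illustration, and the job driver would look like the word-count driver shown earlier.

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import java.util.StringTokenizer;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndex {

  // Map: emit (word, documentName) for every word in the input split.
  public static class IndexMapper extends Mapper<Object, Text, Text, Text> {
    private final Text word = new Text();
    private final Text docId = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Use the source file name as a stand-in for a document ID.
      String fileName = ((FileSplit) context.getInputSplit()).getPath().getName();
      docId.set(fileName);
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken().toLowerCase());
        context.write(word, docId);
      }
    }
  }

  // Reduce: deduplicate and list the documents containing each word,
  // producing one posting-list line per term.
  public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      Set<String> docs = new HashSet<>();
      for (Text v : values) {
        docs.add(v.toString());
      }
      context.write(key, new Text(String.join(",", docs)));
    }
  }
}
```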

Recommendation engines: Recommendation engines have taken off on almost every online platform, be it an e-commerce website or a content site like YouTube. With Hadoop, creating relevant recommendations, and adjusting them within a fraction of a second, becomes possible. Hadoop combined with predictive analysis essentially helps filter vast amounts of online information down to a narrow set of consumable choices that increase user engagement and customer conversions.

Applications of Hadoop

Organizations, especially those that generate a lot of data, rely on Hadoop and similar platforms for the storage and analysis of data. Facebook is one of them: it generates an enormous volume of data and is known to apply Hadoop across its operations. The flexibility of Hadoop allows it to function in multiple areas of Facebook in different capacities, with Facebook’s data compartmentalized across the different components of Hadoop and the applicable tools. Data such as status updates, for example, are stored on the MySQL platform, while the Facebook Messenger app is known to run on HBase.

Experts have also stated that the e-commerce giant Amazon utilizes components of Hadoop for efficient data processing, notably the Elastic MapReduce web service. Elastic MapReduce is adapted for effectively carrying out data processing operations such as log analysis, web indexing, data warehousing, financial analysis, scientific simulation, machine learning, and bioinformatics.

Other organizations that apply components of Hadoop include eBay and Adobe. eBay uses Hadoop components such as Java MapReduce, Apache Hive, Apache HBase, and Apache Pig for processes such as research and search optimization. Adobe is known to apply components such as Apache HBase and Apache Hadoop; its production and development processes run Hadoop components on clusters of 30 nodes, and there are reports that Adobe is expanding the number of nodes it uses.

Although these are examples of applying Hadoop on a large scale, vendors have developed tools that allow Hadoop to be applied at smaller scales across different operations.
