What is Spark in Big Data?
2 July 2021
Basically, Spark is a framework – in the same way that Hadoop is – that provides a number of interconnected platforms, systems and standards for Big Data projects.
Like Hadoop, Spark is open-source and under the wing of the Apache Software Foundation. Essentially, open-source means the code can be freely used by anyone. Beyond that, it can also be altered by anyone to produce custom versions aimed at particular problems or industries. Volunteer developers, as well as those working at companies which produce custom versions, constantly refine and update the core software, adding more features and efficiencies. In fact, Spark was the most active project at Apache last year. It was also the most active of all the open-source Big Data applications, with over 500 contributors from more than 200 organizations.
Spark is seen by techies in the industry as a more advanced product than Hadoop – it is newer, and designed to work by processing data in chunks “in memory”. This means it transfers data from the physical, magnetic hard discs into far faster electronic memory (RAM), where processing can be carried out much more quickly – up to 100 times faster for some operations.
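To make that concrete, here is a minimal sketch (using PySpark, Spark’s Python API) of how a dataset can be held in memory so that repeated queries avoid going back to disc – the file name and column are hypothetical, purely for illustration:

```python
# Minimal sketch of Spark's in-memory processing with PySpark.
# "events.csv" and the "amount" column are made-up examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-memory-example").getOrCreate()

# Read the data from disc once...
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# ...then ask Spark to keep it in memory for subsequent operations.
events.cache()

print(events.count())                          # first action reads from disc and fills the cache
print(events.filter("amount > 100").count())   # later actions run against the in-memory copy

spark.stop()
```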
Spark has proven very popular and is used by many large companies for huge, multi-petabyte data storage and analysis. This has partly been because of its speed. Last year, Spark set a world record by completing a benchmark test that involved sorting 100 terabytes of data in 23 minutes, beating the previous world record of 71 minutes held by Hadoop.
Additionally, Spark has proven itself to be highly suited to Machine Learning applications. Machine Learning is one of the fastest growing and most exciting areas of computer science, where computers are being taught to spot patterns in data, and adapt their behaviour based on automated modelling and analysis of whatever task they are trying to perform.
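Spark ships with a machine-learning library, MLlib, which is a big part of why it suits these workloads. The toy example below – with made-up columns and numbers – sketches how a simple classification model can be trained on a Spark DataFrame:

```python
# Toy MLlib sketch: train a logistic regression on a tiny, made-up dataset.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-example").getOrCreate()

# Illustrative data: visits, spend and whether the customer converted.
data = spark.createDataFrame(
    [(1.0, 10.0, 0.0), (3.0, 55.0, 1.0), (5.0, 80.0, 1.0), (0.0, 5.0, 0.0)],
    ["visits", "spend", "converted"],
)

# MLlib expects the inputs gathered into a single feature vector column.
assembler = VectorAssembler(inputCols=["visits", "spend"], outputCol="features")
train = assembler.transform(data)

model = LogisticRegression(labelCol="converted").fit(train)
model.transform(train).select("converted", "prediction").show()

spark.stop()
```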
It is designed from the ground up to be easy to install and use – if you have a background in computer science! In order to make it available to more businesses, many vendors provide their own versions (as with Hadoop) which are geared towards particular industries, or custom-configured for individual clients’ projects, as well as associated consultancy services to get it up and running.
Spark uses cluster computing for its computational (analytics) power as well as its storage. This means it can use resources from many computer processors linked together for its analytics. It’s a scalable solution, meaning that if more oomph is needed, you can simply introduce more processors into the system. With distributed storage, the huge datasets gathered for Big Data analysis can be stored across many smaller individual physical hard discs. This speeds up read/write operations, because many discs can be read from and written to in parallel rather than everything queuing up behind a single drive. As with processing power, more storage can be added when needed, and the fact that it uses commonly available commodity hardware (any standard computer hard discs) keeps down infrastructure costs.
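In practice, “adding more oomph” usually means asking the cluster manager for more workers when the application starts. The snippet below is purely illustrative – the master address and the executor numbers are placeholders for whatever a real cluster provides:

```python
# Illustrative only: requesting more parallel workers from a cluster manager.
# "spark://cluster-master:7077" and the executor settings are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("scaling-example")
         .master("spark://cluster-master:7077")    # hypothetical cluster manager address
         .config("spark.executor.instances", "8")  # scale out: ask for more executors
         .config("spark.executor.cores", "4")      # cores per executor
         .getOrCreate())
```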
Unlike Hadoop, Spark does not come with its own file system – instead, it can be integrated with many storage systems, including Hadoop’s HDFS, MongoDB and Amazon’s S3.
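In code, that flexibility just means pointing Spark at a different path or connector – the locations below are placeholders, and the S3 example assumes the relevant connector and credentials are already set up:

```python
# Sketch: the same read API works across different storage back-ends.
# Paths and bucket names are placeholders, not real locations.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-example").getOrCreate()

# Hadoop's HDFS (assumes a running HDFS cluster).
hdfs_df = spark.read.json("hdfs:///data/logs/2021/07/")

# Amazon S3 (assumes the hadoop-aws connector and valid AWS credentials).
s3_df = spark.read.parquet("s3a://example-bucket/logs/")
```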
Another element of the framework is Spark Streaming, which allows applications to be developed that perform analytics on streaming data – such as video or social media feeds – on the fly, as the data arrives.
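As a flavour of what that looks like, here is a minimal word count over a live data stream, written with Spark’s newer Structured Streaming API – the host and port are placeholders (you could feed it test lines locally with `nc -lk 9999`):

```python
# Minimal streaming sketch: count words arriving on a network socket in real time.
# localhost:9999 is a placeholder source for testing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("streaming-example").getOrCreate()

lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Print the running counts to the console as new data streams in.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```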
In fast-changing industries such as marketing, real-time analytics has huge advantages – for example, ads can be served based on a user’s behaviour at a particular moment, rather than on historical behaviour, increasing the chance of prompting an impulse purchase.
So that’s a brief introduction to Apache Spark – what it is, how it works, and why a lot of people think that it’s the future. I hope you found it useful.