
Big Data (Santosh Shinde)


The Customer is a leading market research company.


Although the Customer already had a robust analytical system, they believed it would not satisfy the company's future needs. Acknowledging this, the Customer was on the lookout for a future-focused, innovative solution. The new system had to cope with the continuously growing amount of data, analyze big data faster, and enable comprehensive advertising channel analysis.

After deciding on the new system's architecture, the Customer searched for a highly qualified and experienced team to implement the project. Satisfied with their long-standing cooperation with ScienceSoft, the Customer engaged our consultants to perform the entire migration from the old analytical system to the new one.


During the project, the Customer's business intelligence architects cooperated closely with ScienceSoft's big data team: the former designed the solution, and the latter was responsible for its implementation.

For the new analytical system, the Customer’s architects selected the following frameworks:

  • Apache Hadoop – for data storage;
  • Apache Hive – for data aggregation, query and analysis;
  • Apache Spark – for data processing.

Amazon Web Services and Microsoft Azure were selected as cloud computing platforms.

At the Customer's request, the old and new systems operated in parallel during the migration.

Overall, the solution included five main modules:

  • Data preparation
  • Staging
  • Data warehouse 1
  • Data warehouse 2
  • Desktop application

Data preparation

The system was supplied with raw data taken from multiple sources, such as TV views, mobile device browsing history, website visit data, and surveys. To enable the system to process more than 1,000 different types of raw data (archives, XLS, TXT, etc.), data preparation included the following stages, coded in Python:

  • Data transformation
  • Data parsing
  • Data merging
  • Data loading into the system.
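As a rough illustration of these stages, the sketch below parses two hypothetical tab-separated exports, normalizes them into a common shape, merges the rows, and loads them into a stand-in staging store. All function names, field names, and sample data are assumptions for illustration; the project's actual ETL code is not shown in this case study.

```python
import csv
import io

def parse_txt(blob: str) -> list:
    """Parse a tab-separated TXT export into row dicts."""
    reader = csv.DictReader(io.StringIO(blob), delimiter="\t")
    return list(reader)

def transform(rows: list) -> list:
    """Normalize field names and values into the staging schema."""
    return [
        {"respondent_id": r["id"].strip(), "channel": r["channel"].lower()}
        for r in rows
    ]

def merge(*sources: list) -> list:
    """Concatenate normalized rows from all sources."""
    merged = []
    for s in sources:
        merged.extend(s)
    return merged

def load(rows: list, target: list) -> int:
    """Load rows into the staging store (a list stands in for Hive here)."""
    target.extend(rows)
    return len(rows)

tv_raw = "id\tchannel\n101\tTV1\n102\tTV2\n"
web_raw = "id\tchannel\n201\tSite-A\n"

staging = []
loaded = load(merge(transform(parse_txt(tv_raw)),
                    transform(parse_txt(web_raw))), staging)
```

In the real system, the loading target was the Hive-based staging module rather than an in-memory list.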


Staging

Apache Hive formed the core of this module. At this stage, the data structure was still similar to the raw data structure and had no established connections between respondents from different sources, for example, TV and the internet.

Data warehouse 1

Similar to the previous block, this one was also based on Apache Hive. Here, data mapping took place: the system processed the respondents' data from radio, TV, internet, and newspaper sources and linked user IDs from different data sources according to the mapping rules. ETL for this block was written in Python.
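The mapping step can be sketched in plain Python as follows. The rule table, source names, and field names below are assumptions for illustration; the case study does not disclose the actual mapping rules or schema.

```python
# Mapping rules: (source, source-specific ID) -> unified respondent ID.
# In the real system these rules linked the same person across TV, radio,
# internet, and newspaper panels.
MAPPING_RULES = {
    ("tv", "101"): "R-1",
    ("web", "u-55"): "R-1",   # same respondent seen on TV and the internet
    ("radio", "r-9"): "R-2",
}

def map_respondents(rows):
    """Attach the unified ID to each row; unmatched rows keep a source-scoped key."""
    mapped = []
    for row in rows:
        key = (row["source"], row["source_id"])
        unified = MAPPING_RULES.get(key, f"{row['source']}:{row['source_id']}")
        mapped.append({**row, "respondent_id": unified})
    return mapped

rows = [
    {"source": "tv", "source_id": "101", "minutes": 30},
    {"source": "web", "source_id": "u-55", "minutes": 12},
    {"source": "radio", "source_id": "r-9", "minutes": 45},
]
linked = map_respondents(rows)
```

Once linked, the TV and internet rows above share one unified respondent ID, which is what makes the later cross-source analysis possible.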

Data warehouse 2

With Apache Hive and Spark at its core, this block performed data processing on the fly according to the business logic: it calculated sums, averages, probabilities, etc. Spark's DataFrames were used to process SQL queries from the desktop app, and ETL was coded in Scala. In addition, Spark allowed filtering query results according to the access rights granted to the system's users.
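A plain-Python sketch of this logic is shown below (in the real system it ran on Spark DataFrames, with ETL in Scala): aggregates are computed on the fly, and the result is restricted to the markets the querying user may see. The access-rights model, data, and names are assumptions for illustration.

```python
from statistics import mean

# Stand-in fact table; in the real system this lived in Hive/Spark.
FACTS = [
    {"market": "UK", "channel": "TV1", "reach": 1200},
    {"market": "UK", "channel": "TV2", "reach": 800},
    {"market": "DE", "channel": "TV1", "reach": 500},
]

# Hypothetical access rights: which markets each user may query.
USER_MARKETS = {"analyst_uk": {"UK"}, "admin": {"UK", "DE"}}

def query_reach(user):
    """Aggregate reach per market, restricted to the user's allowed markets."""
    allowed = USER_MARKETS.get(user, set())
    visible = [f for f in FACTS if f["market"] in allowed]
    out = {}
    for market in {f["market"] for f in visible}:
        values = [f["reach"] for f in visible if f["market"] == market]
        out[market] = {"total": sum(values), "avg": mean(values)}
    return out
```

The same shape maps naturally onto a Spark `groupBy`/aggregate with a row-level filter applied before aggregation.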

Desktop application

The new system enabled cross-analysis of almost 30,000 attributes and built intersection matrices, allowing multi-angle data analytics for different markets. In addition to standard reports, such as Reach Pattern, Reach Ranking, Time Spent, and Share of Time, the Customer was able to create ad hoc reports. After the Customer selected several parameters of interest (for example, a particular TV channel, a group of customers, or the time of day), the system returned a quick reply in the form of easy-to-understand charts. The Customer could also benefit from forecasting: for example, based on the expected reach and the planned advertising budget, the system would forecast the revenue.
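The forecasting idea can be sketched with a deliberately simple linear model: monetize the expected contacts directly and add the projected return on ad spend. The functional form and every coefficient below are invented for illustration; the case study does not describe the system's actual forecasting model.

```python
def forecast_revenue(expected_reach, ad_budget,
                     revenue_per_contact=0.05, budget_roi=1.8):
    """Toy revenue forecast: direct value of reached contacts
    plus the assumed return on the planned advertising budget."""
    return expected_reach * revenue_per_contact + ad_budget * budget_roi
```

For example, an expected reach of 100,000 contacts and a planned budget of 10,000 would project 23,000 in revenue under these invented coefficients.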


Results

At the project closing stage, the new system was able to process several queries up to 100 times faster than the outdated solution. With the valuable insights that the analysis of almost 30,000 attributes brought, the Customer was able to carry out comprehensive advertising channel analysis for different markets.


Technologies and Tools

Apache Hadoop, Apache Hive, Apache Spark, Python (ETL), Scala (Spark, ETL), SQL (ETL), Amazon Web Services (cloud storage), Microsoft Azure (cloud storage), .NET (desktop application).

