Most people assume that Big Data projects start with the deployment of large distributed clusters running heavy MapReduce jobs. In reality, there is no single, perfect solution to the problems that arise when dealing with large volumes of data.

By learning the different Big Data integration patterns, you will understand why, most of the time, you will have to deploy a heterogeneous architecture that serves different needs, and also where each pattern reaches its limits, which may lead you to choose effective alternatives.

We will go through real, concrete industry use cases that leverage these patterns, such as a REST API that queries large amounts of data stored in NoSQL databases like Couchbase and Elasticsearch. We will see how massive data processing can be carried out inside such NoSQL databases without the need to dive deep into Big Data (a brief sketch of this pattern is given at the end of this description).

But when the volume grows too high and the data structures become too complex, this kind of pattern reaches its limits, and that is when we can start thinking about delegating complex data processing jobs to, for example, a Hadoop-based Big Data architecture.

The difficulty is then to choose a relevant combination of the Big Data technologies available within the Hadoop ecosystem. We will focus on long-running processing jobs, architecture, streaming data patterns, log analysis, and real-time analytics. Every pattern is illustrated with practical examples that use Apache projects such as Avro, Spark, and Kafka (a short streaming sketch also follows below).

Traditional Big Data infrastructures are built to digest large amounts of data and render syntheses and analytics from them. This book will also help you understand why you should consider using machine learning algorithms early in the project, before you are overwhelmed by the constraints of dealing with the high throughput of Big Data.
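As an illustration of the NoSQL processing pattern mentioned above, the following minimal sketch runs an aggregation directly inside Elasticsearch through its REST API, so the heavy lifting happens in the datastore rather than in a separate Big Data cluster. The index name, field name, and endpoint are assumptions made for illustration, not examples taken from the book, and the exact aggregation syntax varies between Elasticsearch versions.

    import requests

    # Hypothetical index ("logs") and field ("timestamp"); adjust to your data.
    # The aggregation is computed by Elasticsearch itself, so only the summarized
    # buckets travel back over the REST API.
    query = {
        "size": 0,  # skip raw hits, return only the aggregation result
        "aggs": {
            "events_per_day": {
                # use "interval" instead of "calendar_interval" on older Elasticsearch versions
                "date_histogram": {"field": "timestamp", "calendar_interval": "day"}
            }
        }
    }

    resp = requests.post("http://localhost:9200/logs/_search", json=query)
    resp.raise_for_status()

    for bucket in resp.json()["aggregations"]["events_per_day"]["buckets"]:
        print(bucket["key_as_string"], bucket["doc_count"])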
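And as a sketch of the streaming pattern, the fragment below reads events from a Kafka topic with Spark and counts them per one-minute window, a stand-in for the real-time analytics jobs the book covers. The topic name, broker address, and use of the PySpark Structured Streaming API are assumptions for illustration; the book's own examples may use different APIs and languages.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    # Requires the Spark-Kafka connector package (e.g. spark-sql-kafka) on the classpath.
    spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

    # Hypothetical broker address and topic name.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "clickstream")
              .load())

    # Count incoming messages per one-minute window and print the running totals.
    counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

    (counts.writeStream
           .outputMode("complete")
           .format("console")
           .start()
           .awaitTermination())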