Get expert guidance on architecting end-to-end data management solutions with Apache Hadoop. While many sources explain how to use various components in the Hadoop ecosystem, this practical book takes you through the architectural considerations necessary to tie those components together into a complete, tailored application based on your particular use case.

To reinforce those lessons, the book's second section provides detailed examples of architectures used in some of the most commonly found Hadoop applications. Whether you're designing a new Hadoop application or planning to integrate Hadoop into your existing data infrastructure, Hadoop Application Architectures will skillfully guide you through the process.

This book covers:

- Factors to consider when using Hadoop to store and model data
- Best practices for moving data in and out of the system
- Data processing frameworks, including MapReduce, Spark, and Hive
- Common Hadoop processing patterns, such as removing duplicate records and using windowing analytics (see the brief sketch after this list)
- Giraph, GraphX, and other tools for large graph processing on Hadoop
- Using workflow orchestration and scheduling tools such as Apache Oozie
- Near-real-time stream processing with Apache Storm, Apache Spark Streaming, and Apache Flume
- Architecture examples for clickstream analysis, fraud detection, and data warehousing
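To give a flavor of the duplicate-removal and windowing pattern mentioned above, here is a minimal PySpark sketch. It is illustrative only and not taken from the book; the input path and the column names user_id and event_time are hypothetical.

```python
# Minimal sketch: keep only the most recent record per user_id using a
# window function, a common way to remove duplicate records in Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()

# Hypothetical input location and schema (user_id, event_time, ...).
events = spark.read.parquet("hdfs:///data/events")

# Rank records within each user_id partition, newest first.
w = Window.partitionBy("user_id").orderBy(F.col("event_time").desc())

deduped = (events
           .withColumn("rn", F.row_number().over(w))
           .filter(F.col("rn") == 1)   # keep the latest record per key
           .drop("rn"))

deduped.write.mode("overwrite").parquet("hdfs:///data/events_deduped")
```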