
Big Data Ingestion: Flume, Kafka and NiFi

Preliminaries

When building Big Data pipelines, we need to think about how to ingest the Volume, Variety and Velocity of data showing up at the gates of what would typically be a Hadoop ecosystem. Preliminary considerations such as scalability, reliability, adaptability and cost in terms of development time will all come into play when deciding ...
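To make the ingestion theme concrete, here is a minimal sketch of a record-at-a-time producer feeding such a pipeline with Kafka, one of the three tools named in the title. The broker address, topic name, key and sample payload are assumptions introduced purely for illustration, not details from the original article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IngestProducer {
    public static void main(String[] args) {
        // Hypothetical broker address; replace with your cluster's bootstrap servers.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer and flushes buffered records on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record is one ingested event; topic, key and JSON payload are placeholders.
            producer.send(new ProducerRecord<>("ingest-events", "sensor-42", "{\"temp\": 21.5}"));
        }
    }
}
```

In a real deployment the producer would sit behind whatever collects the raw events (log shippers, Flume agents, NiFi flows), with Kafka acting as the durable buffer between those sources and the Hadoop-side consumers.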



