H2O Big Data Prediction Engine
h2o = fast statistical, machine learning & math runtime for bigdata
Source: https://github.com/0xdata/h2o (Apache License)

H2O

H2O makes Hadoop do math! H2O scales statistics, machine learning and math over BigData. H2O is extensible: users can build custom blocks from simple math legos in the core. H2O keeps familiar interfaces like R, Excel & JSON so that BigData enthusiasts and experts can explore, munge, model and score datasets using anything from simple to advanced algorithms. Data collection is easy; decision making is hard. H2O makes it fast and easy to derive insights from your data through faster and better predictive modeling. H2O's vision is online scoring and modeling in a single platform.

Product Vision for first cut

The first cut of the H2O product, the Analytics Engine, will scale Classification and Regression.

  • RandomForest, Generalized Linear Modeling (GLM), logistic regression and k-Means, available over an R / REST / JSON API (a sketch follows this list)
  • Basic Linear Algebra as building blocks for custom algorithms
  • High predictive power of the models
  • High speed and scale for modeling and scoring over BigData
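
As an illustration, here is a minimal sketch of driving one of these algorithms from R. It assumes the h2o R package and its h2o.init, h2o.importFile and h2o.glm entry points; exact argument names vary between H2O versions, and the file path and column names are placeholders.

    # Minimal sketch: a binomial GLM (logistic regression) driven from R.
    # The CSV path and column names are placeholders, and exact argument
    # names vary between h2o package versions.
    library(h2o)

    h2o.init()                                       # start or attach to a local H2O node

    prostate <- h2o.importFile("prostate.csv")       # parse the CSV into the H2O cloud
    prostate$CAPSULE <- as.factor(prostate$CAPSULE)  # binomial response must be categorical

    fit <- h2o.glm(x = c("AGE", "PSA", "GLEASON"),
                   y = "CAPSULE",
                   training_frame = prostate,
                   family = "binomial")

    print(fit)                                       # coefficients and model metrics

The R package is a thin client: the same model can be requested directly over the JSON/REST API.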

Data Sources

  • We read and write from/to HDFS, S3, NoSQL, SQL
  • We ingest data in CSV format from local and distributed filesystems (NFS); a sketch follows this list
  • A JDBC driver for SQL and DataAdapters for NoSQL datasources are on the roadmap (v2)
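
A hedged sketch of what ingest from these sources looks like from R; every path, host name and bucket below is a placeholder, and HDFS/S3 access assumes the cluster is configured with the appropriate credentials.

    # Sketch: pointing the engine at CSV data on local disk/NFS, HDFS and S3.
    # Every path, host name and bucket below is a placeholder.
    library(h2o)
    h2o.init()

    local_frame <- h2o.importFile("/data/events.csv")                     # local or NFS mount
    hdfs_frame  <- h2o.importFile("hdfs://namenode:8020/data/events.csv") # HDFS
    s3_frame    <- h2o.importFile("s3n://my-bucket/events.csv")           # S3 (credentials on the cluster)

    h2o.ls()    # list the frames now held in the H2O cloud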

Console provides Ad-hoc Data Analytics at scale via an R-like Parser on BigData

  • The ability to pass and evaluate R-like expressions, slicing and filters makes this the most powerful web calculator on BigData (a sketch follows)
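
The flavour of those expressions is the familiar R one. The sketch below shows the same kind of slicing, filtering and aggregation written with the h2o R package's data.frame-style operators; the path and column names are placeholders.

    # Sketch of R-like slicing, filtering and aggregation over a big frame.
    # The path and column names are placeholders.
    library(h2o)
    h2o.init()

    df <- h2o.importFile("events.csv")

    seniors <- df[df$AGE > 65, ]            # row filter
    slim    <- df[, c("AGE", "INCOME")]     # column slice
    avg_age <- mean(df$AGE)                 # aggregate evaluated inside the cloud

    head(seniors)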

Users

Primary users are Data Analysts looking to wield a powerful tool for Data Modeling in real time: Data Analysts and Statisticians who wield Microsoft Excel, R and SAS. Hadoop users with data in HDFS get a first-class citizen for doing Math in the Hadoop ecosystem. Java and Math engineers can extend core functionality by using and extending legos written in simple Java that reads like math; see package hex. Extensibility can also come from writing R expressions that capture your domain (a sketch follows).
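
For the R side of that extensibility, a domain rule can be captured as an ordinary R function over H2O frames. The sketch below assumes a fitted binomial model such as the GLM above; the p1 column of the prediction frame and the cutoff value are illustrative.

    # Sketch: capturing a domain rule as a reusable R expression.
    # Assumes a fitted binomial model (e.g. the GLM above); the p1 column
    # of the prediction frame and the cutoff value are illustrative.
    library(h2o)

    high_risk <- function(frame, model, cutoff = 0.8) {
      scores <- h2o.predict(model, frame)   # score the frame with the model
      frame[scores$p1 > cutoff, ]           # keep rows above the probability cutoff
    }

    # Usage: risky <- high_risk(prostate, fit)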

Design

We use the best execution framework for the algorithm at hand. For the first cut of parallel algorithms, Map Reduce over a distributed fork/join framework brings fine-grained parallelism to distributed algorithms. Our algorithms are cache oblivious and fit into heterogeneous datacenters and laptops to bring the best performance. Distributed Arraylets and Data Partitioning preserve locality. Move code, not data, not people.

Extensions

One of our first powerful extensions will be a small tool belt of stats and math legos for Fraud Detection; dealing with Unbalanced Datasets is a key focus there. Users will reach it through the JSON/REST API via H2O.R, which connects the Analytics Engine to the R IDE/RStudio (a sketch of that connection follows).
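
A sketch of that connection from an R session (for example inside RStudio). The IP address and port are placeholders for wherever the engine is running; under the hood the package talks to the JSON/REST API, and argument names vary between h2o package versions.

    # Sketch: attaching an R session (e.g. in RStudio) to a running H2O cloud
    # over the JSON/REST API. The IP address and port are placeholders.
    library(h2o)

    h2o.init(ip = "10.0.0.12", port = 54321, startH2O = FALSE)

    h2o.clusterInfo()   # confirm the connection and report cloud size and health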
