Ahana Cloud for Presto review: Fast SQL queries against data lakes

Victoria D. Doty

Hope springs eternal in the database business. While we're still hearing about data warehouses (fast analytics databases, often featuring in-memory columnar storage) and tools that improve the ETL step (extract, transform, and load), we're also hearing about advances in data lakes (which store data in its native format) and data federation (on-demand data integration of heterogeneous data stores).

Presto keeps coming up as a fast way to perform SQL queries on big data that resides in data lake files. Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes. Presto allows querying data where it lives, including Hive, Cassandra, relational databases, and proprietary data stores. A single Presto query can combine data from multiple sources. Facebook uses Presto for interactive queries against multiple internal data stores, including its 300PB data warehouse.
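To sketch what that looks like in practice (the catalog, schema, and table names below are hypothetical, not taken from any specific deployment), a single Presto statement can join a Hive-backed data lake table against a table in MySQL:

```sql
-- Hypothetical catalogs: "hive" (an S3 data lake) and "mysql" (an RDS MySQL
-- instance). One statement reads from both; Presto handles the federation.
SELECT m.title, avg(r.rating) AS avg_rating
FROM hive.default.ratings r
JOIN mysql.moviedb.movies m
  ON r.movie_id = m.id
GROUP BY m.title
ORDER BY avg_rating DESC
LIMIT 10;
```

Each catalog maps to a connector configured on the cluster; from the query's point of view, both sources are just schemas and tables.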

The Presto Foundation is the organization that oversees the development of the Presto open source project. Facebook, Uber, Twitter, and Alibaba founded the Presto Foundation. Additional members now include Alluxio, Ahana, Upsolver, and Intel.

Ahana Cloud for Presto, the subject of this review, is a managed service that simplifies Presto for the cloud. As we'll see, Ahana Cloud for Presto runs on Amazon, has a fairly simple user interface, and offers end-to-end cluster lifecycle management. It runs in Kubernetes and is highly scalable. It has a built-in catalog and easy integration with data sources, catalogs, and dashboarding tools.

Competitors to Ahana Cloud for Presto include Databricks Delta Lake, Qubole, and BlazingSQL. I will draw comparisons at the end of the article.

Presto and Ahana architecture

Presto is not a general-purpose relational database. Rather, it is a tool designed to efficiently query vast amounts of data using distributed SQL queries. While it can replace tools that query HDFS using pipelines of MapReduce jobs, such as Hive or Pig, Presto has been extended to operate over different kinds of data sources, including traditional relational databases and other data sources such as Cassandra.

In short, Presto is not designed for online transaction processing (OLTP), but for online analytical processing (OLAP), including data analysis, aggregating large amounts of data, and producing reports. It can query a wide variety of data sources, from files to databases, and return results to a number of BI and analysis environments.

Presto is an open source project that originated under the auspices of Facebook. It was invented at Facebook, and the project continues to be developed by both internal Facebook developers and a number of third-party developers under the supervision of the Presto Foundation.

Presto's scalable, clustered architecture uses a coordinator for SQL parsing, planning, and scheduling, and a number of worker nodes for query execution. Result sets from the workers flow back to the client through the coordinator.

Ahana Cloud packages managed Presto, a Hive metadata catalog, a data lake hosted on Amazon S3, cluster management, and access to Amazon databases into what is effectively a cloud data warehouse in an open, disaggregated stack, as shown in the architecture diagram below. The Presto Hive connector manages access to ORC, Parquet, CSV, and other data files.

[Architecture diagram: Ahana Cloud for Presto. Credit: Ahana]


As implemented on AWS, Ahana Cloud for Presto places the SaaS console outside of the customer's VPC, and the Presto clusters and Hive metastore inside the customer's VPC. Amazon S3 buckets serve as storage for data files.

The Ahana control plane takes care of cluster orchestration, logging, security and access control, billing, and support. The Presto clusters and the storage live inside the customer's VPC.

Using Ahana Cloud for Presto

Ahana provided me with a hands-on lab that allowed me to create a cluster, connect it to sources in Amazon S3 and Amazon RDS MySQL, and exercise Presto using SQL from Apache Superset. Superset is a modern data exploration and visualization platform. I didn't really exercise the visualization part of Superset, as the point of the exercise was to look at SQL performance using Presto.


When you create a Presto cluster in Ahana, you choose the instance types for the coordinator, metastore, and workers, and the initial number of workers. You can scale the number of workers up or down later. Because the datasets I was using were relatively small (only millions of rows), I didn't bother enabling I/O caching, which is a new feature of Ahana Cloud.


The Clusters pane of the Ahana interface shows your active, pending, and inactive clusters. The PrestoDB Console shows the status of the running cluster.

I found the process of adding data sources a bit annoying, because it required me to edit URI strings and JSON configuration strings. It would have been easier if the strings had been assembled from parts entered in separate text boxes, especially if the text boxes were populated automatically.


Creating catalogs and converting from CSV to ORC format took just under a minute for 26.2 million rows of movie ratings. Querying an ORC file is much faster than querying a CSV file. For example, counting the ORC file takes 2.5 seconds, while counting the CSV file takes 48.6 seconds.
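One way to do such a conversion is a CREATE TABLE AS SELECT statement issued through Presto's Hive connector. This is a sketch, assuming a Hive catalog named ahana_hive and an existing CSV-backed table named ratings_csv:

```sql
-- Rewrite the CSV-backed table into ORC, a compressed columnar format
-- that Presto can scan far faster than row-oriented CSV.
-- Catalog, schema, and table names here are assumptions for illustration.
CREATE TABLE ahana_hive.default.ratings_orc
WITH (format = 'ORC')
AS
SELECT * FROM ahana_hive.default.ratings_csv;
```

Columnar formats like ORC let Presto read only the columns and stripes a query actually needs, which is why the two counts above differ so dramatically.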


One federated query I ran joins movie ratings in ORC format with movie data in a MySQL database table to produce a list of ratings, counts, and popularity broken down into deciles. It took 10 seconds.
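A query along these lines (the catalog, table, and column names are assumptions, not taken from the lab) can produce such a decile breakdown using Presto's ntile window function, which is evaluated after the aggregation:

```sql
-- Join lake-resident ratings (ORC) to MySQL movie metadata, then
-- bucket titles into popularity deciles by rating count.
SELECT m.title,
       avg(r.rating)                           AS avg_rating,
       count(*)                                AS num_ratings,
       ntile(10) OVER (ORDER BY count(*) DESC) AS popularity_decile
FROM ahana_hive.default.ratings_orc r
JOIN mysql.moviedb.movies m
  ON r.movie_id = m.id
GROUP BY m.title;
```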


Another query computes the most popular movies in the federated database whose descriptions mention weapons, and also reports the movies' budgets. The query took 7.5 seconds.

How to integrate Ahana Presto with machine learning and deep learning

How do people integrate Ahana Presto with machine learning and deep learning? Typically, rather than using Superset as a client, they use a notebook, either Jupyter or Zeppelin. To perform the SQL query, they use a JDBC link to the Ahana Presto query engine. The output from the SQL query then populates the appropriate structure or data frame for use in machine learning, depending on the framework used.

New features of Ahana Cloud for Presto

The version of Ahana Cloud I tested included the improvements announced on March 24, 2021. These included performance improvements such as data lake I/O caching and tuned query optimization, and ease-of-use improvements such as automatic, versioned updates of the Ahana compute plane.

I didn't use all of them myself. For example, I didn't enable data lake I/O caching, because the data lake table I was using was too small to benefit, and I didn't spend long enough with Ahana to see a version update.

Ahana Cloud for Presto vs. competitors

Overall, Ahana Cloud for Presto is a good way to turn a data lake on Amazon S3 into what is effectively a data warehouse, without moving any data. Using Ahana Cloud avoids most of the work required to set up and tune Presto and Apache Superset. SQL queries run quickly on Ahana Cloud for Presto, even when they join multiple heterogeneous data sources.

Databricks Delta Lake uses different technologies to accomplish some of the same things as Ahana Cloud for Presto. All the files in Databricks Delta Lake are in Apache Parquet format, and Delta Lake uses Apache Spark for SQL queries. Like Ahana Cloud for Presto, Databricks Delta Lake can speed up SQL queries with an integrated cache. Delta Lake cannot perform federated queries, however.

Qubole, a cloud-native data platform for analytics and machine learning, helps you ingest datasets from a data lake, build schemas with Hive, query the data with Hive, Presto, Quantum, and/or Spark, and continue on to your data engineering and data science. You can use Zeppelin or Jupyter notebooks, and Airflow workflows. In addition, Qubole helps you manage your cloud spending in a platform-independent way. Unlike Ahana, Qubole can run on AWS, Microsoft Azure, Google Cloud Platform, and Oracle Cloud.

BlazingSQL is an even faster way of running SQL queries, using Nvidia GPUs and running SQL on data loaded into GPU memory. BlazingSQL lets you ETL raw data directly into GPU memory as GPU DataFrames. Once you have GPU DataFrames in GPU memory, you can use RAPIDS cuML for machine learning, or convert the DataFrames to DLPack or NVTabular for in-GPU deep learning with PyTorch or TensorFlow.

Ahana Cloud for Presto is a worthwhile alternative to its competitors, and is easier to set up and maintain than an open source Presto deployment. It's definitely worth the effort of a free trial.

Cost: $0.25 per Ahana Cloud Credit (ACC) hour. See the pricing calculator and table of instance costs. Example: A Presto cluster of 10 x r5.xlarge instances running every workday costs $256/month.

Platform: Runs on Amazon Elastic Kubernetes Service.

Copyright © 2021 IDG Communications, Inc.
