Comparing Google Vertex AI vs. AWS SageMaker vs. Amazon Bedrock

Generative AI platforms have transformed the landscape of machine learning, offering advanced models for natural language processing. In this comparison, we'll explore three leading platforms: Google's Vertex AI with the PaLM2 model, AWS SageMaker with Falcon-7b-instruct, and Amazon Bedrock with the AI21 Labs Jurassic-2 Mid model.

Feature Comparison

To provide clarity, let's compare the key features of Google Vertex AI, AWS SageMaker, and Amazon Bedrock in a tabular format:

[Feature comparison table: Google Vertex AI vs. AWS SageMaker vs. Amazon Bedrock]

Model Comparison: Text Bison vs Falcon-7b-instruct vs Jurassic Model

Text Bison (PaLM2 - Google Vertex AI)

  • Versatile generative model for natural language processing.
  • Robust multilingual support and adaptability to various formats.
  • Integrated with Google Cloud services for seamless deployment and scaling.
  • Offers fine-tuning options for customizing the model to specific requirements.

Falcon-7b-instruct (AWS SageMaker)

  • Available on Hugging Face; a powerful model for instruction-style text generation.
  • Provides multilingual support and integration with Hugging Face resources.
  • Seamless integration with AWS services, offering scalability.
  • Allows users to create custom training pipelines for fine-grained control.

Jurassic Model (AI21 Labs - Amazon Bedrock)

  • Production-ready large language model powering natural language AI.
  • Multilingual and capable of generating text in various languages.
  • Seamless integration with Amazon Web Services, offering scalability.
  • Allows users to create custom training pipelines for fine-grained control.

Case Study: Generating Medical Summaries with Google Vertex AI's LLM Model PaLM2 (Text-Bison), AWS SageMaker with Hugging Face (Falcon-7b-instruct model), and Amazon Bedrock (AI21 Labs Jurassic-2 Mid)

1. Google Vertex AI's LLM Model – PaLM2 (Text-Bison)

We have already covered this in a previous blog; please refer to https://www.encora.com/insights/exploring-the-world-of-generative-ai-with-googles-vertex-ai

 

2. AWS SageMaker with Hugging Face (Falcon-7b-instruct model)

We have already covered this in a previous blog; please refer to https://www.encora.com/insights/boost-your-nlp-projects-with-amazon-sagemaker-and-tiis-llm-model

3. Amazon Bedrock (AI21 Labs - Jurassic-2 Mid)

Let us delve into this use case to see how content generation is implemented with the Amazon Bedrock Jurassic-2 Mid model by AI21 Labs.
The following steps outline how to achieve this through AWS (Amazon Bedrock).

Step 1: Access the AWS Console and navigate to Amazon Bedrock. Within the Foundation Model section, select 'Base Models.' Here, you will find a comprehensive list of available Foundation Models.

[Screenshot: list of available base models in Amazon Bedrock]

Step 2: Select Playgrounds -> Text. The models you are subscribed to appear in the dropdown; choose the desired model. In this case, select AI21 Labs and the Jurassic-2 Mid model.

Enter your prompt for text generation in the text area. Our case study revolves around generating medical summaries from the supplied JSON data. Note that the inference parameters, including Temperature, Top P, and Max Length, are configured in the panel on the right.

[Screenshot: Text playground with the Jurassic-2 Mid model, prompt, and inference parameters]

Step 3: Click the Run button; the generated text is displayed in the text area.

[Screenshot: generated text displayed in the Text playground]

Response: As instructed by the prompt, the Jurassic-2 Mid model produced a medical summary from the JSON data supplied in the prompt.
The preceding case study was carried out in Amazon Bedrock without any coding effort.

Now, let us explore how to accomplish the same task using Python code and the relevant libraries for medical summary generation.

Medical Summary by Amazon Bedrock using Python Code

Step 1: Import necessary libraries

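The original post shows this step as a screenshot; a minimal sketch of the imports such a script would typically need (boto3/botocore for the AWS client, json and os for the payload and environment) is:

  import json
  import os

  import boto3
  from botocore.exceptions import SSLError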

Step 2: Set the path for the certificate bundle

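One common way to do this is via the AWS_CA_BUNDLE environment variable, which botocore honors; the file path below is a placeholder, not the path used in the original code:

  import os

  # Tell botocore which CA certificate bundle to use for TLS verification
  os.environ["AWS_CA_BUNDLE"] = "/path/to/ca-bundle.pem"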

Step 3: Create a client for the Bedrock AI21 Labs model using the boto3 library

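A minimal sketch of creating the client, assuming the bedrock-runtime service endpoint and the us-east-1 region (the region in the original code may differ):

  import boto3

  # Runtime client used to invoke foundation models hosted on Amazon Bedrock
  bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")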

Step 4: Define the JSON payload with the medical data and generation parameters

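A sketch of the request body in the format the AI21 Jurassic-2 models expect; the medical record and parameter values below are placeholders that only approximate the original case study:

  import json

  # Placeholder medical data; the real case study supplies richer JSON
  medical_record = {
      "patient": "John Doe",
      "age": 54,
      "diagnosis": "Type 2 diabetes",
      "medications": ["Metformin 500 mg"],
  }

  payload = {
      "prompt": "Generate a concise medical summary for the following record:\n"
                + json.dumps(medical_record),
      "maxTokens": 300,    # maximum length of the generated summary
      "temperature": 0.7,  # randomness of the output
      "topP": 1.0,         # nucleus sampling cutoff
  }
  body = json.dumps(payload)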

Step 5: Specify the model ID and content type for the Bedrock AI21 Labs model

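A sketch of the identifiers, assuming the on-demand Bedrock model ID for Jurassic-2 Mid, ai21.j2-mid-v1 (verify the exact ID in your Bedrock console):

  # Model ID and content type for the AI21 Jurassic-2 Mid model
  model_id = "ai21.j2-mid-v1"
  content_type = "application/json"
  accept = "application/json"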

Step 6: Try to invoke the Bedrock AI21 Labs model and handle SSL errors

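A sketch of the invocation and error handling, reusing the client, body, and identifiers defined in Steps 3-5; the response parsing assumes the AI21 completions format:

  import json

  from botocore.exceptions import SSLError

  try:
      # Invoke the model with the request body built in Step 4
      response = bedrock_runtime.invoke_model(
          modelId=model_id,
          contentType=content_type,
          accept=accept,
          body=body,
      )
      result = json.loads(response["body"].read())
      # Jurassic-2 responses return the generated text under completions[0].data.text
      summary = result["completions"][0]["data"]["text"]
      print(summary)
  except SSLError as err:
      # Raised when the certificate bundle from Step 2 cannot validate the endpoint
      print(f"SSL error while calling Bedrock: {err}")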

Here is the output:

[Screenshot: medical summary generated by the Jurassic-2 Mid model]

Considering the options presented in this comparison, you can make an informed decision about the preferred approach for your needs. The comparison below is based on the use case I have chosen.

[Screenshot: comparison of the three platforms for the medical summary use case]

Pricing Details

Here are the pricing details for each platform, followed by an estimate of the cost for this use case.

Google Vertex AI (Text Bison - PaLM2 Model)

[Screenshot: Vertex AI PaLM2 (Text Bison) pricing]

AWS SageMaker (Falcon-7b-instruct model hosted on Hugging Face)

SageMaker Pricing (On-Demand): For model development and usage, here is an example of the pricing: an ml.m5.large instance with 8 GiB of memory costs $0.115 per hour. For more details, please visit https://aws.amazon.com/sagemaker/pricing/


Lambda Pricing: Based on the number of requests and the duration of each request. Here is an example: for 1024 MB of memory, the price per 1 ms is $0.0000000167. (For the detailed pricing model, please visit https://aws.amazon.com/lambda/pricing/)
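As a rough, purely illustrative calculation using the rates above (the usage figures below are hypothetical and not taken from the original case study), the SageMaker and Lambda portion of the cost could be estimated like this:

  # Illustrative only: usage numbers are hypothetical
  sagemaker_hourly = 0.115            # USD per hour, ml.m5.large (on-demand)
  endpoint_hours = 24                 # e.g., endpoint running for one day
  lambda_price_per_ms = 0.0000000167  # USD per 1 ms at 1024 MB
  requests = 1000                     # hypothetical number of summary requests
  avg_duration_ms = 3000              # hypothetical Lambda duration per request

  sagemaker_cost = sagemaker_hourly * endpoint_hours
  lambda_cost = lambda_price_per_ms * avg_duration_ms * requests
  print(f"SageMaker: ${sagemaker_cost:.2f}, Lambda: ${lambda_cost:.4f}")
  # -> SageMaker: $2.76, Lambda: $0.0501 (excludes Lambda's per-request charge)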

Amazon Bedrock (AI21 Labs - Jurassic-2 Mid)

[Screenshot: Amazon Bedrock Jurassic-2 Mid pricing]

Pricing for the Medical Summary Generation Use Case:

The pricing for the above medical summary generation use case is outlined below for all three models.

In evaluating these pricing models, users are encouraged to consider their specific use cases and requirements. The unique features, capabilities, and pricing structures of each platform contribute to their suitability for different applications.

Remember, the provided pricing is illustrative, and actual costs may vary based on usage patterns, data volumes, and additional features leveraged.

Key Takeaways

Considering the presented use case, which involved generating medical summaries, Google's Vertex AI and Amazon Bedrock demonstrated superior performance compared to SageMaker. The ability to produce the expected results effortlessly, without extensive coding effort, positions Vertex AI and Bedrock as favorable choices for this particular scenario.

In essence, the choice between Vertex AI, SageMaker, and Bedrock should align with specific project requirements and organizational constraints. Users are advised to conduct a detailed analysis based on their unique circumstances to make an informed decision.

References 

https://cloud.google.com/vertex-ai/docs/generative-ai/pricing

https://aws.amazon.com/bedrock/pricing/

https://aws.amazon.com/sagemaker/pricing/

https://aws.amazon.com/lambda/pricing/

About Encora

Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about our software engineering capabilities.
