Swiss NewsPaper

Configure cross-account access to Amazon SageMaker Lakehouse multi-catalog tables using AWS Glue 5.0 Spark

by swissnewspaper
11 May 2025


An IAM role, Glue-execution-role, in the consumer account, with the following policies:

  1. AWS managed policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a new inline policy with the following permissions and attach it:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the following trust policy to Glue-execution-role, allowing AWS Glue to assume the role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
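The role setup above can also be scripted. The following is a minimal sketch that builds the two policy documents; the commented-out boto3 calls show how they might be attached, and the inline policy name is an assumption, not something mandated by the post.

```python
import json

GLUE_SERVICE = "glue.amazonaws.com"


def inline_policy() -> dict:
    """Inline policy granting Lake Formation data access, Redshift
    Serverless credential retrieval, and iam:PassRole scoped to AWS Glue."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials",
                ],
                "Resource": "*",
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"iam:PassedToService": GLUE_SERVICE}
                },
            },
        ],
    }


def trust_policy() -> dict:
    """Trust policy letting the AWS Glue service assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": [GLUE_SERVICE]},
                "Action": "sts:AssumeRole",
            }
        ],
    }


# With AWS credentials configured, the role could be created like this
# (policy name "LakehouseInlinePolicy" is a hypothetical choice):
# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="Glue-execution-role",
#                 AssumeRolePolicyDocument=json.dumps(trust_policy()))
# iam.put_role_policy(RoleName="Glue-execution-role",
#                     PolicyName="LakehouseInlinePolicy",
#                     PolicyDocument=json.dumps(inline_policy()))
print(json.dumps(inline_policy(), indent=2))
```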

Steps for producer account setup

For the producer account setup, you can either use your IAM administrator role added as a Lake Formation administrator, or use a Lake Formation administrator role with the permissions discussed in the prerequisites. For illustration purposes, we use the IAM admin role Admin, added as a Lake Formation administrator.

Configure your catalog

Complete the following steps to set up your catalog:

  1. Log in to the AWS Management Console as Admin.
  2. On the Amazon Redshift console, follow the instructions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Data Catalog.
  3. After the registration is initiated, you will see the invitation from Amazon Redshift on the Lake Formation console.
  4. Select the pending catalog invitation and choose Approve and create catalog.
  5. On the Set catalog details page, configure your catalog:
    1. For Name, enter a name (for this post, redshiftserverless1-uswest2).
    2. Select Access this catalog from Apache Iceberg compatible engines.
    3. Choose the IAM role you created for the data transfer.
    4. Choose Next.
  6. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add.
  7. Verify the granted permission on the next page and choose Next.
  8. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

  9. Choose Catalogs in the navigation pane and verify that the redshiftserverless1-uswest2 catalog is created.
  10. Explore the catalog detail page to verify the ordersdb.public database.
  11. On the database View dropdown menu, view the table and verify that the orderstbl table shows up.

As the Admin role, you can also query the orderstbl in Amazon Athena and confirm the data is available.

Grant permissions on the tables from the producer account to the consumer account

In this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and table orderstbl, as well as the Amazon S3 based Iceberg table returnstbl_iceberg and its database customerdb from the default catalog, with the consumer account. We can't share the entire catalog with external accounts as a catalog-level permission; we share just the database and table.

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select External accounts.
  4. Provide the consumer account ID.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose the account ID that represents the default catalog.
  7. For Databases, choose customerdb.
  8. Under Database permissions, select Describe under both Database permissions and Grantable permissions.
  9. Choose Grant.
  10. Repeat these steps and grant table-level Select and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps again to grant database- and table-level permissions for the orderstbl table of the federated catalog database redshiftserverless1-uswest2/ordersdb.

The following screenshots show the configuration for database-level permissions.

The following screenshots show the configuration for table-level permissions.

  12. Choose Data permissions in the navigation pane and verify that the consumer account has been granted database- and table-level permissions for both orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.
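The console grants above can equivalently be issued through the Lake Formation API. The following is a sketch that builds the request payloads for `boto3`'s `lakeformation.grant_permissions`; the account IDs are placeholders, and only the database/table grant shapes used in this walkthrough are covered.

```python
def describe_database_grant(consumer_account_id: str,
                            producer_account_id: str,
                            database: str) -> dict:
    """Payload for lakeformation.grant_permissions mirroring the console
    grant: Describe on a database, grantable onward by the consumer."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": consumer_account_id},
        "Resource": {
            "Database": {"CatalogId": producer_account_id, "Name": database}
        },
        "Permissions": ["DESCRIBE"],
        "PermissionsWithGrantOption": ["DESCRIBE"],
    }


def select_table_grant(consumer_account_id: str,
                       producer_account_id: str,
                       database: str,
                       table: str) -> dict:
    """Payload for a table-level Select and Describe grant."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": consumer_account_id},
        "Resource": {
            "Table": {
                "CatalogId": producer_account_id,
                "DatabaseName": database,
                "Name": table,
            }
        },
        "Permissions": ["SELECT", "DESCRIBE"],
        "PermissionsWithGrantOption": [],
    }


# With producer-account credentials, the grants might be applied as:
# import boto3
# lf = boto3.client("lakeformation")
# lf.grant_permissions(**describe_database_grant("111122223333",
#                                                "444455556666",
#                                                "customerdb"))
# lf.grant_permissions(**select_table_grant("111122223333", "444455556666",
#                                           "customerdb",
#                                           "returnstbl_iceberg"))
```

The "111122223333"/"444455556666" account IDs are illustrative stand-ins for the consumer and producer accounts.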

Register the Amazon S3 location of returnstbl_iceberg with Lake Formation

In this step, we register the data location of the Amazon S3 based Iceberg table returnstbl_iceberg with Lake Formation, so it is governed by Lake Formation permissions. Complete the following steps:

  1. On the Lake Formation console, choose Data lake locations in the navigation pane.
  2. Choose Register location.
  3. For Amazon S3 path, enter the path in your S3 bucket that you provided while creating the Iceberg table returnstbl_iceberg.
  4. For IAM role, provide the user-defined role LakeFormationS3Registration_custom that you created as a prerequisite.
  5. For Permission mode, select Lake Formation.
  6. Choose Register location.
  7. Choose Data lake locations in the navigation pane to verify the Amazon S3 registration.

With this step, the producer account setup is complete.

Steps for consumer account setup

For the consumer account setup, we use the IAM admin role Admin, added as a Lake Formation administrator.

The steps in the consumer account are fairly involved. In the consumer account, a Lake Formation administrator accepts the AWS Resource Access Manager (AWS RAM) shares and creates the required resource links that point to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared resources are accessible by running test queries in Athena. The admin further grants permissions on the resource links, database, and tables to the role Glue-execution-role. The admin then runs a join query in AWS Glue 5.0 Spark using Glue-execution-role.

Accept and verify the shared resources

Lake Formation uses AWS RAM shares to enable cross-account sharing, with Data Catalog resource policies in the AWS RAM policies. To view and verify the resources shared from the producer account, complete the following steps:

  1. Log in to the consumer AWS console and set the AWS Region to match the producer's shared resource Region. For this post, we use us-west-2.
  2. Open the Lake Formation console. You will see a message indicating there is a pending invitation and asking you to accept it on the AWS RAM console.
  3. Follow the instructions in Accepting a resource share invitation from AWS RAM to review and accept the pending invitations.
  4. When the invitation status changes to Accepted, choose Shared resources under Shared with me in the navigation pane.
  5. Verify that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the table returnstbl_iceberg, and the producer account ID under the Owner ID column display correctly.
  6. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  7. Search by the producer account ID.

You should see the customerdb and public databases. You can further select each database and choose View tables on the Actions dropdown menu to verify the table names.

You will not see an AWS RAM share invitation at the catalog level on the Lake Formation console, because catalog-level sharing isn't possible. You can review the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or by using the AWS Command Line Interface (AWS CLI) or SDK.

Create a catalog link container and resource links

A catalog link container is a Data Catalog object that references a local or cross-account federated database-level catalog from other AWS accounts. For more details, refer to Accessing a shared federated catalog. Catalog link containers are essentially Lake Formation resource links at the catalog level that reference or point to a Redshift cluster federated catalog or an Amazon Redshift managed catalog object from other accounts.

In the following steps, we create a catalog link container that points to the producer's shared federated catalog redshiftserverless1-uswest2. Inside the catalog link container, we create a database. Inside the database, we create a resource link for the table that points to the shared federated catalog table >:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Catalogs.
  2. Choose Create catalog.
  3. Provide the following details for the catalog:
    1. For Name, enter a name for the catalog (for this post, rl_link_container_ordersdb).
    2. For Type, choose Catalog link container.
    3. For Source, choose Redshift.
    4. For Target Redshift Catalog, enter the Amazon Resource Name (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Under Access from engines, select Access this catalog from Apache Iceberg compatible engines.
    6. For IAM role, provide the Redshift-S3 data transfer role that you created in the prerequisites.
    7. Choose Next.
  4. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add and then choose Next.
  5. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

  6. In the navigation pane, choose Catalogs.
  7. Verify that rl_link_container_ordersdb is created.

Create a database under rl_link_container_ordersdb

Complete the following steps:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose rl_link_container_ordersdb.
  3. Choose Create database.

Alternatively, you can choose the Create dropdown menu and then choose Database.

  4. Provide details for the database:
    1. For Name, enter a name (for this post, public_db).
    2. For Catalog, choose rl_link_container_ordersdb.
    3. Leave Location – optional blank.
    4. Under Default permissions for newly created tables, deselect Use only IAM access control for new tables in this database.
    5. Choose Create database.
  5. Choose Catalogs in the navigation pane to verify that public_db is created under rl_link_container_ordersdb.

Create a table resource link for the shared federated catalog table

A resource link to a shared federated catalog table can reside only inside the database of a catalog link container. A resource link for such tables will not work if created inside the default catalog. For more details on resource links, refer to Creating a resource link to a shared Data Catalog table.

Complete the following steps to create a table resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Tables.
  2. On the Create dropdown menu, choose Resource link.
  3. Provide details for the table resource link:
    1. For Resource link name, enter a name (for this post, rl_orderstbl).
    2. For Destination catalog, choose rl_link_container_ordersdb.
    3. For Database, choose public_db.
    4. For Shared table's region, choose US West (Oregon).
    5. For Shared table, choose orderstbl.
    6. After the Shared table is chosen, Shared table's database and Shared table's catalog ID should be automatically populated.
    7. Choose Create.
  4. In the navigation pane, choose Databases to verify that rl_orderstbl is created under public_db, inside rl_link_container_ordersdb.

Create a database resource link for the shared default catalog database

Now we create a database resource link in the default catalog to query the Amazon S3 based Iceberg table shared from the producer. For details on database resource links, refer to Creating a resource link to a shared Data Catalog database.

Although we can see the shared database in the default catalog of the consumer, a resource link is required to query it from analytics engines, such as Athena, Amazon EMR, and AWS Glue. When using AWS Glue with Lake Formation tables, the resource link must be named identically to the source account's resource. For more details on using AWS Glue with Lake Formation, refer to Considerations and limitations.

Complete the following steps to create a database resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose the account ID to select the default catalog.
  3. Search for customerdb.

You should see the shared database name customerdb with the Owner account ID matching your producer account ID.

  4. Select customerdb, and on the Create dropdown menu, choose Resource link.
  5. Provide details for the resource link:
    1. For Resource link name, enter a name (for this post, customerdb).
    2. The rest of the fields should already be populated.
    3. Choose Create.
  6. In the navigation pane, choose Databases and verify that customerdb is created under the default catalog. Resource link names display in italicized font.
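Database resource links can also be created through the Glue API: `glue.create_database` accepts a `TargetDatabase` that points at the shared source. The following is a sketch of the payload; the producer account ID is a placeholder, and note that the link name matches the source database name, per the naming consideration above.

```python
def database_resource_link(link_name: str,
                           producer_account_id: str,
                           source_database: str) -> dict:
    """DatabaseInput for glue.create_database that creates a resource link
    pointing at the producer's shared database."""
    return {
        "Name": link_name,
        "TargetDatabase": {
            "CatalogId": producer_account_id,
            "DatabaseName": source_database,
        },
    }


# With consumer-account credentials ("444455556666" stands in for the
# producer account ID):
# import boto3
# glue = boto3.client("glue")
# glue.create_database(
#     DatabaseInput=database_resource_link("customerdb", "444455556666",
#                                          "customerdb"))
```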

Verify access as Admin using Athena

Now you can verify your access using Athena. Complete the following steps:

  1. Open the Athena console.
  2. Make sure an S3 bucket is provided to store the Athena query results. For details, refer to Specify a query result location using the Athena console.
  3. In the navigation pane, verify both the default catalog and federated catalog tables by previewing them.
  4. You can also run a join query as follows. Note the three-part notation for referring to the tables from two different catalogs:
SELECT
returns_tb.market as Market,
sum(orders_tb.quantity) as Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl as orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg as returns_tb
ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;

This verifies the new capability of SageMaker Lakehouse, which allows accessing Redshift cluster tables and Amazon S3 based Iceberg tables in the same query, across AWS accounts, through the Data Catalog, using Lake Formation permissions.

Grant permissions to Glue-execution-role

Now we share the resources from the producer account with additional IAM principals in the consumer account. Typically, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account so they can perform their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You can use the Grant on target configuration to provide database Describe and table Select permissions.

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which represents the default catalog.
  7. Under Catalog permissions, select Describe for Catalog permissions.
  8. Choose Grant.
  9. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  10. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl.
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  11. Repeat these steps to grant additional permissions to Glue-execution-role:
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.

The following screenshots show the configuration for database public and customerdb permissions.

The following screenshots show the configuration for resource link customerdb permissions.

The following screenshots show the configuration for table returnstbl_iceberg permissions.

The following screenshots show the configuration for table orderstbl permissions.

  12. In the navigation pane, choose Data permissions and verify the permissions on Glue-execution-role.

Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL, joining the producer-shared federated orderstbl table with the Amazon S3 based returns table in the consumer account, to analyze the data and identify the total orders placed per market.
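The downloaded script is the one that actually runs in the job; the sketch below is only a hypothetical approximation of what such a script could look like, not the real LakeHouseGlueSparkJob.py. It assumes the consumer's default Data Catalog is the Spark session catalog, so returnstbl_iceberg is addressed without a catalog prefix; adjust the identifiers to your catalog configuration.

```python
# Hypothetical approximation of a job like LakeHouseGlueSparkJob.py;
# the actual downloaded script may differ.

# Three-part (catalog.database.table) notation for the federated catalog,
# as in the Athena query shown earlier. The returns table is assumed to be
# reachable through the session (default) catalog.
JOIN_QUERY = """
SELECT returns_tb.market AS market,
       SUM(orders_tb.quantity) AS total_quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""


def main():
    # Glue-runtime imports are kept inside main() so the query above can be
    # inspected without the AWS Glue libraries installed.
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    spark.sql(JOIN_QUERY).show()

# In the Glue job, the entry point simply calls main().
```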

Replace > in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.
  3. For Engine, choose Spark.
  4. For Options, choose Start fresh.
  5. Choose Upload script.
  6. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  7. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  8. Save the job.
  9. Choose Run to execute the AWS Glue job, and wait for the job to complete.
  10. Review the job run details in the output logs.
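As an alternative to the console, the same job definition can be created with `glue.create_job`. The following sketch builds the keyword arguments; the S3 script location is a placeholder you would substitute with your own bucket path.

```python
def glue_job_definition(script_s3_path: str,
                        role_name: str = "Glue-execution-role") -> dict:
    """Keyword arguments for glue.create_job mirroring the console setup:
    a Glue 5.0 Spark job with Iceberg support and Lake Formation
    fine-grained access enabled."""
    return {
        "Name": "LakeHouseGlueSparkJob",
        "Role": role_name,
        "Command": {
            "Name": "glueetl",
            "ScriptLocation": script_s3_path,
            "PythonVersion": "3",
        },
        "GlueVersion": "5.0",
        "DefaultArguments": {
            "--datalake-formats": "iceberg",
            "--enable-lakeformation-fine-grained-access": "true",
        },
    }


# With consumer-account credentials (bucket path is hypothetical):
# import boto3
# glue = boto3.client("glue")
# glue.create_job(**glue_job_definition(
#     "s3://example-bucket/scripts/LakeHouseGlueSparkJob.py"))
```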

Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created as part of the data transfer in both accounts, and the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.

Conclusion

In this post, we illustrated how to bring your existing Redshift tables to SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this helpful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;
CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string, 
  `order_id` string, 
  `market` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ';' 
LOCATION
  's3:////'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

select * from customerdb.returnstbl_csv limit 10;

  4. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string, 
  `order_id` string, 
  `market` string)
LOCATION 's3:///returnstbl_iceberg/' 
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg
SELECT *
FROM returnstbl_csv;

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10;

  5. To create the orders table in the Redshift Serverless namespace, open the Query Editor v2 on the Amazon Redshift console.
  6. Connect to the default namespace using your database admin user credentials.
  7. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it. Copy the data from the S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;

create table orderstbl(
  row_id int, 
  order_id VARCHAR, 
  order_date VARCHAR, 
  ship_date VARCHAR, 
  ship_mode VARCHAR, 
  customer_id VARCHAR, 
  customer_name VARCHAR, 
  segment VARCHAR, 
  city VARCHAR, 
  state VARCHAR, 
  country VARCHAR, 
  postal_code int, 
  market VARCHAR, 
  region VARCHAR, 
  product_id VARCHAR, 
  category VARCHAR, 
  sub_category VARCHAR, 
  product_name VARCHAR, 
  sales VARCHAR, 
  quantity bigint, 
  discount VARCHAR, 
  profit VARCHAR, 
  shipping_cost VARCHAR, 
  order_priority VARCHAR
  );

copy orderstbl
from 's3:///ordersdatacsv/orders.csv' 
iam_role 'arn:aws:iam:::role/service-role/'
CSV 
DELIMITER ';'
IGNOREHEADER 1
;

select * from ordersdb.orderstbl limit 5;

About the Authors

Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.

Buy JNews
ADVERTISEMENT


An IAM position, Glue-execution-role, within the client account, with the next insurance policies:

  1. AWS managed insurance policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a brand new in-line coverage with the next permissions and fasten it:
    {
        "Model": "2012-10-17",
        "Assertion": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Useful resource": "*"
            },
            {
                "Impact": "Permit",
                "Motion": "iam:PassRole",
                "Useful resource": "*",
                "Situation": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the next belief coverage to Glue-execution-role, permitting AWS Glue to imagine this position:
    {
        "Model": "2012-10-17",
        "Assertion": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Motion": "sts:AssumeRole"
            }
        ]
    }

Steps for producer account setup

For the producer account setup, you’ll be able to both use your IAM administrator position added as Lake Formation administrator or use a Lake Formation administrator position with permissions added as mentioned within the conditions. For illustration functions, we use the IAM admin position Admin added as Lake Formation administrator.

002-BDB 5089

Configure your catalog

Full the next steps to arrange your catalog:

  1. Log in to AWS Administration Console as Admin.
  2. On the Amazon Redshift console, observe the directions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Knowledge Catalog.
  3. After the registration is initiated, you will notice the invite from Amazon Redshift on the Lake Formation console.
  4. Choose the pending catalog invitation and select Approve and create catalog.

003-BDB 5089

  1. On the Set catalog particulars web page, configure your catalog:
    1. For Title, enter a reputation (for this put up, redshiftserverless1-uswest2).
    2. Choose Entry this catalog from Apache Iceberg appropriate engines.
    3. Select the IAM position you created for the information switch.
    4. Select Subsequent.

    004-BDB 5089

  2. On the Grant permissions – elective web page, select Add permissions.
    1. Grant the Admin consumer Tremendous consumer permissions for Catalog permissions and Grantable permissions.
    2. Select Add.

    005-BDB 5089

  3. Confirm the granted permission on the subsequent web page and select Subsequent.
    006-BDB 5089
  4. Evaluate the main points on the Evaluate and create web page and select Create catalog.
    007-BDB 5089

Wait just a few seconds for the catalog to indicate up.

  1. Select Catalogs within the navigation pane and confirm that the redshiftserverless1-uswest2 catalog is created.
    008-BDB 5089
  2. Discover the catalog element web page to confirm the ordersdb.public database.
    009-BDB 5089
  3. On the database View dropdown menu, view the desk and confirm that the orderstbl desk exhibits up.
    010-BDB 5089

Because the Admin position, you too can question the orderstbl in Amazon Athena and ensure the information is offered.

011-BDB 5089

Grant permissions on the tables from the producer account to the patron account

On this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and desk orderstbl in addition to the Amazon S3 primarily based Iceberg desk returnstbl_iceberg and its database customerdb from the default catalog to the patron account. We are able to’t share your complete catalog to exterior accounts as a catalog-level permission; we simply share the database and desk.

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
    012-BDB 5089
  3. Under Principals, select External accounts.
  4. Provide the consumer account ID.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose the account ID that represents the default catalog.
  7. For Databases, choose customerdb.
    013-BDB 5089
  8. Under Database permissions, select Describe under Database permissions and Grantable permissions.
  9. Choose Grant.
    014-BDB 5089
  10. Repeat these steps and grant table-level Select and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps again to grant database- and table-level permissions for the orderstbl table of the federated catalog database redshiftserverless1-uswest2/ordersdb.

The following screenshots show the configuration for database-level permissions.

015-BDB 5089

016-BDB 5089

The following screenshots show the configuration for table-level permissions.

017-BDB 5089

018-BDB 5089

  1. Choose Data permissions in the navigation pane and verify that the consumer account has been granted database- and table-level permissions for both orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.
    019-BDB 5089
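The console grants above can also be scripted. The following is a minimal sketch that builds the request body for Lake Formation's GrantPermissions API (callable through boto3's lakeformation client). The database and table names match this post; the account IDs are illustrative placeholders, and granting SELECT with the grant option on the table is an assumption mirroring the database-level grant.

```python
import json

def build_cross_account_table_grant(producer_account_id: str,
                                    consumer_account_id: str) -> dict:
    # Request body for lakeformation:GrantPermissions, sharing the default
    # catalog table customerdb.returnstbl_iceberg with the consumer account.
    return {
        "Principal": {"DataLakePrincipalIdentifier": consumer_account_id},
        "Resource": {
            "Table": {
                "CatalogId": producer_account_id,  # default catalog = producer account ID
                "DatabaseName": "customerdb",
                "Name": "returnstbl_iceberg",
            }
        },
        "Permissions": ["SELECT", "DESCRIBE"],
        # Assumption: grantable permissions mirror the console flow
        "PermissionsWithGrantOption": ["SELECT", "DESCRIBE"],
    }

# Illustrative account IDs only -- substitute your own
request = build_cross_account_table_grant("111111111111", "222222222222")
print(json.dumps(request, indent=2))
# To apply it: boto3.client("lakeformation").grant_permissions(**request)
```

The same shape, with a Database resource and Describe permission, covers the database-level grant.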

Register the Amazon S3 location of returnstbl_iceberg with Lake Formation

In this step, we register the data location of the Amazon S3 based Iceberg table returnstbl_iceberg with Lake Formation, so that it is managed by Lake Formation permissions. Complete the following steps:

  1. On the Lake Formation console, choose Data lake locations in the navigation pane.
  2. Choose Register location.
    020-BDB 5089
  3. For Amazon S3 path, enter the path in your S3 bucket that you provided while creating the Iceberg table returnstbl_iceberg.
  4. For IAM role, provide the user-defined role LakeFormationS3Registration_custom that you created as a prerequisite.
  5. For Permission mode, select Lake Formation.
  6. Choose Register location.
    021-BDB 5089
  7. Choose Data lake locations in the navigation pane to verify the Amazon S3 registration.
    022-BDB 5089
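The registration step maps to Lake Formation's RegisterResource API. The sketch below only assembles the parameters; the bucket name, account ID, and prefix are placeholders standing in for your own values.

```python
import json

# Parameters for lakeformation:RegisterResource (sketch; the bucket name,
# account ID, and prefix are placeholders -- substitute your own values).
register_params = {
    "ResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket/returnstbl_iceberg",
    "RoleArn": "arn:aws:iam::111111111111:role/LakeFormationS3Registration_custom",
    "UseServiceLinkedRole": False,  # we register with the user-defined role
}
print(json.dumps(register_params, indent=2))
# To apply: boto3.client("lakeformation").register_resource(**register_params)
```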

With this step, the producer account setup is complete.

Steps for consumer account setup

For the consumer account setup, we use the IAM admin role Admin, added as a Lake Formation administrator.

The steps in the consumer account are quite involved. In the consumer account, a Lake Formation administrator will accept the AWS Resource Access Manager (AWS RAM) shares and create the required resource links that point to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared resources are accessible by running test queries in Athena. The admin further grants permissions to the role Glue-execution-role on the resource links, database, and tables. The admin then runs a join query in AWS Glue 5.0 Spark using Glue-execution-role.

Accept and verify the shared resources

Lake Formation uses AWS RAM shares to enable cross-account sharing, with Data Catalog resource policies carried in the AWS RAM policies. To view and verify the shared resources from the producer account, complete the following steps:

  1. Log in to the consumer AWS console and set the AWS Region to match the producer's shared resource Region. For this post, we use us-west-2.
  2. Open the Lake Formation console. You will see a message indicating there is a pending invite and asking you to accept it on the AWS RAM console.
    023-BDB 5089
  3. Follow the instructions in Accepting a resource share invitation from AWS RAM to review and accept the pending invitations.
  4. When the invite status changes to Accepted, choose Shared resources under Shared with me in the navigation pane.
  5. Verify that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the table returnstbl_iceberg, and the producer account ID under the Owner ID column display correctly.
    024-BDB 5089
  6. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  7. Search by the producer account ID.
    You should see the customerdb and public databases. You can further select each database and choose View tables on the Actions dropdown menu to verify the table names.

025-BDB 5089

You will not see an AWS RAM share invite at the catalog level on the Lake Formation console, because catalog-level sharing isn't possible. You can review the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or by using the AWS Command Line Interface (AWS CLI) or SDK.
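For the SDK route, the RAM API exposes the pending invitations directly. The sketch below filters a GetResourceShareInvitations response down to the ARNs still pending acceptance; the sample data is illustrative, not real output.

```python
# Filter helper for ram:GetResourceShareInvitations responses -- returns
# the ARNs of invitations still pending acceptance.
def pending_invitation_arns(invitations: list[dict]) -> list[str]:
    return [
        inv["resourceShareInvitationArn"]
        for inv in invitations
        if inv.get("status") == "PENDING"
    ]

# Illustrative sample shaped like the API response (not real output)
sample = [
    {"resourceShareInvitationArn":
        "arn:aws:ram:us-west-2:111111111111:resource-share-invitation/abc",
     "status": "PENDING"},
    {"resourceShareInvitationArn":
        "arn:aws:ram:us-west-2:111111111111:resource-share-invitation/def",
     "status": "ACCEPTED"},
]
print(pending_invitation_arns(sample))
# Each pending ARN can then be passed to
# boto3.client("ram").accept_resource_share_invitation(
#     resourceShareInvitationArn=arn)
```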

Create a catalog link container and resource links

A catalog link container is a Data Catalog object that references a local or cross-account federated database-level catalog from other AWS accounts. For more details, refer to Accessing a shared federated catalog. Catalog link containers are essentially Lake Formation resource links at the catalog level that reference or point to a Redshift cluster federated catalog or an Amazon Redshift managed catalog object from other accounts.

In the following steps, we create a catalog link container that points to the producer shared federated catalog redshiftserverless1-uswest2. Inside the catalog link container, we create a database. Inside the database, we create a resource link for the table that points to the shared federated catalog table >:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Catalogs.
  2. Choose Create catalog.

026-BDB 5089

  1. Provide the following details for the catalog:
    1. For Name, enter a name for the catalog (for this post, rl_link_container_ordersdb).
    2. For Type, choose Catalog Link container.
    3. For Source, choose Redshift.
    4. For Target Redshift Catalog, enter the Amazon Resource Name (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Under Access from engines, select Access this catalog from Apache Iceberg compatible engines.
    6. For IAM role, provide the Redshift-S3 data transfer role that you created in the prerequisites.
    7. Choose Next.

027-BDB 5089

  1. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add, and then choose Next.

028-BDB 5089

  1. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

029-BDB 5089

  1. In the navigation pane, choose Catalogs.
  2. Verify that rl_link_container_ordersdb is created.

030-BDB 5089

Create a database under rl_link_container_ordersdb

Complete the following steps:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose rl_link_container_ordersdb.
  3. Choose Create database.

Alternatively, you can choose the Create dropdown menu and then choose Database.

  1. Provide details for the database:
    1. For Name, enter a name (for this post, public_db).
    2. For Catalog, choose rl_link_container_ordersdb.
    3. Leave Location – optional blank.
    4. Under Default permissions for newly created tables, deselect Use only IAM access control for new tables in this database.
    5. Choose Create database.

031-BDB 5089

  1. Choose Catalogs in the navigation pane to verify that public_db is created under rl_link_container_ordersdb.

032-BDB 5089

Create a table resource link for the shared federated catalog table

A resource link to a shared federated catalog table can reside only inside the database of a catalog link container. A resource link for such tables will not work if created inside the default catalog. For more details on resource links, refer to Creating a resource link to a shared Data Catalog table.

Complete the following steps to create a table resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Tables.
  2. On the Create dropdown menu, choose Resource link.

033-BDB 5089

  1. Provide details for the table resource link:
    1. For Resource link name, enter a name (for this post, rl_orderstbl).
    2. For Destination catalog, choose rl_link_container_ordersdb.
    3. For Database, choose public_db.
    4. For Shared table's Region, choose US West (Oregon).
    5. For Shared table, choose orderstbl.
    6. After the shared table is selected, Shared table's database and Shared table's catalog ID should be automatically populated.
    7. Choose Create.

034-BDB 5089

  1. In the navigation pane, choose Databases to verify that rl_orderstbl is created under public_db, inside rl_link_container_ordersdb.

035-BDB 5089

036-BDB 5089
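Programmatically, a table resource link is a Glue table whose TableInput carries a TargetTable. The sketch below mirrors the console values above; the producer account ID is a placeholder, and the exact CatalogId formats for a catalog link container and a federated catalog are assumptions to verify against the Glue API reference.

```python
import json

PRODUCER_ACCOUNT = "111111111111"  # placeholder

# Parameters for glue:CreateTable that create the rl_orderstbl resource
# link inside the catalog link container's public_db database (sketch).
create_link_params = {
    # Assumption: the catalog link container is addressed by its name here
    "CatalogId": "rl_link_container_ordersdb",
    "DatabaseName": "public_db",
    "TableInput": {
        "Name": "rl_orderstbl",
        "TargetTable": {
            # Assumption: federated catalog ID format <account>:<catalog>/<db>
            "CatalogId": f"{PRODUCER_ACCOUNT}:redshiftserverless1-uswest2/ordersdb",
            "DatabaseName": "public",
            "Name": "orderstbl",
            "Region": "us-west-2",
        },
    },
}
print(json.dumps(create_link_params, indent=2))
# To apply: boto3.client("glue").create_table(**create_link_params)
```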

Create a database resource link for the shared default catalog database

Now we create a database resource link in the default catalog to query the Amazon S3 based Iceberg table shared from the producer. For details on database resource links, refer to Creating a resource link to a shared Data Catalog database.

Although we’re capable of see the shared database within the default catalog of the patron, a useful resource hyperlink is required to question from analytics engines, akin to Athena, Amazon EMR, and AWS Glue. When utilizing AWS Glue with Lake Formation tables, the useful resource hyperlink must be named identically to the supply account’s useful resource. For extra particulars on utilizing AWS Glue with Lake Formation, confer with Issues and limitations.

Complete the following steps to create a database resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose the account ID to select the default catalog.
  3. Search for customerdb.

You should see the shared database name customerdb with the Owner account ID as that of your producer account ID.

  1. Select customerdb, and on the Create dropdown menu, choose Resource link.
  2. Provide details for the resource link:
    1. For Resource link name, enter a name (for this post, customerdb).
    2. The rest of the fields should already be populated.
    3. Choose Create.
  3. In the navigation pane, choose Databases and verify that customerdb is created under the default catalog. Resource link names show in italicized font.

037-BDB 5089
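The same database resource link can be created through Glue's CreateDatabase API, whose DatabaseInput carries a TargetDatabase. A minimal sketch, with the producer account ID as a placeholder:

```python
import json

PRODUCER_ACCOUNT = "111111111111"  # placeholder

# Parameters for glue:CreateDatabase that create the customerdb database
# resource link in the consumer's default catalog (sketch).
create_db_link_params = {
    "DatabaseInput": {
        "Name": "customerdb",  # must match the source name for AWS Glue jobs
        "TargetDatabase": {
            "CatalogId": PRODUCER_ACCOUNT,  # producer's default catalog
            "DatabaseName": "customerdb",
        },
    },
}
print(json.dumps(create_db_link_params, indent=2))
# To apply: boto3.client("glue").create_database(**create_db_link_params)
```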

Verify access as Admin using Athena

Now you can verify your access using Athena. Complete the following steps:

  1. Open the Athena console.
  2. Make sure an S3 bucket is provided to store the Athena query results. For details, refer to Specify a query result location using the Athena console.
  3. In the navigation pane, verify both the default catalog and federated catalog tables by previewing them.
  4. You can also run a join query as follows. Note the three-part notation for referring to the tables from two different catalogs:
SELECT
returns_tb.market as Market,
sum(orders_tb.quantity) as Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl as orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg as returns_tb
ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;

038-BDB 5089
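The same verification can be scripted against the Athena API. The sketch below assembles the parameters for StartQueryExecution; the workgroup, the output bucket, and the `quantity` column name follow this post's examples, and the boto3 calls themselves are left commented.

```python
# Sketch of submitting the join query through the Athena API; the
# workgroup and output location are placeholders.
query = """
SELECT returns_tb.market AS market,
       SUM(orders_tb.quantity) AS total_quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

start_params = {
    "QueryString": query,
    "WorkGroup": "primary",  # placeholder workgroup
    "ResultConfiguration": {
        # placeholder results bucket
        "OutputLocation": "s3://amzn-s3-demo-bucket/athena-results/",
    },
}
print(start_params["QueryString"].strip())
# To run: resp = boto3.client("athena").start_query_execution(**start_params)
# then poll get_query_execution(QueryExecutionId=resp["QueryExecutionId"])
# until the state is SUCCEEDED, and fetch rows with get_query_results.
```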

This verifies the new capability of SageMaker Lakehouse, which allows accessing Redshift cluster tables and Amazon S3 based Iceberg tables in the same query, across AWS accounts, through the Data Catalog, using Lake Formation permissions.

Grant permissions to Glue-execution-role

Now we will share the resources from the producer account with additional IAM principals in the consumer account. Usually, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account so they can do their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You can use the Grant on target configuration to provide database Describe and table Select permissions.

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which indicates the default catalog.
  7. Under Catalog permissions, select Describe for Catalog permissions.
  8. Choose Grant.

039-BDB 5089

040-BDB 5089

  1. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  2. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl.
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  3. Repeat these steps to grant additional permissions to Glue-execution-role.
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.

The following screenshots show the configuration for the public and customerdb database permissions.

041-BDB 5089

042-BDB 5089

The following screenshots show the configuration for the customerdb resource link permissions.

043-BDB 5089

044-BDB 5089

The following screenshots show the configuration for the returnstbl_iceberg table permissions.

045-BDB 5089

046-BDB 5089

The following screenshots show the configuration for the orderstbl table permissions.

047-BDB 5089

048-BDB 5089

  1. In the navigation pane, choose Data permissions and verify the permissions on Glue-execution-role.

049-BDB 5089

Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL, joining the producer-shared federated orderstbl table with the Amazon S3 based returns table in the consumer account to analyze the data and identify the total orders placed per market.
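The downloadable script is not reproduced in this post; the following is a minimal sketch of what such a Glue 5.0 job could look like. The SQL mirrors the Athena join shown earlier, on the assumption that the same resource link names resolve in Glue Spark (the real LakeHouseGlueSparkJob.py may differ); the Glue boilerplate resolves only inside a Glue job environment, so it is kept inside a function.

```python
# Minimal sketch of a Glue 5.0 PySpark job joining the federated Redshift
# table with the S3 based Iceberg table via their consumer-side links.
JOIN_SQL = """
SELECT returns_tb.market AS market,
       SUM(orders_tb.quantity) AS total_quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

def run_job() -> None:
    # These imports resolve only inside an AWS Glue job environment.
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    spark.sql(JOIN_SQL).show()

print("rl_orderstbl" in JOIN_SQL)
```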

Replace > in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.

050-BDB 5089

  1. For Engine, choose Spark.
  2. For Options, choose Start fresh.
  3. Choose Upload script.
  4. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  5. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  6. Save the job.
  7. Choose Run to execute the AWS Glue job, and wait for the job to complete.
  8. Review the job run details in the output logs.

051-BDB 5089

052-BDB 5089

Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created as part of data transfer in both accounts and the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.
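As a rough sketch of the producer-side API calls for this cleanup, the snippet below only assembles and prints them; the bucket name is a placeholder, `deregister_resource` is a documented Lake Formation call, and `delete_catalog` is assumed from the newer multi-catalog Glue API, so verify it against your SDK version before uncommenting.

```python
# Producer-side cleanup sketch: deregister the data lake location, then
# remove the federated catalog. Calls are commented out -- review first.
cleanup_calls = [
    ("lakeformation", "deregister_resource",
     {"ResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket/returnstbl_iceberg"}),
    # Assumption: DeleteCatalog from the multi-catalog Glue API
    ("glue", "delete_catalog",
     {"CatalogId": "redshiftserverless1-uswest2"}),
]
for service, operation, params in cleanup_calls:
    print(f"{service}.{operation}({params})")
    # getattr(boto3.client(service), operation)(**params)
```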

Conclusion

In this post, we illustrated how to bring your existing Redshift tables to SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this helpful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;
CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string, 
  `order_id` string, 
  `market` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ';' 
LOCATION
  's3:////'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

select * from customerdb.returnstbl_csv limit 10; 

053-BDB 5089

  1. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string, 
  `order_id` string, 
  `market` string)
LOCATION 's3:///returnstbl_iceberg/' 
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg
SELECT *
FROM returnstbl_csv;  

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10; 

054-BDB 5089

  1. To create the orders table in the Redshift Serverless namespace, open the Query Editor v2 on the Amazon Redshift console.
  2. Connect to the default namespace using your database admin user credentials.
  3. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it. Copy the data from your S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;

create table orderstbl(
  row_id int, 
  order_id VARCHAR, 
  order_date VARCHAR, 
  ship_date VARCHAR, 
  ship_mode VARCHAR, 
  customer_id VARCHAR, 
  customer_name VARCHAR, 
  segment VARCHAR, 
  city VARCHAR, 
  state VARCHAR, 
  country VARCHAR, 
  postal_code int, 
  market VARCHAR, 
  region VARCHAR, 
  product_id VARCHAR, 
  category VARCHAR, 
  sub_category VARCHAR, 
  product_name VARCHAR, 
  sales VARCHAR, 
  quantity bigint, 
  discount VARCHAR, 
  profit VARCHAR, 
  shipping_cost VARCHAR, 
  order_priority VARCHAR
  );

copy orderstbl
from 's3:///ordersdatacsv/orders.csv' 
iam_role 'arn:aws:iam:::role/service-role/'
CSV 
DELIMITER ';'
IGNOREHEADER 1
;

select * from ordersdb.orderstbl limit 5;

About the Authors

055-BDB 5089Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

056-BDB 5089Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.

An IAM role, Glue-execution-role, in the consumer account, with the following policies:

  1. AWS managed policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a new inline policy with the following permissions and attach it:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the following trust policy to Glue-execution-role, allowing AWS Glue to assume this role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }

Steps for producer account setup

For the producer account setup, you can either use your IAM administrator role added as a Lake Formation administrator, or use a Lake Formation administrator role with permissions added as discussed in the prerequisites. For illustration purposes, we use the IAM admin role Admin added as a Lake Formation administrator.

002-BDB 5089

Configure your catalog

Complete the following steps to set up your catalog:

  1. Log in to the AWS Management Console as Admin.
  2. On the Amazon Redshift console, follow the instructions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Data Catalog.
  3. After the registration is initiated, you will see the invite from Amazon Redshift on the Lake Formation console.
  4. Select the pending catalog invitation and choose Approve and create catalog.

003-BDB 5089

  1. On the Set catalog particulars web page, configure your catalog:
    1. For Title, enter a reputation (for this put up, redshiftserverless1-uswest2).
    2. Choose Entry this catalog from Apache Iceberg appropriate engines.
    3. Select the IAM position you created for the information switch.
    4. Select Subsequent.

    004-BDB 5089

  2. On the Grant permissions – elective web page, select Add permissions.
    1. Grant the Admin consumer Tremendous consumer permissions for Catalog permissions and Grantable permissions.
    2. Select Add.

    005-BDB 5089

  3. Confirm the granted permission on the subsequent web page and select Subsequent.
    006-BDB 5089
  4. Evaluate the main points on the Evaluate and create web page and select Create catalog.
    007-BDB 5089

Wait just a few seconds for the catalog to indicate up.

  1. Select Catalogs within the navigation pane and confirm that the redshiftserverless1-uswest2 catalog is created.
    008-BDB 5089
  2. Discover the catalog element web page to confirm the ordersdb.public database.
    009-BDB 5089
  3. On the database View dropdown menu, view the desk and confirm that the orderstbl desk exhibits up.
    010-BDB 5089

Because the Admin position, you too can question the orderstbl in Amazon Athena and ensure the information is offered.

011-BDB 5089

Grant permissions on the tables from the producer account to the patron account

On this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and desk orderstbl in addition to the Amazon S3 primarily based Iceberg desk returnstbl_iceberg and its database customerdb from the default catalog to the patron account. We are able to’t share your complete catalog to exterior accounts as a catalog-level permission; we simply share the database and desk.

  1. On the Lake Formation console, select Knowledge permissions within the navigation pane.
  2. Select Grant.
    012-BDB 5089
  3. Underneath Principals, choose Exterior accounts.
  4. Present the patron account ID.
  5. Underneath LF-Tags or catalog assets, choose Named Knowledge Catalog assets.
  6. For Catalogs, select the account ID that represents the default catalog.
  7. For Databases, select customerdb.
    013-BDB 5089
  8. Underneath Database permissions, choose Describe below Database permissions and Grantable permissions.
  9. Select Grant.
    014-BDB 5089
  10. Repeat these steps and grant table-level Choose and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps once more to grant database- and table-level permissions for the ordertbl desk of the federated catalog database redshiftserverless1-uswest2/ordersdb.

The next screenshots present the configuration for database-level permissions.

015-BDB 5089

016-BDB 5089

The next screenshots present the configuration for table-level permissions.

017-BDB 5089

018-BDB 5089

  1. Select Knowledge permissions within the navigation pane and confirm that the patron account has been granted database- and table-level permissions for each orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.
    019-BDB 5089

Register the Amazon S3 location of the returnstbl_iceberg with Lake Formation.

On this step, we register the Amazon S3 primarily based Iceberg desk returnstbl_iceberg knowledge location with Lake Formation to be managed by Lake Formation permissions. Full the next steps:

  1. On the Lake Formation console, select Knowledge lake areas within the navigation pane.
  2. Select Register location.
    020-BDB 5089
  3. For Amazon S3 path, enter the trail in your S3 bucket that you simply supplied whereas creating the Iceberg desk returnstbl_iceberg.
  4. For IAM position, present the user-defined position LakeFormationS3Registration_custom that you simply created as a prerequisite.
  5. For Permission mode, choose Lake Formation.
  6. Select Register location.
    021-BDB 5089
  7. Select Knowledge lake areas within the navigation pane to confirm the Amazon S3 registration.
    022-BDB 5089

With this step, the producer account setup is full.

Steps for client account setup

For the patron account setup, we use the IAM admin position Admin, added as a Lake Formation administrator.

The steps within the client account are fairly concerned. Within the client account, a Lake Formation administrator will settle for the AWS Useful resource Entry Supervisor (AWS RAM) shares and create the required useful resource hyperlinks that time to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared assets are accessible by working take a look at queries in Athena. The admin additional grants permissions to the position Glue-execution-role on the useful resource hyperlinks, database, and tables. The admin then runs a be part of question in AWS Glue 5.0 Spark utilizing Glue-execution-role.

Settle for and confirm the shared assets

Lake Formation makes use of AWS RAM shares to allow cross-account sharing with Knowledge Catalog useful resource insurance policies within the AWS RAM insurance policies. To view and confirm the shared assets from producer account, full the next steps:

  1. Log in to the patron AWS console and set the AWS Area to match the producer’s shared useful resource Area. For this put up, we use us-west-2.
  2. Open the Lake Formation console. You will note a message indicating there’s a pending invite and asking you settle for it on the AWS RAM console.
    023-BDB 5089
  3. Observe the directions in Accepting a useful resource share invitation from AWS RAM to overview and settle for the pending invitations.
  4. When the invite standing modifications to Accepted, select Shared assets below Shared with me within the navigation pane.
  5. Confirm that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the desk returnstbl_iceberg, and the producer account ID below Proprietor ID column show accurately.
    024-BDB 5089
  6. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Databases.
  7. Search by the producer account ID.
    It is best to see the customerdb and public databases. You may additional choose every database and select View tables on the Actions dropdown menu and confirm the desk names

025-BDB 5089

You’ll not see an AWS RAM share invite for the catalog degree on the Lake Formation console, as a result of catalog-level sharing isn’t attainable. You may overview the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or utilizing the AWS Command Line Interface (AWS CLI) or SDK.

Create a catalog hyperlink container and useful resource hyperlinks

A catalog hyperlink container is a Knowledge Catalog object that references an area or cross-account federated database-level catalog from different AWS accounts. For extra particulars, confer with Accessing a shared federated catalog. Catalog hyperlink containers are primarily Lake Formation useful resource hyperlinks on the catalog degree that reference or level to a Redshift cluster federated catalog or Amazon Redshift managed catalog object from different accounts.

Within the following steps, we create a catalog hyperlink container that factors to the producer shared federated catalog redshiftserverless1-uswest2. Contained in the catalog hyperlink container, we create a database. Contained in the database, we create a useful resource hyperlink for the desk that factors to the shared federated catalog desk >:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Catalogs.
  2. Select Create catalog.

026-BDB 5089

  1. Present the next particulars for the catalog:
    1. For Title, enter a reputation for the catalog (for this put up, rl_link_container_ordersdb).
    2. For Kind, select Catalog Hyperlink container.
    3. For Supply, select Redshift.
    4. For Goal Redshift Catalog, enter the Amazon Useful resource Title (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Underneath Entry from engines, choose Entry this catalog from Apache Iceberg appropriate engines.
    6. For IAM position, present the Redshift-S3 knowledge switch position that you simply had created within the conditions.
    7. Select Subsequent.


  1. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add and then choose Next.


  1. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.


  1. In the navigation pane, choose Catalogs.
  2. Verify that rl_link_container_ordersdb is created.
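The Target Redshift Catalog ARN entered earlier follows a fixed shape (arn:aws:glue:region:account:catalog/catalog-name/database). A quick, hypothetical sanity check for such ARNs can be sketched as follows; the function name and the sample account ID are illustrative.

```python
# Hypothetical check for a federated catalog ARN of the shape:
#   arn:aws:glue:<region>:<account-id>:catalog/<catalog-name>/<database>
def parse_catalog_arn(arn):
    prefix, resource = arn.split(":catalog/", 1)
    parts = prefix.split(":")
    if parts[:3] != ["arn", "aws", "glue"]:
        raise ValueError("not a Glue catalog ARN: " + arn)
    # parts: ["arn", "aws", "glue", region, account-id]
    return {"region": parts[3], "account": parts[4], "path": resource.split("/")}

info = parse_catalog_arn(
    "arn:aws:glue:us-west-2:111122223333:catalog/redshiftserverless1-uswest2/ordersdb"
)
print(info)
```

Validating the ARN before pasting it into the console catches the common mistake of omitting the database segment.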


Create a database under rl_link_container_ordersdb

Complete the following steps:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose rl_link_container_ordersdb.
  3. Choose Create database.

Alternatively, you can choose the Create dropdown menu and then choose Database.

  1. Provide details for the database:
    1. For Name, enter a name (for this post, public_db).
    2. For Catalog, choose rl_link_container_ordersdb.
    3. Leave Location – optional blank.
    4. Under Default permissions for newly created tables, deselect Use only IAM access control for new tables in this database.
    5. Choose Create database.


  1. Choose Catalogs in the navigation pane to verify that public_db is created under rl_link_container_ordersdb.


Create a table resource link for the shared federated catalog table

A resource link to a shared federated catalog table can reside only inside the database of a catalog link container. A resource link for such tables will not work if created inside the default catalog. For more details on resource links, refer to Creating a resource link to a shared Data Catalog table.

Complete the following steps to create a table resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Tables.
  2. On the Create dropdown menu, choose Resource link.


  1. Provide details for the table resource link:
    1. For Resource link name, enter a name (for this post, rl_orderstbl).
    2. For Destination catalog, choose rl_link_container_ordersdb.
    3. For Database, choose public_db.
    4. For Shared table's Region, choose US West (Oregon).
    5. For Shared table, choose orderstbl.
    6. After the shared table is chosen, Shared table's database and Shared table's catalog ID should be automatically populated.
    7. Choose Create.


  1. In the navigation pane, choose Databases to verify that rl_orderstbl is created under public_db, inside rl_link_container_ordersdb.


Create a database resource link for the shared default catalog database

Now we create a database resource link in the default catalog to query the Amazon S3 based Iceberg table shared from the producer. For details on database resource links, refer to Creating a resource link to a shared Data Catalog database.

Although we can see the shared database in the default catalog of the consumer, a resource link is required to query it from analytics engines, such as Athena, Amazon EMR, and AWS Glue. When using AWS Glue with Lake Formation tables, the resource link needs to be named identically to the source account's resource. For more details on using AWS Glue with Lake Formation, refer to Considerations and limitations.

Complete the following steps to create a database resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose the account ID to select the default catalog.
  3. Search for customerdb.

You should see the shared database name customerdb with the Owner account ID matching your producer account ID.

  1. Select customerdb, and on the Create dropdown menu, choose Resource link.
  2. Provide details for the resource link:
    1. For Resource link name, enter a name (for this post, customerdb).
    2. The rest of the fields should already be populated.
    3. Choose Create.
  3. In the navigation pane, choose Databases and verify that customerdb is created under the default catalog. Resource link names appear in italics.


Verify access as Admin using Athena

Now you can verify your access using Athena. Complete the following steps:

  1. Open the Athena console.
  2. Make sure an S3 bucket is provided to store the Athena query results. For details, refer to Specify a query result location using the Athena console.
  3. In the navigation pane, verify both the default catalog and federated catalog tables by previewing them.
  4. You can also run a join query as follows. Note the three-part notation for referring to the tables from two different catalogs:
SELECT
returns_tb.market as Market,
sum(orders_tb.quantity) as Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl as orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg as returns_tb
ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;


This demonstrates the new capability of SageMaker Lakehouse, which allows accessing Redshift cluster tables and Amazon S3 based Iceberg tables in the same query, across AWS accounts, through the Data Catalog, using Lake Formation permissions.
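When generating such cross-catalog queries programmatically, the three-part (catalog.database.table) identifiers can be assembled with a small helper. The following is a hypothetical sketch; the identifier names come from this post.

```python
# Hypothetical helper: build the three-part identifier used by Athena for
# tables reached through a catalog link container or the default catalog.
def three_part(catalog, database, table):
    for name in (catalog, database, table):
        if not name or "." in name:
            raise ValueError("identifier parts must be non-empty and dot-free")
    return f"{catalog}.{database}.{table}"

orders = three_part("rl_link_container_ordersdb", "public_db", "rl_orderstbl")
returns = three_part("awsdatacatalog", "customerdb", "returnstbl_iceberg")
print(orders, returns)
```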

Grant permissions to Glue-execution-role

Now we will share the resources from the producer account with more IAM principals in the consumer account. Typically, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account to perform their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You can use the Grant on target configuration to provide database Describe and table Select permissions.
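The same grants can be made through the Lake Formation API. The sketch below only builds a request payload for boto3's LakeFormation grant_permissions call (the field names follow that API; the role ARN and account ID are placeholders) and leaves the actual call in a comment, since it needs credentials.

```python
# Hypothetical payload builder for Lake Formation table grants.
# With credentials you would apply it via:
#   import boto3
#   boto3.client("lakeformation").grant_permissions(**payload)

def table_select_grant(role_arn, catalog_id, database, table):
    """Build a grant_permissions payload giving SELECT on one table."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": role_arn},
        "Resource": {
            "Table": {
                "CatalogId": catalog_id,
                "DatabaseName": database,
                "Name": table,
            }
        },
        "Permissions": ["SELECT"],
        "PermissionsWithGrantOption": [],
    }

payload = table_select_grant(
    "arn:aws:iam::111122223333:role/Glue-execution-role",  # placeholder
    "111122223333",  # placeholder consumer account ID
    "customerdb",
    "returnstbl_iceberg",
)
print(payload["Permissions"])
```

Scripting grants this way makes them repeatable across the several iterations the console steps below walk through.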

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which indicates the default catalog.
  7. Under Catalog permissions, select Describe for Catalog permissions.
  8. Choose Grant.


  1. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  2. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl.
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  3. Repeat these steps to grant more permissions to Glue-execution-role.
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.

The following screenshots show the configuration for database public and customerdb permissions.


The following screenshots show the configuration for resource link customerdb permissions.


The following screenshots show the configuration for table returnstbl_iceberg permissions.


The following screenshots show the configuration for table orderstbl permissions.


  1. In the navigation pane, choose Data permissions and verify the permissions on Glue-execution-role.


Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL, joining the producer shared federated orderstbl table and the Amazon S3 based returns table in the consumer account, to analyze the data and identify the total orders placed per market.

Replace the <account-id> placeholder in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.


  1. For Engine, choose Spark.
  2. For Options, choose Start fresh.
  3. Choose Upload script.
  4. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  5. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  6. Save the job.
  7. Choose Run to run the AWS Glue job, and wait for the job to complete.
  8. Review the job run details from the Output logs.
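The downloaded script itself isn't reproduced in this post. Assuming it mirrors the Athena join shown earlier, its core could look like the sketch below; the function name is hypothetical, and the Spark-specific calls are kept in comments because they need a Glue 5.0 session to run.

```python
# Hedged sketch of what a LakeHouseGlueSparkJob-style script could do.
# In an actual Glue 5.0 job you would run the query with Spark:
#   from pyspark.sql import SparkSession
#   spark = SparkSession.builder.getOrCreate()
#   spark.sql(build_orders_per_market_sql()).show()

def build_orders_per_market_sql():
    """Join the federated orders table (reached via the catalog link
    container) with the S3-based Iceberg returns table."""
    return """
        SELECT returns_tb.market AS Market,
               SUM(orders_tb.quantity) AS Total_Quantity
        FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
        JOIN customerdb.returnstbl_iceberg AS returns_tb
          ON orders_tb.order_id = returns_tb.order_id
        GROUP BY returns_tb.market
    """

print(build_orders_per_market_sql())
```

With --enable-lakeformation-fine-grained-access set, the job reads both tables under the Lake Formation permissions granted above rather than raw S3/Redshift access.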


Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created as part of data transfer in both accounts, and the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.

Conclusion

In this post, we illustrated how to bring your existing Redshift tables into SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this helpful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and new AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;
CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string, 
  `order_id` string, 
  `market` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ';' 
LOCATION
  's3://<bucket>/<prefix>/'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

select * from customerdb.returnstbl_csv limit 10;
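The FIELDS TERMINATED BY ';' and skip.header.line.count settings mirror the shape of the uploaded CSV files. A small local check of that parsing, with made-up sample rows, can be sketched as:

```python
import csv
import io

# Parse a semicolon-delimited CSV with one header line, mirroring the
# Athena table settings (FIELDS TERMINATED BY ';', skip.header.line.count=1).
sample = (
    "returned;order_id;market\n"
    "Yes;CA-2012-124891;US\n"
    "No;IN-2013-77878;APAC\n"
)
reader = csv.reader(io.StringIO(sample), delimiter=";")
header = next(reader)  # skipped by Athena via skip.header.line.count = 1
rows = list(reader)
print(header, rows)
```

If a preview query returns a single concatenated column, the delimiter in the DDL usually doesn't match the file.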


  1. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string, 
  `order_id` string, 
  `market` string)
LOCATION 's3://<bucket>/returnstbl_iceberg/' 
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg
SELECT *
FROM customerdb.returnstbl_csv;

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10;


  1. To create the orders table in the Redshift Serverless namespace, open the Query Editor v2 on the Amazon Redshift console.
  2. Connect to the default namespace using your database admin user credentials.
  3. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it. Copy the data from your S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;

create table orderstbl(
  row_id int, 
  order_id VARCHAR, 
  order_date VARCHAR, 
  ship_date VARCHAR, 
  ship_mode VARCHAR, 
  customer_id VARCHAR, 
  customer_name VARCHAR, 
  segment VARCHAR, 
  city VARCHAR, 
  state VARCHAR, 
  country VARCHAR, 
  postal_code int, 
  market VARCHAR, 
  region VARCHAR, 
  product_id VARCHAR, 
  category VARCHAR, 
  sub_category VARCHAR, 
  product_name VARCHAR, 
  sales VARCHAR, 
  quantity bigint, 
  discount VARCHAR, 
  profit VARCHAR, 
  shipping_cost VARCHAR, 
  order_priority VARCHAR
  );

copy orderstbl
from 's3://<bucket>/ordersdatacsv/orders.csv' 
iam_role 'arn:aws:iam::<account-id>:role/service-role/<role-name>'
CSV 
DELIMITER ';'
IGNOREHEADER 1
;

select * from ordersdb.orderstbl limit 5;

About the Authors

Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.



An IAM role, Glue-execution-role, in the consumer account, with the following policies:

  1. AWS managed policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a new inline policy with the following permissions and attach it:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the following trust policy to Glue-execution-role, allowing AWS Glue to assume this role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
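A quick way to catch malformed policy JSON before attaching it is to parse the document and check its top-level structure. The following is a minimal, hypothetical check (the abbreviated sample policy is based on the inline policy above):

```python
import json

# Minimal structural check for an IAM policy document before attaching it.
def check_policy(doc):
    policy = json.loads(doc)  # raises ValueError on malformed JSON
    assert policy["Version"] == "2012-10-17"
    for stmt in policy["Statement"]:
        assert stmt["Effect"] in ("Allow", "Deny")
        assert "Action" in stmt and "Resource" in stmt
    return policy

inline_policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "LFandRSserverlessAccess",
     "Effect": "Allow",
     "Action": ["lakeformation:GetDataAccess", "redshift-serverless:GetCredentials"],
     "Resource": "*"}
  ]
}
"""
print(len(check_policy(inline_policy)["Statement"]))
```

IAM rejects invalid documents anyway, but validating locally gives clearer errors than the console does.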

Steps for producer account setup

For the producer account setup, you can either use your IAM administrator role added as Lake Formation administrator, or use a Lake Formation administrator role with permissions added as discussed in the prerequisites. For illustration purposes, we use the IAM admin role Admin added as Lake Formation administrator.


Configure your catalog

Complete the following steps to set up your catalog:

  1. Log in to the AWS Management Console as Admin.
  2. On the Amazon Redshift console, follow the instructions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Data Catalog.
  3. After the registration is initiated, you will see the invitation from Amazon Redshift on the Lake Formation console.
  4. Select the pending catalog invitation and choose Approve and create catalog.


  1. On the Set catalog details page, configure your catalog:
    1. For Name, enter a name (for this post, redshiftserverless1-uswest2).
    2. Select Access this catalog from Apache Iceberg compatible engines.
    3. Choose the IAM role you created for the data transfer.
    4. Choose Next.
  2. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add.
  3. Verify the granted permission on the next page and choose Next.
  4. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

  1. Choose Catalogs in the navigation pane and verify that the redshiftserverless1-uswest2 catalog is created.
  2. Explore the catalog detail page to verify the ordersdb.public database.
  3. On the database View dropdown menu, view the table and verify that the orderstbl table shows up.

As the Admin role, you can also query the orderstbl in Amazon Athena and confirm the data is available.


Grant permissions on the tables from the producer account to the consumer account

In this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and table orderstbl, as well as the Amazon S3 based Iceberg table returnstbl_iceberg and its database customerdb from the default catalog, with the consumer account. We can't share the entire catalog with external accounts as a catalog-level permission; we share just the database and table.

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select External accounts.
  4. Provide the consumer account ID.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose the account ID that represents the default catalog.
  7. For Databases, choose customerdb.
  8. Under Database permissions, select Describe under Database permissions and Grantable permissions.
  9. Choose Grant.
  10. Repeat these steps and grant table-level Select and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps again to grant database- and table-level permissions for the orderstbl table of the federated catalog database redshiftserverless1-uswest2/ordersdb.

The following screenshots show the configuration for database-level permissions.


The following screenshots show the configuration for table-level permissions.


  1. Choose Data permissions in the navigation pane and verify that the consumer account has been granted database- and table-level permissions for both orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.

Register the Amazon S3 location of returnstbl_iceberg with Lake Formation

In this step, we register the data location of the Amazon S3 based Iceberg table returnstbl_iceberg with Lake Formation so that it's managed by Lake Formation permissions. Complete the following steps:

  1. On the Lake Formation console, choose Data lake locations in the navigation pane.
  2. Choose Register location.
  3. For Amazon S3 path, enter the path in your S3 bucket that you provided while creating the Iceberg table returnstbl_iceberg.
  4. For IAM role, provide the user-defined role LakeFormationS3Registration_custom that you created as a prerequisite.
  5. For Permission mode, select Lake Formation.
  6. Choose Register location.
  7. Choose Data lake locations in the navigation pane to verify the Amazon S3 registration.

With this step, the producer account setup is complete.

Steps for consumer account setup

For the consumer account setup, we use the IAM admin role Admin, added as a Lake Formation administrator.

The steps in the consumer account are fairly involved. In the consumer account, a Lake Formation administrator will accept the AWS Resource Access Manager (AWS RAM) shares and create the required resource links that point to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared resources are accessible by running test queries in Athena. The admin then grants permissions on the resource links, database, and tables to the role Glue-execution-role, and finally runs a join query in AWS Glue 5.0 Spark using Glue-execution-role.

Accept and verify the shared resources

Lake Formation uses AWS RAM shares to enable cross-account sharing, with the Data Catalog resource policies carried in the AWS RAM policies. To view and verify the resources shared from the producer account, complete the following steps:

  1. Log in to the consumer AWS console and set the AWS Region to match the producer's shared resource Region. For this post, we use us-west-2.
  2. Open the Lake Formation console. You will see a message indicating there is a pending invitation, asking you to accept it on the AWS RAM console.
  3. Follow the instructions in Accepting a resource share invitation from AWS RAM to review and accept the pending invitations.
  4. When the invitation status changes to Accepted, choose Shared resources under Shared with me in the navigation pane.
  5. Verify that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the table returnstbl_iceberg, and the producer account ID under the Owner ID column display correctly.
  6. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  7. Search by the producer account ID.
    You should see the customerdb and public databases. You can further select each database, choose View tables on the Actions dropdown menu, and verify the table names.


You’ll not see an AWS RAM share invite for the catalog degree on the Lake Formation console, as a result of catalog-level sharing isn’t attainable. You may overview the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or utilizing the AWS Command Line Interface (AWS CLI) or SDK.

Create a catalog hyperlink container and useful resource hyperlinks

A catalog hyperlink container is a Knowledge Catalog object that references an area or cross-account federated database-level catalog from different AWS accounts. For extra particulars, confer with Accessing a shared federated catalog. Catalog hyperlink containers are primarily Lake Formation useful resource hyperlinks on the catalog degree that reference or level to a Redshift cluster federated catalog or Amazon Redshift managed catalog object from different accounts.

Within the following steps, we create a catalog hyperlink container that factors to the producer shared federated catalog redshiftserverless1-uswest2. Contained in the catalog hyperlink container, we create a database. Contained in the database, we create a useful resource hyperlink for the desk that factors to the shared federated catalog desk >:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Catalogs.
  2. Select Create catalog.

026-BDB 5089

  1. Present the next particulars for the catalog:
    1. For Title, enter a reputation for the catalog (for this put up, rl_link_container_ordersdb).
    2. For Kind, select Catalog Hyperlink container.
    3. For Supply, select Redshift.
    4. For Goal Redshift Catalog, enter the Amazon Useful resource Title (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Underneath Entry from engines, choose Entry this catalog from Apache Iceberg appropriate engines.
    6. For IAM position, present the Redshift-S3 knowledge switch position that you simply had created within the conditions.
    7. Select Subsequent.

027-BDB 5089

  1. On the Grant permissions – elective web page, select Add permissions.
    1. Grant the Admin consumer Tremendous consumer permissions for Catalog permissions and Grantable permissions.
    2. Select Add after which select Subsequent.

028-BDB 5089

  1. Evaluate the main points on the Evaluate and create web page and select Create catalog.

Wait just a few seconds for the catalog to indicate up.

029-BDB 5089

  1. Within the navigation pane, select Catalogs.
  2. Confirm that rl_link_container_ordersdb is created.

030-BDB 5089

Create a database below rl_link_container_ordersdb

Full the next steps:

  1. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Databases.
  2. On the Select catalog dropdown menu, select rl_link_container_ordersdb.
  3. Select Create database.

Alternatively, you’ll be able to select the Create dropdown menu after which select Database.

  1. Present particulars for the database:
    1. For Title, enter a reputation (for this put up, public_db).
    2. For Catalog, select rl_link_container_ordersdb.
    3. Go away Location – elective as clean.
    4. Underneath Default permissions for newly created tables, deselect Use solely IAM entry management for brand spanking new tables on this database.
    5. Select Create database.

031-BDB 5089

  1. Select Catalogs within the navigation pane to confirm that public_db is created below rl_link_container_ordersdb.

032-BDB 5089

Create a desk useful resource hyperlink for the shared federated catalog desk

A useful resource hyperlink to a shared federated catalog desk can reside solely contained in the database of a catalog hyperlink container. A useful resource hyperlink for such tables won’t work if created contained in the default catalog. For extra particulars on useful resource hyperlinks, confer with Making a useful resource hyperlink to a shared Knowledge Catalog desk.

Full the next steps to create a desk useful resource hyperlink:

  1. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Tables.
  2. On the Create dropdown menu, select Useful resource hyperlink.

033-BDB 5089

  1. Present particulars for the desk useful resource hyperlink:
    1. For Useful resource hyperlink title, enter a reputation (for this put up, rl_orderstbl).
    2. For Vacation spot catalog, select rl_link_container_ordersdb.
    3. For Database, select public_db.
    4. For Shared desk’s area, select US West (Oregon).
    5. For Shared desk, select orderstbl.
    6. After the Shared desk is chosen, Shared desk’s database and Shared desk’s catalog ID ought to get routinely populated.
    7. Select Create.

034-BDB 5089

  1. Within the navigation pane, select Databases to confirm that rl_orderstbl is created below public_db, inside rl_link_container_ordersdb.

035-BDB 5089

036-BDB 5089

Create a database useful resource hyperlink for the shared default catalog database.

Now we create a database useful resource hyperlink within the default catalog to question the Amazon S3 primarily based Iceberg desk shared from the producer. For particulars on database useful resource hyperlinks, refer Making a useful resource hyperlink to a shared Knowledge Catalog database.

Although we’re capable of see the shared database within the default catalog of the patron, a useful resource hyperlink is required to question from analytics engines, akin to Athena, Amazon EMR, and AWS Glue. When utilizing AWS Glue with Lake Formation tables, the useful resource hyperlink must be named identically to the supply account’s useful resource. For extra particulars on utilizing AWS Glue with Lake Formation, confer with Issues and limitations.

Full the next steps to create a database useful resource hyperlink:

  1. On the Lake Formation console, below Knowledge Catalog within the navigation pane, select Databases.
  2. On the Select catalog dropdown menu, select the account ID to decide on the default catalog.
  3. Seek for customerdb.

It is best to see the shared database title customerdb with the Proprietor account ID as that of your producer account ID.

  1. Choose customerdb, and on the Create dropdown menu, select Useful resource hyperlink.
  2. Present particulars for the useful resource hyperlink:
    1. For Useful resource hyperlink title, enter a reputation (for this put up, customerdb).
    2. The remainder of the fields needs to be already populated.
    3. Select Create.
  3. Within the navigation pane, select Databases and confirm that customerdb is created below the default catalog. Useful resource hyperlink names will present in italicized font.

037-BDB 5089

Confirm entry as Admin utilizing Athena

Now you’ll be able to confirm your entry utilizing Athena. Full the next steps:

  1. Open the Athena console.
  2. Ensure an S3 bucket is supplied to retailer the Athena question outcomes. For particulars, confer with Specify a question consequence location utilizing the Athena console.
  3. Within the navigation pane, confirm each the default catalog and federated catalog tables by previewing them.
  4. You can even run a be part of question as follows. Take note of the three-point notation for referring to the tables from two completely different catalogs:
SELECT
returns_tb.market as Market,
sum(orders_tb.amount) as Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl as orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg as returns_tb
ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;

038-BDB 5089

This verifies the brand new functionality of SageMaker Lakehouse, which permits accessing Redshift cluster tables and Amazon S3 primarily based Iceberg tables in the identical question, throughout AWS accounts, by the Knowledge Catalog, utilizing Lake Formation permissions.

Grant permissions to Glue-execution-role

Now we will share the resources from the producer account with additional IAM principals in the consumer account. Typically, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account so they can perform their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You can use the Grant on target configuration to provide database Describe and table Select permissions.

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which indicates the default catalog.
  7. Under Catalog permissions, select Describe.
  8. Choose Grant.


  9. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  10. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl:
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  11. Repeat these steps to grant additional permissions to Glue-execution-role:
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.

The following screenshots show the permission configuration for the databases public and customerdb.

The following screenshots show the permission configuration for the resource link customerdb.

The following screenshots show the permission configuration for the table returnstbl_iceberg.

The following screenshots show the permission configuration for the table orderstbl.

  12. In the navigation pane, choose Data permissions and verify the permissions granted to Glue-execution-role.
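The console grants described above can also be scripted with the Lake Formation `GrantPermissions` API. The following is a partial sketch under stated assumptions: the consumer account ID and principal ARN are placeholders, only a subset of the grants is shown, and the `account:catalog` form for the link container's catalog ID is assumed to follow the multi-catalog convention.

```python
# Sketch: a subset of the console grants above as Lake Formation API calls.
# The account ID and principal ARN are placeholders; adjust to your accounts.
CONSUMER_ACCOUNT = "111122223333"  # placeholder consumer account ID
PRINCIPAL = {
    "DataLakePrincipalIdentifier":
        f"arn:aws:iam::{CONSUMER_ACCOUNT}:role/Glue-execution-role"
}

GRANTS = [
    # Describe on the catalog link container
    ({"Catalog": {"Id": f"{CONSUMER_ACCOUNT}:rl_link_container_ordersdb"}},
     ["DESCRIBE"]),
    # Describe on the database inside the link container
    ({"Database": {"CatalogId": f"{CONSUMER_ACCOUNT}:rl_link_container_ordersdb",
                   "Name": "public_db"}}, ["DESCRIBE"]),
    # Describe on the default-catalog database and Select on the returns table
    ({"Database": {"CatalogId": CONSUMER_ACCOUNT, "Name": "customerdb"}},
     ["DESCRIBE"]),
    ({"Table": {"CatalogId": CONSUMER_ACCOUNT, "DatabaseName": "customerdb",
                "Name": "returnstbl_iceberg"}}, ["SELECT"]),
]

def apply_grants() -> None:
    import boto3  # imported lazily so GRANTS can be inspected without AWS access
    lf = boto3.client("lakeformation")
    for resource, permissions in GRANTS:
        lf.grant_permissions(Principal=PRINCIPAL, Resource=resource,
                             Permissions=permissions)
```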

Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL that joins the producer-shared federated orderstbl table with the Amazon S3 based returns table in the consumer account to analyze the data and identify the total orders placed per market.
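The downloaded script is the reference; as a rough sketch of what it does, the job reduces to a single Spark SQL statement. This assumes the three-part name for the federated table and the default-catalog name for the returns table resolve in the Glue 5.0 Spark session the same way they do in Athena.

```python
# Hypothetical sketch of LakeHouseGlueSparkJob.py, not the script itself:
# join the federated orders table with the S3 based returns table and
# aggregate total quantity per market.
QUERY = """
SELECT returns_tb.market       AS market,
       SUM(orders_tb.quantity) AS total_quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

def run() -> None:
    # Imported here so QUERY can be inspected without a Spark installation.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("LakeHouseGlueSparkJob").getOrCreate()
    spark.sql(QUERY).show()
```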

Replace the placeholder in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.
  3. For Engine, choose Spark.
  4. For Options, choose Start fresh.
  5. Choose Upload script.
  6. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  7. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  8. Save the job.
  9. Choose Run to start the AWS Glue job, and wait for it to complete.
  10. Review the job run details in the Output logs.
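The console steps above can also be scripted with the Glue `CreateJob` and `StartJobRun` APIs. The sketch below assumes the edited script has been uploaded to S3; the script location is a placeholder.

```python
# Sketch: the console job definition above expressed as Glue API calls.
# The script location is a placeholder; point it at your uploaded script.
JOB_NAME = "LakeHouseGlueSparkJob"
JOB_ARGS = {
    "Role": "Glue-execution-role",
    "GlueVersion": "5.0",
    "Command": {
        "Name": "glueetl",
        "PythonVersion": "3",
        "ScriptLocation": "s3://your-script-bucket/LakeHouseGlueSparkJob.py",
    },
    "DefaultArguments": {
        "--datalake-formats": "iceberg",
        "--enable-lakeformation-fine-grained-access": "true",
    },
}

def create_and_run_job() -> str:
    """Create the job, start a run, and return the run ID."""
    import boto3  # imported lazily so JOB_ARGS can be inspected offline
    glue = boto3.client("glue")
    glue.create_job(Name=JOB_NAME, **JOB_ARGS)
    run = glue.start_job_run(JobName=JOB_NAME)
    return run["JobRunId"]
```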


Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created for the data transfer in both accounts, as well as the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.

Conclusion

In this post, we illustrated how to bring your existing Redshift tables into SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this approach helpful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and newer AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;
CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string, 
  `order_id` string, 
  `market` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ';' 
LOCATION
  's3:////'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

SELECT * FROM customerdb.returnstbl_csv LIMIT 10;


  4. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string, 
  `order_id` string, 
  `market` string)
LOCATION 's3:///returnstbl_iceberg/' 
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg
SELECT *
FROM customerdb.returnstbl_csv;

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10; 


  5. To create the orders table in the Redshift Serverless namespace, open Query Editor v2 on the Amazon Redshift console.
  6. Connect to the default namespace using your database admin user credentials.
  7. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it, then copy the data from the S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;

create table orderstbl(
  row_id int, 
  order_id VARCHAR, 
  order_date VARCHAR, 
  ship_date VARCHAR, 
  ship_mode VARCHAR, 
  customer_id VARCHAR, 
  customer_name VARCHAR, 
  segment VARCHAR, 
  city VARCHAR, 
  state VARCHAR, 
  country VARCHAR, 
  postal_code int, 
  market VARCHAR, 
  region VARCHAR, 
  product_id VARCHAR, 
  category VARCHAR, 
  sub_category VARCHAR, 
  product_name VARCHAR, 
  sales VARCHAR, 
  quantity bigint, 
  discount VARCHAR, 
  profit VARCHAR, 
  shipping_cost VARCHAR, 
  order_priority VARCHAR
  );

copy orderstbl
from 's3:///ordersdatacsv/orders.csv' 
iam_role 'arn:aws:iam:::role/service-role/'
CSV 
DELIMITER ';'
IGNOREHEADER 1
;

select * from ordersdb.orderstbl limit 5;

About the Authors

Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.
