<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>aws Archives - Albert Nogués</title>
	<atom:link href="https://www.albertnogues.com/tag/aws/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.albertnogues.com/tag/aws/</link>
	<description>Data and Cloud Freelancer</description>
	<lastBuildDate>Thu, 31 Dec 2020 10:02:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.albertnogues.com/wp-content/uploads/2020/12/cropped-cropped-AlbertLogo2-32x32.png</url>
	<title>aws Archives - Albert Nogués</title>
	<link>https://www.albertnogues.com/tag/aws/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Unload data from AWS Redshift to S3 in Parquet</title>
		<link>https://www.albertnogues.com/unload-data-from-aws-redshift-to-s3-in-parquet/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=unload-data-from-aws-redshift-to-s3-in-parquet</link>
		
		<dc:creator><![CDATA[Albert]]></dc:creator>
		<pubDate>Thu, 31 Dec 2020 09:59:14 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[BigData]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[aws]]></category>
		<category><![CDATA[parquet]]></category>
		<category><![CDATA[redshift]]></category>
		<category><![CDATA[s3]]></category>
		<category><![CDATA[snappy]]></category>
		<category><![CDATA[unload]]></category>
		<guid isPermaLink="false">http://192.168.1.40/?p=1007</guid>

					<description><![CDATA[<p>Following the previous Redshift articles, in this one I will explain how to export data from Redshift to Parquet in S3. This can be interesting when we want to archive (infrequently queried) data so it can be queried more cheaply with Spectrum, kept in an S3 archive tier, or exported to another storage solution like Glacier. The &#8230; </p>
<p>The post <a href="https://www.albertnogues.com/unload-data-from-aws-redshift-to-s3-in-parquet/">Unload data from AWS Redshift to S3 in Parquet</a> appeared first on <a href="https://www.albertnogues.com">Albert Nogués</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Following the previous Redshift articles, in this one I will explain how to export data from Redshift to Parquet in S3. This can be interesting when we want to archive (infrequently queried) data so it can be queried more cheaply with Spectrum, kept in an S3 archive tier, or exported to another storage solution like Glacier.</p>



<p>The first thing we need to do is modify our Redshift cluster&#8217;s IAM role to allow writing to S3. We go to our cluster in the Redshift console, click on Properties, and we will see the link to the IAM role attached to the cluster. We click on it and it opens the IAM role page.</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="183" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-1024x183.png" alt="" class="wp-image-1008" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-1024x183.png 1024w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-300x54.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-768x137.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-1536x275.png 1536w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1-336x60.png 336w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet1.png 1628w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Then we attach an S3 policy to the role, as shown in the following picture. Note that since UNLOAD writes to the bucket, the role needs write permissions, so read-only access alone (AmazonS3ReadOnlyAccess) is not enough; for a quick test AmazonS3FullAccess works, while in production you would scope a write policy to the target bucket:</p>



<figure class="wp-block-image size-large"><img decoding="async" width="329" height="143" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet2.png" alt="" class="wp-image-1009" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet2.png 329w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet2-300x130.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet2-138x60.png 138w" sizes="(max-width: 329px) 100vw, 329px" /></figure>



<p>With this, all the required permissions are in place. The next step is to make sure we have an S3 bucket available. I&#8217;ve created one for demo purposes with a folder called parquet_exports.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="889" height="422" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet3.png" alt="" class="wp-image-1010" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet3.png 889w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet3-300x142.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet3-768x365.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet3-126x60.png 126w" sizes="(max-width: 889px) 100vw, 889px" /></figure>



<p>To start the extraction we will use the customer table from the previous articles. It was originally loaded from the TPC-DS test data in S3 (a gzipped file), but now it sits inside our Redshift cluster. The instruction to unload the data is called <a rel="noreferrer noopener" href="https://docs.aws.amazon.com/es_es/redshift/latest/dg/r_UNLOAD.html" data-type="URL" data-id="https://docs.aws.amazon.com/es_es/redshift/latest/dg/r_UNLOAD.html" target="_blank">UNLOAD</a> <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Since we want our data in Parquet + Snappy format, which is usually the recommended combination (Avro is not supported by Redshift UNLOAD; only CSV and Parquet are), we need to say so in the UNLOAD statement.</p>



<p>Unlike Spectrum, UNLOAD can write to buckets in another region, so if that is your case make sure you also set the bucket region in the statement. The syntax is as follows:</p>



<pre class="wp-block-code"><code>UNLOAD ('<em>select-statement</em>')
TO '<em>s3://object-path/name-prefix</em>'
<em>authorization</em>
&#91; <em>option</em> &#91; ... ] ]

where <em>option</em> is
{ &#91; FORMAT &#91; AS ] ] CSV | PARQUET
| PARTITION BY ( <em>column_name</em> &#91;, ... ] ) &#91; INCLUDE ]
| MANIFEST &#91; VERBOSE ] 
| HEADER           
| DELIMITER &#91; AS ] '<em>delimiter-char</em>' 
| FIXEDWIDTH &#91; AS ] '<em>fixedwidth-spec</em>'   
| ENCRYPTED &#91; AUTO ]
| BZIP2  
| GZIP 
| ZSTD
| ADDQUOTES 
| NULL &#91; AS ] '<em>null-string</em>'
| ESCAPE
| ALLOWOVERWRITE
| PARALLEL &#91; { ON | TRUE } | { OFF | FALSE } ]
| MAXFILESIZE &#91;AS] <em>max-size</em> &#91; MB | GB ] 
| REGION &#91;AS] 'aws-region' }</code></pre>



<p>As you can see, it takes a SELECT statement rather than a table name, so we can project only the required columns instead of exporting the whole table. We can also specify the maximum file size; with Parquet, 256 MB is usually a good split size. Make sure not to specify any compression option with Parquet, as the statement will fail: Redshift already compresses Parquet output with Snappy by default.</p>



<p>To run the export we also need to fetch the ARN of our Redshift role and include it just after the bucket path:</p>



<pre class="wp-block-code"><code>UNLOAD ('select * from customer')
TO 's3://albertnogues-parquet/parquet_exports/'
iam_role 'arn:aws:iam::742123541312:role/Redshift_Albertnogues.com'
FORMAT AS PARQUET
MAXFILESIZE 256 MB</code></pre>
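<p>If we also want the exported files laid out by a column value, UNLOAD supports PARTITION BY together with Parquet. As a sketch (same placeholder bucket and role as above; the partition column is just an illustration):</p>



<pre class="wp-block-code"><code>UNLOAD ('select c_customer_sk, c_customer_id, c_birth_country from customer')
TO 's3://albertnogues-parquet/parquet_exports/customer/'
iam_role 'arn:aws:iam::742123541312:role/Redshift_Albertnogues.com'
FORMAT AS PARQUET
PARTITION BY (c_birth_country)
MAXFILESIZE 256 MB</code></pre>



<p>This writes the files under one Hive-style prefix per value (<code>c_birth_country=.../</code>), which Spectrum can later use for partition pruning.</p>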



<p>After about two minutes the unload finished successfully. We can go to our S3 bucket to see the Parquet files and check that the split size is the one we requested:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="430" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-1024x430.png" alt="" class="wp-image-1011" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-1024x430.png 1024w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-300x126.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-768x322.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-1536x644.png 1536w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4-143x60.png 143w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet4.png 1664w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
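<p>Apart from browsing the bucket, we can also ask Redshift what it wrote: the <code>STL_UNLOAD_LOG</code> system table keeps one row per file produced by an UNLOAD. A quick sketch:</p>



<pre class="wp-block-code"><code>SELECT query, path, line_count, transfer_size
FROM stl_unload_log
ORDER BY query DESC, path
LIMIT 10;</code></pre>



<p><code>transfer_size</code> is reported in bytes, so it is easy to confirm that each split stayed under the requested MAXFILESIZE.</p>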



<p>We can first check the size of the table in Redshift with the following query:</p>



<pre class="wp-block-code"><code>SELECT "table", tbl_rows, size AS size_in_mb
FROM svv_table_info
ORDER BY 1;</code></pre>



<figure class="wp-block-table"><table><tbody><tr><td><strong>Table</strong></td><td><strong>Num Rows</strong></td><td><strong>Size (MB)</strong></td></tr><tr><td>customer</td><td>30,000,000</td><td>2098</td></tr></tbody></table></figure>



<p>So it&#8217;s quite clear the export looks OK, as the sizes are similar. We can now download one of the Parquet files and inspect it with a Parquet analysis tool. I tend to use the Python version of parquet-tools, based on the Apache Arrow project. You can install it with:</p>



<pre class="wp-block-code"><code>pip install parquet-tools</code></pre>



<p>Then we inspect the file with the following command:</p>



<pre class="wp-block-code"><code>parquet-tools inspect 0001_part_03.parquet</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="627" height="620" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet5.png" alt="" class="wp-image-1012" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet5.png 627w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet5-300x297.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet5-61x60.png 61w" sizes="auto, (max-width: 627px) 100vw, 627px" /></figure>



<p>And if we scroll down a little bit we can see the total number of rows in our Parquet file:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="531" height="614" src="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet6.png" alt="" class="wp-image-1013" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet6.png 531w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet6-259x300.png 259w, https://www.albertnogues.com/wp-content/uploads/2020/12/RedsiftParquet6-52x60.png 52w" sizes="auto, (max-width: 531px) 100vw, 531px" /></figure>
<p>The post <a href="https://www.albertnogues.com/unload-data-from-aws-redshift-to-s3-in-parquet/">Unload data from AWS Redshift to S3 in Parquet</a> appeared first on <a href="https://www.albertnogues.com">Albert Nogués</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Use Redshift Spectrum to query infrequently used data on S3</title>
		<link>https://www.albertnogues.com/use-redshift-spectrum-to-query-infrequently-used-data-on-s3/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=use-redshift-spectrum-to-query-infrequently-used-data-on-s3</link>
		
		<dc:creator><![CDATA[Albert]]></dc:creator>
		<pubDate>Wed, 30 Dec 2020 11:14:01 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[BigData]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[aws]]></category>
		<category><![CDATA[datawarehousing]]></category>
		<category><![CDATA[redshift]]></category>
		<category><![CDATA[spectrum]]></category>
		<guid isPermaLink="false">http://192.168.1.40/?p=994</guid>

					<description><![CDATA[<p>Redshift Spectrum lets us query data in S3 buckets from Redshift. This is especially interesting in large data warehouses holding data that we do not need to query often but may still need from time to time for some of our queries. In this situation we probably do not want the data to &#8230; </p>
<p>The post <a href="https://www.albertnogues.com/use-redshift-spectrum-to-query-infrequently-used-data-on-s3/">Use Redshift Spectrum to query infrequently used data on S3</a> appeared first on <a href="https://www.albertnogues.com">Albert Nogués</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Redshift Spectrum lets us query data in S3 buckets from Redshift. This is especially interesting in large data warehouses holding data that we do not need to query often but may still need from time to time for some of our queries. In this situation we probably don&#8217;t want that data loaded into our Redshift cluster, as it may push us to provision a larger cluster than the one we need to process our usual queries (bear in mind that Redshift clusters come with fixed CPU, RAM and storage per node type, so they cannot be fully tailored to our needs and only a few combinations are available).</p>



<p>There are a few requirements, however, to be able to use Redshift Spectrum. First, the data behaves like an external table in other systems (RDBMS) or like Hive external tables, which means we can forget about updating or deleting it. We only get read-only access, but usually that is what we are looking for (think of historical data from already closed exercises, archived data and so on), where the cost of storing it in S3, even in a cool tier, can be several orders of magnitude cheaper than keeping it inside Redshift.</p>



<p>There are other technical limitations, though. For example, <strong>the S3 bucket has to be in the same region as our Redshift cluster</strong>, so plan ahead or move the data to a bucket in the right region. There are further requirements, such as permissions; you can read about them <a rel="noreferrer noopener" href="https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html" data-type="URL" data-id="https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html" target="_blank">here</a>.</p>



<p>Before starting, we need to attach a role to our Redshift cluster that grants read access to S3 buckets. If you followed my previous article on Redshift you will already have this role; otherwise, check how to do it <a href="https://www.albertnogues.com/load-data-from-s3-and-run-tpc-ds-queries-on-amazon-redshift/" data-type="URL" data-id="https://www.albertnogues.com/load-data-from-s3-and-run-tpc-ds-queries-on-amazon-redshift/">here</a>.</p>



<p>If you followed my previous article, apart from the S3 read-only permission you need to add the Glue catalog permission, which is required to create the table. For this, modify the role (or, if you&#8217;re creating a new one, include it from the start) and add the following policy: <code>AWSGlueConsoleFullAccess</code></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="633" src="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0-1024x633.png" alt="" class="wp-image-997" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0-1024x633.png 1024w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0-300x185.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0-768x475.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0-97x60.png 97w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum0.png 1031w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>For the sake of this exercise I&#8217;ve loaded part of the TPC-DS customer data into an S3 bucket I made public in the Paris region. We will then create an external customer table pointing to the data in the bucket (which is in fact gzip-compressed) and query that data from Redshift.</p>



<p>Once the bucket data is in place, we are ready to go back to our Redshift cluster. I&#8217;ve used <a href="https://redshift-downloads.s3.amazonaws.com/TPC-DS/2.13/3TB/customer/customer_1_14.dat.gz" data-type="URL" data-id="https://redshift-downloads.s3.amazonaws.com/TPC-DS/2.13/3TB/customer/customer_1_14.dat.gz" target="_blank" rel="noreferrer noopener">this file</a> for the tests. We will create an external schema and an external customer table. For this we first need to copy the ARN of the role we created with S3 read access: go to the cluster, click on Properties, and then click Copy Amazon Resource Name (ARN):</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="383" src="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-1024x383.png" alt="" class="wp-image-996" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-1024x383.png 1024w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-300x112.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-768x287.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-1536x574.png 1536w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1-161x60.png 161w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum1.png 1787w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Once we have the ARN of our role, run the following in your editor. It will create the external schema in a new spectrum database (feel free to choose another database name if you prefer):</p>



<pre class="wp-block-code"><code>create external schema spectrum_albertnogues 
from data catalog 
database 'spectrum' 
iam_role 'arn:aws:iam::742123541312:role/Redshift_Albertnogues.com'
create external database if not exists;</code></pre>
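<p>Before going further we can quickly confirm the schema was registered, for example by checking the <code>SVV_EXTERNAL_SCHEMAS</code> system view (a sketch; the schema name matches the one created above):</p>



<pre class="wp-block-code"><code>SELECT schemaname, databasename
FROM svv_external_schemas
WHERE schemaname = 'spectrum_albertnogues';</code></pre>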



<p>Then we are ready to create our external table:</p>



<pre class="wp-block-code"><code>create external table spectrum_albertnogues.customer
(
  c_customer_sk int4,                 
  c_customer_id char(16),             
  c_current_cdemo_sk int4 ,   
  c_current_hdemo_sk int4 ,   
  c_current_addr_sk int4 ,    
  c_first_shipto_date_sk int4 ,                 
  c_first_sales_date_sk int4 ,
  c_salutation char(10) ,     
  c_first_name char(20) ,     
  c_last_name char(30) ,      
  c_preferred_cust_flag char(1) ,               
  c_birth_day int4 ,          
  c_birth_month int4 ,        
  c_birth_year int4 ,         
  c_birth_country varchar(20) ,                 
  c_login char(13) ,          
  c_email_address char(50) ,  
  c_last_review_date_sk int4
)
row format delimited
fields terminated by '|'
stored as textfile
location 's3://redshift-spectrum-albertnogues/customers/';</code></pre>



<p>As you can see, it&#8217;s not 100% the same customer table as in the TPC-DS data. This is because external tables do not support primary keys, NOT NULL constraints and some other keywords that make no sense for external tables. Make sure you use the right field separator and select the right data format (compression is detected from the file extension).</p>
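<p>We can also verify that the table was registered in the Glue catalog by querying the <code>SVV_EXTERNAL_TABLES</code> system view (a sketch; the view also exposes the input format and serde if you need them):</p>



<pre class="wp-block-code"><code>SELECT schemaname, tablename, location
FROM svv_external_tables
WHERE schemaname = 'spectrum_albertnogues';</code></pre>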



<p>As you will see, the table creation takes virtually no time. This is because the data is not actually loaded; the table is only a pointer to our S3 data. To make sure it&#8217;s working, we can run a few queries against the new table:</p>



<pre class="wp-block-code"><code>select count(*) from spectrum_albertnogues.customer;</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="448" height="163" src="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum2.png" alt="" class="wp-image-998" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum2.png 448w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum2-300x109.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum2-165x60.png 165w" sizes="auto, (max-width: 448px) 100vw, 448px" /></figure>



<p>And to make sure the data is properly mapped, we can query it as well:</p>



<pre class="wp-block-code"><code>select * from spectrum_albertnogues.customer LIMIT 5;</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="189" src="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3-1024x189.png" alt="" class="wp-image-999" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3-1024x189.png 1024w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3-300x55.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3-768x142.png 768w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3-324x60.png 324w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum3.png 1238w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As a last example, if you followed my previous Redshift blog post, we can now join our new external customer table with our web_returns fact table to see how many items customers returned depending on their birth country, and whether there is any pattern.</p>



<pre class="wp-block-code"><code>select c_birth_country as customer_birth_country,
       sum(wr_return_quantity) as qty_returned_total
  from web_returns
  join date_dim on wr_returned_date_sk = d_date_sk
  join spectrum_albertnogues.customer on wr_returning_customer_sk = c_customer_sk
 where d_year = 2002
 group by c_birth_country
 order by 2 desc;</code></pre>



<p>Of course the query takes a bit longer, since the customer data is not inside Redshift and has to be fetched from S3, but we still get our output quite quickly and easily:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="709" height="547" src="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum4.png" alt="" class="wp-image-1000" srcset="https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum4.png 709w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum4-300x231.png 300w, https://www.albertnogues.com/wp-content/uploads/2020/12/Spectrum4-78x60.png 78w" sizes="auto, (max-width: 709px) 100vw, 709px" /></figure>



<p>That&#8217;s it! For more advanced topics, like performance improvements when querying from S3, you can read <a rel="noreferrer noopener" href="https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-performance.html" data-type="URL" data-id="https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-performance.html" target="_blank">here</a> and <a href="https://blog.openbridge.com/10-simple-tips-that-help-you-quickly-find-success-adopting-amazon-redshift-spectrum-810db089abbe" data-type="URL" data-id="https://blog.openbridge.com/10-simple-tips-that-help-you-quickly-find-success-adopting-amazon-redshift-spectrum-810db089abbe" target="_blank" rel="noreferrer noopener">here</a>. In short, I recommend partitioning the data; using a splittable file format such as Parquet compressed with Snappy or another codec; keeping files reasonably sized (not too small); and adding as many filters as you can to avoid retrieving unnecessary data. With a columnar format like Parquet you also avoid fetching columns your SELECT statement does not use. This is important (and can have a huge impact on costs), because now it is time to discuss the price of all this.</p>



<p>Bear in mind that <strong>AWS charges 5 dollars for each terabyte of data scanned from S3 with Spectrum</strong>, so make sure you use it for the right workloads (only infrequently queried data) and do not underestimate these charges, as they grow quickly.</p>
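<p>To keep an eye on those charges, we can check how much data each Spectrum query actually scanned through the <code>SVL_S3QUERY_SUMMARY</code> system view. A sketch:</p>



<pre class="wp-block-code"><code>SELECT query, elapsed, s3_scanned_rows, s3_scanned_bytes
FROM svl_s3query_summary
ORDER BY query DESC
LIMIT 10;</code></pre>



<p>Multiplying <code>s3_scanned_bytes</code> by the $5/TB rate gives a rough estimate of what a given query cost.</p>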



<p>For expert advice or project requests you can contact me <a rel="noreferrer noopener" href="https://www.albertnogues.com/contact/" data-type="URL" data-id="https://www.albertnogues.com/contact/" target="_blank">here</a>. Happy Querying!</p>
<p>The post <a href="https://www.albertnogues.com/use-redshift-spectrum-to-query-infrequently-used-data-on-s3/">Use Redshift Spectrum to query infrequently used data on S3</a> appeared first on <a href="https://www.albertnogues.com">Albert Nogués</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
