Using AWS S3 for Laravel Storage

Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, easy-to-use web-based cloud storage service. It is designed for online backup and archiving of data and applications on Amazon Web Services (AWS).

Key points

#N1
Development team

Amazon Simple Storage Service (S3)

File Systems

A file system is a mechanism and data structure for controlling how data is saved and accessed. There are numerous types of file systems, each with its own structure, logic, and characteristics, such as speed, adaptability, security, and size. Some file systems were created with specific applications in mind. A file system defines the way files are named, saved, and retrieved from a storage device. Without one, a storage device would hold a single undifferentiated chunk of data placed back-to-back, with no way to tell one file from another.

Amazon Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, easy-to-use web-based cloud storage service, designed for online backup and archiving of data and applications on Amazon Web Services (AWS). It is an AWS cloud service for storing data in a safe, highly accessible, and redundant manner. Customers of different sizes and sectors use it for a variety of purposes, including backup and recovery and data storage for cloud-native applications.

It is a cloud-based object storage service that provides industry-leading scalability, data availability, security, and performance. You can save money, organize data, and establish fine-tuned access restrictions to suit specific business, organizational, and compliance requirements with cost-effective storage classes and easy-to-use administration tools.

Benefits of S3 Storage

Versioning

Versioning allows different variants of a file/object to reside in the same bucket, but it is not enabled by default. If an object is accidentally removed, this allows for a rollback or recovery. S3 will also manage the removal of non-current versions of an object if an object expiration lifecycle policy is implemented.
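As a rough illustration of what recovery looks like, the AWS SDK for PHP (which the Flysystem S3 adapter used later in this article pulls in) can list an object's versions and copy an older one back over the current object. The bucket name, key, and version ID below are placeholders:

use Aws\S3\S3Client;

// Credentials are resolved from the environment/default provider chain
$s3 = new S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// List the stored versions of a single object (requires versioning to be enabled on the bucket)
$versions = $s3->listObjectVersions([
    'Bucket' => 'my-bucket',
    'Prefix' => 'images/hero.png',
]);

// Restore a previous version by copying it over the current object
$s3->copyObject([
    'Bucket'     => 'my-bucket',
    'Key'        => 'images/hero.png',
    'CopySource' => 'my-bucket/images/hero.png?versionId=PREVIOUS_VERSION_ID',
]);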

Security

Thanks to encryption features and access management capabilities, data stored in your AWS S3 environment is protected from unauthorized access. This includes the ability to block public access to all of your objects at both the bucket and the account level. Strong authentication mechanisms help ensure the security of data stored in each region.

Data Redundancy

AWS S3 maintains your data across numerous devices in an S3 Region that spans at least three Availability Zones (AZs). For less critical data, the Reduced Redundancy Storage (RRS) class stores objects with less redundancy at a lower cost, and Cross-Region Replication can copy objects to buckets in other Regions. For users in geographically scattered locations, replication reduces latency and improves application efficiency.

Accessibility, Scalability, and Durability

For objects stored in S3, the service is designed for 99.999999999 percent (11 nines) durability and offers several security and compliance certifications. It allows a virtually unlimited amount of data to be stored as objects, in any format. A single object can range from 0 bytes up to 5 TB (a single upload is limited to 5 GB unless multipart upload is used).

REST and SOAP API Interfaces

S3 provides web service interfaces based on Representational State Transfer (REST) and the Simple Object Access Protocol (SOAP) that can be used with any web development toolkit.

AWS S3 Buckets

A bucket is a container that holds objects. An object is a file together with any metadata that describes it. To store an object in Amazon S3, you first create a bucket and then upload the object to it. Once it is in the bucket, you can open, download, and move the object, and you can delete the object or the bucket when you no longer need them.

Laravel's Filesystem provides different drivers to work with, for example the local filesystem, Amazon S3, and Rackspace. These drivers provide a convenient and easy way to upload files locally or to the cloud. For Amazon S3, although the functionality is essentially built into the framework, getting started can be a little disorienting, especially for people who aren't familiar with the AWS suite. To integrate it successfully, we only need our AWS credentials to access the console and create a new S3 bucket.
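As a quick sketch of what this looks like in practice, the same Storage facade calls work against any configured disk; the disk names below are the ones defined in config/filesystems.php:

use Illuminate\Support\Facades\Storage;

// Write the same contents to the local disk or to the S3 bucket
Storage::disk('local')->put('example.txt', 'Hello from the local filesystem');
Storage::disk('s3')->put('example.txt', 'Hello from Amazon S3');

// Read it back the same way, regardless of which driver backs the disk
$contents = Storage::disk('s3')->get('example.txt');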

#E1
Development team

Using S3 for File Storage in Laravel

How to Create an S3 Bucket

  • Log in to your AWS account; if you don’t have one already, sign up for one.
  • Proceed to the S3 section.
  • Select “Create Bucket”.
  • Enter a unique name for your bucket.
  • Select a region.
  • Uncheck the box to block public access.
  • Leave all other default settings unchanged and select “Create”.

How to Create a Bucket Policy

This is a resource-based AWS Identity and Access Management (IAM) policy. To give other AWS accounts or IAM users access permissions to the bucket and the objects in it, you must apply a bucket policy. Object permissions apply only to the objects that the bucket owner creates.

Begin by selecting the bucket.

Select the “Permissions” tab and then select “Edit” in the “Bucket Policy” section.

Copy the Bucket ARN and proceed to the “Policy Generator”.

Edit the statement. Use the ARN you copied earlier.

Generate the policy document.

The policy is generated in JSON format. Copy and paste it into the Bucket Policy editor and save. Ensure that you unchecked “Block all public access” in the Permissions tab before creating the bucket policy.

AWS Identity and Access Management (IAM) Service

AWS Identity and Access Management (IAM) allows for fine-grained access management throughout the whole AWS infrastructure. Using IAM, you can control who has access to which services and resources, and under what conditions. With IAM policies you can manage permissions for your workforce and systems to ensure least-privilege access. The service is geared towards businesses with a large number of users or systems that use AWS products: users’ security credentials, such as access keys, and their permissions can all be managed from a single location.

How to Create an IAM User

If you don’t have an existing user, you can create a new one and attach a policy that allows the IAM user to upload to your S3 bucket (a minimal example policy is sketched after the steps below).

  • Select “Add User”.
  • Add the user name and select the AWS access type.
  • Set the password and then proceed to permissions.
  • Leave the other default settings unchanged and create the user.
  • Save the user’s access key ID and secret access key since the secret access key can only be viewed once.
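If you prefer to scope the user more tightly than a broad managed policy, a minimal inline policy along these lines is enough for uploading and reading objects in a single bucket (YOUR_BUCKET_NAME is a placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME",
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ]
        }
    ]
}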

File Storage with S3 in Laravel

Next, you’ll learn how to upload files to S3 in Laravel. It’s important to have created your bucket and IAM user before proceeding. You’ll need the user’s access key ID and secret access key.

Laravel has a simple method for uploading files to Amazon S3, and because the framework ships with the setup to use it whenever you want, the approach is really straightforward. We only need our AWS credentials to access the console and a new S3 bucket to integrate it effectively. Doesn’t this seem easy?

Create a New Laravel Application

You can create a new Laravel project via the Composer command or the Laravel installer:

laravel new project_name   
or
composer create-project laravel/laravel project_name

Add Amazon S3 Cloud Storage Credentials

Open the .env file and update the AWS bucket configurations.

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=<region_name>
AWS_BUCKET=<bucket_name>
AWS_USE_PATH_STYLE_ENDPOINT=false

In config/filesystems.php, the S3 driver is configured in the s3 array and can be modified to suit your requirements.

's3' => [
            'driver' => 's3',
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'region' => env('AWS_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET'),
            'url' => env('AWS_URL'),
            'endpoint' => env('AWS_ENDPOINT'),
            'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
        ],
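If you want uploads to go to S3 without naming the disk on every call, you can also point the default disk in the same file at it; in recent Laravel versions the env key is FILESYSTEM_DISK (older versions use FILESYSTEM_DRIVER):

'default' => env('FILESYSTEM_DISK', 's3'),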

Set Up the Model and Migrations

Now that we have made the necessary configuration changes, create the model and database migration simultaneously by running the following command:

php artisan make:model Image -m

This creates a model file called Image.php in the app/Models directory, and a migration file called create_images_table.php (prefixed with a timestamp) in the database/migrations directory. Update Image.php by adding the line below inside the class, which allows mass assignment for all of the model's attributes.

protected $guarded = [];
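For context, the model might end up looking roughly like this after the change (a minimal sketch using the default App\Models namespace):

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Image extends Model
{
    use HasFactory;

    // An empty $guarded array disables mass assignment protection for all attributes
    protected $guarded = [];
}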

Then, update the up() method of the migration file as shown in the following example.

public function up()
{
    Schema::create('images', function (Blueprint $table) {
        $table->id();
        $table->string('title');
        $table->string('image');
        $table->timestamps();
    });
}

Connect to Your Database

Here is an article I wrote that explains how to connect a Laravel Application to a MySQL database. If you have a different database, make sure to connect to it appropriately.
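For a MySQL connection, the relevant .env entries would look something like this (the database name and credentials below are placeholders):

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel_s3_demo
DB_USERNAME=root
DB_PASSWORD=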

Install the Composer Package

Before using the S3 driver, you will need to install the appropriate package via the Composer package manager:

composer require --with-all-dependencies league/flysystem-aws-s3-v3

Set Up the Controller

To create the controller, run this Artisan command:

php artisan make:controller ImageController

It will create a new file called ImageController.php in the app/Http/Controllers directory. After creating the file, add the following import statements to import the classes that the controller will use:

use App\Models\Image;
use Illuminate\Http\Request;

Next, update the controller with an upload() method that returns the upload form and a store() method that uploads the image to S3.

public function upload()
{
    return view('upload');
}

public function store(Request $request)
{
    $request->validate([
        'title' => 'required',
        'image' => 'required|image|mimes:jpeg,png,jpg,gif,svg|max:2048',
    ]);

    if ($request->hasFile('image')) {
        // Get the extension of the uploaded image file
        $extension  = $request->file('image')->getClientOriginalExtension();
        $image_name = time() . '_' . $request->title . '.' . $extension;

        // Store the file in the "images" directory on the "s3" disk
        $path = $request->file('image')->storeAs('images', $image_name, 's3');

        Image::create([
            'title' => $request->title,
            'image' => $path,
        ]);

        return redirect()->back()->with([
            'message' => 'Image uploaded successfully',
        ]);
    }
}
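Because store() saves the object's S3 path on the Image record, you can later build a public URL for display with the same disk (assuming the bucket or object is publicly readable):

use Illuminate\Support\Facades\Storage;

$url = Storage::disk('s3')->url($image->image);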

Set Up Routes

You need one route for getting the view and another for storing the image.

To define them, add the following code to routes/web.php.

Route::get('image-upload', [ ImageController::class, 'upload' ])->name('image.upload');
Route::post('image-store', [ ImageController::class, 'store' ])->name('image.upload.post');

Then, add the import statement to the top of the file.

use App\Http\Controllers\ImageController;

Set Up the View

You will need to create a Blade file called upload.blade.php in the resources/views directory and update it like this:

<!DOCTYPE html>
<html>
<head>
    <title>Laravel File Storage with Amazon S3 </title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css">
</head>

<body>
<div class="container d-flex justify-content-center align-items-center" style="height: 100vh;">

    <div class="panel panel-primary">
      <div class="panel-heading"><h2>Laravel File Storage with Amazon S3 </h2></div>
      <div class="panel-body">

        @if (Session::get('message'))
        <div class="alert alert-success alert-block">
                <strong>{{Session::get('message')}}</strong>
        </div>
        @endif

        @if (count($errors) > 0)
            <div class="alert alert-danger">
                <strong>Whoops!</strong> There were some problems with your input.
                <ul>
                    @foreach ($errors->all() as $error)
                        <li>{{ $error }}</li>
                    @endforeach
                </ul>
            </div>
        @endif

        <form action="{{ route('image.upload.post') }}" method="POST" enctype="multipart/form-data">
            @csrf
            <div class="row">
                <div class="col-md-6">
                    <label for="">Title</label>
                    <input type="text" name="title" class="form-control">
                </div>

                <div class="col-md-6">
                    <label for="">Image</label>
                    <input type="file" name="image" class="form-control">
                </div>

                <div class="col-md-6">
                    <button type="submit" class="btn btn-success">Upload</button>
                </div>

            </div>
        </form>

      </div>
    </div>
</div>
</body>

</html>

Testing

You can now test to see if the image uploads correctly. Visit the form at http://127.0.0.1:8000/image-upload.
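This assumes you have already run the database migration and started the local development server; if not, run these two Artisan commands first:

php artisan migrate
php artisan serve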

After uploading the image, it gives a successful alert. You can also check the bucket to see if the image uploaded successfully.

Ref : Using S3 for File Storage in Laravel

#E2
Development team

Using AWS S3 for Laravel Storage

Prerequisites

There are two things I ask that you have before attempting anything in this article.

  1. An AWS account.
  2. A Laravel project with the Flysystem S3 package installed with the following command:
composer require league/flysystem-aws-s3-v3 "^3.0" --with-all-dependencies

IAM That IAM

Like most seemingly confusing names in AWS, this one is actually pretty simple. IAM = 'Identity and Access Management'.

This is where we need to start: creating a Group and a User, and getting your Access Key and Secret. Come on, one step at a time.

User Group

  1. In the search bar at the top, type "IAM" and click on the IAM service.
  2. On the left sidebar, click "User Groups".
  3. Click the "Create group" button.
  4. Create a name for the group. I am going to call mine "S3FullAccess"; all the users in this group will have full access to do anything to any of my S3 buckets, but they will ONLY have access to make changes to S3 buckets.
  5. Scroll down and, in the "Attach permissions policies" section, search for "S3". The search bar here is a bit wonky; just hit enter and you will see the list update.
  6. Check the box next to "AmazonS3FullAccess" and click the "Create group" button at the bottom.

Create The User

  1. Next click "Users" in the left sidebar.
  2. Click "Create user" and give the user a name. Now the name can be anything you want, typically I make a user for each of my apps and name it accordingly.
  3. Do NOT check the box to grant the user AWS Console access unless you know what you are doing. Essentially, this will also allow our user to log into AWS and also have API access which isn't the goal of this demo.
  4. Click "Next". Now we are going to add this user to the group we just made, check the box next to the group and click "Next".
  5. You can add "Tags" to your user if you want, I normally just skip these. Click "Create user".

Get the Key and Secret

  1. The next screen should show you a list of all your users, click on the user you just created.
  2. Click on the Security credentials tab and scroll down to the "Access keys" section.
  3. Click "Create access key".
  4. For this tutorial, the user we are creating is essentially a "Third-party service"; select it, check the confirmation at the bottom, and click "Next".
  5. I skip the description tag. Each user can have multiple access keys (up to 2), and you can label them here if you want.
  6. Click "Create access key". On the next page you will see your access key, and you can show or copy your secret access key. You will need both of these values; copy them and paste them into your Laravel .env file.

Kick The Bucket

"S3" - this is another confusing AWS name, right? S3 - simply stands for Simple Storage Service, 3 S's = S3. That is all. I am certain you have come up with worse variable names than this :D. Everything that goes into a bucket is an "Object", I will likely refer to "objects" instead of pictures, videos etc. because anything can go into a bucket. And a bucket is simply put a container for objects.

Let's roll!

Create the Bucket

  1. In the search bar type "s3" and select "S3".
  2. Click the "Create bucket" button. On the next screen we are going to specify several things that you can edit later, except the bucket name.
  3. Additionally, be sure to take note of the "AWS Region"; you will need this in your .env.

A Note about Bucket names: Bucket names must be globally unique. The us-east-1 region is closest to where I live and the one I typically use, but the same is true for a LOT of people. The bucket name "Laravel", for example, will not be available because someone else created it first. Choose your bucket name carefully.

  1. Scroll down and check the "ACLs enabled" radio button in the "Object ownership" section.
  2. Select "Object writer".
  3. Uncheck the "Block all public access" and check the acknowledgement.
  4. I will keep versioning disabled, no tags... I will leave the rest of the defaults untouched and click "Create bucket".

Update the Bucket Policies and CORS

  1. The next screen should show a list of your buckets; click on the one you just created.
  2. Click the "Permissions" tab.
  3. In the "Bucket policy" section click the "Edit" button, paste the following code, but make sure you update it with your bucket's name and click "Save changes".
{
    "Version": "2012-10-17",
    "Id": "Policy1692807538499",
    "Statement": [
        {
            "Sid": "Stmt1692807537432",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::laravelonlinebucket/*"
        }
    ]
}
  4. You can read more on how to control who and what has access to your bucket objects. This statement will allow any "Principal" (entity) to take any action in our bucket; you can get more strict with statements like this as you learn more.
  5. Scroll to the "Access control list (ACL)" section and click the "Edit" button.
  6. Check the boxes for "List" and "Read" next to "Everyone (public access)", check the acknowledgement at the bottom, and click "Save changes".
  7. Scroll down to the "Cross-origin resource sharing (CORS)" section, click the "Edit" button, and paste in the following code:
[
    {
        "AllowedHeaders": [],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ]
    }
]
  1. In the "AllowedOrigins" section of that json we are allowing ALL origins, that means ANY domain can access the objects in this bucket. If you want to limit that to a specific domain you can adjust it here.
  2. Click on the "Objects" tab. we will refresh this section once we push some objects to the bucket.
  3. Let's head back over to the Laravel app and update our .env accordingly, set your bucket name, region, url, etc. like this.

AWS_ACCESS_KEY_ID=AKIA35DMCR3BAIFEVVMP
AWS_SECRET_ACCESS_KEY=YMrG4Tw6UQ0HHKU/ByvhBxuF56jKhgTJfBwHUkVR
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=noonewillevergetthisbucketname
AWS_URL="https://noonewillevergetthisbucketname.s3.amazonaws.com/"
AWS_USE_PATH_STYLE_ENDPOINT=true

Facade

This tutorial is NOT a Laravel Storage lesson. However, we gotta see the benefits of what we just did and test things out. I am going to take a VERY simplistic approach to pushing objects to the bucket and do it all from a web route function - YOLO!

Storage Disks

  1. Back in your Laravel app, head over to the config/filesystems.php file.
  2. You can use the S3 disk that is set up here already and just tweak things. However, typically my apps will post different types of files that I want organized into different directories in my bucket. For example, an "invoices" directory and a "profile-picture" directory. I will create two disks here and configure them appropriately.
  3. Paste the following code, updating it for your use cases:
'invoices' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('PROFILE_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'visibility' => 'public',
    'root' => 'invoices'
],
'profile-photos' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('PROFILE_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'visibility' => 'public',
    'root' => 'profile-photos'
],
  4. Take note of the 'root' key; this tells the disk which directory in the bucket to put files in when using each disk.

Demo Time

  1. I have an image stored in my storage/app directory called 'hero.png' that I will be using for this demo.
  2. The code below reads the raw image data from my storage directory, uploads it to the specified disk/directory, and then uses the Storage url() function with the object's path to get the full URL to the image!
Route::get('buckets', function () {
    $disk = 'profile-photos';

    // Raw image contents from the default local disk (storage/app)
    $heroImage = Storage::get('hero.png');

    // Write to the S3 disk; the disk's 'root' prefixes the path with "profile-photos/"
    Storage::disk($disk)->put('hero.png', $heroImage);

    // Full public URL to the uploaded object
    return Storage::disk($disk)->url('hero.png');
});
  3. Hitting this route in the browser yields: https://noonewillevergetthisbucketname.s3.amazonaws.com/profile-photos/hero.png. Go ahead and check out this awesome image I made with Midjourney.
  4. If we go back to our bucket objects in AWS and refresh, we should see the profile-photos directory and the hero.png file inside.
  5. If we swap out the disk in our function and refresh the S3 objects again, we will see that a different directory has been created for our invoices.
Route::get('buckets', function () {
    $disk = 'invoices';

    // Same upload, this time through the 'invoices' disk (root: invoices/)
    $heroImage = Storage::get('hero.png');
    Storage::disk($disk)->put('hero.png', $heroImage);

    return Storage::disk($disk)->url('hero.png');
});
  6. From here you can carry on using all the Laravel Storage methods you know and love (a few are sketched below) and take advantage of the amazing power behind AWS S3 buckets.
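For example, a few everyday Storage methods that work the same way against these S3-backed disks (a quick sketch, nothing specific to this demo):

use Illuminate\Support\Facades\Storage;

$disk = Storage::disk('profile-photos');

$disk->exists('hero.png');                                // does the object exist?
$disk->delete('hero.png');                                // remove it from the bucket
$disk->temporaryUrl('hero.png', now()->addMinutes(10));   // signed URL for private objects
$disk->download('hero.png');                              // stream it back as a download response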

Not so bad right?

So you got a taste of what you can do in AWS. One thing I like to do, if I know that some of my objects need to be "highly available" in my bucket (meaning they need to be served quickly all over the world), is leverage CloudFront, which distributes the files in a bucket, or in a directory of a bucket, to Amazon's edge servers around the world. That way, when someone in Japan requests a file, it isn't making hops all the way from Virginia. Play with things, experiment, get your feet wet!

Ref : Using AWS S3 for Laravel Storage

#E3
Development team

Allow Public Read access to an AWS S3 Bucket

To allow public read access to an S3 bucket:

  • Open the AWS S3 console and click on the bucket's name.
  • Click on the Permissions tab.
  • Find the Block public access (bucket settings) section, click on the Edit button, uncheck the checkboxes, and click on Save changes.

  • In the Permissions tab, scroll down to the Bucket policy section and click on the Edit button. Paste the following policy into the textarea to grant public read access to all files in your S3 bucket.

Replace the YOUR_BUCKET_NAME placeholder with your bucket's name.

bucket-policy-public-read

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}

For example, for an S3 bucket named my-bucket, the Resource line would be "arn:aws:s3:::my-bucket/*".

Save the changes you've made to the bucket's policy and your bucket will have public read access enabled.

(Optional) - If you need to access your bucket with HTTP requests from the browser, you have to update the bucket's Cross-origin resource sharing (CORS) configuration to allow your frontend's requests.

In the Permissions tab of your S3 bucket, scroll down to the Cross-origin resource sharing (CORS) section and click on the Edit button

Paste the following JSON into the textarea and save the changes

cors-configuration

[
  {
      "AllowedHeaders": [
          "Authorization",
          "Content-Length"
      ],
      "AllowedMethods": [
          "GET"
      ],
      "AllowedOrigins": [
          "*"
      ],
      "ExposeHeaders": [],
      "MaxAgeSeconds": 3000
  }
]

To test that your bucket has public read access enabled:

  1. Click on the Objects tab in your S3 bucket.
  2. Click on the checkbox next to a file's name.
  3. Click on the Copy URL button at the top and copy the public URL of the file.

Paste the URL in your browser and you should see the contents of the file (for HTML files or images).

Note that you'll see a red badge with the text Publicly accessible next to your bucket's name.

In this case, the bucket policy only grants public read access to the bucket, so other people can't add objects to your S3 bucket.

Ref : Allow Public Read access to an AWS S3 Bucket
