
Website Hosting of Metrosaga

About Metrosaga:

Metrosaga is the fastest-growing digital video and news publisher in India. We're a young, independent platform producing compelling journalism in a new format. Our content is pan-India and multilingual, and we're committed to upholding the highest standards of journalism.

MetroSaga.com is a product of Bro4u Online Service Pvt Ltd, your companion for metro digital life. Be the first to know what's happening around you, learn new things, and discover the city's food spots, travel destinations, news, and more.


We have always been passionate about delivering exciting content that becomes the talk of the town and satisfies your appetite for better information. And yes, as proud Bangaloreans, we also write articles on Karnataka's rich traditions, such as its temples, tourist places, and more.

The Challenge:

Bro4u Online Service Pvt Ltd (Metrosaga) was running its infrastructure in a traditional way. With thousands of user requests arriving per second and unpredictable spikes in traffic, it became difficult to run the infrastructure efficiently.

The database workload was also heavily skewed toward reads rather than writes, and Metrosaga was paying too much for infrastructure maintenance.

Unexpected traffic spikes on the site during peak hours made it hard to handle and sustain the load, and high availability remained a challenge until AWS Multi-AZ deployments were adopted. Metrosaga's product manager noted that it was not easy to keep visitors engaged, since most users leave the site within a few minutes of reading a news story or blog post. The team also wanted to minimize unnecessary spending on infrastructure.

The previous architecture also required a dedicated team of engineers to monitor and maintain it manually, for example by checking for system security patches.

Why Amazon Web Services:

Metrosaga chose Amazon Web Services (AWS) because of the breadth of services it offers, including:

  • Networking – VPC, security groups, NACLs, subnets, etc.
  • Databases – DynamoDB, a fully managed service from AWS
  • Compute – Amazon EC2
  • High availability – Auto Scaling
  • Disaster recovery – Multi-AZ deployments, etc.

Scaling in and out on AWS is nearly instantaneous: it is just a few clicks away, or it can be driven automatically by alarms.

AWS turned out to be the right solution to host the application and minimize infrastructure cost. The support offered by AWS is admirable: quick resolution of issues helps the team meet its commitments and lets the business grow.

AWS also provides documentation that helps maintain security at the highest level.

The Benefits:

After moving the infrastructure to AWS services such as EC2, Amazon managed databases, Auto Scaling, and CloudWatch, it became possible to handle the large number of requests at peak times.

Services such as Elastic Load Balancing, specifically the Application Load Balancer, make it possible to handle traffic more intelligently and to stay available under higher load, so the site or app can withstand peak timings. Suggestions based on AWS best practices were provided on a timely basis. A brief sketch of such a load balancer setup is shown below, followed by the complete architecture.
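The following boto3 sketch creates an Application Load Balancer, a target group, and an HTTP listener. The names, subnet and security group IDs, and the health check path are illustrative placeholders, not values from the actual Metrosaga deployment.

    # Hypothetical sketch: provision an Application Load Balancer with boto3.
    # All names and IDs below are placeholders, not the real Metrosaga resources.
    import boto3

    elbv2 = boto3.client("elbv2", region_name="ap-south-1")

    # Internet-facing ALB spanning two public subnets for high availability.
    alb = elbv2.create_load_balancer(
        Name="metrosaga-alb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-0123456789abcdef0"],
        Scheme="internet-facing",
        Type="application",
    )["LoadBalancers"][0]

    # Target group that health-checks the web servers before routing traffic.
    tg = elbv2.create_target_group(
        Name="metrosaga-web",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0abc1234",
        TargetType="instance",
        HealthCheckPath="/health",   # assumed health endpoint
    )["TargetGroups"][0]

    # Listener that forwards incoming requests to the target group.
    elbv2.create_listener(
        LoadBalancerArn=alb["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )

The complete architecture is as follows.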

RDS use case: Bro4u application (Metrosaga) architecture:

This architecture gives a brief idea of how DynamoDB and the web application are implemented on AWS.

It shows the logically isolated section of the cloud, the VPC, which lets the user maintain their own network inside AWS. It includes all the networking capabilities mentioned below.

Subnets are also part of the networking layer; they determine whether your services are exposed publicly or kept private. The application itself is hosted on AWS compute, namely EC2.
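As a rough illustration (not the exact Metrosaga configuration), the boto3 sketch below creates a VPC with one public and one private subnet and attaches an internet gateway; the CIDR blocks and Availability Zones are assumptions.

    # Hypothetical sketch: VPC with one public and one private subnet (boto3).
    # CIDR ranges and AZ names are illustrative assumptions only.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Public subnet (will later hold the load balancer and NAT gateway).
    public_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="ap-south-1a"
    )["Subnet"]["SubnetId"]

    # Private subnet (application and database servers, no public exposure).
    private_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="ap-south-1b"
    )["Subnet"]["SubnetId"]

    # Internet gateway gives the public subnet a route to the internet.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)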

While deploying the application, the configuration of the servers was taken into consideration, and the AWS pricing calculator helped estimate approximate costs for the given load.

Balancing cost against the required configuration, the necessary decisions were made and the compute resources were selected.

Once the EC2 instances were running, Auto Scaling was configured to absorb sudden spikes in traffic. This scales the fleet up to the capacity required to stay available during peak hours, and it worked.
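A minimal sketch of such an Auto Scaling setup with boto3 is shown below; the launch template name, subnet IDs, capacity limits, and CPU target are assumptions, not Metrosaga's real values.

    # Hypothetical sketch: Auto Scaling group with a CPU target-tracking policy.
    # Launch template name, subnets and sizing are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="ap-south-1")

    # Fleet of web servers spread over two private subnets.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="metrosaga-web-asg",
        LaunchTemplate={"LaunchTemplateName": "metrosaga-web", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:ap-south-1:111122223333:targetgroup/metrosaga-web/abc123"
        ],
    )

    # Keep average CPU around 60%; the CloudWatch alarms behind this policy
    # add or remove instances automatically during traffic spikes.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="metrosaga-web-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )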

Amazon CloudWatch takes care of the monitoring, which reduces the manual effort needed to track down logs.
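As an illustrative example (the alarm name, threshold, and SNS topic are assumptions), a high-CPU alarm on the Auto Scaling group can be created like this:

    # Hypothetical sketch: CloudWatch alarm that notifies an SNS topic when
    # average CPU of the Auto Scaling group stays above 70% for 10 minutes.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    cloudwatch.put_metric_alarm(
        AlarmName="metrosaga-web-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "metrosaga-web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:111122223333:ops-alerts"],  # placeholder topic
    )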

EC2 also makes it possible to capture traffic logs at the network-interface level, along with access logs, to keep track of requests and predict the load.
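One way to capture that network-interface level traffic data is VPC Flow Logs; the sketch below (log group and IAM role are placeholders) enables them for the whole VPC.

    # Hypothetical sketch: enable VPC Flow Logs delivered to CloudWatch Logs.
    # The VPC ID, log group and delivery role are illustrative placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    ec2.create_flow_logs(
        ResourceIds=["vpc-0abc1234"],
        ResourceType="VPC",
        TrafficType="ALL",                       # accepted and rejected traffic
        LogDestinationType="cloud-watch-logs",
        LogGroupName="/metrosaga/vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
    )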

The application is deployed in private subnets, so it is not reachable from the public internet, and it is secured with security groups and NACLs.
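A minimal sketch of that layering, assuming the ALB has its own security group, is to allow the web instances to accept HTTP traffic only from the load balancer's security group:

    # Hypothetical sketch: security group for private web instances that only
    # accepts HTTP from the load balancer's security group (IDs are placeholders).
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    web_sg = ec2.create_security_group(
        GroupName="metrosaga-web-sg",
        Description="Web tier, reachable only from the ALB",
        VpcId="vpc-0abc1234",
    )["GroupId"]

    ec2.authorize_security_group_ingress(
        GroupId=web_sg,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Reference the ALB's security group instead of an IP range.
            "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
        }],
    )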

The instances are also kept in an Auto Scaling group for scaling purposes, with monitoring integrated to keep everything in place.

Furthermore, the databases are deployed in private subnets across different Availability Zones. They are connected to the application via endpoints, which keeps all traffic on internal routing only. With this architecture in place, code can be deployed across the app within minutes, while the internet gateway provides internet access to the public subnets. Aside from controlling costs, the business is now able to deliver new products and services far more quickly and cost-effectively than in its previous environment.
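For a DynamoDB-backed application, that kind of endpoint would typically be a VPC gateway endpoint, as sketched below (the VPC and route table IDs are assumptions):

    # Hypothetical sketch: gateway VPC endpoint so the app reaches DynamoDB
    # over internal AWS routing instead of the public internet.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.ap-south-1.dynamodb",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0def5678"],   # route table of the private subnets
    )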

In line with the client's needs, IAM users and roles are used for routine activities such as checking CloudWatch metrics for EC2 instances and reviewing logs related to EC2, ELB, and the application. MFA is enabled for the root user, with no access keys assigned to it. This was implemented across all AWS accounts.
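A rough sketch of that kind of scoped, read-only access for routine monitoring is shown below; the group name and the exact set of actions are assumptions, not the client's actual policy.

    # Hypothetical sketch: IAM group with a read-only monitoring policy
    # (view CloudWatch metrics/logs and describe EC2/ELB resources only).
    import boto3, json

    iam = boto3.client("iam")

    iam.create_group(GroupName="metrosaga-monitoring")

    monitoring_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics",
                "logs:GetLogEvents",
                "logs:DescribeLogGroups",
                "ec2:Describe*",
                "elasticloadbalancing:Describe*",
            ],
            "Resource": "*",
        }],
    }

    iam.put_group_policy(
        GroupName="metrosaga-monitoring",
        PolicyName="read-only-monitoring",
        PolicyDocument=json.dumps(monitoring_policy),
    )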

We ensured that CloudTrail is enabled in all AWS Regions, with service activity logs saved to an Amazon S3 bucket that has versioning enabled. CloudTrail tracks all activity taking place in the AWS account. It is configured with S3 so that every tracked event is stored in the bucket, and KMS encryption is enabled on the logs, so even if someone gains access to the bucket they cannot read the CloudTrail logs.
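A minimal sketch of that trail configuration with boto3 follows; the bucket name and KMS key alias are placeholders, and the bucket is assumed to already carry a bucket policy that allows CloudTrail to write to it.

    # Hypothetical sketch: versioned S3 bucket plus a multi-Region CloudTrail
    # trail whose log files are encrypted with a KMS key. Names are placeholders,
    # and the bucket policy granting cloudtrail.amazonaws.com write access is
    # assumed to be in place already.
    import boto3

    s3 = boto3.client("s3", region_name="ap-south-1")
    cloudtrail = boto3.client("cloudtrail", region_name="ap-south-1")

    s3.create_bucket(
        Bucket="metrosaga-cloudtrail-logs",
        CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
    )
    s3.put_bucket_versioning(
        Bucket="metrosaga-cloudtrail-logs",
        VersioningConfiguration={"Status": "Enabled"},
    )

    cloudtrail.create_trail(
        Name="metrosaga-trail",
        S3BucketName="metrosaga-cloudtrail-logs",
        IsMultiRegionTrail=True,                  # capture activity in all Regions
        KmsKeyId="alias/metrosaga-cloudtrail",    # placeholder KMS key alias
    )
    cloudtrail.start_logging(Name="metrosaga-trail")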

The databases live in private subnets, completely isolated from direct internet access. We created the subnets, a NAT gateway, route tables, NACLs, an internet gateway, and the database servers. The private subnets hosting the database servers have no direct internet connectivity, but outbound access is provided through the NAT gateway so that patches can be applied easily when needed. Public subnets were created for the web servers.
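The sketch below shows that NAT setup in boto3 terms; the subnet and VPC IDs are placeholders.

    # Hypothetical sketch: NAT gateway in a public subnet, plus a route table
    # that gives the private (database) subnet outbound-only internet access.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    # Elastic IP and NAT gateway live in the public subnet.
    allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]
    nat_gw = ec2.create_nat_gateway(
        SubnetId="subnet-aaaa1111", AllocationId=allocation_id
    )["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw])

    # Private route table: the default route goes out via the NAT gateway,
    # so instances can fetch patches without being reachable from outside.
    private_rt = ec2.create_route_table(VpcId="vpc-0abc1234")["RouteTable"]["RouteTableId"]
    ec2.create_route(
        RouteTableId=private_rt,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_gw,
    )
    ec2.associate_route_table(RouteTableId=private_rt, SubnetId="subnet-bbbb2222")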

Cryptographic keys are managed securely, with AWS IAM used to control their distribution and use. We mostly use ACM for SSL/TLS certificates: the client has domain names that are mapped through Route 53, and the load balancer in front of the site needs SSL enabled, so ACM is the natural choice for issuing those certificates. For encryption of data, we use KMS.
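For example (the ARNs are placeholders), a certificate can be requested from ACM with DNS validation and then attached to the load balancer's HTTPS listener:

    # Hypothetical sketch: request an ACM certificate (DNS-validated via Route 53)
    # and attach it to an HTTPS listener on the ALB. The ARNs are placeholders,
    # and the DNS validation record must be created in Route 53 and the
    # certificate issued before it can be attached to the listener.
    import boto3

    acm = boto3.client("acm", region_name="ap-south-1")
    elbv2 = boto3.client("elbv2", region_name="ap-south-1")

    cert_arn = acm.request_certificate(
        DomainName="metrosaga.com",
        SubjectAlternativeNames=["www.metrosaga.com"],
        ValidationMethod="DNS",
    )["CertificateArn"]

    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:ap-south-1:111122223333:loadbalancer/app/metrosaga-alb/abc123",
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": cert_arn}],
        SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:ap-south-1:111122223333:targetgroup/metrosaga-web/abc123",
        }],
    )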

We installed and configured the AWS CLI and use it to make programmatic calls. The client needed a solution for making API calls on demand, and we implemented this in the project; the calls are made by running, in bulk, the commands AWS provides.
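As an example of the kind of call involved, the sketch below makes the same request both ways: with the AWS CLI (shown in a comment) and programmatically with boto3, the Python SDK behind the same API. The filter values are assumptions.

    # Hypothetical sketch: list running EC2 instances programmatically.
    # Equivalent AWS CLI command:
    #   aws ec2 describe-instances \
    #       --filters "Name=instance-state-name,Values=running"
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])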

Following the best practices published by AWS, a few things were taken into consideration. Indexes are used efficiently to save space and cost, which also improves performance: only a few attributes are projected into each index to minimize the size of the items written to it, and frequent queries are optimized to avoid fetches back to the table. We also made use of DynamoDB's adaptive capacity feature to rebalance the partitions created for the client's workload, in order to reduce the request throttling on a single partition that we had observed during the POC phase.
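As an illustration of that indexing approach (the table, index, and attribute names are placeholders, not the real schema), a global secondary index can be added that projects only the few attributes the frequent query needs, so index items stay small and fetches back to the table are avoided:

    # Hypothetical sketch: add a slim global secondary index to an existing
    # DynamoDB table, projecting only the attributes the hot query needs.
    # Table, index and attribute names are placeholders.
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="ap-south-1")

    dynamodb.update_table(
        TableName="Articles",
        AttributeDefinitions=[
            {"AttributeName": "category", "AttributeType": "S"},
            {"AttributeName": "published_at", "AttributeType": "S"},
        ],
        GlobalSecondaryIndexUpdates=[{
            "Create": {
                "IndexName": "category-published_at-index",
                "KeySchema": [
                    {"AttributeName": "category", "KeyType": "HASH"},
                    {"AttributeName": "published_at", "KeyType": "RANGE"},
                ],
                # Project only title and url: smaller index items, no fetches.
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["title", "url"],
                },
                # Required for provisioned-capacity tables; omit for on-demand
                # (PAY_PER_REQUEST) tables.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 10,
                    "WriteCapacityUnits": 5,
                },
            },
        }],
    )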

Regarding adaptive capacity, we took into consideration that when a single partition (partition 4 in our access patterns) receives a higher workload of 150 WCU/sec, DynamoDB adaptive capacity responds by increasing that partition's capacity so it can sustain the load without being throttled. We also kept in mind that data can be retrieved from an index using a Query, in the same way a Query is used against a table, and that a table can have multiple secondary indexes; where more than one table with a secondary index had to be created concurrently, the necessary precautions were taken.
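A query against such an index looks the same as a query against the base table; the sketch below (names and values are placeholders) fetches the latest items for one category directly from the index:

    # Hypothetical sketch: query the secondary index exactly like a table,
    # newest items first. Names and values are placeholders.
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="ap-south-1")

    response = dynamodb.query(
        TableName="Articles",
        IndexName="category-published_at-index",
        KeyConditionExpression="category = :c",
        ExpressionAttributeValues={":c": {"S": "bengaluru-news"}},
        ScanIndexForward=False,   # descending by the sort key (published_at)
        Limit=20,
    )
    for item in response["Items"]:
        print(item["title"]["S"], item["url"]["S"])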

We also planned for recovery scenarios, defining the RPO and RTO to be provided in case of an individual failure or any other major disaster, based on the current architecture.

Granular item recovery: a company attorney accidentally deletes a time-sensitive email and then empties the Trash folder. Since Microsoft Exchange is a business-critical application for this busy company, IT continuously backs up delta-level changes in Exchange, and because their backup application is capable of granular backup and recovery, they can recover the individual message within an RTO of 5 minutes instead of restoring an entire VM for a single message.

E-commerce site: a retail store's self-hosted e-commerce site uses three different databases: a relational database storing the product catalog, a document database that reports historical order data, and an API database connecting to their payment processor's gateway. The document database can reconstruct its data from the other databases, so its RTO and RPO are within 24 hours; the company replicates the few changes it makes during the week to its provider's DR platform. The API database holds ordering information and needs both RPO and RTO in seconds, so IT continuously replicates its data to the failover site, which immediately takes over processing should the API database go down.

For example, if you have a 4-hour RPO for an application, there will be at most a 4-hour gap between the last backup and the data loss. Having a 4-hour RPO does not necessarily mean you will lose 4 hours' worth of data: should a word processing application go down at midnight and come up by 1:15 am, you might not have much (or any) data to lose. But if a busy application goes down at 10 am and isn't restored until 2:00 pm, you could lose 4 hours' worth of highly valuable, perhaps irreplaceable data. In that case, arrange for more frequent backups that let you hit your application-specific RPO.

