Overview
At Meritto, data security is deeply embedded across our infrastructure, application, and operational layers. Hosted on AWS, we leverage tools like WAF, GuardDuty, ELK Stack, and CodeCommit to ensure robust protection, continuous monitoring, and secure change management. From encryption and access control to real-time alerts and isolated data architecture, every aspect is designed to safeguard sensitive information and maintain regulatory readiness.
Security Controls
We follow industry best practices to secure the application end to end. Key measures are listed below:
- The application is hosted on AWS, and default DDoS protection (AWS Shield Standard) is enabled across the infrastructure to mitigate DDoS attacks.
- At the network level, ACLs (Access Control Lists) are used to restrict unwanted traffic; unwanted IPs and network ranges are blocked at this layer.
- We use multiple Application Load Balancers (ALBs), each with path-based rules for incoming internet traffic. Only requests that satisfy those rules reach the application; all others are rejected at the ALB itself.
- As recommended, we provision high bandwidth across the infrastructure to absorb volumetric DDoS attacks.
- We use AWS WAF to block attacks. It protects the application against SQL injection, cross-site scripting (XSS), malicious real-time POST actions, and automated attacks, with Cloudflare rulesets. Both default and custom rules are configured.
- We engage a third-party vendor for VAPT. Penetration testing is performed annually, and automated scanners run regularly against a dedicated VAPT environment and publish their reports.
- All applicant uploads are stored in S3 with access restricted so that only the application can read those objects.
- All databases are encrypted at rest, and only the application can access them using the appropriate customer master key (CMK).
- Traffic is encrypted in transit using TLS 1.2 (Transport Layer Security) with an industry-standard AES-256 cipher.
- We have enabled GuardDuty, Inspector, and similar services in our AWS account. These services audit the infrastructure and produce reports of action items, and we regularly act on those recommendations to make the infrastructure more efficient and secure.
- Beyond the points above, we monitor every request to the application servers in real time. Every unusual hit triggers a notification, and every abnormal traffic spike is tracked. The system is resilient and hardened enough to handle DDoS attacks.
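The two filtering layers described above (network ACLs blocking unwanted IPs, and path-based ALB rules rejecting non-matching requests) can be sketched in application terms as follows. The CIDR ranges and path prefixes below are hypothetical examples for illustration, not our actual configuration:

```python
import ipaddress

# Hypothetical blocklist of unwanted networks (illustrative only;
# these are reserved documentation ranges).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

# Hypothetical path prefixes an ALB path-based rule would forward.
ALLOWED_PATH_PREFIXES = ("/api/", "/app/", "/health")

def is_ip_blocked(ip: str) -> bool:
    """Return True if the source IP falls inside a blocked network (ACL layer)."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

def is_path_allowed(path: str) -> bool:
    """Return True if the request path matches an ALB path-based rule."""
    return path.startswith(ALLOWED_PATH_PREFIXES)

def accept_request(ip: str, path: str) -> bool:
    """A request is served only if it passes both layers."""
    return not is_ip_blocked(ip) and is_path_allowed(path)
```

In the real deployment these checks happen at the ACL and ALB layers before traffic ever reaches the application; the sketch only illustrates the rule logic.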
For alerts, we use the following tools:
- IVR alerts are configured across the infrastructure. In case of latency or a service outage, authorized personnel receive instant call alerts.
- Where required, alarms are set up using the AWS CloudWatch service.
- OSSEC: an open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris, and Windows, and its centralized, cross-platform architecture allows multiple systems to be monitored and managed easily.
- Nagios: an open-source application that monitors systems, networks, and infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications, and services. It alerts users when things go wrong and alerts them again when the problem has been resolved.
Our infrastructure is vast, and we use shared servers for our partners. Dedicated infrastructure has cost implications that make it poorly suited for universities, and we do not promote separate infrastructure because we rely on multiple technologies to give customers a better experience. Please review the technology architecture and tech stack we use for complete information, and the points below for an explanation.
Data Isolation
We use multiple databases for storing customer data, all designed so that each client's data is always isolated.
- We use an Aurora cluster, a highly scalable AWS database service, and ensure data is isolated for all customers. Most tables are dynamic, carrying a unique customer ID as a suffix; in tables without a suffix, data is isolated by a unique customer ID column. The Aurora cluster can also scale on defined rules to cater to sudden high workloads.
- MongoDB is our NoSQL database, where we store the majority of data such as user activities, counsellor stats, payment responses, and communications data (email, SMS, and WhatsApp). All data is stored in customer-specific collections, so no other customer's data resides in a given customer's collections. We run MongoDB as a cluster.
- Elasticsearch is used as the search engine. Each customer's data is stored in customer-specific indexes, which keeps it isolated as well. We run Elasticsearch as a high-availability cluster.
- All files uploaded by applicants are saved in S3 under a separate directory per customer, in which no other customer's uploads reside.
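The per-customer isolation conventions above (table-name suffixes, per-customer collections and indexes, per-customer S3 directories) can be sketched as simple naming helpers. The base names and patterns below are illustrative assumptions, not our actual schema:

```python
def tenant_table(base: str, customer_id: int) -> str:
    """Aurora: dynamic tables carry the customer ID as a suffix."""
    return f"{base}_{customer_id}"

def tenant_collection(customer_id: int) -> str:
    """MongoDB: data lives in customer-specific collections."""
    return f"customer_{customer_id}_events"

def tenant_index(customer_id: int) -> str:
    """Elasticsearch: one set of indexes per customer."""
    return f"customer-{customer_id}"

def tenant_upload_key(customer_id: int, filename: str) -> str:
    """S3: each customer's uploads live under a separate directory."""
    return f"uploads/customer_{customer_id}/{filename}"
```

Because every read and write is routed through names derived from the customer ID, one tenant's queries structurally cannot touch another tenant's data.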
Data Security
- All sensitive data, such as user credentials, API keys, and merchant details, is stored encrypted in the database and is therefore not readable in plain text. We use up-to-date encryption techniques to keep the encrypted data highly secure; the algorithm we use is AES.
- We use the SHA-256 algorithm for hashing strings; passwords and similar secrets are stored as hashes.
- We enforce strict data-access policies. No one can access clients' production data unless access is formally requested, and activity logs are maintained for all such access.
- All our infrastructure servers are hardened, making them more secure. We follow a process of enhancing server security through a variety of means, resulting in a much more secure server operating environment.
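The hashing approach above can be sketched with Python's standard library. Note that for password storage, a salted, iterated scheme such as PBKDF2-HMAC-SHA256 is the usual hardening of plain SHA-256; the salt size and iteration count below are illustrative assumptions, not our production parameters:

```python
import hashlib
import os
from typing import Optional, Tuple

def sha256_hex(value: str) -> str:
    """Plain SHA-256 digest of a string, hex-encoded."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def hash_password(password: str,
                  salt: Optional[bytes] = None,
                  iterations: int = 100_000) -> Tuple[bytes, bytes]:
    """Salted PBKDF2-HMAC-SHA256; returns (salt, derived key).

    A fresh random salt is generated when none is supplied, so the same
    password yields different stored hashes for different users.
    """
    if salt is None:
        salt = os.urandom(16)  # illustrative salt length
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations)
    return salt, key
```

To verify a login attempt, the stored salt is reused and the derived keys are compared.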
Patch Management Process
Patch management is performed regularly. Every verified OS security patch or software update is first installed in the testing environment; after successful testing, the patch is rolled out to the production environment.
There is no downtime involved in this process, as we have multiple application servers and patches can be installed seamlessly in production. As a best practice, major patches and server maintenance activities are carried out late at night, when traffic is minimal.
We follow security advisory sites run by CERTs (https://www.cisecurity.org/) to receive all notifications.
Change Management Process
We evaluate every change against the following criteria:
- Nature of the change.
- Scale of the change.
- Possible impact.
- Decision to proceed or re-evaluate.
Once we decide to proceed, we have multiple tools in place for managing and tracking every change that goes into the production environment.
- We use Zoho Sprints as our release/sprint management tool; all tickets and assignments are tracked there.
- We use Jenkins as the release-deployment tool, and every change bound for production is thoroughly tested in multiple test environments. Before going live, we deploy the sprint to the UAT environment, where the team executes core/sanity test cases. After sign-off from QA and the respective product team, we make it live. Per the four criteria listed above, release timing is decided based on the nature, scale, and impact of the change.
- We use AWS CodeCommit for change management. Branch protection rules are in place and peer review is configured; nobody can merge a pull request without code review.
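The merge gate described above can be sketched as a simple rule. The approval threshold and data model below are hypothetical illustrations of the branch-protection logic, not CodeCommit's actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PullRequest:
    """Minimal model of a pull request for illustration."""
    author: str
    approvals: List[str] = field(default_factory=list)

def can_merge(pr: PullRequest, required_approvals: int = 1) -> bool:
    """A pull request may merge only after at least one peer review
    from someone other than the author (self-approvals don't count)."""
    peer_approvals = [a for a in pr.approvals if a != pr.author]
    return len(peer_approvals) >= required_approvals
```

In CodeCommit this rule is enforced server-side via approval rules on the protected branch; the sketch only shows the decision the rule encodes.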
All feature updates and new module updates are shared with all customers via email by the NoPaperForms team.
Access Logs and Audit Logs Management
A dedicated infrastructure team monitors logs and takes the required action in case of any abnormal activity. The tools in place are listed below:
- ELK Stack: our in-house ELK stack serves as the log-management platform. All infrastructure logs flow into one central place and are monitored in real time. We have built extensive dashboards and graphs to monitor all traffic in our production environment in real time.
- New Relic: provides deep performance analytics for every part of the software environment, letting us view and analyze large amounts of data and gain actionable insights in real time for apps, users, and the business.
- OSSEC: an open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris, and Windows, and its centralized, cross-platform architecture allows multiple systems to be monitored and managed easily.
- Nagios: an open-source application that monitors systems, networks, and infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications, and services. It alerts users when things go wrong and alerts them again when the problem has been resolved.
- IVR alerts are configured across the infrastructure. In case of latency or a service outage, authorized personnel receive instant call alerts.
- Where required, alarms are set up using the AWS CloudWatch service.
All our infrastructure servers are hardened, making them more secure. We follow a process of enhancing server security through a variety of means, resulting in a much more secure server operating environment.
We follow security advisory sites run by CERTs (https://www.cisecurity.org/) to receive all notifications.
Perimeter Security (Managed Firewall, WAF, IDS & IPS)
The complete infrastructure is set up in a custom Virtual Private Cloud (VPC) on AWS, configured with a private subnet and a public subnet. Only the load balancers sit in the public subnet; everything else resides in the private subnet, which can only be accessed internally from specific IPs. Because it runs on private IPs, this makes it highly secure.
A Web Application Firewall (WAF) blocks all unwanted activity against the application. Both default AWS rules and customized rules are configured on the WAF.
DR - Disaster Recovery
We are hosted entirely on AWS, and the infrastructure is designed per AWS recommendations. The measures already in place are listed below, and in parallel we are working on a DR architecture as well.
- The compute layer uses EC2 instances and the workload pattern is stateless.
- The complete web layer of the infrastructure is designed as multi-AZ, making it resilient: if one Availability Zone goes down, the application can serve requests from another available AZ. This makes it highly available.
- Our primary database is an Aurora cluster (an AWS-managed service), also set up as multi-AZ, which makes it highly available. Autoscaling is enabled at the DB level as well, so the cluster scales automatically if load spikes on a particular day or time.
- Our cache layer is ElastiCache (an AWS-managed service) with multi-AZ enabled and multiple shards configured, making it highly available.
- We back up all databases at a defined frequency, and restoration tests of those backups are performed periodically.
BCP - Business Continuity Plan
Available in directory.
Backup & Restore
Different backup policies are configured for the different types of data we hold, as described below:
- Databases: all database backups run once a day. Backups are encrypted and saved in S3, which offers industry-leading scalability, data availability, security, and performance. The last 15 days of backups are always available in S3, and backups older than 15 days are archived to Glacier. We test restoration weekly to validate the backups.
- Codebase: we use AWS CodeCommit to maintain code versioning and branches. Peer reviews are also configured for all changes planned to go into production.
- Applicant uploads: all uploads are saved in S3 in a customer-specific directory.
Network Architecture Diagram
Available in directory.
VAPT Report
Available in directory.
General Data Protection Compliance Certificate
We regularly audit our infrastructure and are actively working on compliance as well. Our infrastructure is hosted entirely on AWS, where services such as GuardDuty and Inspector audit all infrastructure services and provide reports. We have enabled these services for regular infrastructure audits; rest assured, the infrastructure is highly secure and designed per AWS recommendations. Attached are CSV files downloaded from GuardDuty showing each service and its compliance status, which should satisfy the requirement.
In parallel, we have onboarded Sprinto (our compliance consultant) and are working aggressively toward compliance.
Cloud server: AWS or Azure or Inhouse Datacenter
The application is completely hosted on AWS - Mumbai Region.
Compliances
- ISO 27001: an information security standard created by the International Organization for Standardization (ISO). NoPaperForms is already ISO 27001 compliant, and the certification applies to applications, people, and processes.
- ISO 9001: the internationally recognized standard for Quality Management Systems (QMS). ISO 9001 certification provides the basis for effective processes and effective people to deliver an effective product or service time after time. NoPaperForms is already ISO 9001 compliant.
- GDPR: governs how the personal data of European citizens is processed. It is considered the world's strongest set of data protection rules, enhancing how people can access information about themselves and limiting what organizations can do with personal data. GDPR came into force on May 25, 2018. We are working aggressively and hope to comply with GDPR soon.
- SOC 2: a voluntary compliance standard for service organizations, developed by the American Institute of CPAs (AICPA), which specifies how organizations should manage customer data. The standard is based on the following Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. We are working aggressively and hope to comply with SOC 2 soon.
DPDP Readiness
The Digital Personal Data Protection Act, 2023 (DPDP Act) was introduced recently, and our compliance partner is currently working on an action plan in response. Once we receive their draft, we will share the action plan. Notably, the DPDP Act 2023 bears similarities to GDPR, and our ongoing GDPR compliance work will encompass DPDP Act compliance as well.
Below are a few measures we already take to safeguard customers' data:
- We regularly audit our infrastructure and are actively working on compliance as well. Our infrastructure is hosted entirely on AWS, where services such as GuardDuty and Inspector audit all infrastructure services and provide reports. We have enabled these services for regular infrastructure audits; rest assured, the infrastructure is highly secure and designed per AWS recommendations.
- We engage a third-party vendor for VAPT. Penetration testing is performed annually, and automated scanners run regularly against a dedicated VAPT environment and publish their reports.
- All databases are encrypted at rest, and only the application can access them using the appropriate customer master key (CMK).
- Traffic is encrypted in transit using TLS 1.2 (Transport Layer Security) with an industry-standard AES-256 cipher.
- We use AWS WAF to block attacks. It protects the application against SQL injection, cross-site scripting (XSS), malicious real-time POST actions, and automated attacks, with Cloudflare rulesets. Both default and custom rules are configured.
- We enforce strict data-access policies. No one can access clients' production data unless access is formally requested, and activity logs are maintained for all such access.
- Access Control: Access to your data is strictly controlled. We implement user-based access control, allowing you to define and manage specific access permissions for different users or user groups. This means that only authorized personnel have access to specific data, and you can regularly review and update access permissions.
- Data Masking: Sensitive data can be masked to ensure that only authorized personnel see the full data, while others see only a limited or redacted view. Data masking adds an extra layer of security, especially when dealing with sensitive or personal information.
- Session Control: We employ robust session management to monitor and control user login sessions. This includes features like session timeouts and the ability to revoke sessions remotely if necessary. These measures prevent unauthorized access to user accounts.
- User Training: as per the global compliance process, we provide training to our employees on data security best practices to prevent internal threats and maintain a secure working environment.
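The data-masking control described above can be sketched as follows. The specific masking rules (keep a few leading/trailing characters, redact the middle) are illustrative assumptions, not our exact redaction policy:

```python
def mask_middle(value: str, keep: int = 2, fill: str = "*") -> str:
    """Keep the first and last `keep` characters and redact the rest.
    Values too short to mask safely are fully redacted."""
    if len(value) <= 2 * keep:
        return fill * len(value)
    return value[:keep] + fill * (len(value) - 2 * keep) + value[-keep:]

def mask_email(email: str) -> str:
    """Mask the local part of an email while keeping the domain visible,
    so support staff can identify the provider without seeing the identity."""
    local, _, domain = email.partition("@")
    return f"{mask_middle(local, keep=1)}@{domain}"
```

An authorized role would receive the raw value, while all other roles receive the masked view from the same API.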
Conclusion
In summary, our product is designed with a strong focus on data security, including user-based access control, and we continuously work to maintain the highest standards.