Friday, April 30, 2021

Steps of Spring Security and JWT


1. Add the JSON Web Token dependency (e.g. io.jsonwebtoken's jjwt) in Maven.

2. Create a configuration class annotated with @EnableWebSecurity that extends WebSecurityConfigurerAdapter.

3. Override the configure(HttpSecurity) method:

@Override
protected void configure(HttpSecurity httpSecurity) throws Exception {

    httpSecurity.csrf().disable()
                .authorizeRequests().antMatchers("/authenticate").permitAll()
                .anyRequest().authenticated()
                .and().exceptionHandling()
                .and().sessionManagement()
                .sessionCreationPolicy(SessionCreationPolicy.STATELESS);

    httpSecurity.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class);
}

4. Create a request mapping for "/authenticate" that authenticates the credentials and returns jwtTokenUtil.generateToken(userDetails).
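A minimal sketch of such an endpoint, assuming an AuthenticationManager bean is exposed by the security configuration; the JwtRequest/JwtResponse DTOs and their field names are illustrative assumptions, not part of the original notes:

    @RestController
    public class JwtAuthenticationController {

        @Autowired
        private AuthenticationManager authenticationManager;   // exposed as a @Bean in the security config

        @Autowired
        private JwtTokenUtil jwtTokenUtil;

        @Autowired
        private UserDetailsService userDetailsService;

        @PostMapping("/authenticate")
        public ResponseEntity<?> createAuthenticationToken(@RequestBody JwtRequest request) throws Exception {
            // verify the username/password first; this throws if the credentials are bad
            authenticationManager.authenticate(
                    new UsernamePasswordAuthenticationToken(request.getUsername(), request.getPassword()));

            // load the user and issue a signed JWT for it
            final UserDetails userDetails = userDetailsService.loadUserByUsername(request.getUsername());
            return ResponseEntity.ok(new JwtResponse(jwtTokenUtil.generateToken(userDetails)));
        }
    }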

5. Create a service class JwtTokenUtil with a generateToken method:

     Jwts.builder().setClaims(claims).setSubject(subject)
         .setIssuedAt(issuedAtTime).setExpiration(expirationTime)
         .signWith(SignatureAlgorithm.HS256, SECRET_KEY).compact()

6. Also implement validateToken, which extracts the username from the token and checks it against the UserDetails and the token's expiration.
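A minimal sketch of such a utility, assuming the jjwt 0.9.x API; the SECRET_KEY value and the five-hour validity are placeholder assumptions:

    @Service
    public class JwtTokenUtil {

        private static final String SECRET_KEY = "change-me";        // assumption: externalize this in real code
        private static final long VALIDITY_MS = 5 * 60 * 60 * 1000;  // assumption: 5-hour token lifetime

        public String generateToken(UserDetails userDetails) {
            Map<String, Object> claims = new HashMap<>();
            return Jwts.builder()
                    .setClaims(claims)
                    .setSubject(userDetails.getUsername())
                    .setIssuedAt(new Date(System.currentTimeMillis()))
                    .setExpiration(new Date(System.currentTimeMillis() + VALIDITY_MS))
                    .signWith(SignatureAlgorithm.HS256, SECRET_KEY)
                    .compact();
        }

        public String extractUserName(String token) {
            // parsing verifies the signature and gives access to the claims
            return Jwts.parser().setSigningKey(SECRET_KEY)
                    .parseClaimsJws(token).getBody().getSubject();
        }

        public boolean validateToken(String token, UserDetails userDetails) {
            Date expiration = Jwts.parser().setSigningKey(SECRET_KEY)
                    .parseClaimsJws(token).getBody().getExpiration();
            return userDetails.getUsername().equals(extractUserName(token)) && expiration.after(new Date());
        }
    }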

7. Create a filter JwtRequestFilter that extends OncePerRequestFilter:

    @Override

    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {

        final String authHeader = request.getHeader("Authorization");

        // the header looks like "Bearer <token>", so strip the prefix before extracting the username
        final String userName = jwtTokenUtil.extractUserName(authHeader.substring(7));

8. Validate the token against the UserDetails and, if it is valid, set the authentication in the SecurityContext.

9. Call filterChain.doFilter(request, response); to continue the filter chain.
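Putting steps 7 to 9 together, a minimal sketch of the filter; the injected jwtTokenUtil and userDetailsService fields are assumptions, and error handling (missing or malformed header, expired token) is omitted:

    @Component
    public class JwtRequestFilter extends OncePerRequestFilter {

        @Autowired
        private JwtTokenUtil jwtTokenUtil;

        @Autowired
        private UserDetailsService userDetailsService;

        @Override
        protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                        FilterChain filterChain) throws ServletException, IOException {

            final String authHeader = request.getHeader("Authorization");
            String userName = null;
            String token = null;

            // the header is expected to look like "Bearer <token>"
            if (authHeader != null && authHeader.startsWith("Bearer ")) {
                token = authHeader.substring(7);
                userName = jwtTokenUtil.extractUserName(token);
            }

            // authenticate only if nothing has been placed in the security context yet
            if (userName != null && SecurityContextHolder.getContext().getAuthentication() == null) {
                UserDetails userDetails = userDetailsService.loadUserByUsername(userName);
                if (jwtTokenUtil.validateToken(token, userDetails)) {
                    UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken(
                            userDetails, null, userDetails.getAuthorities());
                    authentication.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
                    SecurityContextHolder.getContext().setAuthentication(authentication);
                }
            }

            filterChain.doFilter(request, response);
        }
    }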


Performance Tests


Frequently we discuss performance testing without considering the specifics. At a high level, several distinct types of performance tests can be run against a system.

The most common types of performance tests are as follows.

LATENCY TEST

  • It is intended to measure the end-to-end transaction time.
  • The latency of the system is an observable parameter for management, because it tells how long customers have to wait for a transaction to complete. Hence this is one of the most important performance tests.
  • Computing the average is not the right choice; latency is mostly reported as percentiles such as P99.99, P99.9, P99, and P95 (a small computation sketch follows this list).
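As a rough illustration of percentile reporting, here is a small sketch that uses the nearest-rank method (one common convention, chosen here as an assumption) to compute P95/P99 from recorded latency samples:

    import java.util.Arrays;

    public class LatencyPercentiles {

        // nearest-rank percentile: sort the samples and pick the value at rank ceil(p/100 * n)
        static long percentile(long[] latenciesMillis, double p) {
            long[] sorted = latenciesMillis.clone();
            Arrays.sort(sorted);
            int rank = (int) Math.ceil(p / 100.0 * sorted.length);
            return sorted[Math.max(rank - 1, 0)];
        }

        public static void main(String[] args) {
            long[] samples = {12, 15, 11, 230, 14, 13, 16, 12, 980, 14};  // hypothetical response times in ms
            System.out.println("P95 = " + percentile(samples, 95) + " ms");
            System.out.println("P99 = " + percentile(samples, 99) + " ms");
        }
    }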

THROUGHPUT TEST

  • It defines how many concurrent transactions a system can handle.
  • Latency and throughput tests are mostly interrelated.
  • Max throughput of the system is measured by increasing the load until the system starts degrading.

LOAD TEST

  • It represents a binary question — can the system handle a specific load?
  • It is mostly conducted just before big business events, e.g. a launch in a new country, viral content, and social media events.

STRESS TEST

  • It is intended to find the breaking point of the system and how much spare headroom the system has.

ENDURANCE TEST

  • To detect anomalies that appear only when the system runs for an extended duration.
  • Many problems are detected only after the system runs for a long time, e.g. slow memory leaks, cache population, and memory fragmentation issues.
  • It is the most recommended test for fast-response systems that cannot tolerate long stop-the-world pauses caused by full GC.

CAPACITY PLANNING TEST

  • To check whether the system scales as expected when additional resources are added to it.

DEGRADATION TESTING

  • To check the behavior of the system when it partially fails. It is also known as the partial failure test.
  • It is usually done to validate the resiliency of the system. Chaos Monkey by Netflix is one example of tooling used to build a truly resilient system.

RULES TO SELECT TESTS

Golden rules that provide useful guidance over which performance test you should perform:

  • Identify what you care about and figure out how to measure it.
  • Optimize what matters, not what is easy to optimize.
  • Play the big points first.

NON FUNCTIONAL REQUIREMENTS

NFRs describe observable characteristics of the system that are important to management.

NFRs are generally provided by management in a form like the following:

  • Reduce the 95th-percentile transaction time by 100 ms.
  • Improve the system so that 5x throughput is possible on the existing hardware.
  • Improve the average response time by 30%.

CAP Theorem


If you have ever worked with any NoSQL database, you must have heard about the CAP theorem. Eric Brewer presented this theorem at the Symposium on Principles of Distributed Computing way back in 2000.

As in the Microservices blog, I will again go with the restaurant example. It is quite probable that an IT professional driving (well… is there un-drive?) through Bangalore traffic will, one day or the other, think: let me quit this job and start a restaurant.

Let’s start the story. Srinivas was fed up (it usually happens to 85% of people just after appraisals) with his IT job and eventually quit to start a restaurant. After careful examination, he started taking delivery orders over a phone line. He hired a few delivery boys, whom he got at very cheap rates after many food-delivery startups vanished into thin air.

Day 5: Srinivas chose to operate the phone himself while sitting at the billing counter. The morning was a lull period, but from 7 pm onwards he started getting many calls. Whatever order he gets, he writes on paper, gives to the kitchen and …boom… it is cooked (well, not every time) and delivered to the customer. Around 8:30 pm he saw one customer walking up to him, gasping for breath and looking angry (maybe hungry inside). “I have been calling for the last 30 minutes. Your phone is always engaged. I had to walk for 20 minutes to come here to place the order. I am not happy.”

Idea time: Srinivas was clearly not happy and was shaken to the core. When he was in the IT services field, his bosses told him every day that “the customer is God and you can’t make God upset”. Being a God-fearing person, he had sacrificed many things in life, including going to the temple. After some disturbed sleep and thinking time, he got a brilliant idea: “Let me hire one more operator who can take the calls. If one line is engaged, the other person will pick up.” It took a week to onboard the new person, during which he dealt with fuming customers. This is improving “Availability”.

Pic: Improve Availability

Day 15: The new employee Raj is on-boarded and Srinivas is delighted. Customers’ waiting time on the call has drastically reduced: if one line is engaged, calls are automatically transferred to the second line. Between Srinivas and Raj, things are working well. They are able to take orders and process them.

Day 27: At 8:00 pm Srinivas got a call from a customer. “I placed an order 45 minutes back. What is the status?” Srinivas took his phone number and name and looked through his order list. He doesn’t have it. He looked at Raj, who is next to him, but Raj is busy taking other orders and can’t be disturbed. Srinivas apologised and asked the customer to wait for 2 minutes. The customer was already unhappy, and making him wait made him furious. He said “Cancel my order” and disconnected the phone. God-fearing Srinivas is again distressed.

Idea time: Srinivas thought a bit more about it. He also realized this kind of situation could come to Raj as well. After some thinking time……. Eureka — he found a solution. The next day he agreed with Raj to exchange order details as soon as they take orders. For example, order number 223 was taken by Srinivas; he keeps the original order and passes a copy of the order details to Raj. Similarly, order number 224 was taken by Raj, and he passes a copy to Srinivas. Now they both have all the order details, and if a customer later asks for the status, either of them can answer without keeping the customer waiting. This is having “Consistency”.

Pic: Available and Consistent

Day 283: Everything is going well so far. The business has increased multifold. Now he has 3 people taking orders and he has built a kitchen. Srinivas and Raj are not doing this work any more; the new team is Suma, Ramesh and Supriya. They are young, vibrant and nonchalant. As per the previous process, each of them updates the other two on the orders.

Day 289: All was well and good until one fine day. Like in a Bollywood movie, Supriya fell in love with Ramesh and Ramesh fell in love with Suma. Things started becoming complicated and Supriya started feeling like a loser. Things became worse with time: both Ramesh and Suma stopped communicating the order details to Supriya, and Supriya did the same. This led to broken communication, and pretty much everything went back to day 1. There is no “Partition tolerance”. The only way the service can be made available and consistent is by getting rid of either Ramesh and Suma, or Supriya, or by making them work together. Otherwise, you can keep the system “Available”, but with inconsistent data.

Pic: No partition tolerance because of broken communication

Let’s come back to the reality of our IT world.

CAP stands for Consistency, Availability and Partition Tolerance.

  • Consistency (C): All nodes see the same data at the same time. What you write is what you get to read.
  • Availability (A): A guarantee that every request receives a response about whether it was successful or failed. Whether you want to read or write you will get some response back.
  • Partition tolerance (P): The system continues to operate despite arbitrary message loss or failure of part of the system. Irrespective of communication cut down among the nodes, system still works.

The CAP theorem is often misunderstood. It is not “pick any 2 out of 3”. The key point here is that P is not visible to your customer; it is the technology concern that enables C and A. The customer can only experience C and A.

P is driven by wires, electricity, software and hardware; none of us has full control over these, and often P may not hold. If P holds, there is no challenge with A and C (except for latency issues). The problem comes when P is not met, and then we have two choices to make.

AP: When a partition occurs, the system remains available, but with possibly inconsistent data.

CP: When a partition occurs, the system is not fully available, but the data stays consistent.

It would be a crime to end the explanation of the CAP theorem without the famous CAP triangle and some popular databases.

Pic: The CAP triangle with some popular databases


Data consistency in microservices


In this article, I’d like to share my knowledge and experience at Garanti BBVA about moving from monolithic to microservices architectures, especially regarding data consistency.

Data consistency is the hardest part of a microservices architecture. In a traditional monolithic application, a shared relational database handles data consistency. In a microservices architecture, each microservice has its own data store if you are using the database-per-service pattern, so the databases are distributed among the applications, and each application may use a different technology to manage its data, such as NoSQL databases. Although this kind of distributed architecture has many benefits such as scalability, high availability and agility, in terms of data management there are some critical points, such as transaction management and data consistency/integrity.

Figure 1. Sample Overall Transition Diagram

In the distributed architecture, data is highly available and scalable because each microservice has its own runtime and data store.

The Problem: Data Consistency in Distributed Systems

For monolithic applications, a shared relational database handles and guarantees data consistency through ACID transactions. The acronym ACID means:

  • Atomicity: all the steps of a transaction succeed or fail together; no partial state, all or nothing.
  • Consistency: all data in the database is consistent at the end of the transaction.
  • Isolation: only one transaction can touch the data at a time; other transactions wait until the working transaction completes.
  • Durability: data is persisted in the database at the end of the transaction.

In order to maintain strong data consistency, relational database management systems support ACID properties.

Figure 2. Sample Sequence Diagram of Monolithic Application

But in a microservices architecture, each microservice has its own data store, possibly built on a different technology. So there is no central database and no single unit of work; business logic spans multiple local transactions. This means that you can’t use a single transactional unit of work across databases in a microservices architecture, but you still need ACID-like guarantees in your application.

Figure 3. Sample Microservices Interaction Diagram

Let’s explain with a simple sample scenario. In an order management system, there might be services such as stock management, payment and order management. Let’s assume that these services are designed in accordance with microservice architecture and the database-per-service pattern is applied. In order to complete the order process, the order service first calls the stock management service for stock control and reservation, and the relevant products in the order are reserved so they cannot be sold to another customer. The second step is the payment step. The payment service is responsible for the payment business; the order service calls the payment service and completes the payment from the customer’s credit card. Since each one is a separate service, updates to the separate databases are committed within each service’s scope. The last step is creating the order record. Let’s say a technical error occurs in this step: the order record cannot be created and no order number is sent to the customer, but the payment has already been taken from the customer. A data consistency problem has occurred here. I will talk about what can be done at this point in the Possible Solutions section in the rest of the article.

Figure 4. Sample Sequence Diagram of Microservices

Possible Solutions

First of all, there is no single solution which works well for each case. Different solutions can be applied depending on the use-case.

There are two main approaches to solve the problem:

  • Distributed transactions
  • Eventual consistency

Distributed Transactions

In a distributed transaction, transactions are executed on two or more resources (e.g. databases, message queues). Data integrity is guaranteed across multiple databases by distributed transaction manager or coordinator.

A distributed transaction is a very complex process since multiple resources are involved in the process.

Two-phase commit (2PC) is a blocking protocol used to guarantee that all the participating transactions succeed or fail together in a distributed transaction.

The XA standard is a specification for 2PC distributed transactions. JTA includes a standard API for XA, and JTA-compliant application servers support XA out of the box. But all the resources have to be deployed to a single JTA platform to run 2PC, which is not suitable for a microservices architecture.
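For illustration, a minimal sketch of a programmatic JTA transaction spanning two XA resources on a single JTA-capable platform; the JNDI names, table names and values are assumptions, and this is exactly the kind of centralized coordination that does not fit a microservices deployment:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;
    import java.sql.Connection;

    public class TwoPhaseCommitSketch {

        public void placeOrderWithPayment() throws Exception {
            InitialContext ctx = new InitialContext();
            // UserTransaction is provided by the JTA-compliant application server
            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

            // both DataSources must be XA-capable and registered on the same platform
            DataSource orderDs = (DataSource) ctx.lookup("java:comp/env/jdbc/orderDb");      // assumed JNDI name
            DataSource paymentDs = (DataSource) ctx.lookup("java:comp/env/jdbc/paymentDb");  // assumed JNDI name

            tx.begin();
            try (Connection orderCon = orderDs.getConnection();
                 Connection paymentCon = paymentDs.getConnection()) {

                orderCon.createStatement().executeUpdate("INSERT INTO orders(id, status) VALUES (223, 'NEW')");
                paymentCon.createStatement().executeUpdate("INSERT INTO payments(order_id, amount) VALUES (223, 450)");

                tx.commit();     // the transaction manager runs 2PC across both resources
            } catch (Exception e) {
                tx.rollback();   // either both databases commit or both roll back
                throw e;
            }
        }
    }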

Benefits of Distributed Transactions

  • Strong data consistency
  • Support ACID features

Drawbacks of Distributed Transactions

  • Very complex process to maintain
  • High latency and low throughput, since it is a blocking process (not suitable for high-load scenarios)
  • Possible deadlocks between transactions
  • Transaction coordinator is a single point of failure

Eventual Consistency

Eventual consistency is a model used in distributed systems to achieve high availability. In an eventually consistent system, temporary inconsistencies are allowed for a short time until the distributed data converges.

This model doesn’t use distributed ACID transactions across microservices; instead, eventual consistency relies on the BASE database model.

While the ACID model provides a consistent system, the BASE model provides high availability.

The acronym BASE means:

  • Basically Available: ensures availability of data by replicating it across the nodes of the database cluster.
  • Soft State: due to the lack of strong consistency, data may change over time. The responsibility for consistency is delegated to the developers.
  • Eventually Consistent: immediate consistency may not be possible with BASE, but consistency is reached eventually (in a short time).

Saga is a common pattern that implements the eventual consistency model.

The Saga pattern is an asynchronous model based on a series of services. In a Saga, the distributed transaction is performed as a sequence of asynchronous local transactions on the related microservices. Each service updates its own data in a local transaction, and the saga manages the execution of the sequence of services.

The two most common implementations of Saga transactions are:

Choreography-based SAGA: No central coordinator exists in this case. Each service produces an event after completing its task, and each service listens to events to take an action. This pattern requires a mature event-driven architecture (a simplified sketch follows the list below).

  • Event Sourcing is an approach that stores state as a sequence of event changes in an Event Store, a message broker acting as an event database. State is reconstructed by replaying the events from the Event Store.
  • The choreography-based SAGA pattern can work well for a small number of steps in a transaction (e.g. 2 to 4 steps). As the number of steps in a transaction increases, it becomes difficult to track which services listen to which events.
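A minimal, framework-free sketch of the choreography idea, where each service reacts to the previous service's event; the event names, the in-memory bus and the one-line "services" are hypothetical stand-ins for real microservices and a real message broker:

    import java.util.*;
    import java.util.function.Consumer;

    public class ChoreographySagaSketch {

        /** A tiny in-memory event bus standing in for a real message broker. */
        static class EventBus {
            private final Map<String, List<Consumer<Long>>> listeners = new HashMap<>();

            void subscribe(String event, Consumer<Long> handler) {
                listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
            }

            void publish(String event, long orderId) {
                System.out.println("event: " + event + " for order " + orderId);
                listeners.getOrDefault(event, List.of()).forEach(h -> h.accept(orderId));
            }
        }

        public static void main(String[] args) {
            EventBus bus = new EventBus();

            // Each service listens for the previous event and publishes the next one; no central coordinator.
            bus.subscribe("OrderCreated", orderId -> bus.publish("StockReserved", orderId));      // stock service
            bus.subscribe("StockReserved", orderId -> bus.publish("PaymentCompleted", orderId));  // payment service
            bus.subscribe("PaymentCompleted", orderId -> System.out.println("order " + orderId + " confirmed"));

            // The order service only publishes the first event and never calls the other services directly.
            bus.publish("OrderCreated", 223L);
        }
    }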

Orchestration-based SAGA: A coordinator service (the saga orchestrator) is responsible for sequencing the local transactions according to the business logic. The orchestrator decides which operation should be performed next, and if an operation fails, the orchestrator undoes the previous steps; this is called a compensation operation. Compensations are the actions applied when a failure happens, to keep the system in a consistent state (a simplified sketch follows the list below).

  • Undoing changes may no longer be possible if the data has already been changed by a different transaction.
  • Compensations must be idempotent, because they might be called more than once by the retry mechanism.
  • Compensations should be designed carefully.
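A minimal sketch of an orchestration-based saga with compensations; the participant service interfaces and method names are hypothetical, and in practice a framework would provide this machinery:

    public class OrderSagaOrchestrator {

        // hypothetical participant services; each call runs a local transaction inside that service
        interface StockService   { void reserve(long orderId); void releaseReservation(long orderId); }
        interface PaymentService { void charge(long orderId);  void refund(long orderId); }
        interface OrderService   { void createOrder(long orderId); }

        private final StockService stockService;
        private final PaymentService paymentService;
        private final OrderService orderService;

        public OrderSagaOrchestrator(StockService stock, PaymentService payment, OrderService order) {
            this.stockService = stock;
            this.paymentService = payment;
            this.orderService = order;
        }

        /** Runs the saga steps in order; on failure, compensates the steps that already completed. */
        public void placeOrder(long orderId) {
            stockService.reserve(orderId);
            try {
                paymentService.charge(orderId);
            } catch (RuntimeException e) {
                stockService.releaseReservation(orderId);   // compensation; must be idempotent
                throw e;
            }
            try {
                orderService.createOrder(orderId);
            } catch (RuntimeException e) {
                paymentService.refund(orderId);             // undo the payment
                stockService.releaseReservation(orderId);   // undo the stock reservation
                throw e;
            }
        }
    }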

There are some frameworks available to implement the Saga orchestration pattern, e.g. Camunda and Apache Camel.

Benefits of SAGA

  • Performs non-blocking operations running as local atomic transactions
  • No deadlocks between transactions
  • No single point of failure

Drawbacks of SAGA

  • Eventual data consistency
  • No read isolation; extra effort is needed (e.g. the user could see the operation as completed, but a few seconds later it is cancelled by a compensating transaction)
  • Difficult to debug as the number of participating services increases
  • Increased development cost (the actual services plus their compensation logic must be developed)
  • Design is complex

Data consistency between distributed data stores can be extremely difficult to maintain, and a different mindset is needed when designing new applications.

We can say that the responsibility for data consistency moves from the database level to the application level.

 

Which Solution to Choose

The solution depends on the use case and consistency requirements.

In general, the following design considerations should be taken into account.

  1. Avoid using distributed transactions across microservices if possible. Working with distributed transactions brings more complex problems.
  2. Design your system so that it requires as little distributed consistency as possible. To achieve this, identify the transaction boundaries as follows;
  • Identify the operations that have to work in the same unit of work. Use strong consistency for these operations.
  • Identify the operations that can tolerate possible latencies in terms of consistency. Use eventual consistency for these operations.

3. Consider using event-driven architecture for asynchronous non-blocking service calls

4. Design fault-tolerant systems with compensation and reconciliation processes to keep the system consistent

5. Eventually consistent patterns require a change in mindset for design and development

Conclusion

Microservices architecture has great features such as high availability, scalability, automation, autonomous teams etc. A number of changes in traditional methods are required to obtain maximum efficiency of the microservice architectural style. Data and consistency management is one of the topics that needs to be designed carefully.

Top DataStructures Problem from Medium-2

  Array:
    • Find a pair with the given sum in an array
    • Maximum Sum Subarray Problem (Kadane’s Algorithm)
    • Longest Increasing Subsequence Problem
    • ...