7 Major Mistakes to Avoid When Moving to Microservices

Moving to microservices can mean big benefits for your organization.

Work with a partner that can help you see these benefits and avoid the pitfalls.


As we told you in our blog Going from Monolithic to Microservices, the architecture enables your organization to keep up with frequent deployments, improves productivity, and allows for targeted system changes without impacting the entire application.

However, the complexities of microservices architecture should never be taken lightly. The intricacy, data integration, management, and required skillsets to integrate the new system can be overwhelming for an organization not properly prepared.

To help you out, we have put together a top 7 list of the most common mistakes that companies make when moving to microservices. With these in mind, you can decide if the architecture is right for your application and if so, minimize stress while maximizing ROI.

Currently undergoing a modernization effort? Check out our Platform Modernization blog series and learn everything you need to know about the process, from determining your approach to sunsetting your existing platform.


Most Common Mistakes When Moving to Microservices


1. Assuming Microservices is Always the Next Right Step

The first common mistake organizations make is assuming microservices is a one-size-fits-all architecture. While it is a great solution for modernizing some monolithic applications, it is not always the right next step, and once you commit to the architecture, it is very hard to turn back.

Look at it this way: microservices is just another approach to application development. While the architecture is excellent at addressing problems like adapting to rapid change, scalability, agility, and availability, there are pitfalls for organizations that are not properly prepared.

For example, to reap the true benefits of microservices, distributed systems are a must. However, distributed systems increase application complexity. Thus, inadequate planning and infrastructure investment could lead to performance issues and increase the overhead needed to manage services and ensure they are contained and isolated.

Additionally, microservices comes with a steep learning curve, since it requires a different technology stack and a shift in team mindset. A lot of care must be put into properly managing teams, documentation, and automated testing.

If you are thinking about moving to microservices, it is important to understand both the benefits AND challenges, so you can determine if it is the right next step for your application.


2. Building Services That Are Too Small

Ok, we know what you are thinking, “doesn’t the name MICROservices imply small?”

Yes, you are right. In the end, the architecture is all about building modular services and well-defined interfaces. After all, these services are what give a platform the flexibility to quickly adapt to changes, minimize the impact of these changes, and quickly scale desired services to meet higher demand.

The tricky part is the definition of “modular.” Is it one line of code, several lines of code that do one thing, or a collection of blocks of code that execute one piece of business logic? The definition differs from person to person and business to business.

Resist the urge to build too small. Going smaller means more services, and each service can have many instances, with each instance running in a separate container.

While a container is not as heavy as a virtual machine, it still adds considerable computing and storage overhead on the distributed physical servers in a cluster. As the number of containers increases, so does the stress on the infrastructure, and performance suffers. To maintain a certain performance level, companies need to invest in upgrading physical systems that can keep up. Inter-service communication, scaling, versioning, releasing, and rolling updates also become much more complicated to automate and maintain. Finally, you must have good DevOps and development teams with the right expertise to manage the platform.

On the contrary, building too big defeats the purpose of moving to microservices.

So what is the right size for a microservice? Well, the answer to this question varies depending on the business scenario and different mindsets of the people designing the microservices.

Choosing the right size for a microservice is no easy task. However, it is important to consider this critical decision throughout the design stage of your platform and continue to revisit and modify service size as you learn what works throughout this process.


3. Tightly Coupling Services

When moving from a monolith to a microservices architecture, there is a tendency to create microservices that are tightly coupled and dependent on one another in terms of inter-connectivity, states, or intentions. In other words, one service cannot function independently without the existence of the other.

A system of tightly coupled microservices is really the worst of both worlds. You still have a monolith in the sense that modular changes and scaling are difficult. At the same time, you have microservices with unnecessary complexity and high costs.

That said, microservices should be able to function independently. This allows each service, and even each of a service’s instances, to be brought offline at any time without bringing down the whole application.

It is not always possible to loosely couple all microservices. But at the very least, services that are minor, replaceable, have frequent updates, or need scaling strategies should be independent from the others.


4. Not Centralizing The Logging

To troubleshoot an issue, typically the first step is to look at the logs produced by the application.

In a monolith application, there is usually a central location to look for the logs. This is not the case with microservices, since a single piece of business functionality might span several microservices. The issue is amplified at scale because each service has multiple instances, and each instance runs and produces logs within a separate container.

Navigating a network of distributed containers in a cluster to locate the necessary logs would be a nightmare.

If centralized logging is not properly planned, designed, invested in, and implemented, unnecessary time and money will be spent troubleshooting issues.

To avoid this pitfall, all logs should be harvested and routed to one central location where they can be easily filtered and searched, and each log entry can be quickly mapped back to its original request.
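As a minimal sketch of this idea, each service can emit one JSON object per log line, tagged with a correlation ID, so a log shipper (such as Fluentd or Logstash) can forward everything to a central store where entries are searchable and traceable back to their original request. The service name, `request_id` field, and log messages below are illustrative assumptions, not a prescribed schema.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON line for a central log shipper."""
    def format(self, record):
        return json.dumps({
            "service": getattr(record, "service", "unknown"),
            "request_id": getattr(record, "request_id", None),  # correlation ID
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every service attaches the same request_id to its log entries, so the
# central store can map each line back to the request that caused it.
logger.info("order created", extra={"service": "orders", "request_id": "req-42"})
```

Because each entry carries the same correlation ID across services, filtering on `request_id` in the central store reconstructs the full path of one request through the system.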


5. Keeping States Localized to Microservices

When moving from monolith to microservices, there is a tendency to keep vital states, e.g. cached intermediary calculations, temporary data in files, etc., locally in the services. If done properly, this may work for a monolith, but it is bad practice for microservices.

Why? Microservices and their instances can go offline at any time due to failures, rolling updates, or actions by the platform that manages them. Once an instance dies, it is gone forever; you cannot simply revive it. Instead, new instances must be created, which means all of the data stored in the containers of stateful services is lost.

Best practice in microservices is to store all states or data in distributed systems (databases, Hazelcast, etc.) where all services and their instances can access necessary information.
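To illustrate the pattern, here is a minimal sketch of a stateless service that keeps nothing in its own fields: every read and write goes through an injected external store. The in-memory dict is only a stand-in for a real distributed backend such as Redis or Hazelcast, and the `CartService` name and methods are hypothetical.

```python
class ExternalStateStore:
    """Stand-in for a shared, durable store (e.g. Redis, Hazelcast)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class CartService:
    """Holds no local state: any instance can die and be replaced safely."""
    def __init__(self, store):
        self.store = store  # injected shared store; no instance-local data

    def add_item(self, user_id, item):
        cart = self.store.get(("cart", user_id)) or []
        self.store.put(("cart", user_id), cart + [item])

store = ExternalStateStore()
instance_a = CartService(store)
instance_a.add_item("u1", "book")

# instance_a "dies"; a freshly created instance still sees the same state:
instance_b = CartService(store)
```

The design choice here is dependency injection of the store: because state lives outside the service, the platform can kill, replace, or scale instances freely without data loss.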


6. Not Considering Code Duplications vs. External Dependencies vs. Utility Services

Breaking down a large monolith application into independent microservices introduces code duplication, since each service needs its own copy of shared logic in order to be developed and released independently.

Now, code duplication is not automatically a bad thing for microservices because: (1) it is necessary to have independent services, and (2) at the service level, there is no such duplication. In some cases, services evolve independently, causing the originally common logic to diverge over time.

However, some common logic must remain consistent across certain microservices. In these cases, code duplication is a problem because a single change must be repeated in every related microservice.

The quickest solution to this problem is to factor out the common logic into external libraries that the various services declare as dependencies. Factoring duplicated code into common libraries eliminates the need to make changes in multiple services and reduces the chance of human error while applying those changes.

But this solution still has a major issue: all corresponding services must be changed, rebuilt, and released to pick up new versions of the common libraries. That defeats a very important goal of microservices: agility. In some cases, the problem can be solved by creating utility microservices that hold the common logic; other services then call these utility microservices instead of using libraries. With this approach, agility still holds, BUT we create tightly coupled services.

All of that to say, no single solution is perfect. You must carefully consider which logic can be duplicated, which should live in external common libraries, and which should become utility services, to keep the application as agile as possible and the code base manageable.
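The shared-library option above can be sketched in a few lines. In practice the validation helper would live in its own versioned package that both services declare as a dependency; here everything sits in one file, and the order-ID format and service functions are purely illustrative.

```python
# --- shared_validation (would live in its own versioned library package) ---
def is_valid_order_id(order_id: str) -> bool:
    """One copy of the rule both services must agree on (illustrative format)."""
    return order_id.startswith("ORD-") and order_id[4:].isdigit()

# --- orders service: declares shared_validation as a dependency ---
def create_order(order_id: str) -> str:
    if not is_valid_order_id(order_id):
        raise ValueError("bad order id")
    return f"created {order_id}"

# --- billing service: declares the same dependency ---
def bill_order(order_id: str) -> str:
    if not is_valid_order_id(order_id):
        raise ValueError("bad order id")
    return f"billed {order_id}"
```

The trade-off described in the text is visible here: if the ID format changes, only `is_valid_order_id` is edited, but both services must still be rebuilt and re-released against the new library version.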


7. Not Automating the Versioning and Releasing Process

In a monolith application, versioning and releasing is relatively simple because everything ships together. With microservices, by contrast, some services might undergo rapid changes and require rapid releases, while other services remain relatively static.

In microservices, a service instance is deployed within a container, and a container is usually instantiated from an image. This means each version of a service usually has a corresponding container image version.

Managing a service’s versions and deployments should be automated to avoid human error and improve the turnaround time of the process.

More importantly, even if the services are all loosely coupled, their version compatibility must be managed carefully. Once microservices mature and have undergone multiple changes and releases, knowing which versions of which services are compatible would be impossible without documentation and some form of automation.
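One way to automate that knowledge is a machine-readable compatibility matrix that the release pipeline checks before a deployment goes out. The sketch below assumes hypothetical service names and version strings; the point is that compatibility lives in data the pipeline can verify, not in tribal knowledge.

```python
# Known-good combinations: (service, version) -> required versions of its peers.
# The entries here are illustrative.
COMPATIBILITY = {
    ("orders", "2.1"): {"billing": {"1.4", "1.5"}},
    ("orders", "2.0"): {"billing": {"1.3", "1.4"}},
}

def deployment_is_compatible(deployment: dict) -> bool:
    """deployment maps service name -> version, e.g. {"orders": "2.1", ...}."""
    for (service, version), required in COMPATIBILITY.items():
        if deployment.get(service) != version:
            continue  # this matrix row does not apply to the proposed rollout
        for peer, allowed_versions in required.items():
            if deployment.get(peer) not in allowed_versions:
                return False  # incompatible combination: block the release
    return True
```

A CI step can call `deployment_is_compatible` on the proposed set of image versions and fail the build on a mismatch, so incompatible combinations never reach production.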

Needless to say, microservices is a hot trend in software development and for companies prepared for the move, the benefits are well worth the effort.

However, in order for the effort to be successful, companies need to evaluate existing applications, infrastructure, and other capabilities first and make the educated decision whether or not microservices is the right next move. Your organization should consider if you have the capability to handle microservices in the long run so you can truly reap the benefits of the architecture.


Outsourcing microservices and working with a partner that knows how to conquer the complexities can help you maximize those benefits.

Keep your stress low by contacting KMS and find out how to properly manage this transition, keep the integrity of your data intact, and expand on the skillset of your existing team.


Smart Strategies to Deliver Quality Code: Part 2 Collaboration

What is collaboration? Merriam-Webster states it is “to work jointly with others or together especially in an intellectual endeavor.” We agree — coding is definitely an intellectual endeavor! Collaborating with others helps to reduce the stress of working independently. Cue the song: All by myself! Don’t wanna be all by myself anymore! Sorry, couldn’t resist getting that stuck in your head.

In part 1 of this blog series, we discussed how properly maintaining code is an essential part of delivering a quality product. Now, we’re going to discuss how effective team collaboration can boost code quality.


Be a Team Player!

How many times did you hear that while growing up? Being a team player was important in our youth and is still important as adults. Teams that work together can use their collective talents to improve quality and produce high-performing products.

Here are just a few benefits a development team can enjoy if everyone is a team player:

  • Boost knowledge sharing
  • Ensure smooth integration and testing
  • Encourage peer reviews to compensate for others’ weaknesses
  • Raise awareness of progress/problems to move forward
  • Achieve higher confidence in code quality

All of these points would be moot if you resist becoming a team player. A team is only as weak as its most ineffective member. (Psst … don’t be that guy — or gal! You know, the one who gets the sideways glances or aggravated sighs from the team due to unanswered emails.)


Take Off Those Blinders

When you encounter problems during development, the quickest fix is not always the best long-term solution. Collaborating with your team can help you validate that the solution does not cause new problems in other areas of the product. You may even be inspired to create elegant and effective solutions that are more than just a short-term fix.

Quick and workaround solutions save time initially but are usually short-lived and costly long-term. Always search for the most effective solution. Collaboration plays a big part in this pursuit.

Our recommendation? Talk to other developers working up or downstream from your code. If you only worry about your own code, the solution you find may solve your issue but to the detriment of the system as a whole. Also, if you discuss the issue with another developer, they may have an “ah ha!” moment. Something they can do on their side may fix the problem for you. And who doesn’t like having to do less work for the same result?

Thinking long-term means the code should integrate and play well with the existing system. Effective solutions also involve using best practices. Make sure to be flexible, pay attention to these practices and standards, and understand and follow them. After all, soldiers who ignore orders and go their own way endanger the whole unit and lose the war. (Hint: Communication + Using Best Practices = Winning)


Be open to new ideas.

Forward-thinking developers are a hot commodity! They imagine how the code will be maintained after it’s been created, offer valuable ideas and solutions to problems, and create more efficient approaches to building code. Compound these skills with a collaborative mindset? You’ll be worth your weight in gold (real gold, not the weight of a Bitcoin).

What does it mean to be open to new ideas? Well, for starters, take what we’re about to tell you and be receptive to the concept.

While researching this blog, we came across SmartBear’s 2018 State of Code Review and found it to be an eye-opening survey. Did you know that “55% of respondents choose ‘Workload’ as the top obstacle to conducting code reviews, followed by ‘Deadline/Time Constraints’ at 42%”? At last, we have the reassurance that we are not alone in our deadline pressures!

As we were so impressed with this survey, we set up a call with Patrick Londa, a Marketing Manager at SmartBear, to discuss the results. The key takeaway? The importance of using code review tools to effectively manage your workflow.

“When we ask developers about their code review process, we often get laughs or grimaces initially. It seems like an abstract thing to them, like they know they need to do it but it’s kind of a chore and gets in the way of the things they want to do otherwise, whether that’s building a new feature, etc. The more that teams can standardize and make it a seamless part of their process, the more willing they are to participate in reviews on a regular basis.”

– Patrick Londa, SmartBear Marketing Manager

Have you ever thought of code review as similar to doing the dishes or taking out the trash? Trust us, we’ve all been there. As luck would have it, “chores” and “work” do not have to be synonymous. Using code review tools, such as SmartBear’s Collaborator or Atlassian’s Crucible, can make the time spent conducting those reviews more productive than merely sending an email.

According to the survey, “For the third straight year [2016 – 2018], respondents identified code review as the #1 way to improve code quality.” Take that statistic and look at your own practices. When you actually take the time to do reviews, how do you track progress and results? Do you … just send emails? We certainly hope not!


Effective developers are great communicators.

In the end, effective collaboration is key to building amazing products. Be a team player who contributes to everyone’s success, search for the best solutions and not just the one that first comes to mind, and be open to new ideas your team members suggest.

Communication is the very foundation of collaboration, but how you communicate also matters. Working to improve code quality takes more than just sending an email and hoping you receive a timely response.

We get it. Finding time to communicate in an already packed schedule can be a daunting task. So don’t waste time searching through emails to respond to work-related tasks! Take advantage of technology to help you … well, to help you build more technology! There are multiple collaboration tools available that can help you make the most of the hours you spend working each day.

We’ll say it again. Teams that collaborate effectively will always produce more code in a shorter timespan and with higher quality. Think about this too: tools can help collaboration, but behaviors should also change. Reach out for assistance when you’re stuck, regularly share your code with others to get help during peer reviews, and be receptive to feedback from others.

After all, teams are all on the same ship. Every part of a team contributes to making it across the sea or sinking to the bottom. Don’t allow your product to sink to a watery graveyard! Keep it floating through team collaboration and, with hard work and a dash of luck, the ship will make it across the sea faster than others. We wish you fair winds and following seas!

Looking for a new software development partner?


Smart Strategies to Deliver Quality Code: Part 1 Maintainability

The quality of your code can make or break a product. The question is … who is responsible for testing? Conventional thought creates distinct, separate roles for a developer and a tester. Developers … well, they write code. And testers are responsible for quality assurance. Right?

Sure. Developers write the code that testers verify works correctly. But is anything ever simply black-and-white? In this regard, responsibility for testing has been relegated to the grey zone.

Now, we’ll tell you how to break out of that grey zone by suggesting code maintainability techniques developers can use to enhance quality.


I can read my code, why can’t you?

Have you ever said this to another developer? What about to a tester who comments on your coding ability?

Simple code is always better because it is easier for someone else to understand right away. Even if you don’t work in a team, build your code with the perspective that you’re not the only one who will touch the code. Other people may need to read, understand and support it down the road.

Readable code saves a lot of time and effort when providing support, fixing bugs, handling change requests, and implementing new enhancements. Bad code is expensive! A task that needs only a one-line change (which can be done quickly) could end up taking hours or even days to read through and figure out.

Software with poorly written code is difficult to maintain and extend, shortening the lifespan of the product. Do you really want your hard work to be scrapped so new code can be built? No? We didn’t think so.


  • Use descriptive naming
  • Keep style and structure consistent
  • Apply the Separation of Concerns principle to keep code cohesive with a single responsibility
  • Be clear, direct and to-the-point with transparent intention
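The points above can be seen in a tiny before-and-after sketch. Both functions below compute the same thing (a hypothetical discount calculation), but only the second reads without extra explanation thanks to descriptive names, a type-hinted signature, and a one-line docstring stating its intent.

```python
def calc(d, r):
    # "Before": terse names hide what d and r mean and what the result is.
    return d - d * r

def apply_discount(price: float, discount_rate: float) -> float:
    """Return the price after applying a fractional discount (e.g. 0.2 = 20%)."""
    return price - price * discount_rate
```

A reader meeting `apply_discount(100.0, 0.2)` in a code review understands it instantly; `calc(100.0, 0.2)` forces them to go read the function body first.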


Keep It Simple, Sam

Have you heard the phrase “Keep It Simple, Sam?” This design principle urges that products be made as simple as possible, so that even someone with limited knowledge can operate or repair them. If your code is not maintained properly, how will that affect the application as a whole?

Create code that is self-explanatory so that someone without your knowledge of the code can read it easily. If you think extra documentation is necessary to explain what the code is doing, then the code is too complex.

Avoid relying on inline comments and separate documentation, as they are expensive to update in a business where requirements are constantly changing. For example, if you made comments in the code to explain something but the code is then changed, you must also remember to change the comment to reflect the new code. This is time-consuming and, if the comments are not updated, confusing.

Quality > quantity! Keep code and relevant documentation simple to save a lot of time and effort on support and integration of various sub-systems.


  • Use straightforward logic to minimize need for extra in-code documentation
  • Keep sections modular to reduce complexity and maximize reusability
  • Ensure all team members understand the best practices and standards that apply to your code base


Consider Future Growth

To succeed in a competitive marketplace, software products must continually add new features without introducing bugs. Quality code takes future growth and extensibility into consideration.

As a developer, you know that your code will need to be refactored and enhanced over time. But did you know that the best way to protect your code quality and enable you to refactor confidently is to ensure that your core functionality is continuously tested at the unit and integration level?

Let’s put it this way. If you fail to continuously test working functionality, hidden bugs can crawl into your code while it is being refactored. This can cause serious problems down the road and can lead to costly changes in core product quality. We shudder at that thought.

Use automated unit and integration tests to catch bugs before they can eat away at the structure. These tests provide the first line of quality assurance for delivered outputs. More importantly, they can be executed automatically in build cycles to quickly make sure bugs haven’t taken residence. Turn on that No Vacancy sign!


  • Ensure your automated tests cover all documented acceptance criteria, as well as common-sense scenarios that may not be fully documented.
  • Run a full suite of unit and integration tests on your local build before you commit your code to the common repository.
  • Use continuous integration pipeline tools (like Jenkins or GitLab CI) to run your unit and integration tests for each build.
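The steps above can be sketched with a minimal unit test guarding a piece of core functionality, so that a refactor which breaks it fails the build immediately. The function under test is purely illustrative.

```python
import unittest

def total_with_tax(subtotal: float, tax_rate: float) -> float:
    """Illustrative core function: price including tax, rounded to cents."""
    if subtotal < 0:
        raise ValueError("subtotal cannot be negative")
    return round(subtotal * (1 + tax_rate), 2)

class TotalWithTaxTests(unittest.TestCase):
    def test_adds_tax(self):
        # The happy path the acceptance criteria would document.
        self.assertEqual(total_with_tax(100.0, 0.07), 107.0)

    def test_rejects_negative_subtotal(self):
        # A common-sense scenario that may not be fully documented.
        with self.assertRaises(ValueError):
            total_with_tax(-1.0, 0.07)
```

Running `python -m unittest` locally before every commit, and again in the CI pipeline on every build, is what turns these checks into the “No Vacancy sign” for bugs.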


Take Pride in Your Work!

Do you like showing off your code to other developers? Do you have confidence that your code will stand up to rigorous testing?

We challenge you to take pride in your work by spending a little more time finding bugs before sending your code to a tester. In turn, you can challenge the tester to find problems with your code. After all, quality is not only a tester’s job. Developers are responsible for the quality of the code they deliver.

The World Quality Report 2018-19 backs up our claim that quality is not only a tester’s responsibility but is shared by other departments:

“Today, the way QA and testing activities are executed has changed. On the one hand, the adoption of new frameworks and technologies has broadened the number of skills required for testing, while on the other, testing activities too have spilled over to other domains and functions such as Development and Business Analysis. Thus, today, everyone has a role to play when it comes to QA and testing.”

Experience has taught us that using these practices helps developers build better code. It also improves relationships between developers and testers. Ultimately, if the developer thinks of potential issues from the beginning, the whole process runs more smoothly and the product can be released to market much faster.

Are you still not convinced you would be able to fit quality checks into your schedule? We have a solution for that too! Stay tuned for the next part of the series when we discuss the topic of team collaboration to improve quality. In particular, we will draw references from SmartBear’s 2018 State of Code Review and an interview with Marketing Manager Patrick Londa. Collaboration has never been easier with today’s technology advances!

Looking for a software development or testing partner?



World Quality Report – https://www.capgemini.com/service/world-quality-report-2018-19/