If you prefer to read individual sections, here is a table of contents.
Introduction
Documentation
Communication
System Integration
Training
Coding
Conclusion
Introduction
If you are sitting in the shade today, it’s because someone planted a tree a long time ago.
The goal of an enterprise workflow is to make software development as easy and consistent as possible, and less dependent on the quality of talent, their mood, and their skill set. It should also be possible to replace any given developer with another of the same caliber and continue development as normal. Last but not least, it should give other organizations confidence that the process can deliver on time and on budget, and that everything will work correctly the first time.
Usually this means the workflow must not only be logically sound in the eyes of a small group of architects and managers, but also follow established industry practices: ways of writing code that have worked for 20+ years, rather than chasing the latest online trends.
I have split this article into five distinct parts, Documentation, Communication, System Integration, Training, and Coding, listed in no particular order. All of them are important, but even if only a few of them are followed, they still make a decent foundation for high-quality software development.
Documentation
Documentation is a love letter that you write to your future self.
When discussing a topic, it is very common for a group of people to understand the same subject in a variety of ways. On the surface they agree, but when they start working on something, it turns out they did not completely understand what was discussed. Assumptions and estimates are part of human nature and require practice to be done correctly. Without practice, the error margin is too high, because numbers are complicated and your brain is not built to process them.
To eliminate the error factor, it helps to document the shared understanding in the form of text, diagrams, screenshots, chat logs, conversation transcripts, and other written means of communication. Yes, writing things down is much slower than talking them through. However, it gives a knowledge retention rate close to 100%, and that knowledge does not fade with time. The time factor is important, because if you do not remember your wrong decisions and the mistakes that came out of them, you will repeat them. Practically, it determines the number of iterations required to produce a viable product. It does not help to complete a week's task in a day and then spend two weeks reworking it (refactoring, fixing bugs, and so on).
In enterprise software development, writing code is very expensive if you consider coding + builds + PR review + automation (unit and integration tests) + manual QA + ticket-related effort (moving to "in progress", resolving, verifying, logging time). In comparison, writing documentation is cheap. It might take 1-2 hours to document a proposed design at a high level for a feature worth two weeks of coding. The benefit of spending those few hours on documentation prior to coding is threefold. First, we make sure our shared understanding is captured in writing, which is useful for audit and record keeping (root cause analysis). Second, it shows whether a developer actually understood everything correctly. This can save a lot of time: 50% or more of the scope could otherwise be spent trying different things because the original idea did not work. And finally, when business requirements are mapped to a developer-friendly language (technical English, specific and concise), it is much easier to edit them when things need to change slightly at the end, which in turn makes the code easier to change.
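To put rough, illustrative numbers on it (the overhead figures are assumptions, not measurements): a two-week feature is about 80 hours of coding, and builds, PR review, automated tests, manual QA and ticket handling can easily add several dozen hours on top of that. Against a total well above 100 hours, the 1-2 hours spent on an up-front design note amounts to roughly 1-2% of the cost.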
Communication
The difference between the right word and the almost right word is the difference between lightning and the lightning bug.
Everything should be written down as much as possible. Sure, verbal is great for passing an idea to someone else within 5-10 minutes. But did they get it completely? If they are dragged into another task for a day or two, will they remember it when coming back? If they get sick and someone else needs to take over, will you need to spend another 5-10 minutes of your time? All of these are valid questions when you have a team of 100 developers or more.
Written communication works even better when conveying the same information to a group of people. You could hold an online team meeting, but the issue is that it mostly works for transferring emotions, not information. Objective information transfer becomes less efficient the more people attend. Some do not want to listen, some do not care, and others cannot react fast enough to make reasonable comments. I would say there should never be more than five people attending a technical discussion.
System Integration
I was gratified to be able to answer promptly, and I did. I said I didn’t know.
We no longer live in a world where email is the preferred method of information exchange. In fact, it could be that in a year or two no one will be using email at all, at least in the corporate world. But regardless of how we choose to communicate, the enterprise standard implies tight integration between all available products. Being able to link PRs, commits, Jira tickets, Slack messages and emails is a business in itself for some companies; integration is literally all they do.
But not only does information need to be accessible from everywhere (which includes every device type), the access method also needs to be efficient: in the number of user actions, in ease of use, and in minimizing the number of artifacts per deliverable, while still providing the level of granularity the business needs to operate. For example, if we have a 5-line change spread across 50 commits, we should rebase and squash it into a single commit; it is much easier to undo, forward-merge, track from Jira, and so on. For non-code activities, this can include analytics for root cause analysis, code quality, and time logs, with or without AI, as simple dashboards or even raw Excel data for further analysis.
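As a minimal sketch of that squash, assuming the 50 commits live on a local feature branch that nobody else has built on yet (the branch name here is made up):

    git checkout feature/tiny-fix
    git rebase -i main        # keep the first commit as "pick", mark the rest as "squash" or "fixup"
    git push --force-with-lease origin feature/tiny-fix

The --force-with-lease flag rewrites the remote branch only if no one has pushed to it in the meantime, which is the safer way to publish a rewritten history.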
With how much information there is, it is no longer possible to remember everything. Knowledge should not be siloed, which is what happens when you always have to ask someone for a "secret" list of files, bookmarks, Jira filters, and so on. It must therefore be a priority to implement a level of integration that assumes no prior knowledge of the knowledge system itself. In other words, you do not need to know how exactly the integration works to be able to use it. The system should be intuitive enough that anyone can find anything on the first try, or within a few iterations.
When it comes to people, knowing or not knowing something (usually the subject matter) is often a point of conflict. Some have been on the job for a few years and know it, while others who just joined do not. Often the former group looks down on the latter with contempt, and this is bad for everyone. Productivity is correlated with information sharing, but that does not mean everyone should be talking with everyone all the time. The same way PRs replaced the direct commits to master or a feature branch that were common 10-20 years ago, knowledge can be pushed into the system so that others can pull it. This is much more efficient than direct 1-on-1 sharing.
Training
Never stop learning, because life never stops teaching.
Not everyone knows everything. Knowledge gets outdated with time; new skills emerge, and old skills become obsolete. The enterprise workflow assumes that training may be required at any point during the development process. Training is more than reading online articles or watching videos. Remember, the enterprise guarantees success by following the established process. In simple terms, this could mean validating learning efficiency by having trainees pass an exam at the end, often automated through a reputable certification vendor (Pluralsight, Microsoft). Such training should be 100% free to the employee, done on paid time on the job, and encouraged (with bonuses or other formal recognition). Investing in people has a long-term benefit for the company, even if some of them choose to leave.
Coding
Code as if the guy who maintains your code is a violent psychopath who knows where you live.
Enterprise code should be simple and of high quality at the same time. Despite popular belief, you do not need to use the latest frameworks, programming patterns, libraries, and third-party controls to get high-quality code. There are a few basic metrics of high-quality code: it runs fast, it can be fixed fast, and it can be modified to fit new requirements fast. "Fast" is relative to the industry average. For example, if bug X is typically fixed in Y days, can you fix a bug similar to X in Y minutes? That is called fast.
What does not matter is how quickly the code can be written initially. When an hour of downtime can cost $10,000 or more, it makes sense to spend a few extra days to get a guaranteed high-quality outcome. Some corp-to-corp contracts also impose penalties for bugs, and those penalties can run into millions of dollars. In practice this means that if one developer causes one bug, it can be more expensive than their lifetime salary at that company.
Statistically, in software development, making code fast inversely correlates with having it duplicated. If your code is largely a product of copy/paste, it will be slow to write, run, change, and maintain. There is a risk that over time a heavily copy/pasted solution degrades beyond saving, because constantly recurring bugs and performance issues take more time to fix than new bugs of the same nature take to appear. In the long term, it is more dangerous to copy/paste business logic than to copy/paste framework-level code, aka boilerplate, because more than 90% of the code base is business logic, which is much more likely to change than the framework.
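To make the distinction concrete, here is a minimal, hypothetical Java sketch (the domain, class and method names are made up) of a business rule kept in one place instead of being copy/pasted into every caller:

    // Hypothetical example: a discount rule that used to be copy/pasted into
    // checkout, invoicing and reporting code, now defined exactly once.
    public final class DiscountPolicy {

        private DiscountPolicy() {}

        // The business rule: orders of 100.00 or more get 10% off.
        public static double applyDiscount(double orderTotal) {
            return orderTotal >= 100.00 ? orderTotal * 0.90 : orderTotal;
        }
    }

    // Callers share the rule instead of re-implementing it:
    //   double payable  = DiscountPolicy.applyDiscount(cart.total());
    //   double invoiced = DiscountPolicy.applyDiscount(order.total());

The point is not the trivial rule itself but the cost of change: a rule copied into ten call sites has to be found, edited, and re-tested ten times, while a shared one is edited once.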
For a real-world example, consider Google. They became the number one search engine because they could search very fast, and their search results were more relevant than those on rival platforms. Google also pays its developers very well and only loads them to 10-20% of their daily capacity. Today there is no faster search engine than Google, and so Google has kept its lead. So why doesn't everyone come up with their own idea for a search engine, implement it, beat Google on performance, and make billions? Because performance is very complicated. Not only do you need raw brain power, you also need the right tools and a theoretical basis in computer science, math, and optimization theory. And with all of this, you still need to spend a few years' worth of research, which might end up with nothing (as most research does).
Conclusion
First of all, thank you for getting this far. Even if you do not agree with some points, I think this article has served its purpose, which is to plant the seed. Perhaps, when you hear the same points later in your career, you will recognize them and listen instead of taking a defensive stance. The idea is to empower everyone to make an impact, instead of concentrating power in the hands of a select few, usually the most vocal developers, leads, and architects. In today's diverse world, great ideas can come from developers at any level of seniority. If someone cannot cope with a task, blame the process, not the developer. A proper development workflow should help everyone perform at the best of their ability. How to achieve that is another story.