Java: Standing Operating Procedures (SOP)
SOP in Java development
SOP - Standard Operating Procedure(s) or Standing Operating Procedure(s) - a set of step-by-step instructions implemented by organizations (military - mainly NATO armies, law enforcement agencies, emergency services, corporate businesses).
It guides workers through routine activities and through responses to specific events.
Standard means there is only one generally accepted pattern. Standing means patterns accepted in practice by a unit (team, branch, department), which may vary within an organization (like rules of conduct being team-specific). Standing is the better word in the context of development - each project has its own policies.
An SOP in development is roughly the equivalent of a runbook for sysadmins and DevOps engineers. It contains the steps required to complete stages of the development process or to fulfil the Definition of Done in Agile (in the form of a rulebook or checklist).
Standing Operating Procedure: Java development
Before beginning work on a feature branch:
- check out / fetch the feature branch from master or from the most up-to-date branch
- mvn clean install
- check if there are unit tests, run them, check the results
- check if there are IT (integration) tests, run them, check the results
- do the same for any other type of test
- run the application locally (proper local configuration in .yml / .properties, correct Spring profile - see the sketch after this list)
- find out how to do some rough manual testing of the application locally and on an environment (at least locally)
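Most of the steps above assume a Spring Boot service. The minimal sketch below is only an illustration - the class name and the "local" profile (defined in application-local.yml / application-local.properties) are assumptions - but a context-load smoke test like this quickly confirms the application starts with the local configuration:

```java
// Minimal sketch, assuming a Spring Boot project with a "local" profile; names are hypothetical.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

@SpringBootTest
@ActiveProfiles("local")
class ApplicationSmokeTest {

    // Fails fast if the Spring context cannot start (missing bean, bad local config),
    // which is exactly what you want to know before touching the feature branch.
    @Test
    void contextLoads() {
    }
}
```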
Rough manual testing means:
- application starts and is available (see the health-check sketch after this list)
- its basic functionality is working
- features implemented up to this date are still working
- new implementations are working
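For the "starts and is available" check, calling the health endpoint is usually enough. The sketch below assumes Spring Boot Actuator is on the classpath and the application runs locally on port 8080; adjust the URL to your setup:

```java
// Rough manual-testing helper (Java 11+), assuming Spring Boot Actuator and a local app on port 8080.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalHealthCheck {

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/actuator/health"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expect HTTP 200 and a body like {"status":"UP"} when the application is available.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```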
Before every crucial commit / push:
- mvn clean install -Dmaven.test.skip (fast build without tests), then mvn clean install (full build with tests)
- launch the UT suite
- launch the IT suite (see the test-tagging sketch after this list)
- run the application
- remove unnecessary changes and comments, clean unused imports, and check the files you are pushing one by one
- be sure not to commit / push secrets (unless otherwise indicated)
- if the push contains environment-specific / runtime-sensitive changes (impacting the DB or external connections, app config, IT-related implementation), build the branch and deploy it to a dedicated environment (like dev or test) to check that it starts successfully and does not fail
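Launching the UT and IT suites separately assumes the two kinds of tests are distinguishable. One common convention - an assumption here, not a project standard - is to tag integration tests and/or use the *IT class-name suffix that Maven Failsafe picks up, as in this hypothetical example:

```java
// One possible convention for keeping the IT suite separately launchable:
// a JUnit 5 tag plus the *IT class-name suffix recognized by Maven Failsafe.
// OrderRepositoryIT and its content are hypothetical.
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertNotNull;

@Tag("integration")
class OrderRepositoryIT {

    @Test
    void savesAndReadsOrder() {
        // In a real IT this would talk to a database or another live component;
        // kept trivial here so the sketch stays self-contained.
        String savedOrderId = "order-42";
        assertNotNull(savedOrderId);
    }
}
```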
In addition, before the final push and the MERGE - BUILD - DEPLOY process:
- restore original versions of all properties and remove developer changes, e.g.
- liquibase.properties -> back to local db at localhost, with local user and pass
- .yml or .properties -> remove dev config, e.g. localhost proxies (mocks, authorization…)
- mock module .yml -> restore port 8080
- pom.xml -> remove local library versions and replace them with official repository versions; reset to the versions working on environments
- pom.xml -> no SNAPSHOT versions; specifically check for incompatibilities between the JDK, Spring Boot, Groovy and the like
- remove hardcoded secrets and replace them with environment variables (see the sketch after this list)
- verify changes inside IntelliJ IDEA: Version Control -> Local Changes -> Ctrl + D on a file to show the diff
- run local Sonar
- merge the latest master into the working branch (feature branch)
- build and run the app
- run UT and IT
- create a pull request
- build (GitHub, Bitbucket), create an artifact (if not created automatically) and deploy the feature branch through CI / CD (Bamboo, Jenkins)
- check kubectl (running instances, no restarts) and Kibana / Splunk logs
- when the pull request is accepted, merge it and remove the feature branch if it is no longer needed
- build and deploy from the master branch, check kubectl and Kibana / Splunk logs
- rough manual testing (at least test the new feature implementation)
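For the "no hardcoded secrets" rule above, one typical approach is to resolve credentials from environment variables so no literal value reaches the repository. The sketch below is an assumption-level example: the DB_PASSWORD variable, the class and the property names are hypothetical.

```java
// Minimal sketch: the secret is resolved from an environment variable (DB_PASSWORD is
// a hypothetical name) instead of being hardcoded or committed in a properties file.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
class DatabaseCredentials {

    // Spring resolves ${DB_PASSWORD} from the environment at startup and fails fast
    // if the variable is missing, instead of silently falling back to a committed default.
    @Value("${DB_PASSWORD}")
    private String dbPassword;

    String dbPassword() {
        return dbPassword;
    }
}
```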
Definition of Done (story closed, task completed, released)
Test-Driven Development (TDD) criteria are fulfilled - follow the required strategy or, if it does not exist, create it before development starts:
- Unit Test (UT) - verify code contained in a single class (responsibility of the author or another developer; see the sketch after this list)
- Integration Test (IT) - verify interaction between classes (developer or QA engineer)
- Behavior-Driven Development (BDD) - like above but more business oriented (often with written descriptions, dev or QA)
- System testing (mainly manual) - verify a running application in general, key features, crucial points, new features (developer, QA, BA or PO)
- System Integration Testing (SIT) - verify a running application as part of a system, including third-party components (BA / PO in cooperation with devs; better to have dedicated testers - a QA team)
- End-To-End Testing (E2E) - verify the system as a whole in technical context (QA team, mainly using dedicated tools)
- User Acceptance Testing (UAT) - verify the system as a whole in a business context or in a user experience / user interface (UX / UI) context - performed by a business user, a QA team simulating a business user, or selected business staff responsible for testing
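As an illustration of the lowest level (UT), a plain JUnit 5 test verifying a single class in isolation might look like the sketch below; PriceCalculator and its behaviour are hypothetical and exist only for the example:

```java
// Hypothetical unit test: one class, no Spring context, no external systems.
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    @Test
    @DisplayName("applies a 10% discount to orders above 100")
    void appliesDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.priceWithDiscount(200.0), 0.0001);
    }

    // Class under test, inlined only to keep the sketch self-contained.
    static class PriceCalculator {
        double priceWithDiscount(double amount) {
            return amount > 100.0 ? amount * 0.9 : amount;
        }
    }
}
```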
Acceptance criteria for application / library development:
- Merged implementation has been tested accordingly at a given level of TDD, on a given environment.
- Merged implementation behaviour is stable and predictable on environments, from DEV, TEST and INT up to higher-order environments, monitored for some period of time (like a couple of days).
- In the case of microservices, changes made to the app config (.yml / .properties) are reflected in the configuration in services.config (on all environments accordingly).
- All changes made to REST API endpoints are reflected in the services contract.
- The new feature, and the application in general, works well in the integrated system, including other affected services, members of the same pipeline, and so on, on various environments.
- The application artifact / library version has been successfully released.
- In some cases, technical documentation must be updated / released.
NOTE: These are not the Ten Commandments! Procedures may vary depending on the project!