
How to quickly set up a Java project


This article introduces how to quickly set up a Java project, walking through the situations many people run into in real work. I hope you read it carefully and take something away from it!

This article takes a simple e-commerce order system as an example; for the source code, please visit:

git clone https://github.com/e-commerce-sample/order-backend
git checkout a443dace

The technology stack mainly includes Spring Boot, Gradle, MySQL, JUnit 5, Rest Assured, Docker, and so on.

Step 1: start by writing the README

A good README gives readers a panoramic overview of the project, helps newcomers get started quickly, and reduces communication costs. A README should be concise and well organized; it is recommended to include the following:

Project Information: briefly describe the business functions realized by the project in one or two sentences

Technology selection: list the technology stack of the project, including language, framework, middleware, etc.

Local build: lists the tool commands used in local development

Domain model: core domain concepts; for the sample e-commerce system, these include Order, Product, etc.

Testing strategy: how automated tests are classified, which tests must be written, and which are optional.

Technical architecture: technical architecture diagram

Deployment architecture: deployment architecture diagram

External dependencies: the external systems the project integrates with; for example, the order system depends on the membership system

Environment information: how to access each environment, database connections, etc.

Coding practices: the team's unified coding practices, such as exception handling principles, pagination encapsulation, etc.

FAQ: answers to frequently asked questions during development.

Note that the information in the README may change as the project evolves (for example, when a new technology is introduced or a new domain model is added), so it needs to be updated continuously. Yes, one of the pain points of software documentation is that it falls out of step with the actual project, but as far as the README is concerned, developers should not skimp on a few keystrokes of maintenance time.

Besides keeping the README up to date, some important architectural decisions can be recorded in the codebase in the form of sample code, so that new developers can understand the project's common practices and architectural choices by reading it directly. For this, refer to the ThoughtWorks Technology Radar.

One-click local build

To avoid embarrassments like "it took 3 colleagues to get a successful local build" mentioned earlier, to spare "lazy" programmers manual steps, and to give all developers a consistent experience, we want each task to be done with a single command. I summarize the following commands for different scenarios:

Generate IDE project: idea.sh, which generates the IntelliJ project files and opens IntelliJ automatically

Local run: run.sh, which starts the project locally, automatically starting the local database and listening on debug port 5005

Local build: local-build.sh; code may be committed only after the local build succeeds

The above three commands basically cover the needs of daily development. With them, a newcomer's development workflow is roughly as follows:

Pull the latest code

Run idea.sh to open IntelliJ automatically

Write code, including business code and automated tests

Run run.sh for local debugging or necessary manual testing (this step is not always required)

Run local-build.sh to complete the local build

Pull the latest code again, make sure local-build.sh succeeds, and then push the code.

In fact, these command scripts are very simple; for example, run.sh contains just:

#!/usr/bin/env bash
./gradlew clean bootRun

However, these explicit commands reduce newcomers' fear, because they only need to know how to run three commands to start developing. One small detail: the local build command could have been named simply build.sh, but when using Tab completion on the command line it would compete with the build directory, which is inconvenient, so it is named local-build.sh instead. The detail is small, but it reflects the goal of giving developers a minimalist development experience. I call these seemingly trivial things "humanistic care" for programmers.

Directory structure

The directory structure advocated by Maven has become the de facto industry standard, and Gradle adopts the Maven layout by default, which is sufficient for most projects. Besides Java code, a project contains other types of files, such as Gradle plugin configuration, tool scripts, and deployment configuration. In any case, the principle for the project directory structure is to keep it simple and organized: do not add extra folders casually, and refactor it in time when needed.

In the example project, there are only two folders at the top level: the src folder for Java source code and project configuration, and the gradle folder for all Gradle configuration. In addition, for developers' convenience, the three common scripts mentioned above are placed directly in the root directory:

└── order-backend
    ├── gradle          // all Gradle configuration
    ├── src             // Java source code
    ├── idea.sh         // generate IntelliJ project
    ├── local-build.sh  // local build before committing
    └── run.sh          // local run

Inside gradle, we deliberately keep each Gradle plugin's script together with that plugin's configuration, such as Checkstyle:

├── gradle
│   ├── checkstyle
│   │   ├── checkstyle.gradle
│   │   └── checkstyle.xml

By default, the Checkstyle plugin looks for the checkstyle.xml configuration file in the config directory at the project root, but that both adds an extra folder and scatters the plugin's files across different places, violating the principle of cohesion in a broad sense.

Package by business

In the early years, Java packages were usually organized by technical layer, with controller, service, and infrastructure packages at the same level as the domain package. This approach is no longer favored by the industry; packages should first be organized by business. In the order example project, there are two important domain objects, Order and Product (called aggregate roots in DDD), around which all the business revolves, so we create the order package and the product package, and then create the related subpackages under each. The order package looks like this:

├── order
│   ├── OrderApplicationService.java
│   ├── OrderController.java
│   ├── OrderNotFoundException.java
│   ├── OrderRepository.java
│   ├── OrderService.java
│   └── model
│       ├── Order.java
│       ├── OrderFactory.java
│       ├── OrderId.java
│       ├── OrderItem.java
│       └── OrderStatus.java

As you can see, we put classes such as OrderController and OrderRepository directly under the order package; there is no need to split them into separate subpackages. For the domain model Order, since it consists of multiple objects, they are grouped into a model package based on the principle of cohesion. This is not a must, however: if the business is simple enough, we can even put all classes directly under the business package, as the product package does:

└── product
    ├── Product.java
    ├── ProductApplicationService.java
    ├── ProductController.java
    ├── ProductId.java
    └── ProductRepository.java

In coding practice, we always implement code around a business use case. With layer-based packages, we have to switch back and forth among scattered packages, which raises the cost of code navigation; the changes in each commit are also scattered, so the commit history cannot directly show which business function a commit concerns. With business-based packages, we only modify code under a single, unified package, which lowers navigation cost. Another benefit is that if one day a business needs to be migrated to another project (for example, when extracting an independent microservice), the whole business package can be moved directly.

Of course, packaging by business does not mean that all code must live inside business packages. The logic is: business packages take priority, and code that does not belong to any specific business can then be packaged separately, such as util classes and common configuration. For example, we can still create a common package with subpackages for shared Spring configuration, the exception handling framework, logging, and so on:

└── common
    ├── configuration
    ├── exception
    ├── logging
    └── utils

Automated test classification

Under the current model of microservices and front-end/back-end separation, a back-end project only provides pure business APIs and contains no UI logic, so it no longer includes heavyweight end-to-end tests such as WebDriver tests. At the same time, as an independently running unit that provides business functions, the back-end project should be tested at the API level.

In addition, some framework code in a program, whether technical plumbing such as Controllers or code following a particular architectural style (such as ApplicationService in DDD practice), contains no business logic on the one hand and is a very thin layer of abstraction on the other (that is, relatively simple to implement). It does not need to be covered by unit tests, so my view is that separate unit tests are unnecessary for it. There is also important component code, such as Repository classes that access the database, or distributed locks: unit tests cannot really exercise them, while calling them API tests would muddle the classification. For these we can create a dedicated test type called component tests.

Based on the above, we can classify automated tests as follows (a sample unit test sketch follows the list):

Unit tests: cover the core domain models, including domain objects (such as the Order class), Factory classes, domain service classes, etc.

Component tests: cover classes that are not suited to unit tests but must still be tested, such as Repository classes. In some projects this type of test is also called integration tests.

API tests: simulate a client to test each API endpoint; the application must be started.
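As an illustration, a unit test for the Order domain model might look like the sketch below. This is a minimal sketch only: the factory method and the cancel() behavior are assumptions for illustration, not the sample project's exact API.

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal JUnit 5 sketch for the Order aggregate; the factory method and
// cancel() semantics are assumed here rather than taken from the repository.
class OrderTest {

    @Test
    void should_cancel_created_order() {
        Order order = OrderFactory.create(new OrderId("123456789"));

        order.cancel();

        assertEquals(OrderStatus.CANCELLED, order.getStatus());
    }
}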

By default, Gradle provides only the src/test/java directory for tests. For the three types above, we need to manage them separately (which is also a kind of separation of responsibilities). To do this, the test code can be classified via Gradle's sourceSets:

sourceSets {
    componentTest {
        compileClasspath += sourceSets.main.output + sourceSets.test.output
        runtimeClasspath += sourceSets.main.output + sourceSets.test.output
    }
    apiTest {
        compileClasspath += sourceSets.main.output + sourceSets.test.output
        runtimeClasspath += sourceSets.main.output + sourceSets.test.output
    }
}

So far, the three types of tests can be written in the following directories:

Unit tests: src/test/java

Component tests: src/componentTest/java

API tests: src/apiTest/java

Note that the API tests here emphasize testing business functionality. Some projects may also have contract tests, security tests, and so on. Although technically all of these exercise the API, they are separate concerns and should be treated separately.
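For the business-facing API tests, a Rest Assured sketch might look like this. It is a sketch under stated assumptions: the /orders endpoint, the request payload, and an ApiTest base class that boots the application are all hypothetical.

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

// Sketch of an API test with Rest Assured; the endpoint, payload, and
// ApiTest base class (which starts the Spring Boot app) are assumed.
class OrderApiTest extends ApiTest {

    @Test
    void should_create_order() {
        given()
                .contentType("application/json")
                .body("{\"productId\": \"123456789\", \"count\": 1}")
        .when()
                .post("/orders")
        .then()
                .statusCode(201)
                .body("id", notNullValue());
    }
}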

It is worth mentioning that since component tests and API tests need to start the application, and hence the local database, we use Gradle's docker-compose plugin (or the jib plugin), which automatically runs the required Docker containers (such as MySQL) before the tests:

apply plugin: 'docker-compose'

dockerCompose {
    useComposeFiles = ['docker/mysql/docker-compose.yml']
}

bootRun.dependsOn composeUp
componentTest.dependsOn composeUp
apiTest.dependsOn composeUp

For more details of the test classification configuration, such as the JaCoCo test coverage setup, please refer to the sample project code. Readers unfamiliar with Gradle can refer to the author's Gradle learning series.

Log processing

In addition to the basic configuration, there are two more points to consider in log processing:

Add a request identifier to logs to facilitate tracing. A single request may output multiple log entries as it is processed, and tracing is much easier if every entry shares a unified request ID. Logback's built-in MDC (Mapped Diagnostic Context) feature can be used for this: create a RequestIdMdcFilter:

protected void doFilterInternal(HttpServletRequest request,
                                HttpServletResponse response,
                                FilterChain filterChain) throws ServletException, IOException {
    // request id in header may come from a gateway, e.g. Nginx
    String headerRequestId = request.getHeader(HEADER_X_REQUEST_ID);
    MDC.put(REQUEST_ID, isNullOrEmpty(headerRequestId) ? newUuid() : headerRequestId);
    try {
        filterChain.doFilter(request, response);
    } finally {
        clearMdc();
    }
}
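For completeness, the whole filter is little more than a standard Spring OncePerRequestFilter. Below is a self-contained sketch; the header-name constant and the helper-method bodies are assumptions consistent with the snippet above (isNullOrEmpty comes from Guava):

import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.web.filter.OncePerRequestFilter;

import static com.google.common.base.Strings.isNullOrEmpty;

// Sketch: puts a request ID into the Logback MDC for every request.
public class RequestIdMdcFilter extends OncePerRequestFilter {

    private static final String HEADER_X_REQUEST_ID = "X-Request-Id"; // assumed header name
    private static final String REQUEST_ID = "requestId";

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        String headerRequestId = request.getHeader(HEADER_X_REQUEST_ID);
        MDC.put(REQUEST_ID, isNullOrEmpty(headerRequestId) ? newUuid() : headerRequestId);
        try {
            filterChain.doFilter(request, response);
        } finally {
            clearMdc();
        }
    }

    private static String newUuid() {
        return UUID.randomUUID().toString().replace("-", "");
    }

    private static void clearMdc() {
        MDC.remove(REQUEST_ID);
    }
}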

Centralize log management: in a multi-node deployment, each node's logs are scattered, so tools such as the ELK stack can be introduced to ship all logs to Elasticsearch. The sample project uses a RedisAppender to send logs to Logstash:

<!-- Logback Redis appender configuration (reconstructed: the XML element
     names were lost in extraction and follow the common
     logback-redis-appender layout; the values come from the source) -->
<appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
    <source>ecommerce-order-backend-${ACTIVE_PROFILE}</source>
    <host>elk.yourdomain.com</host>
    <port>6379</port>
    <password>whatever</password>
    <key>ecommerce-order-log</key>
    <mdc>true</mdc>
    <type>redis</type>
</appender>

Of course, there are many unified logging solutions, such as Splunk and Graylog.

Exception handling

When designing a framework for exception handling, you need to consider the following:

Provide uniformly formatted exception returns to the client

The exception information should contain enough context information, preferably structured data to facilitate client parsing

Different types of exceptions should carry unique identifiers so that clients can identify them precisely

Exception handling usually takes one of two forms. One is hierarchical: each concrete exception has its own exception class, all ultimately inheriting from one parent exception. The other is single: the whole program has only one exception class, with a field to distinguish different exception scenarios. The advantage of the hierarchical form is expressiveness, but a poorly designed hierarchy can flood the program with exception classes; the single form is simple, but its drawback is that it is not expressive enough.

The example project in this article uses hierarchical exceptions, all of which inherit from a single AppException:

public abstract class AppException extends RuntimeException {
    private final ErrorCode code;
    private final Map<String, Object> data = newHashMap();
}

Here, the ErrorCode enum contains the exception's unique identifier, the HTTP status code, and an error message; the data field carries the context information of each exception.
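A sketch of such an enum follows; the constants and accessor names are assumptions based on the description above:

// Sketch of the ErrorCode enum: the enum constant itself serves as the
// unique identifier; the status and message fields are assumed.
public enum ErrorCode {
    ORDER_NOT_FOUND(404, "order not found"),
    PRODUCT_NOT_FOUND(404, "product not found");

    private final int status;
    private final String message;

    ErrorCode(int status, String message) {
        this.status = status;
        this.message = message;
    }

    public int getStatus() {
        return status;
    }

    public String getMessage() {
        return message;
    }
}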

In the example system, an exception is thrown when no order is found:

public class OrderNotFoundException extends AppException {
    public OrderNotFoundException(OrderId orderId) {
        super(ErrorCode.ORDER_NOT_FOUND, ImmutableMap.of("orderId", orderId.toString()));
    }
}

When returning an exception to the client, an ErrorDetail class is used to unify the exception format:

public final class ErrorDetail {
    private final ErrorCode code;
    private final int status;
    private final String message;
    private final String path;
    private final Instant timestamp;
    private final Map<String, Object> data = newHashMap();
}

The final data returned to the client is:

{
  requestId: "d008ef46bb4f4cf19c9081ad50df33bd",
  error: {
    code: "ORDER_NOT_FOUND",
    status: 404,
    message: "order not found",
    path: "/order",
    timestamp: 1555031270087,
    data: {
      orderId: "123456789"
    }
  }
}

As you can see, ORDER_NOT_FOUND and the structure of data correspond one to one: when the client sees ORDER_NOT_FOUND, it can be sure that data contains an orderId field, and can therefore parse the structure precisely.
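Wiring AppException to the ErrorDetail response can be done with a global handler along these lines. This is a hedged sketch: the getters on AppException and the ErrorDetail constructor signature are assumed from the fields shown above.

import javax.servlet.http.HttpServletRequest;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Sketch: maps every AppException to the unified ErrorDetail format.
@ControllerAdvice
public class AppExceptionHandler {

    @ExceptionHandler(AppException.class)
    public ResponseEntity<ErrorDetail> handleAppException(AppException e, HttpServletRequest request) {
        // ErrorDetail constructor signature assumed from its fields shown above
        ErrorDetail detail = new ErrorDetail(e.getCode(), request.getRequestURI(), e.getData());
        return ResponseEntity.status(e.getCode().getStatus()).body(detail);
    }
}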

Background tasks and distributed locks

Besides handling client requests immediately, a system usually has scheduled tasks, such as sending emails to users or running data reports periodically; sometimes we also design requests to be processed asynchronously. For this we need infrastructure for background tasks. Spring natively provides task execution (TaskExecutor) and task scheduling (TaskScheduler) mechanisms, but in a distributed scenario we also need distributed locks to resolve concurrency conflicts, so we introduce a lightweight distributed lock framework, ShedLock.

Spring's task mechanisms are enabled and configured as follows:

@Configuration
@EnableAsync
@EnableScheduling
public class SchedulingConfiguration implements SchedulingConfigurer {
    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(newScheduledThreadPool(10));
    }

    @Bean(destroyMethod = "shutdown")
    @Primary
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(5);
        executor.setQueueCapacity(10);
        executor.setTaskDecorator(new LogbackMdcTaskDecorator());
        executor.initialize();
        return executor;
    }
}

Then configure ShedLock:

@Configuration
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
public class DistributedLockConfiguration {
    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(dataSource);
    }

    @Bean
    public DistributedLockExecutor distributedLockExecutor(LockProvider lockProvider) {
        return new DistributedLockExecutor(lockProvider);
    }
}

Implement background task processing:

@Scheduled(cron = "0 0 1 * * ?") // the cron expression was garbled in the source; a daily 1:00 schedule is assumed here
@SchedulerLock(name = "scheduledTask", lockAtMostFor = THIRTY_MIN, lockAtLeastFor = ONE_MIN)
public void run() {
    logger.info("Run scheduled task.");
}

To support calling distributed locks directly from code, create a DistributedLockExecutor based on ShedLock's LockProvider:

public class DistributedLockExecutor {
    private final LockProvider lockProvider;

    public DistributedLockExecutor(LockProvider lockProvider) {
        this.lockProvider = lockProvider;
    }

    public <T> T executeWithLock(Supplier<T> supplier, LockConfiguration configuration) {
        Optional<SimpleLock> lock = lockProvider.lock(configuration);
        if (!lock.isPresent()) {
            throw new LockAlreadyOccupiedException(configuration.getName());
        }
        try {
            return supplier.get();
        } finally {
            lock.get().unlock();
        }
    }
}

When using it, call it directly in the code:

public String doBusiness() {
    return distributedLockExecutor.executeWithLock(() -> "Hello World.",
            new LockConfiguration("key", Instant.now().plusSeconds(60)));
}

The example project uses JDBC-based distributed locks. In fact, any mechanism that provides atomic operations can be used for distributed locking; ShedLock also provides lock implementations based on Redis, ZooKeeper, Hazelcast, and so on.

Unified code style

In addition to unifying code format with Checkstyle, some common coding practices also need to be unified across the development team, including but not limited to the following (illustrated in the sketch after the list):

Client request data classes all use the same suffix, such as Command

Data returned to the client uses the same suffix, such as Representation

A unified processing flow for requests, such as the traditional three-tier architecture or the DDD tactical patterns

Consistent exception returns (see the "Exception handling" section)

A unified pagination structure class

Clear test categories and unified test base classes (see the "Automated test classification" section)
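By way of illustration, the naming and pagination conventions might materialize as follows. These are hypothetical classes, each in its own file; the sample project's actual names may differ.

// Client request data: "Command" suffix (hypothetical example)
public class CreateOrderCommand {
    private String productId;
    private int count;
    // getters/setters omitted
}

// Data returned to the client: "Representation" suffix (hypothetical example)
public class OrderRepresentation {
    private String id;
    private String status;
    // getters/setters omitted
}

// Unified pagination structure (hypothetical example)
public class PagedResource<T> {
    private long totalCount;
    private int page;
    private int pageSize;
    private java.util.List<T> data;
    // getters/setters omitted
}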

Static code check

Static code checking mainly involves the following Gradle plugins; for their specific configuration, please refer to the sample code:

Checkstyle: checks code format and standardizes coding style

SpotBugs: the successor to FindBugs

OWASP Dependency-Check: security checking of the Java libraries the project depends on

Sonar: tracking continuous code-quality improvement

Health check

Health checks are mainly used in the following scenarios:

We want to initially check whether the program is working properly.

Some load balancing software will judge the reachability of nodes through a health check URL.

To support these, implement a simple API endpoint that is not access-controlled and is publicly reachable. If this endpoint returns HTTP status 200, the program can be initially assumed to be running normally. The endpoint can also include extra information, such as the commit version, build time, and deployment time.

Start the sample project for this article:

./run.sh

Then visit the health-check API at http://localhost:8080/about; the result is as follows:

{
  requestId: "698c8d29add54e24a3d435e2c749ea00",
  buildNumber: "unknown",
  buildTime: "unknown",
  deployTime: "2019-04-11T13:05:46.901+08:00[Asia/Shanghai]",
  gitRevision: "unknown",
  gitBranch: "unknown",
  environment: "[local]"
}

The endpoint above is implemented with a simple Controller in the sample project; in fact, Spring Boot's Actuator provides similar functionality.
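A minimal sketch of such a controller is shown below; the property placeholders and the way deploy time is captured are assumptions, with the build values typically injected at build time:

import java.time.ZonedDateTime;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch of a publicly accessible health-check endpoint.
@RestController
public class AboutController {

    @Value("${build.number:unknown}") // assumed property name
    private String buildNumber;

    @Value("${build.time:unknown}") // assumed property name
    private String buildTime;

    private final String deployTime = ZonedDateTime.now().toString(); // captured at startup

    @GetMapping("/about")
    public Map<String, String> about() {
        Map<String, String> about = new LinkedHashMap<>();
        about.put("buildNumber", buildNumber);
        about.put("buildTime", buildTime);
        about.put("deployTime", deployTime);
        return about;
    }
}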

API documentation

The difficulty of software documentation lies not in writing it but in maintaining it. How many times have I followed a project document step by step and failed to get the correct result, only to ask a colleague and be told, "Oh, that's out of date." The Swagger used in the example project reduces the cost of maintaining API documentation to some extent, because Swagger automatically recognizes method parameters, return objects, URLs, and other information in the code, and then generates the API documentation in real time.

Configure Swagger as follows:

@Configuration
@EnableSwagger2
@Profile(value = {"local", "dev"})
public class SwaggerConfiguration {
    @Bean
    public Docket api() {
        return new Docket(SWAGGER_2)
                .select()
                .apis(basePackage("com.ecommerce.order"))
                .build();
    }
}

Start the project locally and visit http://localhost:8080/swagger-ui.html.

Database migration

In the traditional development model, the database is maintained by a dedicated operations team or DBA: to change the database, you apply to the DBA, tell them the migration content, and the DBA executes the change. With continuous delivery and DevOps, this work has gradually shifted earlier into the development process. That does not mean DBAs are unnecessary, but that the work can be shared between developers and operations. Moreover, in a microservices scenario the database is contained within a single service's boundary, so following the principle of cohesion (yes, this seems to be the third time this article has mentioned cohesion, which shows its importance in software development), changes to the database are best maintained in the codebase together with the project code.

The example project uses Flyway as the database migration tool. After adding the Flyway dependency, create migration scripts in the src/main/resources/db/migration directory:

resources/
└── db
    └── migration
        ├── V1__init.sql
        └── V2__create_product_table.sql

Migration script names must follow certain rules to guarantee execution order. Also, do not modify a migration file once it has taken effect: Flyway checks each file's checksum, and the migration will fail if the checksum does not match.

Multi-environment builds

During development, we deploy software to multiple environments and go through multiple rounds of verification before finally going live. The software may run in different states at different stages: local development may stub out third-party systems; continuous integration may build against an in-memory database; and so on. The sample project therefore recommends the following environments (a profile-based stub example follows the list):

local: for developers' local development

ci: for continuous integration

dev: for joint debugging with front-end developers

qa: for testers

uat: a production-like environment for functional acceptance (sometimes called a staging environment)

prod: the formal production environment
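Spring profiles make such per-environment differences straightforward. For example, the third-party stub mentioned above can be registered only for the local environment. This is a sketch: MemberClient is a hypothetical interface with a single boolean isActiveMember(String memberId) method.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Sketch: a stubbed third-party client used only during local development.
// MemberClient is a hypothetical functional interface.
@Configuration
public class MemberClientConfiguration {

    @Bean
    @Profile("local")
    public MemberClient stubMemberClient() {
        return memberId -> true; // the local stub treats every member as valid
    }
}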

CORS

In a system where the front end is separated from the back end, the front end is deployed independently, sometimes on a different domain from the back end, so cross-origin handling is required. The traditional approach is JSONP, but that is more of a "hack"; the more standard current practice is the CORS mechanism. In a Spring Boot project, CORS is enabled with the following configuration:

@Configuration
public class CorsConfiguration {
    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                registry.addMapping("/**");
            }
        };
    }
}

For projects that use Spring Security, it must be ensured that CORS is processed before Spring Security's filters, for which Spring Security provides the corresponding configuration:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // by default CORS uses a Bean by the name of corsConfigurationSource
        http.cors().and();
        // ... other security configuration
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        CorsConfiguration configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Arrays.asList("https://example.com"));
        configuration.setAllowedMethods(Arrays.asList("GET", "POST"));
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        return source;
    }
}

Commonly used third-party libraries

Finally, here are some common third-party libraries that developers can introduce according to the project's needs:

Guava: common utility library from Google

Apache Commons: common utility libraries from Apache

Mockito: mocking, mainly used in unit tests

DBUnit: manages database test data in tests

Rest Assured: for REST API testing

Jackson 2: serialization and deserialization of JSON data

JJWT: JWT token authentication

Lombok: automatically generates common Java code, such as equals() methods, getters and setters, etc.

Feign: declarative REST client

Tika: accurately detects file types

iText: generates PDF files, etc.

ZXing: generates QR codes

XStream: a lighter-weight XML processing library than JAXB

That is all for "how to quickly set up a Java project". Thank you for reading!
