In today’s wave of AI-driven digital transformation, private cloud IaaS software plays an increasingly important role. As the bridge between users and complex back-end systems, the front end occupies a vital position in private cloud software. An efficient, reliable, and easy-to-use front end not only enhances the user experience, but also significantly improves work efficiency, reduces operational errors, and maximizes the value of the private cloud as the infrastructure of the intelligent computing base. With the rapid development of technology, keeping the front-end architecture modern and high-performance is particularly important. This article details how we explored a progressive front-end architecture upgrade, enabled by ZStack AIOS, to meet growing business needs and technical challenges and provide users with a better private cloud software experience.
Our front-end project was first built in 2019, with a core based on the micro-frontend architecture of Umi 3 and Qiankun. Splitting a main application from sub-applications greatly alleviated the slow hot updates and long build times we suffered in the original 3.x monolith. However, with continuous business iteration, each sub-application has grown from a small sapling into a towering tree. The average sub-application now takes 50+ seconds to start, and a hot update can take as long as 20 seconds, which is undoubtedly a bad development experience.
Currently, the pipeline in the build system adopts a full packaging method. On average, a complete build takes about 40 minutes, which is very long. In contrast, the backend build time is only about 20 minutes. There is no doubt that there are significant performance issues in the build speed.
Umi hard-codes the versions of all its dependencies. This certainly ensures stability during development and fast delivery, but it is not conducive to continuous product polishing. Whenever our team wants to introduce new technologies or solutions from the industry, we have to consider Umi’s compatibility, and sometimes even implement plug-ins of our own. For example, Umi hard-codes React 16, so it is difficult for us to use the new features of React 18; its hard-coded PostCSS 7 does not support the latest Tailwind CSS. In addition, we adopted AntD as our base component library at the start of development to guarantee development speed and quality. However, to meet the business side’s customization requirements, a large number of AntD’s own styles were overridden, so the AntD version also had to be locked: any upgrade would have a large negative impact on styles across the entire platform.
We adopted the Monorepo development model, built based on yarn workspace. Since there is no dependency cache, the project needs to fully build the dependency library locally before starting, which takes about 2 minutes, wasting a lot of development time. Webpack 4 is also a relatively old version. Lint tools such as ESLint have certain performance issues, which also slow down the development experience to a certain extent.
In summary, our current architecture seemed advanced in 2019, but with the continuous iteration of the business, it brought a series of problems. These challenges not only affected the work efficiency of the development team, but also limited the technical innovation and performance optimization of the product. Therefore, it is imperative to upgrade and optimize the front-end architecture to improve development efficiency, improve user experience, and lay the foundation for future technological evolution.
After analyzing the above problems, it was clear the architecture needed an update, which inevitably meant dealing with the problems introduced by Umi itself. That left two options: upgrade to Umi 4, or strip out Umi and try other excellent architecture solutions from the community. We decided to try both routes and go with whichever got us through faster.
Umi 4 upgrades the default React version to 18 and React Router from v5 to v6, and adds support for Mako, a bundler written in Rust. In theory, it can solve all three problems of slow startup, slow hot updates, and slow packaging. So we picked a sub-application with few features and attempted the upgrade, following the official migration tutorial step by step.
This step is very simple: just bump the umi dependency version.
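In package.json this is just a version bump (the version range below is illustrative):

```json
{
  "dependencies": {
    "umi": "^4.0.0"
  }
}
```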
After some research, we found that Umi 4’s plug-in API is largely incompatible with Umi 3’s. Unfortunately, several plug-ins in our project are written against the Umi 3 API, which means they would have to be rewritten. We skipped this step for the moment: as long as the project could run, rewriting the plug-ins with the help of Zhita AIOS should be straightforward.
Then the nightmare began. When we tried to run the project, a series of very strange errors appeared. The reason is simple, and the official documentation mentions it: something in our configuration must be incompatible with Umi 4. All we could do was check the configuration items one by one. But the problems kept multiplying; the more configuration we changed, the harder the project was to run. After spending about 3 man-days, we concluded that this route might be a time black hole with no victory in sight. A more reliable approach would probably be to start a fresh Umi 4 project and slowly move the business code over. At this point we set this route aside to see whether the other one was feasible.
There are quite a few excellent full-stack frameworks in the open source community, such as Next.js and Remix, and newcomers in the bundler space include Vite and RsBuild. Because we need a gradual upgrade, the smaller each individual change, the better; after all, rewriting the monolithic project from scratch in one go is impossible. For this reason, whether a candidate supports the Qiankun micro-frontend solution we currently use became a key criterion.

Next.js is an excellent full-stack framework that supports SSR and works closely with the React development team; it is fair to say many new features in React 18 and 19 seem tailor-made for Next.js. However, our research suggested compatibility issues between Next.js and Qiankun, and Next.js is a full-stack framework with its own layers of encapsulation. Our concern was that it could not meet our need for deep customization (repeating the mistakes of Umi), so we ruled it out.

At this stage the team focused on Vite and RsBuild, the newcomers in the bundling space. One uses the browser’s native support for ES Modules to achieve second-level hot updates; the other is a Rust-based bundler designed to be compatible with Webpack. Crucially, both have fully explicit configuration with no black-box logic, high community acceptance, and a complete ecosystem, which suits our need to keep polishing the product.
Vite officially provides an out-of-the-box React project template. After generating it, we first tested whether it could be connected to Qiankun as a sub-application, using the community plug-in vite-plugin-qiankun. After completing the configuration according to the example, the connection succeeded: the development environment hot-updates normally (although it triggers page refreshes), and there were no problems in the production environment. We did hit one pitfall: the project runs fine with @vitejs/plugin-react, but not with the SWC-based @vitejs/plugin-react-swc. No related issue turned up in our searches, so we settled on @vitejs/plugin-react for now.
So we picked the sub-application with the least business code and slowly moved its logic over. The first difficulty was adapting to ES Modules. Because Umi 3’s bundler is Webpack 4, many third-party dependencies ship in CJS format, and very strange errors appeared during Vite’s dependency pre-bundling phase. Tweaking the optimizeDeps configuration did not help; sometimes fixing one problem created three new ones. Fortunately, by switching packages to their ES Module versions (for example, replacing lodash with lodash-es), the runtime problems were gradually resolved. This took about 2.5 man-days.
The sub-application ran fine on its own, but we got a shock when it was connected to the main application: the main application’s styles were all wrong (remember that we overrode a lot of AntD’s original styles at the beginning?). Investigation showed that Vite mounts all of a page’s styles into the document head, causing the main and sub-application styles to conflict. The problem is easy to solve: give AntD’s class names a custom prefix (via the ant-prefix Less variable).
The vite.config.ts configuration changed as follows; as you can see, there is quite a lot to configure:
// https://vitejs.dev/config/
// Note: APP_NAME, useDevMode, and the plugin imports (react, qiankun,
// vitePluginImp, vitePluginGraphqlLoader, commonjs) are defined earlier
// in the file.
export default defineConfig(({ mode, command }) => ({
  build: {
    // Fixes "require is not defined" when CJS and ESM are mixed
    commonjsOptions: {
      transformMixedEsModules: true
    }
  },
  optimizeDeps: {
    force: true
  },
  base: command === 'build' ? `/${APP_NAME}` : loadEnv(mode, process.cwd()).VITE_BASE_URL,
  plugins: [
    // https://github.com/tengmaoqing/vite-plugin-qiankun
    react({
      fastRefresh: !useDevMode
    }),
    qiankun(APP_NAME, {
      useDevMode
    }),
    vitePluginImp({
      libList: [
        {
          libName: 'antd',
          style: name => `antd/es/${name}/style`
        }
      ]
    }),
    vitePluginGraphqlLoader(),
    commonjs()
  ] as PluginOption[],
  css: {
    preprocessorOptions: {
      less: {
        modifyVars: {
          'ant-prefix': 'zac'
        },
        javascriptEnabled: true
      }
    }
  }
}))
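One caveat about the ant-prefix variable above: it only renames the classes in the compiled Less output, so AntD must also emit matching class names at runtime via its ConfigProvider. A minimal sketch (the component wiring is illustrative, not from the original codebase):

```tsx
import React from 'react'
import { ConfigProvider } from 'antd'

// prefixCls must match the 'ant-prefix' Less variable in vite.config.ts
export const App: React.FC<{ children: React.ReactNode }> = ({ children }) => (
  <ConfigProvider prefixCls="zac">{children}</ConfigProvider>
)
```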
After solving the above problems, the sub-applications under Vite can run perfectly under the main application. At the same time, we found that RsBuild version 1.0 was also released, so we also want to try RsBuild. After all, the decision that determines the future development direction of the project should be made as cautiously as possible.
Similar to Vite, the project was easy to get running from the official template. The first step, of course, was to test compatibility with Qiankun, which went surprisingly smoothly. Because Rspack, which RsBuild is built on, can be understood as a Rust version of Webpack, it ran after we simply adapted the configuration from Qiankun’s official example and made some entry transformations. The only problem was that hot updates of the sub-application fail when it runs under the main application. We fiddled with the Dev Server configuration for a long time to no avail. From the browser’s devtools we could see the page actually received the code-update messages sent by the Dev Server, but the code did not change. Our guess is that Qiankun’s integration is the cause, and debugging the source code would likely be required. Since the sub-application can be debugged independently, leaving this unsolved for now is not a big problem.
The next step was to move the business code, and the whole process was also very smooth, after all, it benefited from the compatibility with the Webpack API. It can be said that this was the best experience so far.
Just do some configuration and you can run it without any mental burden:
import { defineConfig } from '@rsbuild/core'
import { pluginReact } from '@rsbuild/plugin-react'
import { pluginLess } from '@rsbuild/plugin-less'
import { pluginSvgr } from '@rsbuild/plugin-svgr'

export default defineConfig({
  plugins: [pluginReact(), pluginLess(), pluginSvgr()],
  server: {
    port: 7036,
    proxy: {
      '/graphql': {
        target: 'http://localhost:3100',
        ws: true,
        secure: false,
      },
    },
  },
  output: {
    assetPrefix: '/ai-store/',
  },
  dev: {
    assetPrefix: 'http://localhost:7036/ai-store/',
  },
  tools: {
    rspack: {
      output: {
        library: 'ai-store-[name]',
        libraryTarget: 'umd',
        chunkLoadingGlobal: 'webpackJsonp_ai-store',
        uniqueName: 'ai_store',
      },
    },
    bundlerChain: (chain) => {
      chain.module
        .rule('graphql')
        .test(/\.(gql|graphql)$/)
        .use('graphql')
        .loader('graphql-tag/loader');
    },
  },
});
In addition, RsBuild also supports Module Federation 2.0. We have tried integrating it; with some configuration it can be used together with Qiankun, and the entire application may gradually migrate to Module Federation in the future. However, enabling Module Federation noticeably slows down hot updates and cold starts, and we have not yet investigated why.
The following is a module provider configuration; in a Qiankun environment, type: "window" must be configured.
new ModuleFederationPlugin({
  name: 'ai_store',
  library: { type: 'window', name: 'federation_provider' },
  exposes: {
    './button': './src/button.tsx',
  },
  shared: {
    react: {
      singleton: true,
      requiredVersion: '^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc',
    },
    'react-dom': {
      singleton: true,
      requiredVersion: '^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc',
    },
  },
}),
After getting both Vite and RsBuild running, the moment of the final route decision arrived. After careful consideration, we decided to abandon Umi, for the simple reason that it does not suit continuous product polishing. Over the past five years our team has spent a great deal of time debugging source code and matching wits with Umi’s black box, and some problems simply could not be solved. We therefore lean toward mature open source solutions with active communities. In the duel between Vite and RsBuild, we chose RsBuild. Both offer second-level hot updates, but RsBuild is written in Rust, while Rollup, which Vite’s production build relies on, is not (the Rust-based Rolldown is still in development), so RsBuild is far ahead in packaging speed. More importantly, it has better compatibility with Qiankun and natively supports the new micro-frontend solution Module Federation (Vite needs plug-ins for this). In summary, we chose RsBuild as our final build tool.
Our original react and react-dom versions were 16.14. Thanks to the good backward compatibility of the React library, our business code can run completely after upgrading to 18.3.1 without any modification.
{
  "react": "^18.3.1",
  "react-dom": "^18.3.1"
}
Modify the version number as shown above and run pnpm install.
React Router v6 brings some breaking changes due to substantial official refactoring. The biggest one is that the useHistory hook was removed and replaced with useNavigate, so every place our code base uses useHistory needs to be replaced.
// v5
history.push(url)
// v6
navigate(url)

// v5
history.replace(url)
// v6
navigate(url, { replace: true })

// v5
history.push(url, params)
// v6
navigate(url, { state: params })

// v5
history.replace(url, params)
// v6
navigate(url, { replace: true, state: params })
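To make the replacement mechanical, the old v5 calls can be wrapped behind a small shim over a v6 navigate function. This is a hypothetical helper, not from the original codebase:

```javascript
// Wraps a React Router v6 navigate function behind the old v5 history
// API, so existing call sites can be migrated with a find-and-replace.
function createHistoryCompat(navigate) {
  return {
    // v5: history.push(url, params)
    push: (url, params) =>
      params === undefined ? navigate(url) : navigate(url, { state: params }),
    // v5: history.replace(url, params)
    replace: (url, params) =>
      params === undefined
        ? navigate(url, { replace: true })
        : navigate(url, { replace: true, state: params }),
  }
}
```

In a component, `const history = createHistoryCompat(useNavigate())` keeps the old call sites working while they are gradually rewritten.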
Our interaction with the BFF uses GraphQL plus Apollo Client. Our previous approach ensured a single instance per window: the main application initialized one Apollo Client at startup for all sub-applications to share. Our version was pinned to 3.4.17, under which useLazyQuery stopped working on React 18; upgrading to 3.11.8 solved that. However, it brought a new problem: Apollo Client became multi-instance. There is no way around this for now; it is a temporary cost of upgrading gradually to preserve delivery quality, and it will be resolved once all sub-applications are on the new architecture.
const [loadZone, { called, loading, data }] = useLazyQuery(
  GET_ZONE,
  { variables: { language: "english" } }
);
As a high-performance, highly flexible, instant atomic CSS engine, UnoCSS not only significantly reduces the size of CSS files but also increases style reuse. Its on-demand generation means only styles that are actually used get compiled, which greatly optimizes production performance. Our design system specification is quite complete, and back when we used Less, developers were often tripped up by the design system’s complexity, causing visual bugs on the page. UnoCSS’s high customizability solves this well: in uno.config.ts we can fully define grids, theme colors, and other design-system rules so that the configured rules correspond exactly to the terms in the design system. Here is an example:
{
  theme: {
    colors: {
      "neutral-0": "#ffffff",
      "neutral-100": "#f5f7fa",
      "neutral-200": "#f0f2f5",
      "neutral-300": "#dbdde0",
      "neutral-400": "#c8cacd",
      "neutral-500": "#96989b",
      "neutral-600": "#707275"
    }
  }
}
For example, if you want the background color of an element to be white, the developer can directly write bg-neutral-0, which greatly reduces the mental burden of development and the bugs caused by the page visual not matching the design draft, and also saves time for design walk-through.
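Beyond theme tokens, UnoCSS shortcuts can bundle several atomic utilities under a single design-system term. A minimal uno.config.ts sketch (the shortcut name and utility values are illustrative, not from the original configuration):

```typescript
import { defineConfig } from 'unocss'

export default defineConfig({
  theme: {
    colors: {
      'neutral-0': '#ffffff',
      'neutral-100': '#f5f7fa',
    },
  },
  shortcuts: {
    // one design-system term expands to a bundle of atomic utilities
    'card-surface': 'bg-neutral-0 border border-neutral-100 rounded p-4',
  },
})
```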
Our existing component library is a secondary encapsulation based on AntD, which well met the needs of our business at the time. However, due to the high degree of customization of the design language, we have overwritten a large number of AntD styles, which makes continuous integration and upgrading of AntD quite difficult. At the same time, due to the tight writing time, the component library code has obvious signs of rushing, resulting in a certain amount of technical debt. As a result, it takes a developer at least half a day to fix a bug in some components. In order to reduce the impact of such problems on development happiness and empower our business in the AI era, a new stable and lightweight component library is essential.
Our requirements for the component library are actually very simple. First, it should provide the common capabilities of a middle-and-back-office management system; second, it should be headless, because we have a complete custom theme design system. Conveniently, shadcn/ui, built on Radix UI and hugely popular in 2023, meets these needs very well. Its concept of “build your own component library” is exactly what we want, and it is a good base for secondary encapsulation. This library does not hand you ready-made components; it hands you an example of how to build your own component library.
The official examples use Tailwind CSS, but for project reasons we prefer UnoCSS. After installing UnoCSS, we do not need to follow the official Tailwind configuration; we just run init.
npx shadcn@latest init
npx shadcn@latest add button
After executing the above two lines of commands, a button.tsx containing the default button component will appear in our code base:
import * as React from "react"
import { Slot } from "@radix-ui/react-slot"
import { cva, type VariantProps } from "class-variance-authority"

import { cn } from "@/lib/utils"

const buttonVariants = cva(
  "inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50",
  {
    variants: {
      variant: {
        default: "bg-primary text-primary-foreground hover:bg-primary/90",
        destructive:
          "bg-destructive text-destructive-foreground hover:bg-destructive/90",
        outline:
          "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
        secondary:
          "bg-secondary text-secondary-foreground hover:bg-secondary/80",
        ghost: "hover:bg-accent hover:text-accent-foreground",
        link: "text-primary underline-offset-4 hover:underline",
      },
      size: {
        default: "h-10 px-4 py-2",
        sm: "h-9 rounded-md px-3",
        lg: "h-11 rounded-md px-8",
        icon: "h-10 w-10",
      },
    },
    defaultVariants: {
      variant: "default",
      size: "default",
    },
  }
)

export interface ButtonProps
  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
    VariantProps<typeof buttonVariants> {
  asChild?: boolean
}

const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
  ({ className, variant, size, asChild = false, ...props }, ref) => {
    const Comp = asChild ? Slot : "button"
    return (
      <Comp
        className={cn(buttonVariants({ variant, size, className }))}
        ref={ref}
        {...props}
      />
    )
  }
)
Button.displayName = "Button"

export { Button, buttonVariants }
Then we can use UnoCSS to restyle it to our own design system, with no style-override problems at all.
In addition, shadcn/ui is the component library used by AI code-generation sites such as v0, which may effectively improve the efficiency of AI code generation.
import React, { useState } from 'react'
import { Button } from "@/components/ui/button"
import { Input } from "@/components/ui/input"
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select"
import { Textarea } from "@/components/ui/textarea"
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"
import { Label } from "@/components/ui/label"
import { Switch } from "@/components/ui/switch"
The above is the header of a component file generated by v0. As you can see, all the basic components come from shadcn/ui, so the workflow can be optimized to: talk to the AI, generate the code, then copy it into our code base for direct use.
Updates at the component-library level inevitably mean different APIs and different usage patterns. During encapsulation we try to keep the API consistent with the previous component library to give developers a smoother migration experience. One problem still has to be solved, though: how do we improve developers’ acceptance of the new components? Business pressure is already high, and learning new things on top of it can add extra mental burden.
The solution we chose is Storybook, an out-of-the-box component-development environment. We develop our components inside Storybook, and once development is complete it can directly generate an interactive documentation site, saving time both in building the component library and in learning the new components.
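For example, a minimal Component Story Format file for a Button component might look like this (a sketch; the file path, titles, and args are illustrative):

```typescript
import type { Meta, StoryObj } from '@storybook/react'
import { Button } from './button'

const meta: Meta<typeof Button> = {
  title: 'Components/Button',
  component: Button,
}
export default meta

type Story = StoryObj<typeof Button>

// Each named export becomes an interactive story in the docs site
export const Default: Story = {
  args: { children: 'Create', variant: 'default' },
}
export const Destructive: Story = {
  args: { children: 'Delete', variant: 'destructive' },
}
```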
We originally used yarn workspaces to build the Monorepo. One of the biggest pain points was the lack of a mature multi-package management solution, which led to unclear relationships between packages and multi-instance problems caused by inconsistent dependency versions; repeated rebuilds were often required, wasting time. Another big pain point was yarn’s long-standing ghost-dependency problem. To solve these problems, we made the following upgrades:
pnpm is a next-generation package manager that is already widely used in the front-end ecosystem. By sharing dependencies through hard links and symbolic links, it greatly saves disk space, improves performance, and solves yarn’s ghost-dependency problem.
Configure the workspace to declare which packages in which paths are included in the workspace:
packages:
  - "storybook"
  - "packages/*"
  - "packages/apps/*"
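With the workspace declared, internal packages can depend on each other via pnpm’s workspace protocol, which always links the local copy instead of fetching from the registry (the package name below is illustrative):

```json
{
  "dependencies": {
    "@zstack/ui": "workspace:*"
  }
}
```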
As for ghost dependencies, we used Babel to write a fairly crude scanning script that finds referenced packages missing from the dependencies in our package.json. Surprisingly, some of the third-party libraries we referenced themselves rely on ghost dependencies. When errors such as “package not found” appear, we analyze and solve them case by case; the fix is always either declaring the missing package or replacing it.
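The idea behind the scan can be sketched as follows. This is a simplified illustration, not the original script: the real one walked Babel ASTs, while here a regex over import/require statements stands in for the parser.

```javascript
// Collect bare package names imported by a source file.
function collectImports(source) {
  const names = new Set()
  const re = /(?:from\s+|require\()\s*['"]([^'"]+)['"]/g
  let m
  while ((m = re.exec(source)) !== null) {
    const spec = m[1]
    if (spec.startsWith('.') || spec.startsWith('/')) continue // local paths
    const parts = spec.split('/')
    // '@scope/pkg/sub' -> '@scope/pkg'; 'pkg/sub' -> 'pkg'
    names.add(spec.startsWith('@') ? parts.slice(0, 2).join('/') : parts[0])
  }
  return names
}

// Report imports that package.json does not declare.
function findGhosts(source, pkgJson) {
  const declared = new Set([
    ...Object.keys(pkgJson.dependencies || {}),
    ...Object.keys(pkgJson.devDependencies || {}),
  ])
  return [...collectImports(source)].filter(name => !declared.has(name))
}
```

Running findGhosts over every workspace file and its nearest package.json yields the list of undeclared packages to fix.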
We initially chose Vercel’s Turborepo: simple and direct, easy to introduce, and its remote-cache mechanism is very effective at speeding things up. Just declare the task pipeline as in the example below and declare the corresponding scripts in package.json.
turbo.json:
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "build-storybook": {
      "cache": false,
      "outputs": ["dist/**"],
      "dependsOn": ["^build"]
    },
    "type-check": {
      "dependsOn": ["^type-check"]
    },
    "lint": {
      "dependsOn": ["^lint"]
    }
  }
}
package.json:
{
  "scripts": {
    "build": "vite build"
  }
}
After the configuration is complete, running turbo build triggers a parallel, dependency-ordered build of every sub-package in the Monorepo.
Meanwhile, the community’s ready-made turborepo-remote-cache library lets us easily self-host a remote cache server. Turborepo then checks for a build cache before actually executing; on a hit, it downloads the cache from the remote server, greatly reducing packaging time both in development and in the continuous-build pipeline.
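Pointing Turborepo at a self-hosted turborepo-remote-cache instance is mostly a matter of environment variables; a sketch, with placeholder URL, token, and team values:

```shell
# Use the private cache server instead of Vercel's hosted one
export TURBO_API="http://turbo-cache.internal:3000"
export TURBO_TOKEN="<shared-secret>"
export TURBO_TEAM="frontend"
turbo build
```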
However, as the number of packages grew, the cache hit rate dropped sharply for unknown reasons: changing a single file in one package would often produce no cache hits anywhere in the Monorepo. So we switched to Nx, a powerful build system and project-management tool that helps us optimize the build process with incremental builds and caching.
For the configuration file, we used Zhita AIOS’s one-click deployment of a Qianwen chatbot to convert the Turborepo configuration into Nx’s format, and it ran directly:
{
  "$schema": "./node_modules/nx/schemas/nx-schema.json",
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["{projectRoot}/dist"],
      "cache": true
    },
    "build-storybook": {
      "dependsOn": ["^build"],
      "outputs": ["{projectRoot}/dist"]
    },
    "type-check": {
      "dependsOn": ["^type-check"]
    },
    "lint": {
      "dependsOn": ["^lint"]
    }
  },
  "defaultBase": "main"
}
changesets is a tool for managing versions and generating changelogs, helping us better manage multi-package releases. We previously released versions through a gulp-based build process; changesets natively covers the entire Monorepo release flow and provides complete Monorepo support.
Install:
pnpm install @changesets/cli -W && pnpm changeset init
After the installation is complete, execute the following command and then follow the procedure to upgrade the package and automatically generate the version number and git tag:
pnpm changeset add
pnpm changeset version
pnpm build
pnpm changeset publish
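pnpm changeset add records each pending release as a small markdown file under .changeset/; a generated file looks roughly like this (the package name and bump type are illustrative):

```markdown
---
"@zstack/ui": minor
---

Add headless Button component
```

pnpm changeset version then consumes these files, bumps the affected package versions, and writes the changelog entries.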
As a new-generation JavaScript code-analysis tool, oxlint has significant advantages over ESLint. Written in Rust, it runs much faster and can greatly improve lint efficiency on large projects. oxlint also supports incremental analysis, further shortening inspection time. Its rules are rigorous and comprehensive, catching more potential problems and improving code quality, and its configuration is simpler and more intuitive, lowering the learning cost for developers. It ships with out-of-the-box rule support for React, with no extra configuration: after installation, just execute oxlint in the code base.
Now that Umi has been stripped out, the Umi-provided communication method useModel('@@qiankunFromMaster') is no longer available. For global state management we chose Zustand, mainly because it is simple to write and does not have Redux’s pile of “academic” boilerplate.
Create a Store:
import { create } from 'zustand'

const useBearStore = create((set) => ({
  bears: 0,
  increasePopulation: () => set((state) => ({ bears: state.bears + 1 })),
  removeAllBears: () => set({ bears: 0 }),
}))
Use:

function BearCounter() {
  const bears = useBearStore((state) => state.bears)
  return <h1>{bears} around here...</h1>
}

function Controls() {
  const increasePopulation = useBearStore((state) => state.increasePopulation)
  return <button onClick={increasePopulation}>one up</button>
}
Of course, writing this alone will not allow the main application to communicate, because at the macro level the two apps are not in the same React context. The library zustand-pub allows applications in different contexts to communicate via Zustand.
Declare Store in the main application:
import { create } from 'zustand'
import { persist } from 'zustand/middleware'
import PubStore from 'zustand-pub'

type Locale = 'en-US' | 'zh-CN'

interface IState {
  currentZone: Zone | null | {}
  currentUser: Partial<CurrentUser>
  currentLocale: Locale
}

interface IAction {
  setCurrentZone: (currentZone: Zone | {}) => void
  setCurrentUser: (currentUser: Partial<CurrentUser>) => void
  setCurrentLocale: (currentLocale: Locale) => void
}

const pubStore = new PubStore('zstack_cloud_global_store')

const store = pubStore.defineStore<IState & IAction>(
  'zstack_cloud_global_store',
  // @ts-ignore
  persist(
    set => ({
      currentZone: { uuid: '' },
      setCurrentZone: (currentZone: Zone | {}) => {
        set({ currentZone })
      },
      currentUser: {},
      setCurrentUser: (currentUser: Partial<CurrentUser>) => {
        set({ currentUser })
      },
      currentLocale: 'zh-CN',
      setCurrentLocale: currentLocale => {
        set({ currentLocale })
      }
    }),
    { name: 'zstack_cloud_global_store' }
  )
)

export const usePlatformStore = create(store)
Declare the store in the sub-application:
import { create } from 'zustand'
import PubStore from 'zustand-pub'

interface IState {
  currentZone: Zone | null | NonNullable<unknown>;
  currentUser: Partial<any>;
  currentLocale: any;
}

interface IAction {
  setCurrentZone: (currentZone: Zone | NonNullable<unknown>) => void;
  setCurrentUser: (currentUser: Partial<any>) => void;
}

const pubStore = new PubStore('zstack_cloud_global_store');
const store = pubStore.getStore<IState & IAction>('zstack_cloud_global_store');

const localStore = create<IState & IAction>((set) => ({
  currentZone: { uuid: '' },
  setCurrentZone: (currentZone: Zone | NonNullable<unknown>) =>
    set({ currentZone }),
  currentUser: { name: 'lisi' },
  setCurrentUser: (currentUser: Partial<any>) => set({ currentUser }),
  currentLocale: 'zh-CN',
}));

export const usePlatformStore = (
  store && window.__POWERED_BY_QIANKUN__ ? create(store) : localStore
) as typeof localStore;
Then use it like a normal Zustand.
Our intranet package registry turned out to be running an old version of cnpm, so we took this opportunity to upgrade it as well. Verdaccio is a modern private npm registry with many advantages: excellent performance and stability, strong access control and security features, a good caching mechanism, complete private-package management, and active community support. Another decisive reason for our choice is its uplink configuration, which saves us the work of migrating packages from the old registry:
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  server2:
    url: http://mirror.local.net/
    timeout: 100ms
  server3:
    url: http://mirror2.local.net:9000/
  baduplink:
    url: http://localhost:55666/
The core goal of our architecture upgrade is to improve development speed and experience. Through comprehensive optimization of the front-end infrastructure, we have observed some exciting changes in the first sub-application that has completed the upgrade.
First, the introduction of RsBuild has greatly shortened development time. Its second-level hot updates cut the original hot-update time by a factor of up to 40, so changes show up on the page almost immediately. Sub-application build time has also dropped dramatically: in a medium-sized sub-application of about 30,000 lines, the average build fell from nearly 50 seconds to about 5 seconds, a roughly tenfold speedup. This has greatly accelerated continuous package delivery, made QA’s rapid testing easier, and improved delivery quality.
The adoption of UnoCSS makes UI development more efficient. With atomic CSS, pre-configured design-system rules, and the VS Code plug-in, writing and adjusting interfaces is about 30% faster.
After introducing Nx and pnpm workspaces for Monorepo management, Nx’s caching mechanism effectively eliminates unnecessary builds, speeding up package delivery by as much as 70%.
Considering that we currently have 40+ sub-applications, and upgrading just one has already brought such significant improvements, overall efficiency will keep compounding as the progressive upgrade proceeds.
Through this comprehensive, in-depth exploration of the architecture upgrade, we have laid a solid foundation for future expansion and optimization. As sub-applications are progressively upgraded, we expect the performance and development efficiency of the entire system to improve dramatically. This will let us deliver a more efficient and reliable private cloud software experience, maintain a strong competitive advantage in the wave of AI-driven digital transformation, maximize the infrastructure value of the private cloud as the base of intelligent computing, and provide customers with better products and services.