## two config directories for a single web application?! why though??

the `config` directory at the root contains all the configuration files related to the build - like our application's webpack config or any other bundler that we might use, environment files and other configs.

you might also notice that it is nested and that the webpack configuration lives in its own directory. this makes the configurations more organised and easier to manage. this might seem trivial, but when the application starts growing, the build process might also get complex - which then demands a well-organised place of its own. also, this brings peace of mind while working with it -- a large mess of configuration files is the last thing you might want while deploying your application in production! 👀

the other `config` directory inside our `src` folder is for configurations related to our application, i.e., the ones related to runtime. this may contain our json files ( or any other files ) that might shape the behaviour or capabilities of our app. this may or may not be required as per your needs, but i have had this folder in most of my projects.
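for illustration, here's a minimal sketch of what the entry point of that runtime `config` directory could look like ( the file name, fields and env variable are assumptions, not from the original post ):

```js
// src/config/index.js -- illustrative only
// assumes the bundler ( e.g. webpack's DefinePlugin ) injects env variables at build time
import features from "./features.json";

const config = {
  apiBaseUrl: process.env.API_BASE_URL || "http://localhost:3000",
  features, // flags that shape the app's behaviour at runtime
};

export default config;
```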
## but wait, what about the `resources` and `assets` directories? aren't assets also a part of the 'resources' for our react application?

well, the `assets` directory here is meant **_only_** for images and other media files, _duhh!_

whereas `resources` is for data that might be required by our web application, for example, constants and other static data which basically doesn't have any or much logic associated with it. you can also add small methods there to return the data, perhaps formatted to specific needs, and / or perform **_minor_** operations on it, which can then be used by parts of our application -- and that, _trust me_, will make your code a lot cleaner and more organised.

this directory may also contain data and other 'resources' which can be occasionally fetched, stored and updated, and maybe processed a little before they are used in certain parts of our web application. well, i guess you get the idea.

## structuring pages and components

so, here comes the interesting part. at least i think so. this is something that has been derived from a few other solutions on architecting react applications as well as other web applications, along with some of my own practical experience. and so far, i'm pretty satisfied with it! 🤓

to start with, let's assume our web application contains a home page, a profile page for the users and, just for the sake of not having only two pages in the example, a third page that we will call -- the other page. so the directory structure would look something like this:

```js
-- src
----- components
----- config
----- pages
--------- home
----------- index.js
----------- index.scss // mandatory sass file - i just wanted to make this look realistic!!
--------- profile
----------- index.js
--------- other-page
----------- components
----------- index.js
----- resources
```

notice how all the pages have their own separate directory with an entry point? and how that 'other' page has a `components` folder? why do we need another `components` folder? don't we already have one in the root of the `src` directory?

wait, just hold on for a second! i'll explain it real quick! ☝

### the "branching" structure explained

this is what i call the "branching" structure. each page has its own directory, its own set of components which are not used anywhere else except in that specific page, its own style rules and other stuff associated with only that page. if a component is shared by two pages, guess where it goes? yes, you guessed it -- the `components` directory in the root of our `src` directory!

but.. you might wonder.. what is the point of doing that?

let's say, one day you and your teammates decide to get rid of the 'other' page -- _maybe the name wasn't good enough?_ -- so what do you do? spend an entire afternoon or a day on removing code, breaking and fixing the application? **no**.

you just go ahead and delete the directory and remove its references from wherever it was used in the web application. _and voila, it's done!_ 💁🏻♂️

nothing breaks in your app just because a bunch of code was deleted! everything is independent of each other's existence even if they were bound together at some point! a lot less to work with and worry about, isn't it?
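to make this "branching" structure concrete, here's a hedged sketch of how imports could look from a page's entry point ( all paths and component names below are illustrative, not from the original post ):

```jsx
// src/pages/other-page/index.js
import Button from "../../components/button"; // shared by multiple pages -> lives in src/components
import FancyWidget from "./components/fancy-widget"; // used only by this page -> lives in its own branch

const OtherPage = () => (
  <main>
    <FancyWidget />
    <Button>do something</Button>
  </main>
);

export default OtherPage;
```

delete the `other-page` directory and `FancyWidget` goes with it -- only the single import of `OtherPage` elsewhere needs to be removed.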
and yeah, this principle can be applied to almost any application / software and not just some react application.

some of you might think -- well no, our application / software is quite complex and stuff is just too interconnected. parts of it **_shared_** code, were **_bridged_** together, etc. but i guess you might understand now what to do with that "shared code" and those "bridges" if you try to apply this principle! this is just a simple example to demonstrate and give you an idea of how parts of the product can be organised for convenience and maintainability.

## leveraging layout components

you can also go ahead and add another directory to `src` -- called `layouts` ( or maybe add it to the `components` directory, whichever feels more appropriate to you ) which contains a layout file that is global to the application, or even multiple layouts, each associated with certain parts of the application. for example, let's assume our application also has a fancy navbar and a decent footer which go into all of our pages. instead of having them shoved inside our `components` directory and then repeatedly used inside each page - we can have a layout file that contains the navbar and the footer and renders the `children` that are passed to it, like so:

```jsx
// in the home page:
<Layout>
  yayy! this is my fancy home page!!
</Layout>

// and in the profile page:
<Layout>
  this is the page of the user whose data we're secretly trying to steal!
  please read our privacy policies (not so) carefully!!
</Layout>
```

and in our layout file, we can have something similar to this ( `Navbar` and `Footer` being our shared components; the import paths are illustrative ):

```jsx
import Navbar from "../components/navbar";
import Footer from "../components/footer";

const Layout = ({ children }) => (
  <>
    <Navbar />
    {children}
    <Footer />
  </>
);

export default Layout;
```

better now, isn't it? even this website, with its simplicity, has a layout component! 🤓

## but wait.. there's more to architecting react applications!!

yes, i haven't forgotten about reducers, the lengthy sagas, services, a ton of action creators and whatnot! but that's for the [second part of this article](/articles/architecting-react-applications-redux-store-services-and-sagas/) since i don't want this one to become too long and exhausting to read. also, this first part might serve as a good starting point for beginners or other fellow developers who are new to react development.

## conclusion

_did you like this article? or did i miss something? is there something that you have that can be added to this article -- that can make it even better?_

_please leave a comment below or you can also contact me through my [social media profiles](/)._

_thank you for reading!_ 😄

happy hacking! cheers! 🎉

---

# automate lighthouse audits for performance testing in ci/cd

we all know how valuable and helpful the insights are from lighthouse audits when we're developing our web applications.
but the way most of us check is manually through chrome dev tools or the lighthouse extension, which, in my opinion, is not very productive.

for those of us who don't know, there are mainly four ways of auditing our web application with lighthouse:

- via chrome dev tools

- the command line

- the npm module ( which we are going to use in a while )

- [pagespeed insights](https://developers.google.com/speed/pagespeed/insights/)

## prerequisites: installing dependencies

to programmatically perform lighthouse audits, we can use the [lighthouse npm package](https://www.npmjs.com/package/lighthouse), [mocha](https://mochajs.org/) and [chai](https://www.chaijs.com) for writing our tests, and [chrome-launcher](https://www.npmjs.com/package/chrome-launcher) for launching the chrome instance that runs them.

first, let's install the above packages as dev dependencies in our project:

```bash
npm install lighthouse chrome-launcher chai mocha --save-dev
```

## setting up lighthouse programmatically

now, let's create a file named `lighthouse.tests.js` in our `tests` directory. we'll run our lighthouse audits through this file. here, we'll import the lighthouse module and the chrome launcher, which helps us open our webpage from the local development server and run the audits to test against the minimum thresholds that we want our lighthouse scores to meet.

while this might sound like a lot to do, it isn't much. here's what it looks like in actual code:

```js
const lighthouse = require("lighthouse");
const chromeLauncher = require("chrome-launcher");

function launchChromeAndRunLighthouse(url, opts, conf = null) {
  return chromeLauncher
    .launch({ chromeFlags: opts.chromeFlags })
    .then((chrome) => {
      // point lighthouse at the debugging port of the chrome instance we just launched
      opts.port = chrome.port;
      return lighthouse(url, opts, conf).then((res) =>
        chrome.kill().then(() => res.lhr)
      );
    });
}
```

and it is as simple as that. we launch the chrome browser instance with the `chromeLauncher.launch` method and then run lighthouse with the site url and the configuration for our audits. after that, we close/kill the chrome instance and return the results. and this is how it looks when in use:

```js
// `opts` carries lighthouse settings ( e.g. `chromeFlags` ); `config` can be null for defaults
launchChromeAndRunLighthouse(testUrl, opts, config).then((res) => {
  // results are available in `res`
});
```

## writing the audit tests with mocha and chai

so now, we can put this call inside our `before` hook for the tests and then have tests for each metric, something like this:

```js
describe("lighthouse audits", function () {
  // the timeout doesn't need to be the same. it can be more or less depending on your project.
  this.timeout(50000);
  let results;
  before("run test", (done) => {
    launchChromeAndRunLighthouse(testUrl, opts, config).then((res) => {
      // extract the results you need for your assertions.
      results = res;
      done();
    });
  });
  it("performance test", (done) => {
    // test your performance score against the threshold
    done();
  });
  // some more tests..
});
```

still looks weird? don't worry! check out this repository for an example setup of [lighthouse tests with mocha](https://github.com/rishichawda/lighthouse-mocha-example) and try it out with your web application!

this method can be applied to automate the tests in continuous integration / deployment environments so that you don't have to worry about manually auditing your web application and checking whether it meets the minimum satisfactory levels.
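as a hedged illustration of that, a ci job could boil down to a few shell steps like these ( the script names, port and the `wait-on` helper are assumptions -- adapt them to your project ):

```bash
# install dependencies and build the app
npm ci && npm run build

# serve the build locally in the background ( command is project-specific )
npm run serve &

# wait until the local server responds, then run the lighthouse test suite
npx wait-on http://localhost:3000
npx mocha tests/lighthouse.tests.js --timeout 50000
```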
## conclusion

so there you go. that's all we need to do to automate lighthouse audits for our progressive web applications and make sure they are always worthy of the internet and our users' data packets!

_did you like this article? or did i miss something? is there something that you have that can make it even better?_
_please leave a comment below, or you can also contact me through my [social media profiles](/)._

_thank you for reading!_ 😄

happy hacking! cheers! 🎉

---

# scalable react architecture: redux, sagas & services pattern

_part 2 of the "developing scalable react applications" series_

this article is a continuation of my previous article - [architecting react applications](/articles/architecting-react-applications/) - where i wrote about a simple way to architect almost any react application into a modular structure. in this article, i am going to write about a _relatively_ complex codebase with things such as application state management.

we'll build upon the same directory structure so that we can also determine whether our previously prepared codebase scales well in more complex scenarios rather than just having a few pages or components. we'll follow the same steps, i.e., take a look at the directory and then briefly go through the parts one by one.

## setting up the basic redux structure

let's add some of redux's _magic_ to our application to manage its global state. ✨

but wait, we need to get the structure ready first. so, here we go --
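( the original post showed the directory as a screenshot; below is a rough sketch of the layout described in the following paragraphs, with file names assumed from the text ):

```js
-- src
----- actions
--------- action.types.js
--------- index.js
----- components
----- config
----- middlewares
--------- index.js
----- pages
----- reducers
--------- index.js
----- resources
----- root.reducer.js
----- root.store.js
```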
### directory organization rationale

this structure might seem familiar to you -- it is one of the most popular ways among developers and is pretty intuitive. all the actions go into a directory called `actions`, reducers into their own directory, and the same for middlewares. one thing that is not very common here is the `root.reducer` and `root.store` files at the `src` root. now, many developers i have known prefer keeping the `root.reducer` ( sometimes stored as an `index.js` ) inside the `reducers` directory because it is then 'closer' to all the reducers. i agree it might make sense to keep it that way, but i prefer keeping my `root.reducer` and `root.store` in the root of my `src`. and here's why:

- the `reducers` directory is strictly kept for storing individual reducers. the `index.js` inside it is used as the main entry point to export all the reducers. ☝️
- this way, `root.reducer` and `root.store` stay _closer_ ( or hooked together? ) rather than all the reducers staying closer to the `root.reducer` -- since `root.reducer` here is being used to configure the reducer before we hook it up with the store ( which is done inside `root.store` later ).

so it pretty much makes sense -- all the reducers are kept separately in one place and just imported through a single entry point into our `root.reducer`, which stays close to our `root.store`. the simple reason is that it is easier to find at the root of the `src` directory than inside another nested directory. that is one of the reasons why it is named `root.reducer` and not `index.js`.

### actions and middlewares structure

similarly, our `actions` directory contains all our actions, an `action.types` file for all the action types _( we could even have a directory named `shared` in `src` and put the types file there -- i used to do that when i had just started using redux )_ and the main entry file which exports all the actions. each file inside our `actions` directory can contain a set of actions that are related to a single aspect of our application -- for example, a user, a user interface state or perhaps some data synchronization. the same goes for the `middlewares` directory, which holds our custom middlewares, if any, and a single entry point which exports all of them.

> [!note] note that all three new directories added to our previous structure have a main entry point which exports all the individual parts -- mainly because it makes the imports cleaner and also makes the structure look modular.

so, we're done with the basic stuff that could be added to any react application which implements an application state ( a little secret -- you can do it without using redux too! ). 🤓

## integrating sagas, services, and selectors

let's add more volume to the codebase. the first thing that comes to my mind is sagas -- mainly because any real-world application with a considerable codebase usually has asynchronous actions going on in parallel. let's assume we need [redux-saga](https://redux-saga.js.org/) for our application, and we can't do away with just thunks!

oh wait, let's have some services too -- for fun! 💥

and while we're at it -- let's not forget about making the state management we set up earlier a little better by adding selectors to our application -- which is a must, by the way, if we have many things going on inside our application store!

did i add too much? well, i wanted to cover all of it -- without implying that your application has to have all of it, though it very well can if required.

### directory structure

okay, let's follow the same pattern and make the directories first. i'll go ahead and create directories for them, as sketched below. this structure might look pretty much self-explanatory, and you already have an idea of how we might structure them internally.
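( again, the original showed a screenshot here; the sketch below assumes the file names from the descriptions that follow ):

```js
-- src
----- sagas
--------- index.js // or `root.saga` -- contains our root saga
----- selectors
--------- index.js
----- services
--------- index.js
```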
all our sagas go inside the `sagas` directory, with our root saga also inside it _( you can name it whatever you want -- `root.saga` or `index.js` )_, just like our entry points to reducers and actions inside their directories.

but you might ask..

### why keep `root.saga` inside the `sagas` directory? 🤔

well, here's a pretty simple explanation. when we discussed the reducers and actions, we kept the entry point of each directory as something we're directly using in our application -- the entry point in the `reducers` directory for importing all reducers into `root.reducer`, and the entry point in the `actions` directory for importing them in various parts of our react application. similarly, our `root.saga` ( or `index.js` ) inside the `sagas` directory, which is going to be used in our `root.store` during initialization, makes more sense to be seen as an entry / access point and not anything more complicated than that. usually, it'll contain our root saga, which spawns / calls / forks other sagas accordingly.

### structure for selectors and services

the same goes for selectors and services. both contain an entry point that exports all the selectors and service modules from the directory. keep in mind that it is there to provide us a cleaner import and a better view of the structure!

## conclusion: a scalable foundation

and there we go! we have pretty much completed setting up our react application to start a complicated project with -- but with a relatively simple structure which anyone can get used to and something that scales well too! in my experience, a similar structure has fared well in scaling up along with regular and extensive application-wide changes while keeping our productivity high.

_did you find this architecture helpful? have suggestions or questions?_

_please leave a comment below, or reach out via my [social media profiles](/contact)._

_thank you for reading!_ 😄

happy hacking! cheers! 🎉

---

# core web vitals: real-world optimization strategies

_part 1 of the "guide to improving page performance in 2025" series_
google’s *core web vitals* are a set of user-centric metrics that measure page load speed, interactivity, and visual stability. these metrics (largest contentful paint, interaction to next paint, and cumulative layout shift) directly impact both user experience and search rankings.

in this article, we'll go through each core web vitals metric in plain language, explain what factors affect them, and provide practical tips and code examples to optimize them for seo and ux.

> [!bonus] google advises that achieving “good core web vitals” aligns with what its ranking systems “seek to reward”.

## core web vitals metrics overview

> [!note] in 2024, google replaced the old *first input delay (fid)* metric with the new *interaction to next paint (inp)* metric to better capture real-world responsiveness. in this article, i'm going to use the new metric as well.

- **largest contentful paint (lcp)** measures load performance. in simple terms, it marks the time when the *largest piece of content* (image, video, or block of text) becomes visible to the user. a fast lcp reassures users that the page is useful. a good lcp score is **2.5 seconds or less** (75th percentile of page loads), while an lcp over 4 seconds is considered poor.

- **interaction to next paint (inp)** measures page responsiveness. it replaces fid as of march 2024. inp looks at all click, tap, and keypress interactions during a user’s visit and reports a value close to the *slowest* one (excluding rare outliers). a low inp means the page consistently responds quickly. google recommends an inp **≤ 200 ms** (75th percentile); *200–500 ms* needs improvement, and over *500 ms* is poor.

- **cumulative layout shift (cls)** measures visual stability. it sums up all *unexpected layout shifts* that occur while the page is loading and being interacted with. a shift happens when visible elements move from one frame to the next (e.g. an image or ad loading pushes content down). unanticipated shifts can frustrate users or cause mis-clicks. a good cls score is **0.1 or less** (75th percentile); anything above 0.25 is considered poor.

> [!important] each metric is evaluated at the 75th percentile of user experiences, so one slow pageview out of many can push our score into the “needs improvement” or “poor” range. in summary, we must strive for **lcp ≤ 2.5 s, inp ≤ 200 ms, and cls ≤ 0.1** to provide a smooth user experience and help seo.

## measuring and monitoring core web vitals

before optimizing, we need to *measure* these metrics for our site. google provides several tools that share the same underlying data (chrome user experience report).

1. for quick feedback, use **chrome devtools** – its performance panel can show live lcp, inp, and cls data for your page, even overlaying real-user (crux) values on your local load test.

2. **pagespeed insights (psi)** reports lab metrics and field data (crux) for a url or origin, including lcp, inp, and cls.

3. if your site has a search console property, check the **core web vitals report** there for a breakdown of url performance over the past 28 days.
search console is especially useful for tracking metrics over time on real traffic.

4. for deeper analysis or custom dashboards, use **crux tools**: the crux dashboard and crux vis (based on google data studio) let you slice web vitals data by device, origin/page, and more. however, crux data has a delay (typically a few weeks of aggregate data) and may not cover very low-traffic pages.

5. **real user monitoring (rum)**: for immediate and detailed data, instrument your own rum. the easiest way is google’s [web-vitals](https://github.com/googlechrome/web-vitals) javascript library. this library reports lcp, inp, cls (and other vitals) using the browser’s performance apis. we can send these values to our analytics or a logging endpoint (see the sketch after this list).

many third-party rum services (e.g. new relic, datadog, speedcurve) also now have built-in core web vitals support. field data from rum captures your specific users’ experiences and can pinpoint which pages or interactions need attention.

> [!tip] always compare *lab measurements* to *field data*. a development/test environment may load faster (e.g. cached assets, faster cpu), yielding optimistic metrics. use field data (crux or rum) as the source of truth for what real users see. when using lab tools, consider throttling cpu/network in devtools to mimic slower devices and networks.
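to illustrate the rum option above, here's a minimal sketch using the web-vitals library (the `/analytics` endpoint is an assumption -- swap in your own collector):

```js
import { onCLS, onINP, onLCP } from "web-vitals";

// send each metric to a (hypothetical) analytics endpoint as soon as it's reported
function report(metric) {
  navigator.sendBeacon("/analytics", JSON.stringify(metric));
}

onCLS(report);
onINP(report);
onLCP(report);
```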
## optimizing largest contentful paint (lcp)

lcp is heavily influenced by how quickly the largest content on the page can be downloaded and rendered. to improve lcp, examine the entire loading process. two critical factors are:

- the initial server response (ttfb)
- the speed of loading the lcp resource (often a large image, video, or a block of text that requires web fonts)

*[figure: breakdown of the lcp timeline -- image credits: generated using sora]*
the **time to first byte (ttfb)**, the **resource load delay** (the wait before the lcp resource starts loading), the **resource load duration**, and the **element render delay** all add up to lcp. optimizing each can improve your lcp score.

so what can be done?

- **reduce server response time.** a slow ttfb (time from navigation to first byte) delays lcp. use a fast backend or cdn. enable gzip/brotli compression and caching on your html documents. for dynamic sites, optimize database queries or use edge caching (e.g. cloudflare pages, aws cloudfront).

- **optimize the lcp resource.** identify which element is triggering lcp (chrome devtools performance tab or lighthouse can highlight the “largest contentful paint” element). if it’s an image or video, compress it (modern formats like webp or avif can cut size dramatically) and serve scaled versions for each device. **lazy-load images or videos that are not in the initial viewport** so they don’t compete for bandwidth. for the main (above-the-fold) image or video, consider *preloading* it. for example:

  ```html
  <!-- preload the hero image so the browser knows to fetch it early -->
  <!-- (the file path here is illustrative) -->
  <link rel="preload" as="image" href="/images/hero.webp" />
  ```

  preloading tells the browser to fetch the resource with high priority. always pair a preload with the exact image url and the same `width`/`height` (or css aspect-ratio) to avoid layout shifts (see cls tips below).

- **minimize render-blocking resources.** css and javascript that block rendering can delay lcp. critical css needed to render above-the-fold content should be inlined or loaded with high priority. use `rel="preload"` on your main stylesheet, or split css so that only essential styles load first. for non-critical js, use `defer` or `async`.

  ```html
  <!-- defer non-essential scripts (the script path is illustrative) -->
  <script src="/js/non-critical.js" defer></script>
  ```
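  and for the stylesheet side of this bullet, a small hedged sketch (the file path is an assumption):

  ```html
  <!-- preload the critical stylesheet so it's fetched with high priority, then apply it -->
  <link rel="preload" href="/css/critical.css" as="style" />
  <link rel="stylesheet" href="/css/critical.css" />
  ```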