Routing in nativescript-vue

I’m a big fan of Vue.js. So, when heading into mobile app development, NativeScript was the one thing that made me excited to work on. For those who are new to it, NativeScript is an open-source framework used to create truly native mobile apps for both Android and iOS. NativeScript supports Angular, Vue, vanilla JS and TypeScript, and it is known for its performance compared to other mobile app development frameworks like React Native and Ionic.

In this blog, we are going to focus on NativeScript-Vue routing…

NativeScript-Vue is essentially NativeScript Core combined with Vue.js.


How do we implement routing in nativescript-vue?

The surprising news here is that Vue Router is not supported in NativeScript-Vue. The NativeScript community is currently working on it, but for now we have to go with manual routing.


Let’s go ahead with manual routing…

To implement manual routing, you just need to know the following three methods:

  1. $navigateTo
  2. $navigateBack
  3. $showModal
  • $navigateTo:

The functionality of $navigateTo is to navigate from one component to another. This method can be used directly in the view or inside a method, as shown below:

Consider a scenario where the current component should navigate to the HomePage component when the “Go” button is tapped. We use the $navigateTo method like this:

<Button text="Go" @tap="$navigateTo(HomePage)" />

Or we could call it inside a method:

<Button text="Go" @tap="goToHomePage" />

goToHomePage() {
    this.$navigateTo(HomePage);
}

There might be a scenario where we need to pass data from one component to another. In that case, the data can be passed as props using the $navigateTo options:

this.$navigateTo(ComponentName, {
  props: {
    // pass the data as an object here
  }
});
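For example, here is a minimal sketch of the full round trip (DetailPage and the item prop are hypothetical names used only for illustration):

// In the source component
this.$navigateTo(DetailPage, {
  props: {
    item: { id: 42, title: 'Hello' }
  }
});

// In DetailPage, the data arrives as an ordinary Vue prop
export default {
  props: ['item'],
  // this.item is now available inside the component
};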

 

What else can we do with “$navigateTo”?

This method also gives us properties to apply transitions while navigating to the next page.

There are three ways to set the transition:

  1. transition: Applies on all platforms.
  2. transitioniOS: Applies only to iOS.
  3. transitionAndroid: Applies only to Android.

The default transition is “platform”.

this.$navigateTo(NextComponent, {
  transition: {
    name: 'flip',
    duration: 2000,
  }
});

The available transitions are listed below:

  1. curl (same as curlUp) (iOS only)
  2. curlUp (iOS only)
  3. curlDown (iOS only)
  4. explode (Android Lollipop(21) and up only)
  5. fade
  6. flip (same as flipRight)
  7. flipRight
  8. flipLeft
  9. slide (same as slideLeft)
  10. slideLeft
  11. slideRight
  12. slideTop
  13. slideBottom

Another important property is clearHistory.

“clearHistory” accepts a boolean value; setting it to true clears the navigation history, so the user can no longer navigate back.
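For example, after a successful login you might navigate to the home page and wipe the back stack (a minimal sketch, reusing the HomePage component from above):

this.$navigateTo(HomePage, { clearHistory: true });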

There are still a few other things the $navigateTo method can do. Refer to the manual routing documentation linked in the reference below for the full list of options accepted by this method.

  • $navigateBack:

This method is used to navigate back to the previous page. It is used like:

<Button text="Back" @tap="$navigateBack" />
  • $showModal:

This method is used to display the component inside a modal.

For closing the modal we use $modal.close. Props are passed as an option to $showModal like the following:

this.$showModal(Component, { props: { message: 'Props are passed here' } });
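$showModal also returns a promise that resolves with whatever value is passed to $modal.close, which makes it easy to get a result back from the modal. A minimal sketch (DetailModal is a hypothetical component name):

this.$showModal(DetailModal, { props: { message: 'Props are passed here' } })
  .then(result => {
    console.log('Modal closed with:', result);
  });

// Inside DetailModal, close the modal and return a value:
// this.$modal.close('some result');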

That’s it! We have now mastered manual routing in NativeScript-Vue by learning three simple methods. I hope this blog was helpful; let me know your thoughts in the comments!

Thanks for reading!

Reference:

https://nativescript-vue.org/en/docs/routing/manual-routing

 

Array methods in Javascript

Arrays are one of the most important and frequently used concepts in JS. If someone raises the question “What is an array?”, we usually say,

“An array is a homogeneous collection of elements”.

My own definition of the array is,

It is a data structure used for arranging elements or data as a group, where each element can be accessed through its index.

To develop a JS-based application, we should know this basic concept of arrays and the methods that can be used on them.

For example:

const sampleArray = ['HTML', 'JavaScript', 'ES6'];

where,

sampleArray[0] – 'HTML'

sampleArray[1] – 'JavaScript'

sampleArray[2] – 'ES6'

Now, let us see some of the most commonly used array methods, with simple examples.

  • length:

To find the length (number of elements) of an array. Strictly speaking, length is a property rather than a method.

Eg: console.log(sampleArray.length);

Output: 3
  • Adding an element to an array:
  1. push:

    To add an element at the end of an array.

    Eg: sampleArray.push('NodeJS');
    Output: ['HTML', 'JavaScript', 'ES6', 'NodeJS']
  2. unshift:

    To add an element at the front of an array.

    Eg: sampleArray.unshift('NodeJS');
    Output: ['NodeJS', 'HTML', 'JavaScript', 'ES6']
  • Removing element(s) from an array:
  1. pop:

    To remove the last element from an array.

    Eg: sampleArray.pop();
    Output: ['HTML', 'JavaScript']
    

    Here ES6 is removed as it was at the end of the array.

  2. shift:

    To remove the first element in the array.

    Eg: sampleArray.shift();
    Output: ['JavaScript', 'ES6']
    

    Here HTML is removed as it was the first element in the array.

  3. Remove an item by index position – splice(pos,1)

    To remove an element by index position.

    Eg: sampleArray.splice(2, 1);
    Output: ['HTML', 'JavaScript']
    

    Here ES6 is removed as it was at index 2; the second parameter specifies the number of elements to be removed.

  4. Remove multiple items – splice(pos,n)

    To remove more than one element based on the index position.
    Here n specifies the number of elements to be removed, starting from index pos.

    Eg: const removedItems = sampleArray.splice(1, 2);
    Output: sampleArray = ['HTML']
    removedItems = ['JavaScript', 'ES6']
    

     Here both JavaScript (index 1) and ES6 (index 2) are removed and returned in removedItems.

  • Copying an array:
  1. slice:
    To make a (shallow) copy of an array with all its elements.

    Eg: const copyOfArray = sampleArray.slice();
    Output: copyOfArray = ['HTML', 'JavaScript', 'ES6']
  • Find the existence of an element:
  1. includes():
    To find whether an element is present in an array or not.

    Eg: var isPresent = sampleArray.includes('ES6');   // returns true
    var isPresent = sampleArray.includes('MongoDB');   // returns false
  • Merging of arrays:
  1. concat():
    To combine/merge two or more arrays to form a new array.

    Eg:  var array1 = ['a', 'b', 'c'];
    var array2 = ['d', 'e', 'f'];
    
    console.log(array1.concat(array2));
    Output:  [ 'a', 'b', 'c', 'd', 'e', 'f' ];
  • Operations/Functions that return a new array:
  1. filter:
    This returns a new array with the elements that pass the conditions given in the function.

    Eg:
    const filteredArray = sampleArray.filter(element => element.length < 5);
    Output: ['HTML', 'ES6']. Here the length of each of these elements is less than 5.
  2. map:
    Here, the given function is executed for each element of the array and the result will be pushed to a new array.

    Eg: const sampleArray = [4, 2, 34, 50];
    const mappedArray = sampleArray.map(element => element * 2);
    Output: [8, 4, 68, 100]
  • Looping through an array:
    1. for loop.
    2. forEach
    3. for of
    4. for in
    5. Map
    6. Filter

These are some of the ways we can loop through an array; a quick sketch of each follows.
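Here is a small, self-contained sketch of the looping constructs listed above (map and filter were covered earlier, so they are omitted here):

const sampleArray = ['HTML', 'JavaScript', 'ES6'];

// Classic for loop over indexes
for (let i = 0; i < sampleArray.length; i++) {
  console.log(sampleArray[i]);
}

// forEach runs a callback for every element
sampleArray.forEach(element => console.log(element));

// for...of iterates over values
for (const value of sampleArray) {
  console.log(value);
}

// for...in iterates over indexes (keys)
for (const index in sampleArray) {
  console.log(sampleArray[index]);
}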

And yes, we have come to an end. I hope this blog on frequently used array methods will be useful when developing a JS-based application. For more methods, refer to the link below.

Reference:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array

Gitlab Runner

Hi, folks,

This blog contains the steps to implement continuous integration and continuous deployment using GitLab Runner.

In the following example, we have two separate servers: one where the GitLab runner is installed and one where the code will be deployed.

Install GitLab Runner on a Linux Server:

To install GitLab Runner, use the following command:

sudo apt-get install gitlab-runner

Once GitLab Runner is installed, we should register a runner for the project.

  1. To register the runner, run the command:
     sudo gitlab-runner register
  2. Once you hit enter, it asks for the GitLab instance URL.
  3. Next, it will ask for the runner's registration token. This token can be seen in the repository.
Path for the token ----> Settings > CI/CD > Runners > Token

There are two types of runners in GitLab. They are

  • Specific Runners
  • Shared Runners

Specific Runners:

These runners are dedicated to the jobs of a single project, which is useful when the project has specific requirements.

Shared Runners:

These runners serve jobs of different projects which have similar requirements.

4. Next, it will ask for a description for the runner.

5. Next, it will ask for tags associated with the runner (these tags can be changed later in the UI).

6. Finally, it will ask for the runner executor (here I use the shell executor).

Now the Gitlab runner is registered successfully in the server.

Next, a .gitlab-ci.yml file needs to be created; this is the file that tells the GitLab runner what to execute.

This .gitlab-ci.yml file should be created in the project root. The pipeline defined in this file is triggered whenever code is pushed to the repository.

Stages in .gitlab-ci.yml File:

Stages are a series of steps to reach the final destination. GitLab allows you to define any number of stages with any names, and runs the stages one by one. If any one of the stages fails, the following stages are not run.

Stages

  • build
  • test
  • staging
  • production

Example script for .gitlab-ci.yml

stages:
  - deploy

deploy_to_prod:
  stage: deploy
  script:
    - echo "Deploying to production"
    - ssh <user>@<server-where-the-code-needs-to-be-deployed> "<commands to deploy the code on the server>"
  only:
    - master

To deploy the code to the server using the GitLab runner, we should add the SSH key of the server to the repository.

Steps to add the server's SSH key to the repository's deploy keys:

  1. Use the cat ~/.ssh/id_rsa.pub command to get the SSH key of the server.
  2. Open your repository in GitLab and go to Settings > Repository > Deploy Keys (Expand). Add the SSH key of the server to the deploy keys.

Now, when new code is pushed to the master branch, it is deployed to the server successfully.

 

Squash Apps Named Among Top App Developers in India by Ranking and Review Website Clutch.co!

Squash Apps is dedicated to working with our clients to deliver the best, most robust web apps on the market. Our company started in 2015, headquartered in Coimbatore, India, where we work primarily on custom software development for small businesses. We specialize in mobile app development, specifically using a single team to create Ionic hybrid apps that run on multiple platforms, and scaling servers to meet the needs of our clients. The Ionic platform has been praised by many app developers and, as The World Beast magazine states, it is useful to our company because “The Ionic framework is powered by a huge community of developers, and you can get an extensive range of resources on the web” to accomplish any task and build the vision that our clients ask for.

Clutch has just recently granted their 2019 leader awards, and we are fortunate to receive a position on their list of Top App Developers in India. It is no secret that there are many other companies in India who specialize in similar services as Squash Apps. It can sometimes be difficult to differentiate our services from others; however, we are honored and thankful for the immense research and data collection that Clutch has performed in order to highlight our company as one of the best mobile app development companies in India.

At its core, Clutch is a ratings and review website that helps facilitate business-to-business interaction in order to best pair businesses with the agencies or consultants they need to tackle their next big challenge. The level of detail in Clutch's reviews is exceptional compared to other sites, and statistics and tools such as their leader matrix can reveal important information about the potential experience you may have hiring a particular company.

On top of this, they also host two sister sites, The Manifest and Visual Objects, which periodically perform rankings of service providers based on a range of factors including past clients and experience, verified client reviews, and market presence. The Manifest is designed to guide users to tackle business projects and keep up to date with tech news for the purposes of successfully growing a business and overcoming challenges. Visual Objects is specifically intended to help prospective clients visualize the possibilities of creative app development projects by displaying a digital portfolio of prior work.

The team at Squash Apps is excited to announce that our company has been ranked among the Top 50 Web Development Companies in India by The Manifest, and among the Top Mobile App Developers in India by Visual Objects. We have even been recognized for a few of our notable projects on The Manifest (As displayed below) and are excited to continue to receive praise and recognition from our satisfied clients.

Clutch Review

Our company is immensely thankful for all of the support and work done by Clutch to help connect us to our potential clients and build a powerful network of satisfied customers. We are lucky to have such a dedicated company as Clutch to improve business relations around the world, and we look forward to growing our profile and watching the reviews pile up!

Reactive Forms (Angular)

We all know that an Angular application is a reactive system. We have two categories of form structures in Angular:

  1. Reactive Forms
  2. Template Driven Forms.

In template-driven forms, template directives are used to build the form model. In reactive forms, we build our own representation of the form in the component, which gives us more explicit control. Thus, we could always opt for reactive forms.

Let’s begin by knowing the terminologies of reactive forms.

  • formControl
  • formGroup
  • formArray
  • controlValueAccessor

In this blog, we are going to look through formControls and formGroup.

Before beginning, do not forget to import the module (ReactiveFormsModule) and the classes (FormControl, FormGroup, FormBuilder, Validators), as sketched below.
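A minimal sketch of those imports (module and file names here are the usual Angular CLI defaults, not taken from the original form example):

// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [BrowserModule, ReactiveFormsModule],
  // declarations, bootstrap, ...
})
export class AppModule {}

// In the component file
import { FormControl, FormGroup, FormBuilder, Validators } from '@angular/forms';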

Imagine you have a form with the following fields:

  1. Name
  2. Age
  3. School
  4. Class
  5. Rank

So here, each field will have a different data type (such as alphabets, numbers, etc.). Thus, in reactive forms, validating the fields becomes very easy using the built-in properties. It is also very easy for us to group the individual controls and make the necessary changes.

Let’s start with formControl:
Each field is tracked through a form control for validation. Thus, we will be setting up a formControl for each field. Default values for the fields can also be set initially.

Illustration:
In .html file, add:

<input type="text" [formControl]="name">

In .ts file, add:

name = new FormControl('any default values');

And then we go for formGroup:
These individual form controls can be grouped together, as a formGroup, for easier handling of validation.

Illustration:
Let the HTML file remain the same.
In .ts file, add:

formGroupVariable = new FormGroup({
  name: new FormControl('Smith Katana'),
  age: new FormControl(12),
});
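To see how this group plugs into a template, here is a minimal component sketch (the selector, field set and submit handler are illustrative, not from the original post):

import { Component } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';

@Component({
  selector: 'app-student-form',
  template: `
    <form [formGroup]="formGroupVariable" (ngSubmit)="onSubmit()">
      <input type="text" formControlName="name">
      <input type="number" formControlName="age">
      <button type="submit" [disabled]="formGroupVariable.invalid">Save</button>
    </form>
  `,
})
export class StudentFormComponent {
  formGroupVariable = new FormGroup({
    name: new FormControl('Smith Katana', Validators.required),
    age: new FormControl(12, Validators.required),
  });

  onSubmit() {
    // The grouped value arrives as a single object, e.g. { name: '...', age: 12 }
    console.log(this.formGroupVariable.value);
  }
}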

Now, using formGroupVariable, several validations and customizations can be done using the built-in methods. The following are some of them:

Illustration:
In .ts file, add :

name = new FormControl('any default values', {validators: Validators.required, updateOn: 'blur'});

Here, in the second parameter, we could use the Validators object for validations. Refer to the following link for the Validators API reference:
https://angular.io/api/forms/Validators

Also, the updateOn key tells the control to validate only on blur, avoiding validation on every keystroke.

We could also add our own custom validator method by passing the method instead of the Validators object, as follows:

Illustration:

In .ts file, add :

name = new FormControl('any default values', validatorMethod);

function validatorMethod(control: FormControl) {
  // custom validations: return null when valid, or an error object when invalid
  return control.value ? null : { required: true };
}

Note: this method will be called for every keystroke (unless updateOn is set, as shown above).

Sometimes you want to update or validate just one control without re-running validation for the whole form group. The onlySelf option does this: when set to true, the change only updates the value and validity of that control, not its ancestors.

formGroupVariable.patchValue({ name: 'new value' }, { onlySelf: true });

….Simple right?

To enable or disable the whole form, use:

formGroupVariable.enable() or formGroupVariable.disable()

And to disable a particular form control initially, we could pass the disabled flag along with the value:

Illustration:

name: new FormControl({ value: null, disabled: true })


Asynchronous Generators

Most of the asynchronous challenges that we face in our code can be solved by either async/await or promises. Here comes another powerful alternative, which we call generators!

There is a mysterious secret behind these generators, which I will reveal in the final section of this blog.

Let’s define Generators in simple:

1. What are they?

They are functions declared with a * notation (function*).

2. Why do we need them?

They are just like normal JavaScript functions, but with the ability to pause and resume execution at particular points, thereby giving us better control over execution.

Illustration:

asyncGenerator(function* () {
  let promises = yield apiService.get("asyncDb/promises.json");
  console.log(promises);
  let callbacks = yield apiService.get("asyncDb/callbacks.json");
  console.log(callbacks);
  let asyncAwaits = yield apiService.get("asyncDb/asyncAwaits.json");
  console.log(asyncAwaits);
});

function asyncGenerator(generator) {
  let a = generator();
  function handle(yieldObject) {
    if (!yieldObject.done) {
      // Wait for the yielded promise, then resume the generator with its resolved data
      yieldObject.value.then(function (data) {
        return handle(a.next(data));
      });
    }
  }
  return handle(a.next());
}

Illustration Explanation:

Here the function* passed in is the generator, and asyncGenerator is the driver that runs it. The keyword “yield” pauses the execution inside the generator function. Calling the next() method each time resumes it, evaluating the yield expressions one by one.

And most importantly, each call to next() returns an object containing the keys value and done. The object looks like:

{
  value: Promise { … },  // the yielded value ends up here
  done: false            // states whether the generator execution has reached the end or not
}
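A tiny standalone generator makes this shape easy to see (illustrative values):

function* counter() {
  yield 1;
  yield 2;
}

const it = counter();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: undefined, done: true }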

The above code looks almost the same as an async/await implementation, but the difference is the most important thing: an async function returns a promise, whereas a generator's next() returns an object {value: X, done: Boolean}. Meanwhile, yield lets the generator hand back a stream of values.

The error handling parts are much easier in generators when they are used along with RxJS filtering operators. Instead of using next(), we could also use throw(error) to raise an error at the paused yield. We could even force the generator to stop by using return() in place of next().

When you are tired of calling the next() method again and again, you can schedule it with setImmediate(), letting the promises unwrap one by one and waiting for each to resolve before the next iteration.

Example:

setImmediate(() => next())

Now the mysterious secret will be revealed. Do we really need to learn generators and keep them handy?

The answer is NOOOO!


Because we have a more powerful tool, async/await, which holds up best due to its powerful features. Async/await is built using generators; without generators, async/await would not work at all. But that doesn't mean we need to use and learn generators in our own code. Generators are more complex; let framework and library developers use them to create more powerful modules like async/await. The side benefit of generators, we could say, is easier testing. The famous bundler “Brunch”, the coroutine library “co” and “Redux-Saga” make use of these generators specifically.



 

Lint Driven Development

It is no wonder that “Lint Driven Development – LDD” is one of the most essential development approaches that every developer should follow! It is an integral part of every developer's toolbox! If you agree with my statement, then you're already on the track of super cool development. If not, then I can get you a step further towards that through this blog.

What is linting?

You write code. Probably a lot of code. And you make mistakes. Probably a lot of mistakes. Sometimes your mistake is a real bug that can be fixed. Sometimes it's just an unclear coding style, which may seem trivial at first but becomes patently important as the codebase grows and as more people stick their hands in it. You can't always focus on fixing these inevitable mistakes yourself, can you? There is probably no practical way to make it impossible to write sloppy and unclear code, but it is fascinating to consider how tooling has evolved to make it harder. One fine tool is the “linter”, which understands you and your code.

A linter is part of the style guide. It's a small piece of software that automatically checks whether your code has any stylistic or potential errors and meets the predefined code convention rules. You don't have to manually go through the code base to check the style and errors. Linting helps us in two ways. First, it looks for code that will potentially break. Second, it provides a style guide for the development team to follow.

Why Linting is Important?

Great movies involve compelling stories and colorful screenplays that are easy to watch and understand. From that aspect, the job of a developer is similar to that of a movie director, since the code has to be easy to read and comprehend. I know it's pretty hard to focus on code quality when you're under pressure to meet the next deadline, but if you're thinking long term, you definitely need to write code that's readable and maintainable.

In addition to readability and maintainability, there is an important third reason: lower technical debt, which speeds up long-term software development since the code can be reused without future developers spending time fixing old bugs and restyling the code.

How do linters prevent these problems?

Linting tools throw warnings about certain types of code that can lead to common problems, some of them quite major. As a JavaScript engineer, I'll list down some major problems in JS code and how linters could prevent them!

Problem #1
Most of us have had this one common question in our mind: our code works in development, so why not in production? We all know that most modern web stacks support minification, but neither the minifiers nor the browser tell us when we are missing semicolons. A missing semicolon can break minified JavaScript, and so the code breaks in production. Linters notify you about the missing semicolons, braces, etc.
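To make the point concrete, here is a classic case, sketched with two hypothetical source files that get concatenated and minified into one bundle:

// file-a.js
var total = 1          // <- missing semicolon

// file-b.js
(function () {
  console.log('init');
})();

// After concatenation the engine reads:  var total = 1(function () { ... })();
// i.e. it tries to call the number 1 as a function -> TypeError at runtime.
// A linter (for example ESLint's "semi" rule) flags the missing semicolon before this ever ships.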

Problem #2:

Have you ever created a variable called “id” or “name” or “value”? Yeah, so has every other developer in history, and the people who work on the same codebase as you are no exception. If someone forgets to declare their variables with var (or let/const), those variables become globals and can overwrite each other unexpectedly. The scoping of your variables is gone, and so is your code.
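A short sketch of the problem (the variable and function names are made up for illustration):

function saveUser() {
  id = 'user-42';      // no var/let/const: this creates (or overwrites) a global "id"
}

function saveOrder() {
  id = 'order-7';      // a completely different function silently clobbers the same global
}

saveUser();
saveOrder();
console.log(id);       // 'order-7' – the user id is gone
// Rules like ESLint's "no-undef" catch these undeclared assignments before they bite.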

You shouldn't always need to rely on your reviewers to find mistakes in your code; in some cases it takes too much of your reviewer's time to find sloppy mistakes, or they might even miss them completely. Linting your JS can prevent potential XSS security holes, readability problems, and much more!

Linting Tools

There are plenty of linting tools capable of finding stylistic errors and sloppy mistakes in every technology that we use. Among the many JS linters, ESLint seems to be the best available, as it is completely pluggable: every single rule is a plugin and we can add more rules at runtime. It gives concise output and includes the rule name by default, so it's always easy to know which rules are causing the errors. We also have linters for most languages, like TS, CSS, HTML, Python, etc.

There are different means by which one can use linting tools to improve the code quality. To mention a few,

Linting manually, in the browser

Copy and Paste your code into JSHint or CSSLint or HTMLHint or any other static code analysis tool to check for some interesting lint errors. This might be the quickest way to get started with linting.

Linting from your code editor

Many code editors have support for configurable linting, such as VSCode; there is a guide to configure linters in VSCode, and it has extensions for all of the linters above. Grab one and have it check your code automatically, which is really great since you're already in your editor and can clean your code up while you write it.

Linting from the command line

If you're using a command line tool like npm, you're ahead of the curve. Just install a linter of your choice and run a few commands to get notified of the errors in your code.

Linting as part of the build process

This is the best way to ensure that these types of problems never see the light of day. For an Angular project, I recommend wiring ng lint into the ng serve and ng build scripts so that it runs automatically before each build and causes a build failure on any linter rule violation.

Even if you set up a linter, it might warn you about invalid code, but it cannot stop you from pushing that code to the repo. This is where the Git pre-commit hook comes into the picture. It will stop the developer from committing code that doesn't pass the checks configured in the hooks under .git/hooks. For more information about Git hooks, refer to the Git documentation.

It is NoteWorthy!

I recommend turning off most of your linter's rules when you start using it, so that you have a minimal set of errors to start with. Consider the ones it throws, learn what they mean and then fix them. Later, re-enable another rule and repeat until you understand all the rules. I understand it takes some time to understand everything and to get it all fixed up. Let me borrow some words from Bruce Lee.

“I fear not the man who has practiced 10,000 kicks once,

         but I fear the man who has practiced one kick 10,000 times.”

– Bruce Lee

Software development is pretty much always slower than anyone wants it to be. It takes time. Sometimes you just have to be patient enough to turn out the code you need to write. No matter how long you’ve been in this field, you should keep practicing the craft of coding.

Conclusion

Linting is a vital part of our workflow and will definitely help us improve our skills. If you were not using linting, I hope this blog convinced you a bit to configure lint for your code. I think linting is one of the traits that makes a good developer! What do you think makes a great developer? I'd love to hear from you!

Rendering in Web Browser

The main purpose of the browser is to present web resources by requesting them from the server and rendering them. These days, apart from the development phase itself, web developers face challenges where their UX designs are not the same across browsers.

Basically, the HTML specifications are designed by the W3C (World Wide Web Consortium). But only a part of the rules is followed by all the browsers; the other parts are their own extensions and developments. This is why we face cross-browser compatibility issues.

What is similar across all browsers is the chrome around the page: the address bar, the reload button, bookmarks, the home page, and so on.

High Level Structure of a Browser

  • User Interface: The view we see when we hit a URL/link
  • Browser Engine: Marshals actions between the UI and the rendering engine, and is responsible for getting content onto the screen
  • Rendering Engine: Parses HTML and CSS and displays the parsed content on the screen
  • Networking: Handles the network calls that happen in the web browser
  • UI Backend: Draws basic widgets such as windows and combo boxes
  • Javascript Engine: Parses and executes JavaScript code
  • Data Storage: Refers to local storage, cookies, WebSQL, IndexedDB, etc.

The rendering engine and the JavaScript engine vary between browsers, and because of this we face compatibility issues. For example, Chrome uses the Blink rendering engine with the V8 JavaScript engine, Firefox uses Gecko with SpiderMonkey, and Safari uses WebKit with JavaScriptCore.

Let’s go into more detail about the rendering engine!


Rendering Outline

First, HTML is parsed into a DOM tree by the HTML parser. In the meantime, the CSS is parsed into a CSSOM tree by the CSS parser. Combining the DOM and CSSOM, the render tree is constructed.
Then comes the layout formation. In the layout process, every render-tree node gets its exact coordinates.
Finally, in the paint step, each node is drawn at its position in the view/display.

For a better experience, the rendering engine will try to display the contents on the screen as soon as possible. Many times we see a half-loaded view in the browser. This is because only a part of the HTML and CSS has been parsed; the rendering engine tries to show the view as quickly as possible, and the rest is loaded and rendered later.

Terms:

Scripts:
The rendering engine reads the HTML document line by line. If it finds a script tag <script>, parsing of the document halts until the script has finished. Because of this, loading the document takes longer. To avoid this, use async or defer on the script tag so that the script is fetched in parallel and does not block the HTML parser.

Repaint/Restyle:
When you change the style of an element in a way that does not affect its position, like changing the background color, a repaint happens.

Reflow:
A reflow happens when a change affects the element's position or requires restructuring the DOM, like changing the text of an element, animations of relatively positioned elements, resizing, scrolling, etc. The cost of rendering is high when reflow happens frequently.
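A small sketch in JavaScript of the difference (the element id is hypothetical):

const box = document.getElementById('box');

// Repaint only: geometry is unchanged
box.style.backgroundColor = 'rebeccapurple';

// Reflow (followed by repaint): geometry changes, so layout must be recalculated
box.style.width = '300px';

// Reading layout properties right after a style change forces the browser
// to perform that reflow synchronously, which is why doing this in a loop is costly
console.log(box.offsetWidth);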

Better Optimization:

  • Use a proper document type, encoding, and valid HTML and CSS elements.
  • Apply the rules in the correct cascade order.
  • Apply animations only to absolutely/fixed positioned elements.
  • Work with ‘offline elements’ (build DOM changes off-document, then attach them).
  • Place the scripts at the end of the document, or use async/defer on the scripts.

References:

  • https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Rendering_engines
  • https://medium.com/@monica1109/how-does-web-browsers-work-c95ad628a509
  • https://hackernoon.com/how-do-web-browsers-work-40cefd2cb1e1
  • http://frontendbabel.info/articles/webpage-rendering-101/
  • https://developers.google.com/web/updates/2019/02/rendering-on-the-web
  • https://labs.ft.com/2012/08/basic-offline-html5-web-app/

Angular Lazy Loading

In this blog, we will be learning about lazy loading in Angular. The main concept of lazy loading is: don't load something you don't need. Lazy loading is a useful technique for reducing the size of the initial bundle when the app loads, which improves the app's load time and thus the user experience. It also makes it easy to have features loaded only when the user navigates to their routes for the first time.

Main steps to set up lazy loading

  1. Create a feature module.
  2. Use loadChildren in the main routing module.
  3. Create a routing module for feature module.

Create a feature module

In order to use lazy loading, we need submodules in our application, often called feature modules. Assuming that you have an Angular CLI project, let's create a feature module using the following command.

Create lazy loading module
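That command is most likely of this form (the module folder name is inferred from the file paths used later in this post, so treat it as an assumption):

ng generate module lazy-loading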

Note: Don’t load the feature module in your main module.

Now let’s create two components inside the lazyLoading module using the following command.

Create lazy loading component
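Those commands were most likely of this form (the component names match the OneComponent and TwoComponent imports shown later, so treat them as assumptions):

ng generate component lazy-loading/one
ng generate component lazy-loading/two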

Use loadChildren in the main routing module

Now let’s load the feature module in our main routing module (app-routing.module.ts). We need to use the loadChildren property to lazy load the feature module.

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  {
    path:'lazyLoading',
    loadChildren:
      './lazy-loading/lazy-loading.module#LazyLoadingModule'
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})

export class AppRoutingModule { }

 

In this string syntax, loadChildren takes the path to the module file, then # followed by the module's class name. (In newer Angular versions, loadChildren takes a callback that uses a dynamic import() instead of this string.)

Create a routing module for a feature module

Now let’s configure routes in the routing module for the components under the feature module.

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

import { OneComponent } from './one/one.component';
import { TwoComponent } from './two/two.component';

const routes: Routes = [
{ path: '', component: OneComponent },
{ path: 'two', component: TwoComponent },
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})

export class LazyLoadingRoutingModule { }

 

In the feature routing module include the routes with RouterModule’s forChild() method instead of the forRoot() method.

Lazy loading has now been configured successfully; LazyLoadingModule will load only when the user navigates to “/lazyLoading”.

Preloading Strategy

When we run the application, only the main module is loaded; all the other modules are lazy loaded. In this case, a lazily loaded module loads only when the user navigates to the feature module, so the user has to wait for it to load at that moment. To overcome this, we can use a preloading strategy.

To use a preloading strategy, we have to add preloadingStrategy in our app-routing.module.ts as shown below.

import { NgModule } from '@angular/core';
import { Routes, RouterModule, PreloadAllModules } from '@angular/router';
const routes: Routes = [
 { path: '', redirectTo: 'home', pathMatch: 'full' },
 { 
  path: 'lazyLoading', 
  loadChildren:
    './lazy-loading/lazy-loading.module#LazyLoadingModule' 
 }
];
@NgModule({
  imports: [RouterModule.forRoot(routes, {
    preloadingStrategy: PreloadAllModules,
  })],
  exports: [RouterModule]
})
export class AppRoutingModule { }

The two built-in preloading strategies are:

  • NoPreloading: Default strategy which provides no preloading.
  • PreloadAllModules: Preloads all the lazy loaded modules.

With PreloadAllModules, the initial load still contains only the eagerly loaded modules; all the other lazy-loaded modules are then fetched asynchronously right after the initial load of the application is done.

Another strategy is a custom one: preload only the modules which are required, and perhaps some others with a delay. One way to do this is to add a data object to the route config as shown below; a custom preloading strategy class then reads these flags (a sketch of such a class follows the route config).

const routes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { 
    path: 'lazyLoading', 
    loadChildren: './lazy-loading/lazy-loading.module#LazyLoadingModule',
    data: { preload: true, delay: false }, 
  }
];
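The data flags above only take effect if a custom strategy reads them. Here is a minimal sketch of such a strategy (the class name, delay duration and flag handling are illustrative assumptions, not part of the original post):

import { Injectable } from '@angular/core';
import { PreloadingStrategy, Route } from '@angular/router';
import { Observable, of, timer } from 'rxjs';
import { mergeMap } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class CustomPreloadingStrategy implements PreloadingStrategy {
  preload(route: Route, load: () => Observable<any>): Observable<any> {
    if (route.data && route.data['preload']) {
      // Preload immediately, or after a delay when the route asks for one
      const delayMs = route.data['delay'] ? 5000 : 0;
      return timer(delayMs).pipe(mergeMap(() => load()));
    }
    return of(null); // do not preload this route
  }
}

It would then be registered in place of PreloadAllModules: RouterModule.forRoot(routes, { preloadingStrategy: CustomPreloadingStrategy }).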

 

Reference: