Wednesday, March 25, 2020

Repository pattern, done right

The repository pattern has been discussed a lot lately, especially its usefulness since the introduction of OR/M libraries. This post (the third in a series about the data layer) aims to explain why it is still a great choice.
Let’s start with Martin Fowler’s definition (from Patterns of Enterprise Application Architecture):
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.
The repository pattern is an abstraction. Its purpose is to reduce complexity and make the rest of the code persistence ignorant. As a bonus, it allows you to write unit tests instead of integration tests. The problem is that many developers fail to understand the pattern's purpose and create repositories which leak persistence-specific information up to the caller (typically by exposing IQueryable<T>).
By doing so they get no benefit over using the OR/M directly.

Common misconceptions

Here are some common misconceptions regarding the purpose of the pattern.

Repositories are about being able to switch the DAL implementation

Using repositories is not about being able to switch persistence technology (i.e. changing the database, or using a web service etc. instead).
The repository pattern does allow you to do that, but it's not the main purpose.
A more realistic scenario is that UserRepository.GetUsersGroupOnSomeComplexQuery() uses ADO.NET directly while UserRepository.Create() uses Entity Framework. By doing so you probably save a lot of time compared to struggling with LINQ to SQL to get your complex query running.
The repository pattern allows you to choose the technology that fits the current use case.

Unit testing

When people talk about the repository pattern and unit tests, they are not saying that the pattern allows you to unit test the data access layer.
What they mean is that it allows you to unit test the business layer. That's possible because you can fake the repository (which is a lot easier than faking NHibernate/EF interfaces) and thereby write clean and readable tests for your business logic.
As you've separated business from data, you can also write integration tests for your data layer to make sure that the layer works with your current database schema.
If you use ORM/LINQ in your business logic, you can never be sure why a test fails. It can be because your LINQ query is incorrect, because your business logic is not correct, or because the ORM mapping is incorrect.
If you have mixed them and fake the ORM interfaces, you can't be sure either, because LINQ to Objects does not work in the same way as LINQ to SQL.
The repository pattern reduces the complexity of your tests and allows you to specialize each test for the current layer.

How to create a repository

Building a correct repository implementation is very easy. In fact, you only have to follow a single rule:
Do not add anything to the repository class until the very moment that you need it.
A lot of coders are lazy and try to make a generic repository, using a base class with a lot of methods that they might need. YAGNI. You write the repository class once and keep it as long as the application lives (which can be years). Why mess it up by being lazy? Keep it clean, without any base class inheritance. It will make it much easier to read and maintain.
The above statement is a guideline and not a law. A base class can very well be motivated. My point is that you should think before you add it, so that you add it for the right reasons.

Mixing DAL/Business

Here is a simple example of why it’s hard to spot bugs if you mix LINQ and business logic.


var brokenTrucks = _session.Query<Truck>().Where(x => x.State == 1);

foreach (var truck in brokenTrucks)
{
    if (truck.CalculateResponseTime().TotalDays > 30)
        SendEmailToManager(truck);
}
What does that give us? Broken trucks?
Well, no. The statement was copied from another place in the code and the developer had forgotten to update the query. Any unit tests would likely just check that some trucks are returned and that they are emailed to the manager.
So we basically have two problems here:
a) Most developers will likely just check the name of the variable and not the query itself.
b) Any unit tests are written against the business logic and not the query.
Both problems would have been avoided with repositories, since with repositories we have unit tests for the business layer and integration tests for the data layer.
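To make problem (b) concrete, here is a sketch (not from the original post) of how the business rule becomes testable once the query is hidden behind the ITruckRepository contract defined further down. TruckService and FakeTruckRepository are made-up names for this example:

```csharp
// Business logic that depends on the abstraction only.
public class TruckService
{
    private readonly ITruckRepository _repository;

    public TruckService(ITruckRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<Truck> GetTrucksToEscalate()
    {
        // The "which trucks are broken" question now lives in the repository;
        // only the business rule (30 days) is tested here.
        return _repository.FindBrokenTrucks()
                          .Where(x => x.CalculateResponseTime().TotalDays > 30);
    }
}

// Hand-written fake; no database, no LINQ provider quirks.
public class FakeTruckRepository : ITruckRepository
{
    public List<Truck> Trucks = new List<Truck>();

    public IEnumerable<Truck> FindBrokenTrucks() { return Trucks; }
    public IEnumerable<Truck> Find(string text) { return Trucks; }
    public Truck GetById(string id) { throw new NotImplementedException(); }
    public void Create(Truck entity) { Trucks.Add(entity); }
    public void Update(Truck entity) { }
    public void Delete(Truck entity) { Trucks.Remove(entity); }
}
```

A unit test can now populate FakeTruckRepository.Trucks and assert on GetTrucksToEscalate() without touching the database.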

Implementations

Here are some different implementations with descriptions.

Base classes

These classes can be reused for all different implementations.

UnitOfWork

The unit of work represents a transaction when used in data layers. Typically, the unit of work will roll back the transaction if SaveChanges() has not been invoked before it is disposed.


public interface IUnitOfWork : IDisposable
{
    void SaveChanges();
}
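Typical usage looks like this (a sketch; CreateUnitOfWork and the repository variable are assumptions, not part of the article):

```csharp
// The caller commits explicitly; leaving the using block without
// calling SaveChanges() rolls the transaction back.
using (var uow = CreateUnitOfWork())
{
    var truck = truckRepository.GetById("ABC123");
    truck.State = 3;
    truckRepository.Update(truck);

    uow.SaveChanges(); // commit
}
```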

Paging

We also need to be able to return paged results.


public class PagedResult<TEntity>
{
    private readonly IEnumerable<TEntity> _items;
    private readonly int _totalCount;

    public PagedResult(IEnumerable<TEntity> items, int totalCount)
    {
        _items = items;
        _totalCount = totalCount;
    }

    public IEnumerable<TEntity> Items { get { return _items; } }
    public int TotalCount { get { return _totalCount; } }
}
With the help of that, we can create methods like:


public class UserRepository
{
    public PagedResult<User> Find(int pageNumber, int pageSize)
    {
        // ...
    }
}

Sorting

Finally, we usually want to sort and page the items, right?


var constraints = new QueryConstraints<User>()
    .SortBy("FirstName")
    .Page(1, 20);

var page = repository.Find("Jon", constraints);
Do note that I used the property name, but I could also have written constraints.SortBy(x => x.FirstName). However, that is a bit harder to use in web applications, where we typically get the sort property as a string.
The class is a bit big, but you can find it on GitHub.
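The essence of the class is small enough to sketch here (a simplified version under my own naming; the real class on GitHub has more validation and sort-direction support):

```csharp
public class QueryConstraints<TEntity> where TEntity : class
{
    private string _sortPropertyName;
    private int _pageNumber = 1;
    private int _pageSize = int.MaxValue;

    public QueryConstraints<TEntity> SortBy(string propertyName)
    {
        _sortPropertyName = propertyName;
        return this;
    }

    public QueryConstraints<TEntity> Page(int pageNumber, int pageSize)
    {
        _pageNumber = pageNumber;
        _pageSize = pageSize;
        return this;
    }

    public IQueryable<TEntity> ApplyTo(IQueryable<TEntity> query)
    {
        if (_sortPropertyName != null)
        {
            // Build x => x.Property dynamically, since we only have the name.
            var parameter = Expression.Parameter(typeof(TEntity), "x");
            var property = Expression.Property(parameter, _sortPropertyName);
            var lambda = Expression.Lambda(property, parameter);
            query = (IQueryable<TEntity>)query.Provider.CreateQuery(
                Expression.Call(typeof(Queryable), "OrderBy",
                    new[] { typeof(TEntity), property.Type },
                    query.Expression, Expression.Quote(lambda)));
        }

        return query.Skip((_pageNumber - 1) * _pageSize).Take(_pageSize);
    }
}
```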
In our repository we can apply the constraints like this (if the data source supports LINQ):


public class UserRepository
{
    public PagedResult<User> Find(string text, QueryConstraints<User> constraints)
    {
        var query = _dbContext.Users.Where(x => x.FirstName.StartsWith(text) || x.LastName.StartsWith(text));
        var count = query.Count();

        // easy
        var items = constraints.ApplyTo(query).ToList();

        return new PagedResult<User>(items, count);
    }
}
The extension methods are also available on GitHub.

Basic contract

I usually start with a small base interface for my repositories, since it makes the other contracts less verbose. Do note that some of my repository contracts do not implement this interface (for instance, when some of the methods do not apply).


public interface IRepository<TEntity, in TKey> where TEntity : class
{
    TEntity GetById(TKey id);
    void Create(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
}
I then specialize it per domain model:


public interface ITruckRepository : IRepository<Truck, string>
{
    IEnumerable<Truck> FindBrokenTrucks();
    IEnumerable<Truck> Find(string text);
}
That specialization is important. It keeps the contract simple. Only create the methods that you know you need.

Entity framework

Do note that the repository pattern is only useful if you have POCOs which are mapped using Code First. Otherwise the entities will break the abstraction, and the repository pattern isn't very useful.
What I mean is that if you use the model designer you'll always get a perfect representation of the database (but as classes). The problem is that those classes might not be a perfect representation of your domain model, so you have to cut corners in the domain model to be able to use the generated DB classes.
If you, on the other hand, use Code First, you can shape the models into a perfect representation of your domain model (if the DB is reasonably similar to it). You don't have to worry about your changes being overwritten, as they would have been with the model designer.
You can follow this article if you want to get a foundation generated for you.

Base class



public class EntityFrameworkRepository<TEntity, TKey> where TEntity : class
{
    private readonly DbContext _dbContext;

    public EntityFrameworkRepository(DbContext dbContext)
    {
        if (dbContext == null) throw new ArgumentNullException("dbContext");
        _dbContext = dbContext;
    }

    protected DbContext DbContext
    {
        get { return _dbContext; }
    }

    public void Create(TEntity entity)
    {
        if (entity == null) throw new ArgumentNullException("entity");
        DbContext.Set<TEntity>().Add(entity);
    }

    public TEntity GetById(TKey id)
    {
        return _dbContext.Set<TEntity>().Find(id);
    }

    public void Delete(TEntity entity)
    {
        if (entity == null) throw new ArgumentNullException("entity");
        DbContext.Set<TEntity>().Attach(entity);
        DbContext.Set<TEntity>().Remove(entity);
    }

    public void Update(TEntity entity)
    {
        if (entity == null) throw new ArgumentNullException("entity");
        DbContext.Set<TEntity>().Attach(entity);
        DbContext.Entry(entity).State = EntityState.Modified;
    }
}
Then I create the concrete implementation:


public class TruckRepository : EntityFrameworkRepository<Truck, string>, ITruckRepository
{
    private readonly TruckerDbContext _dbContext;

    public TruckRepository(TruckerDbContext dbContext)
        : base(dbContext)
    {
        _dbContext = dbContext;
    }

    public IEnumerable<Truck> FindBrokenTrucks()
    {
        // Compare having this statement in a business class
        // to invoking the repository method. Which says more?
        return _dbContext.Trucks.Where(x => x.State == 3).ToList();
    }

    public IEnumerable<Truck> Find(string text)
    {
        return _dbContext.Trucks.Where(x => x.ModelName.StartsWith(text)).ToList();
    }
}

Unit of work

The unit of work implementation is simple for Entity Framework:


public class EntityFrameworkUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EntityFrameworkUnitOfWork(DbContext context)
    {
        _context = context;
    }

    public void Dispose()
    {
    }

    public void SaveChanges()
    {
        _context.SaveChanges();
    }
}

NHibernate

I usually use Fluent NHibernate to map my entities; IMHO it has a much nicer syntax than the built-in code mappings. You can use the NHibernate mapping generator to get a foundation created for you, but you most often have to clean up the generated files a bit.

Base class



public class NHibernateRepository<TEntity, TKey> where TEntity : class
{
    private readonly ISession _session;

    public NHibernateRepository(ISession session)
    {
        _session = session;
    }

    protected ISession Session { get { return _session; } }

    public TEntity GetById(TKey id)
    {
        return _session.Get<TEntity>(id);
    }

    public void Create(TEntity entity)
    {
        _session.SaveOrUpdate(entity);
    }

    public void Update(TEntity entity)
    {
        _session.SaveOrUpdate(entity);
    }

    public void Delete(TEntity entity)
    {
        _session.Delete(entity);
    }
}

Implementation



public class TruckRepository : NHibernateRepository<Truck, string>, ITruckRepository
{
    public TruckRepository(ISession session)
        : base(session)
    {
    }

    public IEnumerable<Truck> FindBrokenTrucks()
    {
        return Session.Query<Truck>().Where(x => x.State == 3).ToList();
    }

    public IEnumerable<Truck> Find(string text)
    {
        return Session.Query<Truck>().Where(x => x.ModelName.StartsWith(text)).ToList();
    }
}

Unit of work



public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISession _session;
    private ITransaction _transaction;

    public NHibernateUnitOfWork(ISession session)
    {
        _session = session;
        _transaction = _session.BeginTransaction();
    }

    public void Dispose()
    {
        if (_transaction != null)
            _transaction.Rollback();
    }

    public void SaveChanges()
    {
        if (_transaction == null)
            throw new InvalidOperationException("The unit of work has already been saved.");

        _transaction.Commit();
        _transaction = null;
    }
}

Typical mistakes

Here are some mistakes which are easy to stumble upon when using OR/Ms.

Do not expose LINQ methods

Let’s get it straight: there are no complete LINQ to SQL implementations. They are all either missing features or implement things like eager/lazy loading in their own way. That means that they are all leaky abstractions. So if you expose LINQ outside your repository, you get a leaky abstraction. In that case, you might as well stop using the repository pattern and use the OR/M directly.


public interface IRepository<TEntity>
{
    IQueryable<TEntity> Query();

    // [...]
}
Those repositories really do not serve any purpose. They are just lipstick on a pig.

Learn about lazy loading

Lazy loading can be great, but it's a curse for everyone who is not aware of it. If you don't know what it is, Google it.
If you are not careful, you can end up executing 101 queries instead of 1 when you traverse a list of 100 items.
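A classic illustration (the Owner navigation property is made up for this example):

```csharp
// One query to fetch the trucks...
var trucks = truckRepository.FindBrokenTrucks();

foreach (var truck in trucks)
{
    // ...and, with lazy loading enabled, one additional query per truck
    // the first time the Owner navigation property is touched.
    Console.WriteLine(truck.Owner.Name);
}
```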

Invoke ToList() before returning

The query is not executed against the database until you invoke ToList(), FirstOrDefault() etc. So if you want to keep all data-related exceptions inside the repositories, you have to invoke one of those methods before returning.

Get is not the same as search

There are two types of reads made against the database.
The first is to search for items, i.e. the user wants to identify the items that he/she would like to work with.
The second is when the user has identified the item and wants to work with it.
Those queries are different. In the first case, the user only wants the most relevant information. In the second case, the user likely wants all information. Hence, in the former you should probably return a UserListItem or similar, while the latter returns a full User. That also helps you avoid the lazy loading problems.
I usually let search methods start with FindXxxx() while those fetching the entire item start with GetXxxx(). Also, don't be afraid of creating specialized POCOs for the searches. Two searches don't necessarily have to return the same kind of entity information.

Summary

Don’t be lazy, and don’t try to make too generic repositories. That gives you no upside compared to using the OR/M directly. If you want to use the repository pattern, make sure that you do it properly.

Angular 2 VS Angular 4: Features, Performance


In the world of web application development, Angular is considered one of the best open-source JavaScript frameworks.
Google's Angular team announced that Angular 4 would be released on 23 March. They actually skipped version 3. As you all know, the long-awaited release of Angular 2 was a complete makeover of its previous version.
That was just awesome for experienced developers. However, for new developers who are still in the learning phase, it could be a little confusing and tricky. Anyway, this article will offer a comparison of Angular 2 and Angular 4.

Angular 2

Angular 2 was released at the end of 2015. Let's take a look at why this version was released and what it added to web development.
This version of Angular was more focused on the development of mobile apps, as it allowed developers to create cross-platform applications. The reasoning was that it is easier to handle the desktop side of things once the challenges connected to mobile apps (functionality, load time, etc.) have been addressed.
Numerous modules were removed from Angular's core, which led to better performance. These made their way into Angular's ever-growing ecosystem of modules, which means that you can pick and choose the components you want.
Angular 2.0 was aimed at ES6 and "evergreen" modern browsers (those that automatically update to the most recent version). Building for these browsers means various hacks and workarounds that made Angular harder to develop could be eliminated, allowing developers to concentrate on the code linked to their business domain.

Angular 2 Features and Performance

AtScript is a superset of ES6, and it was used to help develop Angular 2. It is processed by the Traceur compiler (combined with ES6) to generate ES5 code and utilizes TypeScript-style syntax to create runtime type assertions rather than compile-time checks. AtScript is not mandatory, however--you still have the ability to use plain JavaScript/ES5 code rather than AtScript to compose Angular apps.

Improved Dependency Injection (DI):

Dependency injection (a design pattern in which an object is passed its dependencies, as opposed to creating them itself) was among the aspects that originally differentiated Angular from its competitors. Dependency injection is very helpful when it comes to modular development and component isolation, yet its implementation was plagued with issues since Angular 1.x. Angular 2 handled these problems, in addition to adding missing features such as child injectors and lifetime/scope control.

Annotation:

AtScript supplies tools for associating metadata with functions. This eases the construction of object instances by supplying the essential information to the DI library (which checks for associated metadata when calling a function or creating the instance of a class). It is also simple to override parameter information by supplying an Inject annotation.

Child Injectors:

A child injector inherits all of the services of its parent, together with the capability to override them at the child level. Depending on demand, several kinds of objects can be called out and automatically overridden in a variety of scopes.

Instance Scope:

The enhanced DI library includes instance scope control, which is even more powerful when used with child injectors and your own scope identifiers.

Dynamic Loading:

This is a feature which was not available in the previous version(s) of Angular. It was added in Angular 2, allowing programmers to add new directives or controls on the fly.

Templating:

In Angular 2, the template compilation procedure is asynchronous. Since the code relies on ES6 modules, the module loader will load dependencies simply by referencing them in the component.

Directives:

Three kinds of Directives were made available for Angular 2: 
  • Component Directives: They made components reusable by encapsulating logic in HTML, CSS, and JavaScript.
  • Decorator Directives: They can be used to decorate elements (for example, Hiding/Showing elements by ng-hide/ng-show or adding a tooltip).
  • Template Directives: These can turn HTML into a reusable template. The instantiating of this template and its insertion into the DOM could be completely controlled by the directive writer. Examples include ng-repeat and ng-if.

Child Router:

The child router converts each part of the program into a smaller application by supplying it with its own router. It helps to encapsulate entire feature sets of a program.

Screen Activator:

With Angular 2, developers were able to take finer control over the navigation life cycle through a set of can* callbacks.
  • canActivate: It will allow or prevent navigation to the new controller.
  • activate: It will respond to successful navigation to the new controller.
  • canDeactivate: It will prevent or allow navigation away from the old controller.
  • deactivate: It will respond to successful navigation away from the old controller.

Design:

All this logic was built using a pipeline architecture that made it incredibly simple to add one's own actions into the pipeline or remove default ones. Moreover, its asynchronous nature allowed developers to make server requests to authenticate a user or load information for a control while still in the pipeline.

Logging:

Angular 2.0 included a logging service known as diary.js--a very helpful feature which measures where time is spent in your program (thus permitting you to identify bottlenecks in your code).

Scope:

$scope was removed from Angular 2.

Angular 4 Features and Performance

Compared to Angular 2, there are lots of new items on this list--not just new features, but also some tweaks that improved old capabilities. So let's move on and look at the list.

Smaller and Faster:

With Angular 4, applications consume less space and run faster than with previous versions. And the team is focused on continually making additional improvements.

View Engine:

The team made adjustments under the hood to what AOT-generated code looks like. These modifications decrease the size of the generated code for components by approximately 60 percent. The more complicated the templates are, the greater the savings.

Animation Package:

They've pulled animations out of the Angular core and put them in their own package. This means that if you don't use animations, this extra code won't end up in your production bundles.
This change also enables you to more easily find the docs and take advantage of auto-completion. You can add animations to the main NgModule by importing the BrowserAnimationsModule from @angular/platform-browser/animations.

Improved *ngIf and *ngFor:

The template binding syntax now supports a few helpful changes. You can now use an if/else style syntax, and assign local variables, for example to unroll an observable.
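For example, the if/else template syntax looks like this (a sketch; the user$ observable and the #loading template are made-up names):

```html
<div *ngIf="user$ | async as user; else loading">
  Welcome, {{ user.name }}!
</div>
<ng-template #loading>Loading…</ng-template>
```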

Angular Universal:

This release now contains the results of the internal and external work by the Universal team over the last few months. The vast majority of the Universal code is now located in @angular/platform-server.
To learn more about using Angular Universal, have a look at the new renderModuleFactory method in @angular/platform-server, or Rob Wormald's demo repository. More documentation and code samples are on the way.

TypeScript 2.1 and 2.2 Compatibility:

The team has upgraded Angular to a more recent version of TypeScript. This improves the speed of ngc, and you'll get far better type checking throughout your program.

Source Maps for Templates:

Now, whenever there's an error caused by something in one of your templates, the generated source maps provide meaningful context in terms of the original template.

Conclusion:

As I said earlier, Angular can be a bit confusing for those who are still in the learning phase. But experienced developers who know version 2 will find version 4 very easy to use and very helpful.

Best-practices learnt from delivering a quality Angular4 application


As some of you might recall, Angular 2 went through unusually long Alpha, Beta and RC stages. It seemed as if the entire framework had been re-written since the first Alpha release. So at the time of the 2.0 final release, the entire Angular scene was very chaotic. There were hardly any good tutorials or resources which worked with the 2.0.0 final release.
I also did not have an AngularJS (1.x) background. I had just delivered a huge SPA using a Backbone-Marionette-Rivets.js stack. In hindsight, it was a good thing not to have baggage from AngularJS!
All in all, we took a leap of faith! I placed my faith in the Angular developers, the community and my own ability to adapt to a new framework, and jumped into the valley without a parachute!
My core team members and I had about 3–4 weeks to create the first spike, and we all ran fence to fence; we fell often but learnt a lot. My overall experience of the JavaScript development scene also came in handy when we scaled from 4 people to a team of about 18 in a couple of months.
Looking back, after 6–8 months of development and delivery of the application, I can see that some good practices saved the day. This post summarizes them for everyone’s benefit. Without further ado, here are some of the best practices that you might want to adopt to deliver a quality Angular application…

Best practices for Absolute beginners

Become comfortable with ES2015

Most of the initial learning curve for Angular is just about getting comfortable with ES2015. So ensure all the developers on the team have READ and actually TRIED the ES2015 and ES2016 flavors of JavaScript. There is A LOT to learn here, but it will make them ready to face the external world of tutorials, which often make use of this syntax. E.g. syntax like () => { } or [...a, b] should not trip you up, and usage of import, class, let, const, etc. should be second nature to your developers.
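For instance, a quick self-check (plain ES2015, no Angular involved):

```javascript
// Arrow functions, const/let, destructuring, spread and classes
const double = (n) => n * 2;

const [first, ...rest] = [1, 2, 3, 4];

class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return `Hello, ${this.name}!`;
  }
}

let combined = [...rest.map(double), first];
console.log(new Greeter('Angular').greet()); // "Hello, Angular!"
console.log(combined);                        // [4, 6, 8, 1]
```

If any line above needs deciphering, spend a day with ES2015 before touching Angular.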

Embrace Typescript and Visual Studio Code (VSCode).

Most of the Angular code snippets you will find online are in TypeScript, which is a superset of ES2015. I highly recommend that you use it so that those snippets make sense. As a companion, use Visual Studio Code as your IDE, TSLint as your linter and the TSLint plugin in VSCode to get the best static code analysis experience. Plus, by using TS you don't need Babel. Bonus: also add the Angular Language Service plugin to VSCode. This gives a far better Angular experience, especially in Angular templates.

Master npm ecosystem.

Alongside ES2015, Angular is also all about being comfortable with the Node and npm ecosystem. Any serious example will make use of package.json (npm) and Node to build and run. Virtually EVERY Angular component out there will give you instructions on how to install it using npm. So make the absence of npm and VSCode a deal-breaker for your teams. Either your developers are using these tools or they are not on your team! Seriously!

Angular Application Development Best Practices

Eat, Sleep, Breath Components!

Angular is all about components. Design the components first, before starting to code. By design, I mean:
  • Draw outlines on the visual designs to clearly demarcate which screen area will be owned by which component. Make the components small enough that they can be reused in many places, but large enough that making them any smaller makes no sense. It takes a bit of time to get used to creating this logical grouping, but you can do it naturally within 2–3 sprints. I insist on my entire team doing this for EVERY story in EVERY sprint.
  • Once you know your components, document their “inputs” and “outputs”. I have a small design checklist which I make every developer fill in as a short design documentation for each story. Please see the Design Narrative section at the bottom of this post if you want to adapt it in your project.
Design each component with reusability in mind. Try to create commonly used UI elements as separate components and reuse them across screens.

Use seed projects to hit the ground running!

Make use of some kickass starter seed projects, because they have done a great job of incorporating many features for you. I wholeheartedly recommend the AngularClass webpack starter or BlackSonic’s ES6 starter. This will get you running in no time with a great foundation for a large project.

Or… Use Angular-CLI

The other option is to use Angular CLI. Angular CLI is a really good option for those who find the entire ES2015+TypeScript+Angular stack a bit overwhelming. It abstracts away quite a few things from you, including the entire webpack configuration. But that abstraction is also a downside, since you cannot tinker with the abstracted parts. Thankfully, there is an eject option in Angular CLI to expose most of the abstracted configuration.

RIP SystemJS, Hello Webpack!

From the beginning, stop using SystemJS and switch over to webpack. Webpack is a far more powerful and versatile tool. Optimize webpack bundles effectively to ensure that you are not bundling the same modules in multiple chunks. There are bundle analyzers for webpack which do a brilliant job of telling you about this. Bonus: Webpack Learning Slides and step-by-step code.

Use AoT FTW!

Usage of AoT (ahead-of-time) compilation is a great step towards performance gains at runtime. It also reduces your bundle by about 30 kB (gzipped), which is a LOT of improvement. Angular 4.0+ brings about a further 30% improvement in app bundle size due to how it generates the AoT code.

Understand Observables from RxJS.

A LOT of Angular work is about understanding what an Observable is. It's very important to understand how Observables work and to become comfortable with the RxJS library, which helps you become an Observable ninja.
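If the concept is new, this toy implementation (my own sketch, not the real RxJS API) captures the core push model: a producer function is run for every subscriber and pushes values to it.

```typescript
// A minimal Observable: subscribing runs the producer, which pushes
// values to the observer callback. Real RxJS adds errors, completion,
// unsubscription, operators, schedulers, etc.
type Observer<T> = (value: T) => void;

class TinyObservable<T> {
  constructor(private producer: (observer: Observer<T>) => void) {}

  subscribe(observer: Observer<T>): void {
    this.producer(observer);
  }

  // A map "operator" returning a new, lazy observable.
  map<R>(fn: (value: T) => R): TinyObservable<R> {
    return new TinyObservable<R>((observer) =>
      this.subscribe((value) => observer(fn(value))));
  }
}

const numbers = new TinyObservable<number>((observer) => {
  [1, 2, 3].forEach((n) => observer(n));
});

const results: number[] = [];
numbers.map((n) => n * 10).subscribe((n) => results.push(n));
// results is now [10, 20, 30]
```

Nothing happens until subscribe() is called; that laziness is the key mental shift compared to promises.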

Lazy Loading the non-first-page routes

Lazy-load every route that isn't needed on the first page hit. Webpack 2's import() function will come in handy here. Webpack's ng-router-loader also helps create a bundle for each lazy-loaded module automatically.
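With the Angular router, that boils down to configuration like the following (a sketch; the admin path and AdminModule are made up, and the string form of loadChildren is the Angular 4-era syntax):

```typescript
const routes = [
  // Eagerly loaded landing route omitted for brevity.
  // The admin bundle is only fetched when the user first visits /admin.
  { path: 'admin', loadChildren: './admin/admin.module#AdminModule' },
];
```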

Using Widgets and Libraries

Consider using a standard widget library like PrimeNG or ValorSoft. Try to avoid jQuery, as it cannot be tree-shaken.

Debugging

Make use of ng.probe() in the Chrome console to do effective debugging, or use the Augury Chrome extension, which wraps ng.probe for you.

Stay safe in Dark Corners of NgZone

NgZone and Zone.js are some of the dark corners of Angular. When things don't work even after you have tried 100 different things over many days, you might be up against these two adversaries. I call them dark corners because no error will ever tell you that it can be fixed by fiddling with NgZone. You must correlate your error with a potential NgZone conflict yourself 😧. As such, NgZone is quite easy to use, but I did not even know it existed for almost 5 months in my project.

Other wins via code structuring

  1. Shared modules — Try to make use of a shared module. Create a module that imports and exports all the commonly used modules and providers, and import it in the other modules.
  2. Global vs local CSS — When writing CSS, try to visualize whether this kind of element might be used in a lot of places, and if so write the style at application level instead of component level; it avoids re-writing it again in a new component. You then only override at component level where a small change is required.
  3. Theme file with SCSS — When using any CSS preprocessor, always define a file which holds only the variables related to color, font-size, etc. of the application; it will help when you need to change the theme.
  4. TypeScript inheritance for your help — Try to utilize inheritance in TypeScript. If you have some view-related functionality that is required in many screens, you can create a base component with the common functionality and then all other components can simply extend it.
  5. Use services — Strive for complete segregation between the view implementation and service calls. In the UI component, keep only code related to the view, and delegate to a service for backend calls and any functional logic.
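Point 4 can look like this (a sketch with made-up names; in a real app the base class would also hook into Angular lifecycle methods):

```typescript
// Common view behaviour shared via a TypeScript base class.
class ListScreenBase<T> {
  items: T[] = [];
  loading = false;

  startLoading(): void { this.loading = true; }

  finishLoading(items: T[]): void {
    this.items = items;
    this.loading = false;
  }
}

// A concrete screen only adds what is specific to it.
class TruckListScreen extends ListScreenBase<string> {
  search(text: string): void {
    this.startLoading();
    // The backend call would be delegated to a service in a real app.
    this.finishLoading(['Volvo ' + text, 'Scania ' + text]);
  }
}

const screen = new TruckListScreen();
screen.search('FH16');
// screen.items is now ['Volvo FH16', 'Scania FH16'] and loading is false
```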

General Web Developer Productivity Best Practices

  1. Improve the development workflow — Oftentimes, developers do not think about finding shortcuts to improve their own productivity. This includes circumventing login locally, caching backend calls which are not required for the current work, and making small temporary code changes to skip through 10 screens/clicks to arrive at the screen where they need to make their change. One should spend half an hour tweaking these things to reach the current screen instantly and save time on every minor code update.
  2. Mandatory human code review — We have mandated that all developers deliver code as pull requests in Git. An architect reviews and approves the code before merging. This ensures that each line of code has been reviewed before merging, and it helps catch bugs and quality/performance problems which cannot be caught using linters.

Design narrative:

One of the best things that we implemented was a process of design elaboration by each developer for their story.
I insist that my developers follow this narrative:

To deliver story xyz,

  1. Which component(s) would be required to be “created” or “modified”?
  2. How will the component be accessed? From a topNav? Routing? From some user interaction on other components?
  3. Which folder would those components belong to?
  4. What kind of @Input()s and @Output()s would be provided to / emitted from these components?
  5. What are the backend call requirements and their sequence?
  6. Any form validations?
  7. Any special technical things/libs required? E.g. moment, datepicker, modal, etc.
  8. Any “productivity improvement”? How will you reach your page fast — hardcoding? Proxying?
I have found that the developers were far more confident and their code quality improved once we established the above design documentation process. Hopefully, you will see a similar change in your team's quality as well.
This post is adapted from my original post on my blog.
That’s it folks for now. Thank you for patiently reading till the end! If you liked the story — please follow me on twitter and hit ❤️ symbol below the story.
