What Does RandomAccess Mean?

RandomAccess is a marker interface, like the Serializable and Cloneable interfaces. Marker interfaces define no methods; instead, they identify a class as having a particular capability. In the case of Serializable, the interface specifies that if the class is serialized using the serialization I/O classes, a NotSerializableException will not be thrown (unless the object contains some other object that cannot be serialized). Cloneable similarly indicates that calling the Object.clone( ) method on a Cloneable class will not throw a CloneNotSupportedException.

The RandomAccess interface identifies that a particular java.util.List implementation has fast random access. (A more accurate name for the interface would have been “FastRandomAccess.”) This interface tries to define an imprecise concept: what exactly is fast? The documentation provides a simple guide: if repeated access using the List.get( ) method is faster than repeated access using the Iterator.next( ) method, then the List has fast random access. The two types of access are shown in the following code examples.

Repeated access using List.get( ):

Object o;
for (int i=0, n=list.size(  ); i < n; i++)
  o = list.get(i);

Repeated access using Iterator.next( ):

Object o;
for (Iterator itr=list.iterator(  ); itr.hasNext(  ); )
  o = itr.next(  );

A third loop combines the previous two loops to avoid the repeated Iterator.hasNext( ) test on each loop iteration:

Object o;
Iterator itr=list.iterator(  );
for (int i=0, n=list.size(  ); i < n; i++)
  o = itr.next(  );

This last loop relies on the fact that a List normally cannot change size while it is being iterated without some kind of exception (typically a ConcurrentModificationException) being thrown. So, because the loop size remains the same, you can simply count the accessed elements rather than test at each iteration whether the end of the list has been reached. This last loop is generally faster than the previous loop with the Iterator.hasNext( ) test. In the context of the RandomAccess interface, for a list to implement RandomAccess, the first loop using List.get( ) should be faster than both of the loops that use Iterator.next( ).

How Is RandomAccess Used?

So now that we know what RandomAccess means, how do we use it? There are two aspects to using the other marker interfaces, Serializable and Cloneable: defining classes that implement them, and using their capabilities via ObjectInput/ObjectOutput and Object.clone( ), respectively. RandomAccess is a little different. Of course, we still need to decide whether any particular class implements it, but the possible classes are severely restricted: RandomAccess should be implemented only in java.util.List classes. And most such classes are created outside your own projects: the SDK provides the most frequently used implementations, and subclasses of the SDK classes do not need to implement RandomAccess explicitly because they automatically inherit the capability where appropriate.

The second aspect, using the RandomAccess capability, is also different. Whether a class is Serializable or Cloneable is automatically detected when you use ObjectInput/ObjectOutput and Object.clone( ). But RandomAccess has no such automatic support. Instead, you need to explicitly check whether a class implements RandomAccess using the instanceof operator:

if (listObject instanceof RandomAccess)
  ...

You must then explicitly choose the appropriate access method, List.get( ) or Iterator.next( ). Clearly, if we tested for RandomAccess on every loop iteration, we would be making a lot of redundant calls and probably losing the benefit of RandomAccess as well. So the pattern to follow in using RandomAccess makes the test outside the loop. The canonical pattern looks like this:

Object o;
if (list instanceof RandomAccess)
{
  for (int i=0, n=list.size(  ); i < n; i++)
  {
    o = list.get(i);
    //do something with object o
  }
}
else
{
  Iterator itr = list.iterator(  );
  for (int i=0, n=list.size(  ); i < n; i++)
  {
    o = itr.next(  );
    //do something with object o
  }
}

Speedup from RandomAccess

I tested the four code loops shown in this section, using the 1.4 release, separately testing the -client (default) and -server options. To test the effect of the RandomAccess interface, I used the java.util.ArrayList and java.util.LinkedList classes. ArrayList implements RandomAccess, while LinkedList does not. ArrayList has an underlying implementation consisting of an array with constant access time for any element, so using the ArrayList iterator is equivalent to using the ArrayList.get( ) method but with some additional overhead. LinkedList has an underlying implementation consisting of linked node objects with access time proportional to the shortest distance of the element from either end of the list, whereas iterating sequentially through the list can shortcut the access time by traversing one node after another.

Times shown are the average of three runs, and all times have been normalized to the first table cell, i.e., the time taken by the ArrayList to iterate the list using the List.get( ) method in client mode.

Loop type (loop test) and access method               ArrayList      LinkedList     ArrayList      LinkedList
                                                      java -client   java -client   java -server   java -server

loop counter (i<n) and List.get( )                    100%           too long       77.5%          too long
iterator (Iterator.hasNext( )) and Iterator.next( )   141%           219%           109%           213%
iterator (i<n) and Iterator.next( )                   121%           205%           98%            193%
RandomAccess test with loop from row 1 or 3           100%           205%           77.5%          193%

The most important results are in the last two rows. The last row shows the times obtained by making full use of the RandomAccess interface, and the row before that shows the best general technique for iterating lists if RandomAccess is not available. The size of the lists I used for the test (and consequently the number of loop iterations required to access every element) was sufficiently large that the instanceof test had no measurable cost in comparison to the time taken to run the loop. Consequently, we can see that there was no cost (but also no benefit) in adding the instanceof RandomAccess test when iterating the LinkedList, whereas the ArrayList was iterated more than 20% faster when the instanceof test was included.

Forward and Backward Compatibility

Can you use RandomAccess and maintain backward compatibility with VM versions prior to 1.4? There are three aspects to using RandomAccess:

  • You may want to include code referencing RandomAccess without moving to 1.4.

  • Many projects need their code to be able to run in any VM, so the code needs to be backward-compatible to run in VMs using releases earlier than 1.4, where RandomAccess does not exist.

  • You will want to make your code forward-compatible so that it automatically takes advantage of RandomAccess when running in a 1.4+ JVM.

Making RandomAccess available to your development environment is the first issue, and if you are using an environment prior to 1.4, this can be as simple as adding the RandomAccess interface to your classpath. You can compile the RandomAccess interface with any version of the SDK, since its definition is trivial:

package java.util;
public interface RandomAccess {  }

We also need to handle RandomAccess in the runtime environment. For pre-1.4 environments, the test:

if (listObject instanceof RandomAccess)

generates a NoClassDefFoundError at runtime when the JVM tries to load the RandomAccess class (for the instanceof test to be evaluated, the class has to be loaded). However, we can guard the test so that it is executed only if RandomAccess is available. The simplest way to do this is to check whether RandomAccess exists, setting a boolean guard as the outcome of that test:

static boolean RandomAccessExists;
...

  //execute this as early as possible after the application starts
  try
  {
    Class.forName("java.util.RandomAccess");
    RandomAccessExists = true;
  }
  catch (ClassNotFoundException e)
  {
    RandomAccessExists = false;
  }

Finally, we need to change our instanceof tests to use the RandomAccessExists variable as a guard:

if (RandomAccessExists && (listObject instanceof RandomAccess) )

With the guarded instanceof test, the code automatically reverts to the Iterator loop if RandomAccess does not exist, and it should avoid throwing a NoClassDefFoundError in pre-1.4 JVMs. And, of course, the guarded instanceof test also automatically uses the faster loop branch when RandomAccess does exist and the list object implements it.
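
Putting the pieces together, the guard and the loop choice can be wrapped in one place so that calling code never repeats the test. The following is a minimal sketch only; the ListAccessUtil class and the ListProcessor callback are made-up names, not part of the SDK.

//Illustrative sketch: combines the RandomAccessExists guard with the
//canonical loop pattern. ListAccessUtil and ListProcessor are hypothetical.
interface ListProcessor {
  void process(Object element);
}

class ListAccessUtil {
  static boolean RandomAccessExists; //set at startup as shown above

  static void forEach(java.util.List list, ListProcessor processor) {
    if (RandomAccessExists && (list instanceof java.util.RandomAccess)) {
      //fast random access: index directly into the list
      for (int i = 0, n = list.size(); i < n; i++)
        processor.process(list.get(i));
    } else {
      //sequential access: traverse with the iterator
      java.util.Iterator itr = list.iterator();
      for (int i = 0, n = list.size(); i < n; i++)
        processor.process(itr.next());
    }
  }
}

A caller then simply invokes ListAccessUtil.forEach(myList, someProcessor) and gets the faster branch automatically whenever RandomAccess is available.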

Singleton Design Pattern

Singleton is one of the Gang of Four design patterns and is categorized as a creational design pattern. In this article, we take a deeper look at how the Singleton pattern is used. It is one of the simplest design patterns in terms of modelling, but, on the other hand, it is one of the most controversial in terms of how it is used.

The Singleton pattern is a design pattern that restricts a class from instantiating more than one object. It is really just a way of defining a class: the class is written so that only one instance is created during the entire execution of the program. It is used where a single instance of a class is required to control some action throughout the execution. A singleton class should not have multiple instances in any case and at any cost. Singleton classes are used for logging, driver objects, caching, thread pools, and database connections.


Implementation of Singleton class

An implementation of a singleton class should have the following properties:

  1. It should have only one instance : This is done by providing the instance of the class from within the class itself. Outer classes and subclasses should be prevented from creating the instance. In Java this is done by making the constructor private, so that no other class can access the constructor and hence cannot instantiate the class.
  2. The instance should be globally accessible : The instance of the singleton class should be globally accessible so that every class can use it. In Java this is done by making the access specifier of the instance public.
//A singleton class should have public visibility
//so that the complete application can use it
public class GFG {

  //static instance of the class, globally accessible
  public static GFG instance = new GFG();

  private GFG() {
    //private constructor so that the class
    //cannot be instantiated from outside
    //this class
  }
}


Initialization Types of Singleton

Singleton classes can be instantiated in two ways:

  1. Early initialization : In this method, the instance is created at class-loading time, whether or not it is ever used. The main advantage of this method is its simplicity; its drawback is that the instance is always created, even when it is never needed.
  2. Lazy initialization : In this method, the instance is created only when it is first required, which saves you from instantiating the class when you don't need it. Lazy initialization is what is generally used when writing a singleton class (a minimal sketch of both styles follows this list).
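
As a minimal sketch of the two styles (the class names EagerSingleton and LazySingleton are made up for this example, and the lazy version is not thread-safe as written; thread safety is discussed later under double-checked locking):

//Early (eager) initialization: the instance is created when the class loads
class EagerSingleton {
  private static final EagerSingleton INSTANCE = new EagerSingleton();
  private EagerSingleton() { }
  public static EagerSingleton getInstance() {
    return INSTANCE;
  }
}

//Lazy initialization: the instance is created on first use
//(not thread-safe as written; see the section on double-checked locking)
class LazySingleton {
  private static LazySingleton instance;
  private LazySingleton() { }
  public static LazySingleton getInstance() {
    if (instance == null) {
      instance = new LazySingleton();
    }
    return instance;
  }
}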

Examples of Singleton class

  1. java.lang.Runtime : Java provides a class Runtime in its lang package which is singleton in nature. Every Java application has a single instance of class Runtime that allows the application to interface with the environment in which the application is running. The current runtime can be obtained from the getRuntime() method.
    An application cannot instantiate this class so multiple objects can’t be created for this class. Hence Runtime is a singleton class.
  2. java.awt.Desktop : The Desktop class allows a Java application to launch associated applications registered on the native desktop to handle a URI or a file.
    Supported operations include:

    • launching the user-default browser to show a specified URI;
    • launching the user-default mail client with an optional mailto URI;
    • launching a registered application to open, edit or print a specified file.

    This class provides methods corresponding to these operations. The methods look for the associated application registered on the current platform and launch it to handle the URI or file. If there is no associated application, or the associated application fails to launch, an exception is thrown. Each operation is an action type represented by the Desktop.Action class.

    This class also cannot be instantiated from an application, so it too is a singleton class. (A short sketch of obtaining both of these JDK singletons follows this list.)
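
To see how such JDK singletons are obtained in practice, here is a small sketch using the public accessors Runtime.getRuntime( ) and Desktop.getDesktop( ); the URI is only a placeholder.

import java.awt.Desktop;
import java.net.URI;

public class JdkSingletonDemo {
  public static void main(String[] args) throws Exception {
    //Runtime: the single runtime instance of this application
    Runtime runtime = Runtime.getRuntime();
    System.out.println("Available processors: " + runtime.availableProcessors());

    //Desktop: obtainable only through getDesktop(), and only where supported
    if (Desktop.isDesktopSupported()) {
      Desktop desktop = Desktop.getDesktop();
      desktop.browse(new URI("https://example.com")); //placeholder URI
    }
  }
}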

Applications of Singleton classes

There are many applications of the singleton pattern, such as caching, database connections, drivers, and logging. Some of the major ones are:

  1. Hardware interface access: The use of a singleton depends on the requirements. Singleton classes are also used to prevent concurrent access to a class. In practice, a singleton can be used where usage of an external hardware resource must be limited, e.g. hardware printers, where the print spooler can be made a singleton to avoid multiple concurrent accesses and deadlocks.
  2. Logger : Singleton classes are used in log file generation. Log files are created by a logger class object. Suppose an application's logging utility has to produce one log file based on the messages received from users. If multiple client applications use this logging utility class, they might create multiple instances of it, which can potentially cause issues during concurrent access to the same log file. We can make the logger utility class a singleton and provide a global point of reference, so that every user goes through the same instance.
  3. Configuration File: This is another potential candidate for the Singleton pattern, because it has a performance benefit: it prevents multiple users from repeatedly accessing and reading the configuration or properties file. A single instance holds the configuration data, loaded into in-memory objects, which can then be accessed by multiple callers concurrently. The application reads the configuration file only the first time; thereafter, client code reads the data from the in-memory objects (a minimal sketch of this case follows the list).
  4. Cache: We can make the cache a singleton object, as it then provides a global point of reference, and for all future calls to the cache the client application uses the in-memory object.
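
As one concrete illustration of the configuration-file case above, here is a minimal sketch; the class name AppConfig and the file name config.properties are assumptions made for the example, and the properties are loaded eagerly to keep it short.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

//Minimal configuration singleton: the properties file is read once,
//and all later calls are served from the in-memory Properties object.
class AppConfig {
  private static final AppConfig INSTANCE = new AppConfig();
  private final Properties props = new Properties();

  private AppConfig() {
    try (FileInputStream in = new FileInputStream("config.properties")) {
      props.load(in);
    } catch (IOException e) {
      throw new RuntimeException("Could not load configuration", e);
    }
  }

  public static AppConfig getInstance() {
    return INSTANCE;
  }

  public String get(String key) {
    return props.getProperty(key);
  }
}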

Important points

  • Singleton classes can have only one instance, and that instance should be globally accessible.
  • java.lang.Runtime and java.awt.Desktop are two singleton classes provided by the Java class library.
  • The Singleton design pattern is a type of creational design pattern.
  • Outer classes should be prevented from creating instances of a singleton class.

How to prevent Singleton Pattern from Reflection, Serialization and Cloning?

In this article, we will see which concepts can break the singleton property of a class and how to avoid them. There are mainly three such concepts. Let's discuss them one by one.

  1. Reflection: Reflection can be used to destroy the singleton property of a singleton class, as shown in the following example:
    // Java code to explain effect of Reflection
    // on Singleton property
    import java.lang.reflect.Constructor;
    // Singleton class
    class Singleton
    {
        // public instance initialized when loading the class
        public static Singleton instance = new Singleton();
        
        private Singleton()
        {
            // private constructor
        }
    }
    public class GFG
    {
        public static void main(String[] args)
        {
            Singleton instance1 = Singleton.instance;
            Singleton instance2 = null;
            try
            {
                Constructor[] constructors =
                        Singleton.class.getDeclaredConstructors();
                for (Constructor constructor : constructors)
                {
                    // Below code will destroy the singleton pattern
                    constructor.setAccessible(true);
                    instance2 = (Singleton) constructor.newInstance();
                    break;
                }
            }
        
            catch (Exception e)
            {
                e.printStackTrace();
            }
            
        System.out.println("instance1.hashCode():- "
                                          + instance1.hashCode());
        System.out.println("instance2.hashCode():- "
                                          + instance2.hashCode());
        }
    }
    Output:-
    instance1.hashCode():- 366712642
    instance2.hashCode():- 1829164700
    

    After running this class, you will see that the hash codes are different, which means two objects of the same class have been created and the singleton pattern has been broken.

    Overcome reflection issue: To overcome the issue raised by reflection, enums are used, because Java ensures internally that an enum value is instantiated only once. Since Java enums are globally accessible, they can be used as singletons. The only drawback is that enums are not flexible, i.e. they do not allow lazy initialization.

    //Java program for Enum type singleton
    public enum GFG
    {
      INSTANCE;
    }

    Enums do not expose their constructors to the program: every enum has a default (implicit) constructor that we cannot invoke ourselves, and the JVM handles the creation and invocation of enum constructors internally. Because these constructors cannot be accessed by application code, not even through Reflection (reflective instantiation of enum types is explicitly disallowed), reflection cannot break the singleton property of an enum.
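
    As a small illustration (the enum name Logger and its method are invented for this sketch), an enum singleton can carry state and behaviour just like a class-based one, and enum constants are also deserialized back to the same instance by Java's serialization machinery:

    // Enum singleton sketch: the single INSTANCE can hold state and methods
    enum Logger
    {
        INSTANCE;

        private int messageCount = 0; // illustrative state

        public void log(String message)
        {
            messageCount++;
            System.out.println(messageCount + ": " + message);
        }
    }

    Client code then simply calls Logger.INSTANCE.log("started"), and every caller shares that one instance.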

  2. Serialization:- Serialization can also break the singleton property of singleton classes. Serialization is used to convert an object into a byte stream and save it in a file or send it over a network. Suppose you serialize an object of a singleton class: if you then deserialize that object, a new instance is created, which breaks the singleton pattern.
    // Java code to explain effect of
    // Serilization on singleton classes
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInput;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutput;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    class Singleton implements Serializable
    {
        // public instance initialized when loading the class
        public static Singleton instance = new Singleton();
        
        private Singleton()
        {
            // private constructor
        }
    }
    public class GFG
    {
        public static void main(String[] args)
        {
            try
            {
                Singleton instance1 = Singleton.instance;
                ObjectOutput out
                    = new ObjectOutputStream(new FileOutputStream("file.text"));
                out.writeObject(instance1);
                out.close();
        
                // deserialize from file to object
                ObjectInput in
                    = new ObjectInputStream(new FileInputStream("file.text"));
                
                Singleton instance2 = (Singleton) in.readObject();
                in.close();
        
                System.out.println("instance1 hashCode:- "
                                                     + instance1.hashCode());
                System.out.println("instance2 hashCode:- "
                                                     + instance2.hashCode());
            }
            
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
    Output:- 
    instance1 hashCode:- 1550089733
    instance2 hashCode:- 865113938
    

    As you can see, the hash codes of the two instances are different, so there are two objects of the singleton class and the class is no longer a singleton.

    Overcome serialization issue:- To overcome this issue, we have to implement the readResolve() method.

    // Java code to remove the effect of
    // Serialization on singleton classes
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInput;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutput;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    class Singleton implements Serializable
    {
        // public instance initialized when loading the class
        public static Singleton instance = new Singleton();
        
        private Singleton()
        {
            // private constructor
        }
        
        // implement readResolve method
        protected Object readResolve()
        {
            return instance;
        }
    }
    public class GFG
    {
        public static void main(String[] args)
        {
            try
            {
                Singleton instance1 = Singleton.instance;
                ObjectOutput out
                    = new ObjectOutputStream(new FileOutputStream("file.text"));
                out.writeObject(instance1);
                out.close();
            
                // deserialize from file to object
                ObjectInput in
                    = new ObjectInputStream(new FileInputStream("file.text"));
                Singleton instance2 = (Singleton) in.readObject();
                in.close();
            
                System.out.println("instance1 hashCode:- "
                                               + instance1.hashCode());
                System.out.println("instance2 hashCode:- "
                                               + instance2.hashCode());
            }
            
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
    Output:- 
    instance1 hashCode:- 1550089733
    instance2 hashCode:- 1550089733
    

    Both hash codes above are the same, so no other instance was created: readResolve() replaces the deserialized object with the existing instance.

  3. Cloning: Cloning is used to create duplicate objects. Using clone we can create a copy of an object. Suppose we create a clone of a singleton object: that produces a copy, so there are two instances of the singleton class and the class is no longer a singleton.
    // JAVA code to explain cloning
    // issue with singleton
    class SuperClass implements Cloneable
    {
      int i = 10;
      @Override
      protected Object clone() throws CloneNotSupportedException
      {
        return super.clone();
      }
    }
    // Singleton class
    class Singleton extends SuperClass
    {
      // public instance initialized when loading the class
      public static Singleton instance = new Singleton();
      private Singleton()
      {
        // private constructor
      }
    }
    public class GFG
    {
      public static void main(String[] args) throws CloneNotSupportedException
      {
        Singleton instance1 = Singleton.instance;
        Singleton instance2 = (Singleton) instance1.clone();
        System.out.println("instance1 hashCode:- "
                               + instance1.hashCode());
        System.out.println("instance2 hashCode:- "
                               + instance2.hashCode());
      }
    }
    Output :- 
    instance1 hashCode:- 366712642
    instance2 hashCode:- 1829164700
    

    Two different hash codes mean there are two different objects of the singleton class.

    Overcome Cloning issue:- To overcome this issue, override the clone() method and throw a CloneNotSupportedException from it. Now, whenever a user tries to clone the singleton object, an exception is thrown, and our class remains a singleton.

    // JAVA code to explain overcome
    // cloning issue with singleton
    class SuperClass implements Cloneable
    {
      int i = 10;
      @Override
      protected Object clone() throws CloneNotSupportedException
      {
        return super.clone();
      }
    }
    // Singleton class
    class Singleton extends SuperClass
    {
      // public instance initialized when loading the class
      public static Singleton instance = new Singleton();
      private Singleton()
      {
        // private constructor
      }
      @Override
      protected Object clone() throws CloneNotSupportedException
      {
        throw new CloneNotSupportedException();
      }
    }
    public class GFG
    {
      public static void main(String[] args) throws CloneNotSupportedException
      {
        Singleton instance1 = Singleton.instance;
        Singleton instance2 = (Singleton) instance1.clone();
        System.out.println("instance1 hashCode:- "
                             + instance1.hashCode());
        System.out.println("instance2 hashCode:- "
                             + instance2.hashCode());
      }
    }
    Output:-
    Exception in thread "main" java.lang.CloneNotSupportedException
    	at GFG.Singleton.clone(GFG.java:29)
    	at GFG.GFG.main(GFG.java:38)
    

    Now we have stopped the user from creating a clone of the singleton class. If you don't want to throw an exception, you can instead return the same instance from the clone() method.

    // JAVA code to explain overcome
    // cloning issue with singleton
    class SuperClass implements Cloneable
    {
      int i = 10;
      @Override
      protected Object clone() throws CloneNotSupportedException
      {
        return super.clone();
      }
    }
    // Singleton class
    class Singleton extends SuperClass
    {
      // public instance initialized when loading the class
      public static Singleton instance = new Singleton();
      private Singleton()
      {
        // private constructor
      }
      @Override
      protected Object clone() throws CloneNotSupportedException
      {
        return instance;
      }
    }
    public class GFG
    {
      public static void main(String[] args) throws CloneNotSupportedException
      {
        Singleton instance1 = Singleton.instance;
        Singleton instance2 = (Singleton) instance1.clone();
        System.out.println("instance1 hashCode:- "
                               + instance1.hashCode());
        System.out.println("instance2 hashCode:- "
                               + instance2.hashCode());
      }
    }
    Output:-
    instance1 hashCode:- 366712642
    instance2 hashCode:- 366712642
    

    Now, as the hash codes of both instances are the same, they represent a single instance.

10 Object Oriented Design Principles Java Programmer should know

Object oriented design principles are the core of OOP programming, but I have seen most Java programmers chasing design patterns like the Singleton pattern, Decorator pattern or Observer pattern, without putting enough attention into learning object oriented analysis and design. It's important to learn the basics of object oriented programming like Abstraction, Encapsulation, Polymorphism and Inheritance. But, at the same time, it's equally important to know object oriented design principles in order to create a clean and modular design. I have regularly seen Java programmers and developers of various experience levels who either have not heard about these OOP and SOLID design principles, or simply don't know what benefits a particular design principle offers or how to apply it in coding.

The bottom line is: always strive for a highly cohesive and loosely coupled solution, code or design. Looking at open source code from Apache and Sun is a good way to learn Java and OOP design principles: it shows how design principles should be used in coding and in Java programs. The Java Development Kit follows several design principles, like the Factory pattern in the BorderFactory class, the Singleton pattern in the Runtime class, and the Decorator pattern in various java.io classes. By the way, if you are really interested in Java coding practices, then read Effective Java by Joshua Bloch, a gem by the guy who wrote the Java Collections API.

If you are interested in learning object oriented principles and patterns, then you can look at another personal favorite of mine, Head First Object Oriented Analysis and Design. This is an excellent book and probably the best material available on object oriented analysis and design, but it is often overshadowed by its more popular cousin, Head First Design Patterns by Eric Freeman. The latter is more about how these principles come together to form patterns you can use directly to solve known problems. These books help a lot in writing better code, taking full advantage of various object oriented and SOLID design principles.

Though the best way of learning any design principle or pattern is through real world examples and understanding the consequences of violating it, the subject of this article is an introduction to object oriented design principles for Java programmers who are either not yet exposed to them or still in the learning phase. I personally think each of these OOP and SOLID design principles needs an article to explain it clearly, and I will definitely try to do that here, but for now just get yourself ready for a quick bike ride through design principle town 🙂

DRY (Don’t repeat yourself)

Our first object oriented design principle is DRY; as the name suggests, DRY (don't repeat yourself) means don't write duplicate code; instead, use abstraction to put common things in one place. If you have a block of code in more than two places, consider making it a separate method, and if you use a hard-coded value more than once, make it a public final constant. The benefit of this object oriented design principle is in maintenance. It's important not to abuse it: duplication is about functionality, not merely about code that happens to look alike. That means that if you used common code to validate an OrderID and an SSN, it doesn't follow that they are the same thing or that they will remain the same in the future. By using common code for two different functionalities you couple them tightly forever, and when the OrderID format changes, your SSN validation code will break. So beware of such coupling and just don't combine things that use similar code but are not otherwise related (see the sketch below).
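
A minimal, hypothetical sketch of both sides of the trade-off (all names are made up): a genuinely repeated value is extracted into one constant, while the two similar-looking validations stay separate because they are unrelated rules.

//DRY for real duplication: a hard-coded value used in several places
//becomes one named constant.
class PricingService {
    private static final double TAX_RATE = 0.19; //made-up rate

    double gross(double net)      { return net * (1 + TAX_RATE); }
    double taxPortion(double net) { return net * TAX_RATE; }
}

//...but OrderID and SSN validation stay separate, even though the code
//looks similar today, because they are unrelated business rules that
//may change independently.
class Validators {
    static boolean isValidOrderId(String id) { return id != null && id.matches("\\d{10}"); }
    static boolean isValidSsn(String ssn)    { return ssn != null && ssn.matches("\\d{3}-\\d{2}-\\d{4}"); }
}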

Encapsulate What Changes

Only one thing is constant in software and that is "change", so encapsulate the code you expect or suspect to change in the future. The benefit of this OOP design principle is that properly encapsulated code is easy to test and maintain. If you are coding in Java, then follow the principle of making variables and methods private by default and increasing access step by step, e.g. from private to protected, not straight to public. Several design patterns in Java use encapsulation; the Factory design pattern is one example: it encapsulates the object-creation code and provides the flexibility to introduce new products later with no impact on existing code (a small sketch follows).
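
A small, hypothetical sketch of that last point (all names are invented for the example): the object-creation code lives behind one factory method, so a new parser format can be introduced later without touching the callers.

//Factory sketch: callers never call 'new' on a concrete parser,
//so the set of supported formats can change in one place.
interface Parser {
    void parse(String input);
}

class JsonParser implements Parser {
    public void parse(String input) { /* parse JSON... */ }
}

class XmlParser implements Parser {
    public void parse(String input) { /* parse XML... */ }
}

class ParserFactory {
    static Parser forFormat(String format) {
        if ("json".equalsIgnoreCase(format)) return new JsonParser();
        if ("xml".equalsIgnoreCase(format))  return new XmlParser();
        throw new IllegalArgumentException("Unsupported format: " + format);
    }
}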

Open Closed Design Principle

Classes, methods and functions should be open for extension (new functionality) and closed for modification. This is another beautiful SOLID design principle, which prevents someone from changing already tried and tested code. Ideally, if you are only adding new functionality, then only the new code needs to be tested, and that is the goal of the Open Closed design principle. By the way, the Open Closed principle is the "O" in the SOLID acronym; a minimal sketch follows.
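
A minimal sketch under invented names: adding a new discount rule means adding a class that implements the interface; the code that applies discounts is never edited.

//Open for extension, closed for modification: new rules are new classes.
interface DiscountRule {
    double apply(double price);
}

class SeasonalDiscount implements DiscountRule {
    public double apply(double price) { return price * 0.90; } //10% off
}

class LoyaltyDiscount implements DiscountRule {
    public double apply(double price) { return Math.max(0, price - 5.0); } //flat 5 off
}

class Checkout {
    //this method never changes when new DiscountRule implementations appear
    static double priceAfterDiscounts(double price, java.util.List<DiscountRule> rules) {
        for (DiscountRule rule : rules) {
            price = rule.apply(price);
        }
        return price;
    }
}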

Single Responsibility Principle (SRP)

The Single Responsibility Principle is another SOLID design principle, and represents the "S" in the SOLID acronym. As per SRP, there should not be more than one reason for a class to change; in other words, a class should always handle a single piece of functionality. If you put more than one functionality in one class in Java, it introduces coupling between the two, and even if you change only one of them, there is a chance you break the other coupled functionality, which requires another round of testing to avoid surprises in the production environment.

Dependency Injection or Inversion principle

Don't ask for your dependencies; they will be provided to you by the framework. This has been very well implemented in the Spring framework. The beauty of this design principle is that any class which is injected by a DI framework is easy to test with mock objects and easier to maintain, because the object-creation code is centralized in the framework and client code is not littered with it. There are multiple ways to implement dependency injection, such as bytecode instrumentation, which some AOP (Aspect Oriented Programming) frameworks like AspectJ do, or proxies, as used in Spring. See this example of the IoC and DI design pattern to learn more about this SOLID design principle. It represents the "D" in the SOLID acronym; a framework-free sketch follows.
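
A framework-free sketch of the same idea (the names are invented): the service never constructs its own dependency; it receives it through the constructor, which is exactly what makes it easy to hand in a mock during a test.

//Constructor injection sketch: OrderService never looks up or creates
//its PaymentGateway, so a test can inject a mock implementation.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) { //dependency is provided, not created
        this.gateway = gateway;
    }

    boolean placeOrder(String account, double amount) {
        return gateway.charge(account, amount);
    }
}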

Favor Composition over Inheritance

Always favor composition over inheritance, if possible. Some of you may argue with this, but I have found that composition is a lot more flexible than inheritance. Composition allows you to change the behavior of a class at run-time by setting a property, and by using interfaces to compose a class we use polymorphism, which gives the flexibility to swap in a better implementation at any time. Even Effective Java advises favoring composition over inheritance. See here to learn more about why composition is better than inheritance for reusing code and functionality; a small sketch follows.
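
A small, hypothetical sketch: the behaviour is held in an interface-typed field and can be swapped at run-time, which a fixed superclass cannot offer.

//Composition sketch: the compression strategy is a replaceable part,
//not a hard-wired superclass.
interface Compressor {
    byte[] compress(byte[] data);
}

class Archiver {
    private Compressor compressor; //composed behaviour

    Archiver(Compressor compressor) { this.compressor = compressor; }

    void setCompressor(Compressor compressor) { //swap behaviour at run-time
        this.compressor = compressor;
    }

    byte[] archive(byte[] data) {
        return compressor.compress(data);
    }
}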

Liskov Substitution Principle (LSP)

According to the Liskov Substitution Principle, subtypes must be substitutable for their supertype, i.e. methods or functions which use the superclass type must be able to work with an object of a subclass without any issue. LSP is closely related to the Single Responsibility Principle and the Interface Segregation Principle. If a class has more functionality, then a subclass might not support some of that functionality, and that violates LSP. In order to follow the LSP SOLID design principle, a derived class or subclass must enhance functionality, not reduce it. LSP represents the "L" in the SOLID acronym.

Interface Segregation principle (ISP)

Interface Segregation Principle stats that, a client should not implement an interface, if it doesn’t use that. This happens mostly when one interface contains more than one functionality, and client only need one functionality and not other.Interface design is tricky job because once you release your interface you can not change it without breaking all implementation. Another benefit of this design principle in Java is, interface has disadvantage to implement all method before any class can use it so having single functionality means less method to implement.

 

Programming for Interface not implementation

Always program to an interface and not to an implementation; this leads to flexible code which can work with any new implementation of the interface. Use the interface type for variables, method return types, and method argument types in Java. This has been advised by many Java programmers, including in Effective Java and the Head First Design Patterns book; a tiny sketch follows.
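
In day-to-day Java code this is as simple as the sketch below (the class name is invented): the field, the parameter and the return type are all declared as List, so switching ArrayList for LinkedList touches a single line.

import java.util.ArrayList;
import java.util.List;

class NameRepository {
    //interface type on the field, the parameter and the return type
    private final List<String> names = new ArrayList<String>();

    void addAll(List<String> newNames) {
        names.addAll(newNames);
    }

    List<String> findAll() {
        return new ArrayList<String>(names); //defensive copy
    }
}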

Delegation principle

Don't do everything yourself; delegate it to the respective class. A classic example of the delegation design principle is the equals() and hashCode() methods in Java: in order to compare two objects for equality, we ask the class itself to do the comparison instead of the client class doing that check. The benefit of this design principle is that there is no duplication of code and it is pretty easy to modify the behavior; a small sketch follows.
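
A quick sketch of the equals()/hashCode() example (the Money class is invented): client code just calls a.equals(b) and lets the class perform the field-by-field comparison itself.

//Delegation sketch: equality is decided by Money itself, not by callers
//comparing its fields from the outside.
class Money {
    private final String currency;
    private final long amountInCents;

    Money(String currency, long amountInCents) {
        this.currency = currency;
        this.amountInCents = amountInCents;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Money)) return false;
        Money that = (Money) other;
        return amountInCents == that.amountInCents && currency.equals(that.currency);
    }

    @Override
    public int hashCode() {
        return 31 * currency.hashCode() + Long.hashCode(amountInCents);
    }
}
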
Here is a quick summary of all these OOP design principles:

All these object oriented design principles help you write flexible and better code by striving for high cohesion and low coupling. Theory is the first step, but what is most important is developing the ability to recognize when to apply these design principles. Find out whether you are violating any design principle and compromising the flexibility of your code; but again, as nothing is perfect in this world, don't always try to solve a problem with design patterns and principles, which are mostly intended for large enterprise projects with longer maintenance cycles.

9 Anti-Patterns Every Programmer Should Be Aware Of

A healthy dose of self-criticism is fundamental to professional and personal growth. When it comes to programming, this sense of self-criticism requires the ability to detect unproductive or counter-productive patterns in designs, code, processes, and behaviour. This is why a knowledge of anti-patterns is very useful for any programmer. This article is a discussion of anti-patterns that I have found to be recurring, ordered roughly based on how often I have come across them, and how long it took to undo the damage they caused.

Some of the anti-patterns discussed have elements in common with cognitive biases, or are directly caused by them. Links to relevant cognitive biases are provided as we go along in the article. Wikipedia also has a nice list of cognitive biases for your reference.

And before starting, let’s remember that dogmatic thinking stunts growth and innovation so consider the list as a set of guidelines and not written-in-stone rules. And if I missed anything that you consider to be important, feel free to comment below!

1   Premature Optimization

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. ― Donald Knuth

Although never is often better than *right* now. ― Tim Peters, The Zen of Python

What is it?

Optimizing before you have enough information to make educated conclusions about where and how to do the optimization.

Why it’s bad

It is very difficult to know exactly what will be the bottleneck in practice. Attempting to optimize prior to having empirical data is likely to end up increasing code complexity and room for bugs with negligible improvements.

How to avoid it

Prioritize writing clean and readable code that works first, using known and tested algorithms and tools. Use profiling tools when needed to find bottlenecks and optimize the priorities. Rely on measurements and not guesses and speculation.

Examples and signs

Caching before profiling to find the bottlenecks. Using complicated and unproven “heuristics” instead of a known mathematically correct algorithm. Choosing a new and untested experimental web framework that can theoretically reduce request latency under heavy loads while you are in early stages and your servers are idle most of the time.

The tricky part

The tricky part is knowing when the optimization is premature. It’s important to plan in advance for growth. Choosing designs and platforms that will allow for easy optimization and growth is key here. It’s also possible to use “premature optimization” as an excuse to justify writing bad code. Example: writing an O(n²) algorithm to solve a problem when a simpler, mathematically correct, O(n) algorithm exists, simply because the simpler algorithm is harder to understand.

tl;dr

Profile before optimizing. Avoid trading simplicity for efficiency until it is needed, backed by empirical evidence.

2   Bikeshedding

Every once in a while we’d interrupt that to discuss the typography and the color of the cover. And after each discussion, we were asked to vote. I thought it would be most efficient to vote for the same color we had decided on in the meeting before, but it turned out I was always in the minority! We finally chose red. (It came out blue.) ― Richard Feynman, What Do You Care What Other People Think?

What is it?

Tendency to spend excessive amounts of time debating and deciding on trivial and often subjective issues.

Why it’s bad

It’s a waste of time. Poul-Henning Kamp goes into depth in an excellent email here.

How to avoid it

Encourage team members to be aware of this tendency, and to prioritize reaching a decision (vote, flip a coin, etc. if you have to) when you notice it. Consider A/B testing later to revisit the decision, when it is meaningful to do so (e.g. deciding between two different UI designs), instead of further internal debating.

Richard Feynman was not a fan of bikeshedding.

Examples and signs

Spending hours or days debating over what background color to use in your app, or whether to put a button on the left or the right of the UI, or to use tabs instead of spaces for indentation in your code base.

The tricky part

Bikeshedding is easier to notice and prevent in my opinion than premature optimization. Just try to be aware of the amount of time spent on making a decision and contrast that with how trivial the issue is, and intervene if necessary.

tl;dr

Avoid spending too much time on trivial decisions.

3   Analysis Paralysis

Want of foresight, unwillingness to act when action would be simple and effective, lack of clear thinking, confusion of counsel […] these are the features which constitute the endless repetition of history. ― Winston Churchill, Parliamentary Debates

Now is better than never. ― Tim Peters, The Zen of Python

What is it?

Over-analyzing to the point that it prevents action and progress.

Why it’s bad

Over-analyzing can slow down or stop progress entirely. In the extreme cases, the results of the analysis can become obsolete by the time they are done, or worse, the project might never leave the analysis phase. It is also easy to assume that more information will help decisions when the decision is a difficult one to make ― see information bias and validity bias.

How to avoid it

Again, awareness helps. Emphasize iterations and improvements. Each iteration will provide more feedback with more data points that can be used for more meaningful analysis. Without the new data points, more analysis will become more and more speculative.

Examples and signs

Spending months or even years deciding on a project’s requirements, a new UI, or a database design.

The tricky part

It can be tricky to know when to move from planning, requirement gathering and design, to implementation and testing.

tl;dr

Prefer iterating to over-analyzing and speculation.

4   God Class

Simple is better than complex. ― Tim Peters, The Zen of Python

What is it?

Classes that control many other classes and have many dependencies and lots of responsibilities.

Why it’s bad

God classes tend to grow to the point of becoming maintenance nightmares ― because they violate the single-responsibility principle, they are hard to unit-test, debug, and document.

How to avoid it

Avoid having classes turn into God classes by breaking up the responsibilities into smaller classes with a single clearly-defined, unit-tested, and documented responsibility. Also see “Fear of Adding Classes” below.

Examples and signs

Look for class names containing “manager”, “controller”, “driver”, “system”, or “engine”. Be suspicious of classes that import or depend on many other classes, control too many other classes, or have many methods performing unrelated tasks.

God classes know about too many classes and/or control too many.

The tricky part

As projects age and requirements and the number of engineers grow, small and well-intentioned classes turn into God classes slowly. Refactoring such classes can become a significant task.

tl;dr

Avoid large classes with too many responsibilities and dependencies.

5   Fear of Adding Classes

Sparse is better than dense. ― Tim Peters, The Zen of Python

What is it?

Belief that more classes necessarily make designs more complicated, leading to a fear of adding more classes or breaking large classes into several smaller classes.

Why it’s bad

Adding classes can help reduce complexity significantly. Picture a big tangled ball of yarns. When untangled, you will have several separated yarns instead. Similarly, several simple, easy-to-maintain and easy-to-document classes are much preferable to a single large and complex class with many responsibilities (see the God Class anti-pattern above).

A tangled ball of yarn. Large classes have a tendency to turn into the software equivalent of this. (Photo by absolut_feli on Flickr)

How to avoid it

Be aware of when additional classes can simplify the design and decouple unnecessarily coupled parts of your code.

Examples and signs

As an easy example consider the following:

class Shape:
    def __init__(self, shape_type, *args):
        self.shape_type = shape_type
        self.args = args

    def draw(self):
        if self.shape_type == "circle":
            center = self.args[0]
            radius = self.args[1]
            # Draw a circle...
        elif self.shape_type == "rectangle":
            pos = self.args[0]
            width = self.args[1]
            height = self.args[2]
            # Draw rectangle...

Now compare it with the following:

class Shape:
    def draw(self):
        raise NotImplementedError("Subclasses of Shape should implement method 'draw'.")

class Circle(Shape):
    def __init__(self, center, radius):
        self.center = center
        self.radius = radius

    def draw(self):
        # Draw a circle...
        pass

class Rectangle(Shape):
    def __init__(self, pos, width, height):
        self.pos = pos
        self.width = width
        self.height = height

    def draw(self):
        # Draw a rectangle...
        pass

Of course, this is an obvious example, but it illustrates the point: larger classes with conditional or complicated logic in them can, and often should, be broken down into simpler classes. The resulting code will have more classes but will be simpler.

The tricky part

Adding classes is not a magic bullet. Simplifying the design by breaking up large classes requires thoughtful analysis of the responsibilities and requirements.

tl;dr

More classes are not necessarily a sign of bad design.

6   Inner-platform Effect

Those who do not understand Unix are condemned to reinvent it, poorly. ― Henry Spencer

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. ― Greenspun’s tenth rule

What is it?

The tendency for complex software systems to re-implement features of the platform they run in or the programming language they are implemented in, usually poorly.

Why it’s bad

Platform-level tasks such as job scheduling and disk cache buffers are not easy to get right. Poorly designed solutions are prone to introduce bottlenecks and bugs, especially as the system scales up. And recreating alternative language constructs to achieve what is already possible in the language leads to difficult to read code and a steeper learning curve for anyone new to the code base. It can also limit the usefulness of refactoring and code analysis tools.

How to avoid it

Learn to use the features provided by your OS or development platform instead. Avoid the temptation to create language constructs that rival existing constructs (especially if it’s because you are not used to a new language and miss your old language’s features).

Examples and signs

Using your MySQL database as a job queue. Reimplementing your own disk buffer cache mechanism instead of relying on your OS’s. Writing a task scheduler for your web-server in PHP. Defining macros in C to allow for Python-like language constructs.

The tricky part

In very rare cases, it might be necessary to re-implement parts of the platform (JVM, Firefox, Chrome, etc.).

tl;dr

Avoid re-inventing what your OS or development platform already does well.

7   Magic Numbers and Strings

Explicit is better than implicit. ― Tim Peters, The Zen of Python

What is it?

Using unnamed numbers or string literals instead of named constants in code.

Why it’s bad

The main problem is that the semantics of the number or string literal is partially or completely hidden without a descriptive name or another form of annotation. This makes understanding the code harder, and if it becomes necessary to change the constant, search and replace or other refactoring tools can introduce subtle bugs. Consider the following piece of code:

def create_main_window():
    window = Window(600, 600)
    # etc...

What are the two numbers there? Assume the first is the window width and the second is the window height. If it ever becomes necessary to change the width to 800, a search and replace would be dangerous, since it would change the height in this case too, and perhaps other occurrences of the number 600 in the code base.

String literals might seem less prone to these issues but having unnamed string literals in code makes internationalization harder, and can introduce similar issues to do with instances of the same literal having different semantics. For example, homonyms in English can cause a similar issue with search and replace; consider two occurrences of “point”, one in which it refers to a noun (as in “she has a point”) and the other as a verb (as in “to point out the differences…”). Replacing such string literals with a string retrieval mechanism that allows you to clearly indicate the semantics can help distinguish these two cases, and will also come in handy when you send the strings for translation.

How to avoid it

Use named constants, resource retrieval methods, or annotations.
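
For example, a Java rendering of the window example above might look like the following sketch; Window here is a stand-in class invented for the example, not java.awt.Window.

//Same idea as the example above, in Java: each number gets a name, so the
//width and height can change independently and search-and-replace stays safe.
class Window {
    final int width, height;
    Window(int width, int height) { this.width = width; this.height = height; }
}

class MainWindowFactory {
    private static final int MAIN_WINDOW_WIDTH = 600;
    private static final int MAIN_WINDOW_HEIGHT = 600;

    static Window createMainWindow() {
        return new Window(MAIN_WINDOW_WIDTH, MAIN_WINDOW_HEIGHT);
    }
}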

Examples and signs

A simple example is shown above. This particular anti-pattern is very easy to detect (except for a few tricky cases mentioned below).

The tricky part

There is a narrow grey area where it can be hard to tell if certain numbers are magic numbers or not. For example the number 0 for languages with zero-based indexing. Other examples are use of 100 to calculate percentages, 2 to check for parity, etc.

tl;dr

Avoid having unexplained and unnamed numbers and string literals in code.

8   Management by Numbers

Measuring programming progress by lines of code is like measuring aircraft building progress by weight. ― Bill Gates

What is it?

Strict reliance on numbers for decision making.

Why it’s bad

Numbers are great. The main strategy to avoid the first two anti-patterns mentioned in this article (premature optimization and bikeshedding) was to profile or do A/B testing to get some measurements that can help you optimize or decide based on numbers instead of speculating. However, blind reliance on numbers can be dangerous. For example, numbers tend to outlive the models in which they were meaningful, or the models become outdated and no longer accurately represent reality. This can lead to poor decisions, especially if they are fully automated ― see automation bias.

Do you find yourself commiserating with Pryzbylewski from the HBO show The Wire, Season 4?

Another issue with reliance on numbers for determining (and not merely informing) decisions is that the measurement processes can be manipulated over time to achieve the desired numbers instead ― see observer-expectancy effect. Grade inflation is an example of this. The HBO show The Wire (which, by the way, if you haven’t seen, you must!) does an excellent job of portraying this issue, by showing how the police department and later the education system replace meaningful goals with a game of numbers. Or, if you prefer charts, the following one, showing the distribution of scores on a test with a passing score of 30%, illustrates the point perfectly.

Score distribution of the 2013 high school exit exam in Poland with a passing score of 30%. Source in Polish, and the Reddit post that I first saw this in.

How to avoid it

Use measurements and numbers wisely, not blindly.

Examples and signs

Using only lines of code, number of commits, etc. to judge the effectiveness of programmers. Measuring employee contribution by the numbers of hours they spend at their desks.

The tricky part

The larger the scale of operations, the higher the number of decisions that will need to be made, and this means automation and blind reliance on numbers for decisions begins to creep into the processes.

tl;dr

Use numbers to inform your decisions, not determine them.

9   Useless (Poltergeist) Classes

It seems that perfection is attained, not when there is nothing more to add, but when there is nothing more to take away. ― Antoine de Saint Exupéry

What is it?

Useless classes with no real responsibility of their own, often used to just invoke methods in another class or add an unneeded layer of abstraction.

Why it’s bad

Poltergeist classes add complexity, extra code to maintain and test, and make the code less readable—the reader first needs to realize what the poltergeist does, which is often almost nothing, and then train herself to mentally replace uses of the poltergeist with the class that actually handles the responsibility.

How to avoid it

Don’t write useless classes, or refactor to get rid of them. Jack Diederich has a great talk titled Stop Writing Classes that is related to this anti-pattern.

Examples and signs

A couple of years ago, while working on my master’s degree, I was a teaching assistant for a first-year Java programming course. For one of the labs, I was given the lab material which was to be on the topic of stacks and using linked lists to implement them. I was also given the reference “solution”. This is the solution Java file I was given, almost verbatim (I removed the comments to save some space):

import java.util.EmptyStackException;
import java.util.LinkedList;

public class LabStack<T> {
    private LinkedList<T> list;

    public LabStack() {
        list = new LinkedList<T>();
    }

    public boolean empty() {
        return list.isEmpty();
    }

    public T peek() throws EmptyStackException {
        if (list.isEmpty()) {
            throw new EmptyStackException();
        }
        return list.peek();
    }

    public T pop() throws EmptyStackException {
        if (list.isEmpty()) {
            throw new EmptyStackException();
        }
        return list.pop();
    }

    public void push(T element) {
        list.push(element);
    }

    public int size() {
        return list.size();
    }

    public void makeEmpty() {
        list.clear();
    }

    public String toString() {
        return list.toString();
    }
}

You can only imagine my confusion looking at the reference solution, trying to figure out what the point of the LabStack class was, and what the students were supposed to learn from the utterly pointless exercise of writing it. In case it’s not painfully obvious what’s wrong with the class, it’s that it does absolutely nothing! It simply passes calls through to the LinkedList object it instantiates. The class changes the names of a couple of methods (e.g. makeEmpty instead of the commonly used clear), which will only lead to user confusion. The error checking logic is completely unnecessary since the methods in LinkedList already do the same (but throw a different exception, NoSuchElementException, yet another possible source of confusion). To this day, I can’t imagine what was going through the authors’ minds when they came up with this lab material. Anytime you see classes that do anything similar to the above, reconsider whether they are really needed or not.

Update (May 23rd, 2015): There were interesting discussions over whether the LabStack class example above is a good example or not on Hacker News as well below in the comments. To clarify, I picked this class as a simple example for two reasons: firstly, in the context of teaching students about stacks, it is (almost) completely useless; and secondly, it adds unnecessary and duplicated code with the error-handling code that is already handled by LinkedList. I would agree that in other contexts, such classes can be useful but even in those cases, duplicating the error checking and throwing a semi-deprecated exception instead of the standard one and renaming methods to less-commonly-used names would be bad practice.

The tricky part

The advice here at first glance looks to be in direct contradiction of the advice in “Fear of Adding Classes”. It’s important to know when classes perform a valuable role and simplify the design, instead of uselessly increasing complexity with no added benefit.

tl;dr

Avoid classes with no real responsibility.

What is Double-checked locking in java?

In software engineering, double-checked locking (also known as “double-checked locking optimization”[1]) is a software design pattern used to reduce the overhead of acquiring a lock by first testing the locking criterion (the “lock hint”) without actually acquiring the lock. Only if the locking criterion check indicates that locking is required does the actual locking logic proceed.

The pattern, when implemented in some language/hardware combinations, can be unsafe. At times, it can be considered an anti-pattern.[2]

It is typically used to reduce locking overhead when implementing “lazy initialization” in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed.

Consider, for example, this code segment in the Java programming language as given by [2] (as well as all other Java code segments):

// Single-threaded version
class Foo {
    private Helper helper;
    public Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }

    // other functions and members...
}

The problem is that this does not work when using multiple threads. A lock must be obtained in case two threads call getHelper() simultaneously. Otherwise, either they may both try to create the object at the same time, or one may wind up getting a reference to an incompletely initialized object.

The lock is obtained by expensive synchronizing, as is shown in the following example.

// Correct but possibly expensive multithreaded version
class Foo {
    private Helper helper;
    public synchronized Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }

    // other functions and members...
}

However, the first call to getHelper() will create the object and only the few threads trying to access it during that time need to be synchronized; after that all calls just get a reference to the member variable. Since synchronizing a method could in some extreme cases decrease performance by a factor of 100 or higher,[5] the overhead of acquiring and releasing a lock every time this method is called seems unnecessary: once the initialization has been completed, acquiring and releasing the locks would appear unnecessary. Many programmers have attempted to optimize this situation in the following manner:

  1. Check that the variable is initialized (without obtaining the lock). If it is initialized, return it immediately.
  2. Obtain the lock.
  3. Double-check whether the variable has already been initialized: if another thread acquired the lock first, it may have already done the initialization. If so, return the initialized variable.
  4. Otherwise, initialize and return the variable.
// Broken multithreaded version
// "Double-Checked Locking" idiom
class Foo {
    private Helper helper;
    public Helper getHelper() {
        if (helper == null) {
            synchronized(this) {
                if (helper == null) {
                    helper = new Helper();
                }
            }
        }
        return helper;
    }

    // other functions and members...
}

Intuitively, this algorithm seems like an efficient solution to the problem. However, this technique has many subtle problems and should usually be avoided. For example, consider the following sequence of events:

  1. Thread A notices that the value is not initialized, so it obtains the lock and begins to initialize the value.
  2. Due to the semantics of some programming languages, the code generated by the compiler is allowed to update the shared variable to point to a partially constructed object before A has finished performing the initialization. For example, in Java if a call to a constructor has been inlined then the shared variable may immediately be updated once the storage has been allocated but before the inlined constructor initializes the object.[6]
  3. Thread B notices that the shared variable has been initialized (or so it appears), and returns its value. Because thread B believes the value is already initialized, it does not acquire the lock. If B uses the object before all of the initialization done by A is seen by B (either because A has not finished initializing it or because some of the initialized values in the object have not yet percolated to the memory B uses (cache coherence)), the program will likely crash.

One of the dangers of using double-checked locking in J2SE 1.4 (and earlier versions) is that it will often appear to work: it is not easy to distinguish between a correct implementation of the technique and one that has subtle problems. Depending on the compiler, the interleaving of threads by the scheduler and the nature of other concurrent system activity, failures resulting from an incorrect implementation of double-checked locking may only occur intermittently. Reproducing the failures can be difficult.

As of J2SE 5.0, this problem has been fixed. The volatile keyword now ensures that multiple threads handle the singleton instance correctly. This new idiom is described in [4] and [5].

// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
    private volatile Helper helper;
    public Helper getHelper() {
        Helper result = helper;
        if (result == null) {
            synchronized(this) {
                result = helper;
                if (result == null) {
                    helper = result = new Helper();
                }
            }
        }
        return result;
    }

    // other functions and members...
}

Note the local variable result, which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to “return result;” instead of “return helper;”), which can improve the method’s overall performance by as much as 25 percent.[7]

If the helper object is static (one per class loader), an alternative is the initialization-on-demand holder idiom[8] (See Listing 16.6[9] from the previously cited text.)

// Correct lazy initialization in Java
class Foo {
    private static class HelperHolder {
       public static final Helper helper = new Helper();
    }

    public static Helper getHelper() {
        return HelperHolder.helper;
    }
}

This relies on the fact that nested classes are not loaded until they are referenced.

The semantics of final fields in Java 5 can be employed to safely publish the helper object without using volatile:[10]

public class FinalWrapper<T> {
    public final T value;
    public FinalWrapper(T value) {
        this.value = value;
    }
}

public class Foo {
   private FinalWrapper<Helper> helperWrapper;

   public Helper getHelper() {
      FinalWrapper<Helper> tempWrapper = helperWrapper;

      if (tempWrapper == null) {
          synchronized(this) {
              if (helperWrapper == null) {
                  helperWrapper = new FinalWrapper<Helper>(new Helper());
              }
              tempWrapper = helperWrapper;
          }
      }
      return tempWrapper.value;
   }
}

The local variable tempWrapper is required for correctness: simply using helperWrapper for both null checks and the return statement could fail due to read reordering allowed under the Java Memory Model.[11] Performance of this implementation is not necessarily better than the volatile implementation.