
DIP in the Wild


URL: http://martinfowler.com/articles/dipInTheWild.html


The Dependency Inversion Principle (DIP) has been around since the early '90s; even so, it seems easy to forget in the middle of solving a problem. After a few definitions, I'll present a number of applications of the DIP I've personally used on real projects, so you'll have some examples from which to form your own conclusions.

How did I Get Here?

My original introduction to the Dependency Inversion Principle came from Robert (Uncle Bob) Martin around 1994. It, along with most of the SOLID principles, is simple to state but deep in its application. What follows are some recent applications I've used on real projects; everything I discuss has been in production since June 2012 and, as of mid-2013, is still in production. Some of these go further back in time but keep coming back, which is a reminder to me that the basics remain important.

Synopsis of the DIP

There are many ways to express the dependency inversion principle:

  • Abstractions should not depend on details
  • Code should depend on things that are at the same or higher level of abstraction
  • High level policy should not depend on low level details
  • Capture low-level dependencies in domain-relevant abstractions

The common thread throughout all of these is the view from one part of your system to another: strive to have dependencies point towards higher-level (closer to your domain) abstractions.

Why care about dependencies?

A dependency is a risk. For example, if my system requires a Java Runtime Environment (JRE) to be installed and one is not installed, my system will not work. My system also probably requires some kind of Operating System. If users access the system via the web, it requires the user to have a browser. Some of these dependencies you control or limit, others you can ignore. For example,

  • In the case of the JRE requirement, you could make sure the deployment environment has an appropriate version of the JRE installed. Alternatively, if the environment is fixed, you might adjust the code to match the JRE. You could control the environment using a tool like Puppet to build up an environment from a simpler, known starting image. In any case, while the consequence is severe, it's well understood, with several options to mitigate it. (My personal preference leans towards the CD end of the spectrum.)
  • When your system uses the String class, you probably do not invert that dependency. You could, however: if you think of String as a primitive (strictly it is not, but close enough), then manipulating a number of Strings starts to resemble Primitive Obsession. If you introduce a type around those Strings, and add methods that make sense for your use of those Strings rather than simply exposing String methods, that starts to look like a kind of Dependency Inversion, so long as the resulting type is closer to your domain than a String.
  • In the case of browsers, if you want a modern experience it will be hard to support all browsers. You can try to allow all browsers and versions, limit support to relatively modern browsers or introduce feature degradation. This kind of dependency is complex and probably requires a multi-faceted approach to solve.
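
The String-wrapping idea in the second bullet can be sketched as follows. MeterId is a hypothetical name chosen for this energy-domain example, not something from the article's system; the point is that clients see domain operations rather than String's full surface.

```java
// A sketch of wrapping a String in a domain type to avoid Primitive
// Obsession. MeterId is a hypothetical name; clients get domain
// operations instead of String's dozens of methods.
final class MeterId {
    private final String value;

    MeterId(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("MeterId requires a value");
        }
        this.value = value;
    }

    // A domain-relevant question, instead of exposing startsWith().
    boolean isInRegion(String regionPrefix) {
        return value.startsWith(regionPrefix);
    }

    @Override
    public String toString() {
        return value;
    }
}
```

The dependency on String is still there, but it is captured behind a type that is closer to the domain.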

Dependencies represent risk. Handling that risk has some cost. Through experience, trial and error, or the collective wisdom of a team, you choose to explicitly mitigate that risk, or not.

Inversion compared to what?

Inversion is a reversal of direction, but a reversal compared to what? Compared to the design part of Structured Analysis and Design.

In structured analysis and design, we start with a high-level problem and break it up into smaller parts. For any of those smaller parts that are still "too big", we continue breaking them up. The high-level concept / requirement / problem is broken up into smaller and smaller parts. The high-level design is described in terms of these smaller and smaller parts and therefore it directly depends on the smaller, and more detailed, parts. This is also known as top-down design. Consider this problem description (somewhat idealized and cleansed, but otherwise something found in the wild):

  • Report Energy Savings
    • Gather Data
      • Open Connection
      • Execute Sql
      • Translate ResultSet
    • Calculate Baseline
      • Determine Baseline Group
      • Project Time-sequence Data
      • Calculate Across Date Range
    • Produce Report
      • Determine Non-Baseline Group
      • Project Time-sequence Data
      • Calculate Across Date Range
      • Calculate Delta from Baseline
      • Format Results

The business requirement of reporting on energy savings depends on gathering data, which depends on executing Sql. Notice that the dependencies follow how the problem is decomposed. The more detailed something is, the more likely it will change. We have a high-level idea depending on something that is likely to change. Additionally, the steps are extremely sensitive to changes at the higher levels, which is a problem since requirements tend to change. We want to invert dependencies relative to that kind of decomposition.

Contrast that to bottom-up composition. You could find the logical concepts that exist in the domain and combine them to accomplish the high-level goal. For example, we have a number of things using power; we'll call those Consumers. We don't know much about them, so we'll get to them via a Consumer Repository. We have something called a Baseline in our domain; something needs to determine that. Consumers can calculate their Energy usage, and then we can compare the energy used by the Baseline versus all of the Consumers to determine Energy Savings:
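
The bottom-up composition just described can be sketched as a handful of domain-shaped types. All names here are illustrative, not taken from a real system; notice that nothing mentions SQL.

```java
import java.util.List;

// A sketch (all names illustrative) of the bottom-up composition:
// domain concepts combined to accomplish the high-level goal.
interface Consumer {
    double energyUsed();            // kWh over some reporting period
}

interface ConsumerRepository {
    List<Consumer> findAll();       // storage mechanism unspecified
}

interface Baseline {
    double expectedEnergy();        // usage the baseline predicts
}

class EnergySavings {
    private final ConsumerRepository consumers;
    private final Baseline baseline;

    EnergySavings(ConsumerRepository consumers, Baseline baseline) {
        this.consumers = consumers;
        this.baseline = baseline;
    }

    // High-level policy: savings is baseline minus actual usage.
    double calculate() {
        double actual = consumers.findAll().stream()
                .mapToDouble(Consumer::energyUsed)
                .sum();
        return baseline.expectedEnergy() - actual;
    }
}
```

The high-level policy in EnergySavings depends only on abstractions at its own level; where the Consumers actually live is a detail behind the repository.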

While the work we do could initially be the same, in this re-envisioning there's an opportunity, with a little more work, to introduce different ways to accomplish the details:

  • Switch out the repository for a different storage mechanism; since there's no mention of SQL in its interface, we can use an in-memory solution, a NoSql solution, or a RESTful service.
  • Instead of constructing a baseline, use an Abstract Factory. That provides support for multiple kinds of baseline calculations, which reflects the reality of a particular domain.

As you read this you might notice that there's some notion of the Open Closed Principle in all of this. It's certainly related. Initially, break your problem into logical blocks suggested by your domain. As your system grows, use these blocks or extend them in some way to accommodate additional scenarios.

What Does That all Mean?

Where the DIP refers to abstractions, I've noticed many people confuse abstraction with:

  • An interface
  • An abstract base class
  • Something given as a constraint (e.g., external system architecture)
  • Something called a requirement, which is stated as a solution

In fact, any of these can be misleading:

  • An interface — Have a look at java.sql.Connection and compare your business domain to methods like getAutoCommit(), createStatement() and getHoldability(). While these might be reasonable for a database connection, how do they relate to something a user of your system wants to do? The connection is tenuous at best.
  • An abstract base class — An abstract base class has the same problems as an interface. If the methods make sense in your domain, it might be OK. If the methods make sense to a software library, maybe not. For example, consider java.util.AbstractList. Imagine a domain with an ever-increasing ordered listing of historical events. In this hypothetical domain, it never makes sense to remove() an item from the historical record. The List abstraction, because it solves a general problem and not your problem, offers at least this one feature that does not make sense for your domain. You can subclass AbstractList (or some other List class), but doing so still exposes a method (probably several) that does not make sense for your use of that class. As soon as you give in and allow clients to see unnecessary methods, you probably violate both the DIP and the Liskov Substitution Principle.
  • A constraint/requirement — When we are given work to do, does that work provide the motivation and goals, or does it talk about how to solve the problem? Does your requirement talk about having to use message-oriented middleware for integration, or which database fields to update to finish the work? Even if you are given a description of the goals for an actor, do those goals simply restate the current as-is process, where you could build a system that obviated the need for those processes in the first place?
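
The hypothetical historical-events domain in the abstract-base-class bullet can be sketched as a type that keeps a List inside but never exposes remove(). This is an illustrative sketch, not code from the article:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// An ever-growing ordered record. A List lives inside, but the type
// never exposes remove() or the rest of the List surface.
final class HistoricalRecord {
    private final List<String> events = new ArrayList<>();

    void record(String event) {
        events.add(event);
    }

    // A read-only view; history cannot be rewritten by clients.
    List<String> events() {
        return Collections.unmodifiableList(events);
    }
}
```

Compare this to subclassing AbstractList, where remove() and friends would leak out whether the domain wants them or not.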

You mean Dependency Inversion, Right?

In 2004, Martin Fowler published an article on Dependency Injection (DI) and Inversion of Control (IoC). Is the DIP the same as DI, or IoC? No, but they play nicely together. When Robert Martin first discussed the DIP, he equated it to a first-class combination of the Open Closed Principle and the Liskov Substitution Principle, important enough to warrant its own name. Here's a synopsis of all three terms, using some examples:

  • Dependency Injection
    • Dependency Injection is about how one object knows about another, dependent object. For example, in Monopoly a player rolls a pair of dice. Imagine a software player that needs to send the roll() message to a software pair of dice. How does the player object get a reference to the dice object? Imagine the game tells the player to takeATurn(:Dice) and gives the player the dice. The game telling a player to take a turn and passing the dice is an example of method-level dependency injection. Imagine instead a system where the Player class expresses a need for Dice and gets auto-wired by some kind of so-called IoC container like Spring. A recent example of this is in the system I'm working on as of Q1 2013. It involves the use of Spring profiles. We have 4 named profiles: demo, test, qa, prod. The default profile is demo, which brings the system up configured with 10 simulated devices and certain test points enabled. The test profile brings the system up with no simulated devices and the test points enabled. Both qa and prod bring the system up such that it connects to real devices over a cellular network and test points are not loaded, meaning that if a production component attempts to use a test point, the system will fail to start. One more example comes from an application that involved mixing Java and C++. If the system is started via a JVM, it is configured to simulate the C++ layer. If it is instead started via C++, which then starts the JVM, the system is configured to hit the C++ layer. These are all kinds of dependency injection.
  • Inversion of Control
    • Inversion of control is about who initiates messages. Does your code call into a framework, or does it plug something into a framework, and then the framework calls back? This is also referred to as Hollywood's Law: don't call me, I'll call you. For example, when you create a ButtonListener for Swing, you provide an implementation of an interface. When the button is pressed, Swing notices that and calls back into the code you provided. Imagine the Monopoly system created with a number of players. The game orchestrates interaction between players. When it's time to have a player take a turn, the game might ask the player if it has any pre-movement actions such as selling houses or hotels, then the game will move the player based on the roll of the dice (in the real world, a physical player rolls dice and moves his or her token, but that's an artifact of the board game not being a computer - that is, it's a phenomenological description of what's going on rather than an ontological description). Notice that the game knows when a player can make decisions and prompts the player accordingly, rather than the player making the decision. As a final example, a Spring Message bean or a JEE Message Bean is an implementation of an interface registered with the container. When a message arrives on a Queue, the container calls into the bean to process the message; the container will even remove the message (or not) based on the response of the bean.
  • Dependency Inversion Principle
    • Dependency Inversion is about the shape of the object upon which the code depends. How does DIP relate to IoC and DI? Consider what happens if you use DI to inject a low-abstraction dependency. For example, I could use DI to inject a JDBC connection into a Monopoly game so it could use a SQL statement to read the Monopoly board from DB2. While this is an example of DI, it is an example of injecting a (probably) problematic dependency, as it exists at an abstraction level significantly lower than the domain of my problem. In the case of Monopoly, it was created several decades before SQL databases existed, so coupling it to a SQL database introduces an unnecessary, incidental dependency. A better thing to inject into Monopoly is a Board Repository. The interface of such a repository is appropriate to the domain of Monopoly rather than described in terms of a SQL connection. As IoC is about who initiates a calling sequence, a poorly designed callback interface might force low-level (framework) details into code you write to plug into a framework. If that's the case, try to keep most of the business stuff out of the callback method and in a POJO instead.

DI is about how one object acquires a dependency. When a dependency is provided externally, the system is using DI. IoC is about who initiates the call. If your code initiates a call, it is not IoC; if the container/system/library calls back into code that you provided, it is IoC.

DIP, on the other hand, is about the level of the abstraction in the messages sent from your code to the thing it is calling. To be sure, using DI or IoC with DIP tends to be more expressive, powerful and domain-aligned, but they are about different dimensions, or forces, in an overall problem. DI is about wiring, IoC is about direction, and DIP is about shape.
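
The Monopoly discussion above can be condensed into a small sketch showing the three ideas side by side. The names are illustrative: Dice is a domain-shaped abstraction (DIP), the dice are handed to the player rather than constructed by it (method-level DI), and the Game decides when the player acts (IoC).

```java
// Dice is shaped by the domain (one roll of a pair of dice),
// not by a lower-level detail like java.util.Random. (DIP)
interface Dice {
    int roll();
}

class Player {
    int position = 0;

    // DI: the dependency arrives as a method parameter.
    void takeATurn(Dice dice) {
        position += dice.roll();
    }
}

class Game {
    private final Player player;
    private final Dice dice;

    Game(Player player, Dice dice) {
        this.player = player;
        this.dice = dice;
    }

    // IoC: the game, not the player, initiates the turn.
    void playOneRound() {
        player.takeATurn(dice);
    }
}
```

In a test, Dice can be a lambda that returns a fixed roll, which is exactly the flexibility the inverted dependency buys.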

What's coming up?

Armed with a definition of the Dependency Inversion Principle, it's time to move on to examples of the DIP in the wild. What follows are several examples that all share a common thread: raising the abstraction level of a dependency to be closer to the domain, as limited by the needs of the system.

Flexibility is costly

A common thing I've done and seen is making a class "easier" to use by adding more methods than those required to solve the current problem. It might stem from "just in case" thinking, or maybe from a history of practices that led to a hard-to-change code base, which makes putting stuff in now seem easier than adding it later if we need it. Unfortunately, more methods lead to more ways in which to write incorrect code, more paths of execution that will need verification, more need for discipline when using the "easier" interface, and so on. The larger the surface area of a class, the more likely it is that the class will be difficult to use correctly. In fact, the larger the surface area, the more likely it becomes easier to use the class incorrectly than correctly.

Which hammer should I use?

Consider logging. While logging isn't necessarily the best way to run DevOps, it seems to be a heavily practiced way to do things. On the last several projects I've worked on, logging eventually became a problem. The problems were varied:

  • Too much
  • Not enough
  • Disagreement on the level at which something should be logged
  • Disagreement on which logging methods to use
  • Disagreement on which logging framework to use
  • Inconsistent use of the Logger class
  • Incorrect/inconsistent configuration of logging across all of the various open source logging libraries being used across all of the open source projects used on the project
  • Multiple logging frameworks used by different open source projects in use
  • Inconsistent logging messages, making it hard to use the log
  • Insert your particular experiences here...

While this is not a comprehensive list, I'd be surprised if you've been on a moderately sized project and not had discussions on some of these subjects.

Too Many Methods

Have a look at Figure 2. This includes the Logger built into the JDK and two other common open-source logging frameworks used by several open source projects. The key thing to look at is the number of methods in each class.

Figure 2: Complexity of Existing Loggers

Let's consider just the Logger class from the JDK. You are a new developer working on a team. Hopefully you're not working alone, but if you are, you were probably told to "look at the code base" and left to your own devices. When you have a need to do some logging, which of the log methods do you use?

Figure 3: Which Log Method?

Is log even the correct method in the first place? You can search the code base for examples; do you take the first example you find, or do you check to see if there are multiple ways?

This is a trivial example. It seems like nothing. Here's a good rule of thumb I live by from Jerry Weinberg (paraphrased):

Nothing + Nothing + Nothing eventually equals something.

While this one thing really isn't a big deal, it won't be the only such thing on a project. Knowing which method to use increases the burden on each developer just a little bit. It also increases the difficulty of adding people to an ongoing project or to the team. This kind of detail, one which seems trivial and unimportant, eventually falls into the bucket of tribal knowledge. While there may be advantages to team identity in having a healthy amount of tribal knowledge, things that lead to unnecessary inconsistency probably are not worth their cost.

Performance Considerations

Another argument for this, which becomes weaker as time goes on, is maybe a touch less obvious at first. Consider the following code example:

Logger logger = Logger.getLogger(getClass().getName());
String message = String.format("%s-%s-%s", "part1", "part2", "part3");
logger.log(Level.INFO, message);

This use of the logger seems straightforward, but it has a problem: it performs the String formatting regardless of whether the logger will ultimately record messages at the INFO level. This leads to unnecessary work as well as additional garbage collection. To write this "correctly", it should look more like this:

Logger logger = Logger.getLogger(getClass().getName());
if (logger.isLoggable(Level.INFO)) {
    String message = String.format("%s-%s-%s", "part1", "part2", "part3");
    logger.log(Level.INFO, message);
}

The burden is on the writer to remember this. Imagine a system entry point with several logging statements:

  • this guard code will be replicated (we hope consistently)
  • this kind of detail is incidental rather than essential
  • this increases the mental burden of reading the code
  • Oh, and it violates the DRY principle

If you use a modern API such as Slf4j, some of this is addressed in that there are methods that take a varying number of parameters and perform the level check before concatenating. That's great, but then we are back to having 50+ methods from which to choose. I cannot remember a project with more than about 3 people where a discussion of consistent logger use hasn't come up, so clearly the number of methods becomes an unnecessary (incidental) source of complexity.
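
Slf4j's parameterized methods are one way to defer formatting; since Java 8, the JDK Logger also offers Supplier-based overloads with the same effect. A small sketch (the formats counter is purely illustrative instrumentation, not something you'd ship):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Deferred message construction: the Supplier passed to info() is
// only invoked if INFO is actually loggable at the time of the call.
class DeferredLogging {
    static int formats = 0;

    static String buildMessage() {
        formats++;  // count how many times formatting actually runs
        return String.format("%s-%s-%s", "part1", "part2", "part3");
    }

    static void demo() {
        Logger logger = Logger.getLogger(DeferredLogging.class.getName());
        logger.setLevel(Level.WARNING);              // INFO is suppressed
        logger.info(DeferredLogging::buildMessage);  // never formatted
    }
}
```

The guard condition moves into the library, so callers no longer have to remember the isLoggable() dance.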

To address this, I'd like something that reduces the need for duplication and complexity. Here is one thing I've done on a number of projects:

Figure 4: Narrowing the API

Using this new logger is now less likely to cause a problem:

SystemLogger logger = SystemLoggerFactory.get(getClass());
logger.info("%s-%s-%s", "part1", "part2", "part3");

This particular implementation makes use of "modern" Java 1.5 features:

public void info(String message, Object... args) {
    if (logger.isInfoEnabled()) {
        logger.info(String.format(message, args));
    }
}

Martin Fowler calls this a gateway. I like that name, as it evokes the idea of passing through as well as a separation of one thing from another. Reducing flexibility leads to something that is a little less burdensome, so we can spend our time thinking about the next bit of code to write test-first instead.

This solution introduces an additional method invocation, but compared to removing the chance of doing something incorrectly, the cost seems well worth it. On a modern runtime this method won't be invoked dynamically; it will be optimized to be called without a virtual dispatch. The last time I measured method invocations (2008), I could get about 2,000,000,000 per second, so this touch of overhead is negligible in a system where we're likely to be using a logger. As an added bonus, if there is any configuration of logging, it can be managed in one place, leading to DRYer code.

Conclusion

Flexibility in a logging library can easily lead to inconsistent use, longer code, or code that does unnecessary work based on the state of logging in the system. From the perspective of the framework's author, this makes sense. Logging conceptually might exist at an application level, but the implementation of a logging framework needs to be flexible enough to support multiple JVM versions, varied uses, and be everything to everybody. A particular system's use of logging can choose to be more focused and consistent. Logging interfaces typically exist at a lower level of abstraction than my system's need of a logger.

Solution abstracted, but that's not my problem

Is using a SQL database an essential part of your system? Is the actual requirement that information entered into your system needs to be durable? How soon? To which users? In fact, these kinds of questions used to be easier as they were not generally asked.

Background

Last century we worried about ACID transactions. Even then, we typically traded ACID, which is pessimistic, for something not quite as strong, such as last-one-wins or object versioning, which is optimistic. Now, as systems have gotten bigger and we've moved to the cloud and NoSql solutions with eventual consistency, the landscape is even more varied.

How does this relate to Java? I worked on and deployed my first application with JDK 1.0.2. In those days, if you wanted to work with a database, it looked something like this:

Figure 5: First there was the Database

Java punted on the issue and you had vendor lock-in. Or worse, you wrote your code to handle "any" database - SQL or Object-Oriented.

Java 1.1 gave us JDBC. This improved our use of a database so long as we could find a JDBC driver:

Figure 6: JDBC Gave us an interface of sorts

However, while this made it easier to use a database with less vendor lock-in, this abstraction let things like transactions, prepared statements, and such bleed into your domain. JDBC raised the level of abstraction, but the level was still too low.

There were a number of improvements to JDBC, then JDO, then Hibernate and other ORMs, and somewhat recently JPA (I'm ignoring things like Spring Data, Hades, etc. because they don't significantly change the situation). Something to notice is that we still have a bunch of arrows pointing from the system to the database.

Figure 7: JPA gave us a standard ORM

Like the discussion of Logging interfaces, using any of these interfaces is probably still a violation of the DIP. Assuming you are not writing a database, your business probably doesn't need a database; it probably needs some kind of durable information. The chances that a general thing like a SQL database (or a NoSql, hierarchical, or object-based database) exists at the same level as your business are low, unless you are writing something directly related to databases.

Hide DB Behind Something Domain-related

Confusing a solution with the problem is a common mistake. Luckily, this is a well-understood problem and you might already know a solution to it. A common one is to use a Repository:

Figure 8: Give Domain what it wants to see

A repository is a gateway to a conceptual (maybe actual), potentially large collection of durable objects. The interface should comprise methods that make sense to the goals of a user in a domain, not to a database. If it just so happens that behind the repository sits a database, then the Repository will deal with mapping from requests that make sense to the domain into something that makes sense to the database. Make the implementation of the abstraction do the work once, rather than all consumers of a lower-level abstraction duplicating the effort.

The typical interface might include basic CRUD operations (assuming the domain calls for them), but then we'll add methods that make sense for the needs of the system. That is, as we grow the system by adding new use cases, scenarios, user stories or backlog items, we'll extend the interface so that it supports the current needs of the system. No more, no less.

Consider a system that works with travel schedules for trains. There are a number of scheduled journeys between stations. Over time new stations get built, others are closed for maintenance, and the schedule of trains between stations changes due to changes in capacity, to match seasonal demand, or to introduce specials in an attempt to lure new business. Train schedules are planned well in advance and then added to the system for future activation. The system needs to periodically find schedules that are no longer relevant, ones that are about to become active, and potential conflicts such as overlapping schedules or gaps in schedules.
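The needs just described could shape a repository interface like the following sketch. All names and signatures are illustrative, not taken from the actual system; notice that nothing in the interface mentions SQL.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// An interface shaped by the scheduling needs described above.
interface ScheduleRepository {
    void add(Schedule schedule);                    // planned in advance
    List<Schedule> expiredAsOf(LocalDate date);     // no longer relevant
    List<Schedule> activatingBy(LocalDate date);    // about to be active
    List<Schedule> overlapping(Schedule candidate); // potential conflicts
}

class Schedule {
    final LocalDate activeFrom;
    final LocalDate activeTo;

    Schedule(LocalDate activeFrom, LocalDate activeTo) {
        this.activeFrom = activeFrom;
        this.activeTo = activeTo;
    }
}

// A trivial in-memory implementation; a SQL-backed one would satisfy
// the same interface without the domain ever knowing.
class InMemoryScheduleRepository implements ScheduleRepository {
    private final List<Schedule> schedules = new ArrayList<>();

    public void add(Schedule schedule) {
        schedules.add(schedule);
    }

    public List<Schedule> expiredAsOf(LocalDate date) {
        return schedules.stream()
                .filter(s -> s.activeTo.isBefore(date))
                .collect(Collectors.toList());
    }

    public List<Schedule> activatingBy(LocalDate date) {
        return schedules.stream()
                .filter(s -> !s.activeFrom.isAfter(date))
                .collect(Collectors.toList());
    }

    public List<Schedule> overlapping(Schedule candidate) {
        return schedules.stream()
                .filter(s -> !s.activeTo.isBefore(candidate.activeFrom)
                          && !s.activeFrom.isAfter(candidate.activeTo))
                .collect(Collectors.toList());
    }
}
```

Each method answers a question the system actually asks; the interface grows only as new backlog items demand it.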

Figure 9: Operations that match what my domain needs

Does this mean for a given system we will only have one Repository for a given domain concept? Maybe. Maybe we'll have multiple based on considerations like the use of Bounded Contexts, or we might split a single Repository interface based on the Interface Segregation Principle. The important consideration from the DIP perspective is that the interface exists at an appropriate abstraction level for the current needs of the system. What drives the current needs of the system? The use cases, user stories, scenarios, backlog items. That is, who are your actors, and what do they need to do?

Conclusion

When we use JDBC, we use a bunch of interfaces. An interface is an abstraction. However, while using some kind of abstraction often helps in writing decent code, it's not sufficient. The abstraction should be at a level that is appropriate for your domain. A general solution like JDBC doesn't try to solve your problem; it tries to solve a general problem. This is similar to the Logging example, where there were too many methods. The features of JDBC address the full range of things you might need to deal with when using a database. Typical domains don't care about all of those problems, so a particular domain's consumption can be simplified to conform to its needs.

And That's a Wrap

We've seen a few examples of the DIP in the wild:

  • Taking an unwieldy API with too many methods and taming it.
  • Removing a mismatch between the abstraction level of a library and the domain

Some are more clearly an application of the DIP, while others might seem to fit better into other design principles. In the end, which principle applies more to a situation is irrelevant. Dan North captures this idea well when he claims that all software is a liability.

As a developer, it appears my goal is to write code. However, that is like asking an orthodontist if you need braces: the answer is yes, thank you, I need to make another down-payment on my boat. As it turns out, I enjoy writing code, learning new programming languages, and all of that. However, if I'm working on solving a problem, it's important for me to remember that software is typically a means to an end, not the end itself. This is true of design principles as well as agile practices. What makes sense is remembering the point of the work and then letting the context dictate what makes sense. If you are looking for a way to frame a solution to a particular problem, the DIP is handy to know.

More generally, principles and practices that help me solve a particular business problem sooner are good for that context. They may not work for another. I tend to work on long-lived systems that often involve depending on work done by multiple reporting structures, so identifying problematic dependencies and getting them under control with a design principle like the DIP tends to be a recurring theme for me. Any of these ideas could end up being terrible for your particular problem.

If you happen to be working on something with a short software half-life, then the best thing for your context might be to depend heavily on those dependencies. Also, if you practice TDD as Robert Martin defines it (simply writing automated tests has almost nothing to do with TDD), then you are probably in a position to make sweeping changes as needed. In this case, the DIP informs a refactoring rather than an up-front design.

The practice of identifying dependencies, and then determining whether it is worth explicitly handling them and, if so, where, is a worthy skill to practice. You can take these specific examples as things to try, as guidelines on the kinds of things to look for when you're doing work, or even as specific things you can do to get your dependencies under control. Whether these examples, or the DIP for that matter, help or hurt will be driven by the problem you are trying to solve.

