Saturday, April 30, 2011

High Performance Computing: Create an AMI

High Performance Computing: Create an AMI: This blog will guide you through creating an AMI (Amazon Machine Image) from a launched instance. In this tutorial we will create an S3-backed AMI from a running instance. Before getting down to creating an actual AMI, let's try to understand some basic terminology:

Monday, April 18, 2011

Oracle Technology Network: Contexts and Dependency Injection in Java EE: Which Annotations to Use?

Java Platform, Enterprise Edition (Java EE) 5 brought dependency injection (DI) with Convention over Configuration to Enterprise JavaBeans (EJB) 3.0. Java EE 6 introduces the flexible and powerful @Inject dependency injection model (JSR-330 and JSR-299) in addition to the already existing @EJB annotation. So when should you use what?

Read more at the Oracle Technology Network in "Contexts and Dependency Injection in Java EE."

Friday, April 15, 2011

Alex talks about Java: java.util.Objects. A new JDK 7 class for managing Objects

Alex talks about Java: java.util.Objects. A new JDK 7 class for managing Objects: With the new release of JDK 7, a lot of really useful features have been developed, some of which I have written about before in this blog (JSR 203 and JSR 166y). In this post I am going to talk about one new small enhancement: the addition of the java.util.Objects class. This class is similar to java.util.Arrays or java.util.Collections, but for objects instead of arrays or collections.
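To give a flavour of the class the post describes, here is a minimal sketch using a few of the null-safe helpers in java.util.Objects (the Person class is only an illustration, not from the original post):

```java
import java.util.Objects;

public class ObjectsDemo {
    static class Person {
        final String name;

        Person(String name) {
            // requireNonNull throws a NullPointerException with a clear message
            this.name = Objects.requireNonNull(name, "name must not be null");
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Person)) return false;
            // Objects.equals is safe even if either argument is null
            return Objects.equals(name, ((Person) o).name);
        }

        @Override
        public int hashCode() {
            // Objects.hash builds a hash code from any number of fields
            return Objects.hash(name);
        }
    }

    public static void main(String[] args) {
        System.out.println(Objects.equals(null, null));        // true
        System.out.println(Objects.toString(null, "n/a"));     // n/a
        System.out.println(new Person("Ada").equals(new Person("Ada"))); // true
    }
}
```

Before JDK 7 these null checks had to be written by hand in every equals and hashCode implementation; Objects just centralises the boilerplate.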

Wednesday, April 13, 2011

The Brain Dump: Gavin King unveils Red Hat's top secret Java Kille...

The Brain Dump: Gavin King unveils Red Hat's top secret Java Kille...: "Gavin King of Red Hat/Hibernate/Seam fame recently unveiled the top secret project that he has been working on over the past two years, a ne..."

Is Having a Build Specialist an Anti-Pattern?

A common pattern in software development teams is to have a person who owns the build system. This may be a deliberate decision, or it may evolve organically as a particular team member gravitates towards dealing with the build scripts, automated testing and deployment, etc. While it's normal for some team members to have a deeper understanding of these things than others, it's not a good idea for the knowledge and responsibility for the build to become overly concentrated in one person.

The build system should be looked at as a module or component of the software application or platform being developed, so the same philosophy taken towards code ownership applies.

If a single person owns the build system, everyone else becomes dependent on them to fix issues with it, and to extend it to meet new needs. There is also a risk, especially for projects which are big enough that maintaining the build system becomes a full time job, that a bit of a siloed mentality can develop.

If developers have a poor understanding of how their software is built and deployed, their software is likely to be difficult and costly to deploy. On the flip side, if build and test tools are implemented and maintained entirely by people who don't develop or test the software, it isn't likely to make the life of those who do as easy as it could be.

In the past few months I've taken on a role which is largely focused on this area, and have been helping a development team get their build and delivery system in place. Pairing with developers to implement aspects of the system has worked well, as has letting them take on the setup of particular areas of the build and test tooling. This follows what Martin Fowler calls "Weak Code Ownership", allowing everyone to take part in working on the build and test system.

Special attention is needed for stages of the path to production as they get further from the developer's workstation. Developers are keen to optimize their local build and deployment, but can often be fuzzy on what happens when things are deployed in server environments. This is exacerbated when the platforms are different (e.g. developers working on Windows, code deployed on Linux).

Even without platform differences, developers understandably focus on the needs of their own local build over those of production system deployment. This is natural when server deployment is not a part of their daily world. So the best way to compensate for this is to keep developers involved in implementing and maintaining server deployment.

Driving the implementation of the build and deployment system according to the needs of business stories has also been useful. So rather than setting up tooling to test parts of the system that haven't been developed yet, wait until the design of the code to be tested starts to be understood, and the code itself has actually started being developed. This helps ensure the tooling closely fits the testing and deployment needs, and avoids waste and re-work.

Blog post originally from Kief Morris' Grok and Roll

Wednesday, April 6, 2011

Linux-Recipe: List all listening ports and the PID of the associated processes

Once in a while, you find yourself having to work on a box you know very little about. You try running an application and get an error telling you another application is already using the network port you need. What do you do?
  • # lsof -Pan -i tcp -i udp
The lsof command gives a list of all open files - very powerful when you consider that *everything* is a file in Linux. This includes regular files, directories, block special files, character special files, executing text references, libraries, streams and network sockets.

It works for AIX, Apple Darwin, FreeBSD, Linux, NetBSD, NEXTSTEP, SCO OpenServer and Solaris 9 and 10.

Options
  • -P don't bother converting port numbers to port names - speeds up lsof.
  • -n don't bother converting IP addresses to host names.
  • -i list only files whose Internet address matches the address specified (-i4 and -i6 restrict the listing to IPv4 and IPv6 respectively).
  • -a used to AND the list options, i.e. show only files that satisfy all list options. The default behaviour is to show files that satisfy any of the list options.
Remember to refer to the man pages if in doubt.
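Putting the options together, here is a small sketch for tracking down the process holding a particular TCP port (port 8080 is just an example value - substitute the port your application needs, and note the script assumes lsof is installed):

```shell
#!/bin/sh
# Example port only; replace with the port you are trying to free up.
PORT=8080

if command -v lsof >/dev/null 2>&1; then
    # -P/-n skip name lookups for speed, -a ANDs the filters,
    # -i restricts the listing to the given TCP port
    lsof -Pan -i "tcp:${PORT}"
else
    echo "lsof is not installed on this box" >&2
fi
```

Run it as root to see sockets held by processes owned by other users; the PID column tells you which process to investigate.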