Innergy Software


Software Development using Ruby on Rails and Javascript

Globalization for Spree eCommerce 1.2+ with Rails 3.2

There has been much talk on the Spree mailing list and on StackOverflow about the lack of proper internationalization support in Spree as of 1.2: that is, the ability to offer your store in different languages and let the user select one.

Two Spree extensions were recommended to this effect: spree_multi_lingual and globalize_spree. I had no luck getting the first one to work, but I made progress with the second.

After realizing the second extension wasn’t compatible with Rails 3.2 and Spree 1.2, I made a fork and added the necessary changes to make it compatible.

The fork can be found here

Using this extension, an admin can specify the name and description for models and taxons in the admin section, and these will then appear in the appropriate language.

Instructions for usage are in the GitHub README.

To set the default locale, open config/initializers/spree.rb and, inside the Spree.config do |config| … end block, add the line

 config.default_locale = "ro" 
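For reference, the full initializer would then look something like this (a sketch assuming the stock Spree 1.2 initializer layout; other preferences elided):

```ruby
# config/initializers/spree.rb
Spree.config do |config|
  # ... other preferences ...
  config.default_locale = "ro"
end
```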

To set the language dynamically in my Spree shop, I used something like this inside an initializer:


Spree::BaseController.class_eval do
  before_filter :set_user_language

  def switch_language
    session[:selected_locale] = params[:language].to_sym
    redirect_to :back
  end

  def set_user_language
    I18n.locale = session[:selected_locale] || Spree::Config.get(:default_locale)
  end
end
This sets and stores the language in the session. You can then use an override to provide links for setting the language, e.g.

app/overrides/add_language_chooser_to_header.rb:

Deface::Override.new(:virtual_path => "spree/shared/_header",
                     :name         => "header_language_chooser",
                     :insert_after => "#logo",
                     :partial      => "spree/shared/language_chooser",
                     :disabled     => false)


and then provide a partial like this:


<div id="language_chooser">
  Choose Language:<br/>
  <a class="choose_language choose_english" href="/switch_language?language=en">ENGLISH</a>
  <a class="choose_language choose_romanian" href="/switch_language?language=ro">ROMANIAN</a><br/>
</div>

I hope some may find this useful. Feel free to submit any suggestions for improvement.

— 1 year ago
Implementing Social Login with JavaScript and Ruby, using the Gigya API

Gigya is a service that provides a general API for authentication against different services, such as Facebook, Twitter, Google and LinkedIn. The benefit is that developers don’t have to deal with the particulars of each API, which decreases maintenance costs.

The code to be implemented is usually JavaScript on the front-end (where they provide several user-definable UI controls), plus, most likely, backend code (which can be Ruby, PHP, Java, or whatever your backend uses).

On the downside, Gigya used to be very pricey (and as such beyond the reach of small startups, for example), and the documentation on their site leaves much to be desired.

In particular, for verifying the signature (front-end to back-end call), there is no Ruby code sample on their site. After figuring out the steps, I thought of posting it here, in the hopes that it will benefit others who come across this issue.

def self.verify_signature(uid, timestamp, signature)
  # Validate that the timestamp is within 3 minutes of your current server time
  if ( - timestamp.to_time) > 180
    raise"GIGYA ERROR: Invalid Timestamp")
  end

  base_string = "#{timestamp}_#{uid}"
  hmacsha1 = OpenSSL::HMAC.digest(DIGEST, Base64.decode64(SECRET), base_string)
  my_sig = Base64.encode64(hmacsha1).chomp.gsub(/\n/, "")
  if (my_sig != signature)
    raise"GIGYA ERROR: Invalid Signature")
  end
end

where DIGEST ='sha1') and SECRET is the Gigya secret.

The important point is that the SECRET has to be Base64-decoded before use. Also, one should use OpenSSL::HMAC, as it is an efficient way to generate a SHA-1 digest.
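To make the flow concrete, here is a self-contained sketch of the same computation with made-up inputs (the secret, uid and timestamp below are illustrative only); verification simply recomputes the signature from the same inputs and compares:

```ruby
require 'openssl'
require 'base64'

# Illustrative values only; a real secret comes from the Gigya console,
# already Base64-encoded.
secret_b64 = Base64.encode64('not-a-real-gigya-secret').chomp
uid        = 'u12345'
timestamp  = '1300000000'

# The base string is "<timestamp>_<UID>", signed with HMAC-SHA1 using
# the Base64-DECODED secret as the key. ('sha1' names the digest;
# OpenSSL::HMAC also accepts the name as a plain string.)
base_string = "#{timestamp}_#{uid}"
digest      = OpenSSL::HMAC.digest('sha1', Base64.decode64(secret_b64), base_string)
signature   = Base64.encode64(digest).chomp

# Verification recomputes the signature and compares:
recomputed = Base64.encode64(
  OpenSSL::HMAC.digest('sha1', Base64.decode64(secret_b64), base_string)
).chomp

signature == recomputed   # true when uid, timestamp and secret all match
```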

— 3 years ago
My review of the Strange Loop Developer Conference 2010, St. Louis, MO

I have recently attended two conferences: Hadoop World 2010, here in New York City, and Strange Loop 2010, in St. Louis, MO. Strange Loop’s location was, you could say, the stranger of the two, but it proved astoundingly strategic: since it is near the middle of the U.S., people from all over the country were able to attend, and sure enough I met a lot of people from both the West and East coasts, and other places in between.

Also St. Louis is a quaint little city without many distractions, yet full of good eating places and nice people.

While I have to say that it is not generally easy to socialize with tech people, as anyone in the industry can attest, I found the people at Strange Loop a surprisingly friendly bunch. It probably has to do with the contents of this conference: I could describe it as an eclectic mix of ideas from the forefront of technology.

Although the focus was not on any particular technology or language, a few broad themes emerged from the talks. One was the growing predominance of JavaScript, and how it should be embraced for what it is, despite its obvious shortcomings. More about that in my analysis below. Closely related were the HTML5 and Node.js subjects.

Another theme was parallelism and concurrency. Several very smart people contributed talks here, including Guy Steele. The other big theme was the NoSQL trend, with practical examples from various types of databases and business scenarios.

Before I begin I would like to point out that the pdf slides of most of the talks can be found currently at

Below, I will attempt to describe very briefly some of the talks that caught my interest.

Edward Yavno, “Event Driven Architecture”


Edward talked about “Event Driven Architecture”, a topic which is drawing a lot of interest these days given its relevance to so many areas. The JavaScript event model comes to mind immediately, but this was not the focus of Edward’s talk, which concentrated rather on the more “enterprise” aspects. The first idea is that open source is taking the world by storm, even in traditionally closed-source, enterprise areas such as the financial field, in which Edward has a lot of experience.

A book that was recommended for the ideas presented is “Open Source SOA”.

In the beginning, some of the building blocks were defined.

Complex Event Processing (CEP) is a technology to process events and discover complex patterns among multiple streams of event data.

Event Stream Processing (ESP) involves processing multiple streams of event data with the intention of identifying meaningful events within those streams and deriving meaningful information from them.

Finally, Esper is an open source ESP framework and CEP engine. It also provides an Event Processing Language (EPL) for dealing with high-frequency, time-based event data.

Another relevant and recommended resource mentioned was the book Enterprise Integration Patterns.

Edward then went on to give a practical example of an electronic trading system.

Yehuda Katz, “Making Your Open Source Project More Like Rails”


Yehuda, better known from Ruby/Rails circles, gave a non-technical talk that was in no way specific to Rails, but rather about open source projects in general and the lessons they can learn from the success of Rails.

The emerging ideas were:

Rails was optimized for “developer happiness” and not for “performance”

You have to “optimize” for something, and as such make compromises. Once you choose your main “optimization factor”, it is possible afterwards to improve other factors (for example, performance). The idea is that it is very important to have ‘developer happiness’ as a central focus, and that unfortunately many open source projects neglect this and thus, get neglected.

Nothing Beats Adoption

Release early, and get people to contribute and to give suggestions.


Don’t Tie the Project to a Particular Company

Although Rails was spearheaded by 37signals, it has in no way been tied to them; rather, it was given into the hands of the community at large. Yehuda thinks that having a single company behind a project is therefore dangerous and non-optimal. Examples that come to mind are MongoDB (10gen) and jQuery.

Attribution and Credit Builds Community

The idea is that we, as human beings, tend to often underestimate the potential of the network effect. The MIT license is probably best suited for taking advantage of the network effect of open source developer software, while GPL (for example) would be more suited for smaller, more contained applications such as “Adium”.

Another point is that the importance of “marketing”  should not be underestimated: things like blog posts and showing practical applications of your software are very powerful tools to encourage adoption. To add my own comments here, this is to be contrasted with “geeky”, “dark room” projects that apart from technical discussion show little interest in discussing any practical application. Unfortunately, in my experience, this is all too often the case with most open source projects out there.

I attended several very interesting talks on NoSQL databases, which was another well-debated topic. More specifically, I got a good mix of “success stories” (Steve Smith, Real World Modeling with MongoDB) and “failure stories”, i.e. avoiding the pitfalls (Billy Newport, Enterprise NoSQL: Silver Bullet or Poison Pill?).

Steve Smith, Real World Modeling with MongoDB


I was attracted from the start by the title of this presentation, and I was not disappointed. Steve talked about the real-world experiences of his startup (Harmony, a CMS for building websites) and what motivated their move from MySQL to MongoDB.

As a general observation of mine, each time your application has to deal with “dynamic data types” (when datatypes can be decided only at runtime), there are a few possible approaches:

1. Model using SQL relationships (polymorphic associations, multiple table inheritance, etc)

2. Model using Entity Attribute Value (EAV)

3. Model using a NoSQL database

We can think of a few real-life scenarios where this will be the case. One example is any kind of CMS, and a more specific one is an eCommerce platform. (Unrelated to this conference, a friend of mine has blogged about the need for NoSQL/MongoDB within the eCommerce space.)

The second approach, EAV, has traditionally been shown to be unacceptably slow. Some frameworks still use it (the Spree Rails eCommerce project, for example), and I don’t know about their performance, but in general this is not a recommended approach. It is actually one of the reasons why the semantic web ideas, which rely essentially on this kind of modeling, have not been more successful so far.

Going back to Steve’s talk, their approach was to use MySQL to model the dynamic data types. After having developed their entire application like that, they realized it had become an entangled mess that was very hard to maintain and no fun at all to extend. Switching to Mongo worked out very well for them and they have been very pleased with the results.

When data “belongs together”, there is an alternative to SQL joins in the concept of “embedding”: in a document store like Mongo, you embed related information in each data item (as opposed to spreading it over several tables), for example a “template” document that contains particular subfields embedded into it. Of course, this will not work well if you need frequent independent access to the sub-data; the idea is to model for the things you will do “most” (e.g. 99%) of the time.
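To illustrate (the field names here are my own invention, not from Steve’s talk), a “template” document with its parts embedded might look like this, expressed as the Ruby hash you would hand to the Mongo driver:

```ruby
# A hypothetical "template" document with its parts embedded directly
# in the document, rather than spread over separate, joined tables:
template = {
  :name  => "Product Page",
  :parts => [
    { :slug => "header",  :content => "<h1>...</h1>" },
    { :slug => "sidebar", :content => "<ul>...</ul>" }
  ]
}

# Reading the embedded data needs no join; it travels with the document:
slugs = template[:parts].map { |p| p[:slug] }   # => ["header", "sidebar"]
```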

When it comes to storing images and files (binary data storage), Mongo is well suited to storing these directly in the db (done efficiently through its GridFS storage specification). This has many benefits: for example, when backing up your database, you automatically include all the binary files, without having to process them separately as with traditional SQL.

Steve gave some specific examples of data types, such as items, templates, and activity streams. Despite not going into the guts and internals of why things work the way they do, altogether this was a useful and very practical talk, one that essentially convinced me to go with Mongo/NoSQL for my own startup idea, as opposed to experimenting with SQL first.

Billy Newport, “Enterprise NoSQL: Silver Bullet or Poison Pill?”


Billy Newport’s talk was situated on the “enterprise” side of things, as opposed to Steve’s earlier experience with a fairly small startup. He implemented NoSQL solutions for large clients, and in some cases it proved to be a failure (from which lessons were learned) because some clients didn’t realize the drawbacks that come with NoSQL.

One thing I need to point out is the drawback of labeling things “NoSQL” when in fact many very different databases go under that category. For example, there are key-value stores like Redis, column-family stores like Cassandra, and document stores like Mongo. This was mentioned in the talk, but perhaps not emphasized as much as it should have been.

It was pointed out that there is no “join” as in SQL, so you have to do a full (and thus very inefficient) table scan, or implement database access programmatically through algorithms such as MapReduce. That is certainly a valid concern with many of the NoSQL dbs, although some databases, such as Mongo, provide very fast querying abilities.

Other topics discussed: having a single System of Record (SOR) as in SQL versus having multiple ones, and choosing to denormalize data (in NoSQL) for performance versus normalizing in the SQL way. A host of new problems arises when there are multiple clusters of data (on separate machines), which is often the case with large enterprises. In that case, with NoSQL it will often be impossible to do multi-table scans “online”, meaning with real-time responses; those kinds of scans have to be done programmatically. Instead, the preferred approach is to run map/reduce algorithms offline and to do as much caching as possible, for as many queries as possible. Because of the many possibilities, things like group by/limit/joins will always be hard to cover.

Another very important idea was the fact that, with NoSQL, you need to know in advance the kinds of operations (queries, updates) you will perform on your data types, and that will dictate the way you design and model your data. If, for example, you decide to embed or partition data in a certain way, you can say goodbye to efficient querying in ways that don’t match that model. This is in sharp contrast to the misconception that with NoSQL “anything goes” and that “upfront modeling” is unimportant; quite the contrary, correct upfront modeling is essential with NoSQL.

The NoSQL panels and these talks drove home the idea that there is no “magic bullet” solution that will work in all circumstances. Different needs are optimally served by different dbs. Also, DBAs will still be needed, since a lot of work remains to be done on the db side, whether that db is SQL or NoSQL.

Concurrency and Parallelism

There were several interesting talks dealing with the related concepts of parallelism (multi-core, multi-processor or distributed processing) and concurrency.

Guy Steele, “How to Think about Parallel Programming: Not!”


Guy L. Steele Jr. is one of the brightest minds alive in Computer Science, and I found his talk incredible in both its delivery and content. Guy began his talk in a very funny way: by telling us how he spent his weekend reverse-engineering a computer program he wrote decades ago, from which all he had left was a paper card with zeroes and ones, from back in the times when punch cards were in use.

What began as a half-serious joke turned the audience into stupefied listeners as Guy turned the instructions into assembly code and went on and on about the intricacies of writing that particular assembly code. He recalled the old ways and tricks: register specifics, interrupt codes, bits for communicating with a matrix printer, even bit patterns. It turned out that this whole apparently crazy part had an extremely good point: it showed how things used to be, and just how difficult it was to get things done. And then how things have evolved steadily:

from coding in octal or decimal

to assemblers

to relocating assemblers and linkers

to expression compilation

to register allocation

to stack management of local data

to heap management

to virtual memory / address mapping

In other words, things have been evolving into higher “abstractions” while the lower-level work has been automated. Steele then explains that when it comes to parallel computing, what we want is to have that layer automated as well; in other words, to not have the developer deal with it directly.

In order to achieve that, however, applications have to be written in certain ways that are amenable to parallelism. Enter divide and conquer, and out with the “accumulator pattern”, the latter being how applications have traditionally been written.

Then came a beautiful example of such an algorithm on a practical problem: splitting a string into words. It turns out that expressing the problem in these terms requires fairly different thinking, and ingenuity in the choice of data structures and the ways of combining solutions to subproblems. In essence, we are implementing “MapReduce”-like algorithms.

What also came out of this example is that certain algebraic properties of the chosen data structures and operators are essential to this process: namely associativity, commutativity, idempotence, and the existence of an identity and a zero. (Who would have thought pesky math would come to be of such use!)

Paul King, Groovy and Concurrency

Somewhat related to Guy’s concepts was Paul King’s talk on Groovy and Concurrency. Although it involved Groovy, the concepts described there were very general. King gave a tour de force of several ways of doing concurrency, and actually implemented Guy’s problem in several different ways by using Groovy’s concurrency features. The slides for this are not up at the time of this writing, but I will upload them as soon as they are.


— 3 years ago
Process excellence is not a guarantee of success

(Originally posted on October 7th, 2010)

One of the main tenets of the Agile methodologies, exemplified in the Agile manifesto, is “people and communication over process and tools”. Thus, one is led to believe that Agility is not a process, but some kind of overarching, magical recipe that has the power, by itself, to guarantee success.

Some who subscribe to this extreme view will then attribute failure to an incorrect, or incomplete, implementation of Agility - a situation sometimes referred to as “Scrum Butt” (which comes from “we are doing Scrum, but…”).

First, I would argue that, quite the contrary, Agility is fundamentally a process: one that is markedly different from Waterfall, and one that emphasizes people and communication, yet nevertheless a process. The dictionary definition of a process is “a series of actions or operations conducing to an end”. Clearly, in implementing a type of Agility such as Scrum, one imposes a set of rules, such as daily stand-up meetings and regular iteration meetings, which must be followed. Even though people are involved, we have prescribed a process.

Secondly, I would argue that the Agile process is by no means a sure way to success. A methodology such as Scrum contains no discussion of good software engineering practices, yet these are arguably fundamental to delivering a good implementation. Extreme Programming (XP) provides these, while being less comprehensive on the process aspects.

Similarly, if the team members do not have the adequate technical skills, and if they are unable to acquire them, the execution is compromised. Yet another area concerns the external constraints imposed on the technical team - if the project timeline negotiated by the business side is unrealistic, no matter how high the quality of people, process and engineering, the project will be destined to fail.

We can see therefore that Agile is not a holy grail of software development. Successful companies such as Google have been able to deliver quality software without relying on a religious mantra, but by being able to combine the best of each area and to see the big picture. Failure of any of the essential components (people, business context, technology, process) can compromise the entire project. It is therefore no wonder that the failure rate is so high in software - since there are so many different types of pitfalls to avoid.

— 3 years ago
Agile Contracts: what are they?

(Originally posted on May 19th, 2009)

It’s a question that often gets asked and puzzles even the top managers of many consulting companies.

If you are going “Agile”, how to go about negotiating contracts? Aren’t “fixed bids” fundamentally incompatible with Agile? If so, then how can you handle potential clients who want a number to be quoted?

It turns out there are a number of ways to go about Agile contracts, described in more detail at

1.  The “Sprint Contract”

2. Fixed Price / Fixed Scope

3. Time and Materials

4. Time and Materials with Fixed Scope and a Cost Ceiling

5. Time and Materials with Variable Scope and Cost Ceiling

6. Phased Development

7. Bonus / Penalty Clauses

8. Fixed Profit

9. “Money for Nothing, Changes for Free”

10. Joint Ventures

Here’s another very useful collection of links on the subject, put together by Joe Little:

— 3 years ago
More IT projects failing - Standish Report 2009

(Originally posted on May 13th, 2009)

The 2009 Standish report on the success of IT projects is out. Perhaps surprisingly, it shows that there has been an increase in the failure of IT projects compared to previous years.

“This year’s results show a marked decrease in project success rates, with 32% of all projects succeeding which are delivered on time, on budget, with required features and functions” says Jim Johnson, chairman of The Standish Group, “44% were challenged which are late, over budget, and/or with less than the required features and functions and 24% failed which are cancelled prior to completion or delivered and never used.”

In my opinion, the increase in failures is due to a few more or less obvious factors:

- technology has increased in complexity

- adoption of Agile practices has been surprisingly slow

- there is a difficult economic environment, with scarcer resources (such as investment money) leading to more stressful and error-prone environments

To these, I’d like to add Jeff Sutherland’s remark that with most current practices, project success doesn’t pay, because…

“* Industry incentives now are for projects to be late.

* Many vendors only make money if the project is late and over budget, due to change requests and building functionality the end users do not want.

* CIOs participate in this dysfunctional behavior using their current proposal and contracting process.

* The whole industry could be viewed as driven by bad incentives and faulty practices.”


— 3 years ago
Navigating Company Politics

(Originally posted on May 7, 2009)

(by Joe Little. Posted with permission). In the course of my work, I hear people talk about how hard is to get things done in organizations. (This happened again recently.)

And I know from personal experience too, it is hard.

But I wanted to emphasize that organizational politics is not as hard as we make it for ourselves (at least sometimes it is not).

Here are a few nuggets mined in the field of hard knocks.

A few suggestions re ACTION (perhaps you find one useful):

* When boxing, do not expect to have the first punch be a knock out.  Set ‘em up for the kill in the 4th round.  Lots of combination punches.

* The truth is hard to resist.  (Yes, I know people will deny the truth and will often kill the bearer.)  Keep finding ways for the truth to be repeated and dealt with.  Scrum throws up the truth.

* If a bunch of people go together to a manager’s office, it is much harder for the manager to resist.  (Make sure you have the truth on your side, and that your idea makes sense.)   Maybe even harder if the manager comes to the Team room.

* Justify your impediment removals.  Do much better cost-benefit analysis.  Do them as small experiments (eg, show the actual results later).

* Justifications include: higher NPV for the product, higher velocity for the team, faster delivery, etc, etc.  Make the link from your improvement back to these key things.

* Make the case.  Make it so obviously right that the only question is: “How do I know your numbers are right?”  Managers only like to approve obviously right things.

* Ask to do an experiment.  Make sure the test sample is big enough to draw conclusions from.

Go get ‘em.

Nothing I said guarantees success. Accept that the other person is free and you can’t make him change.  Give him some respect.

— 3 years ago
Product Owner’s responsibilities in Agile software projects

(Originally posted on Aug 24, 2008)

According to the Standish report, “User Involvement” has been ranked as the number-one factor in software projects success. The following is the complete list of factors found to significantly impact success:

  1. User Involvement
  2. Executive Management Support
  3. Clear Business Objectives
  4. Optimizing Scope
  5. Agile Process
  6. Project Manager Expertise
  7. Financial Management
  8. Skilled Resources
  9. Formal Methodology
  10. Standard Tools and Infrastructure


How do we achieve optimal user involvement?

First off, we would need to define who the “user” is. The user is anybody who will benefit from the use of the system. This term is not to be confused with “Client”, the latter being a certain entity who wants the software to be built. The Client may or may not be the ultimate user of the system.

In any project, there should be a designated “Product Owner” (an Agile/Scrum term) who has detailed knowledge of the business domain and user needs. Often, technology companies are started by such a person, who has identified a clear business need and knows what the right product for it will be. However, this ideal situation is not always the case: sometimes, the initial idea needs to be developed in much more detail beyond the initial insight. In other cases, an existing company wants to develop new products about which it has an intuition, but not yet a complete understanding. Both of these scenarios provide challenges in which the product owner may not yet be knowledgeable enough about what needs to be built. One solution is to develop that knowledge through group studies and interviews with the target users, and to pair the Product Owner with other stakeholders.

Another challenge in building software products is distinguishing between end-user needs and wants. Roughly defined, we could say that a want is a feature that the user thinks the system should have. We define a need, on the other hand, as something that the system should definitely have, from the perspective of the software implementer.

Users will often demand certain features to be incorporated, because of the perception that they would make their life easier. However, it is the exception rather than the norm that such a want SHOULD actually be translated into a feature. The reason is that individual users, from their own perspective, have only a limited view of the problem. It is the Product Owner who has an integrated, “holistic” view of the needs across the entire spectrum, and who SHOULD decide which wants will be translated into needs. A common mistake is trying to add a feature for every want.

The Product Owner also needs to be aware of the difference between a function of the product and the visual interface for that function. It may sound like common sense, but a great many products do not place enough emphasis on user friendliness, because the product owner doesn’t have that skillset. In that case, it is vital that the Product Owner collaborate closely with a user experience specialist. Apple is an example of a company that understood it is not just about the feature, but also about the best way to interact with and express that feature.

Finally, the Product Ownership group needs to understand the technical details of the implementation. That idea is alien to traditional software approaches, but central in Agile. If the Product Owner doesn’t have the necessary technical skillset, a sharp separation is often imposed throughout the development cycle, which can be catastrophic for the project. The solution is to keep the Product Owner in continuous collaboration with the technical team, such as by pairing them with a technical lead. At every step of the way, feature feasibility and prioritization should be considered together with the technical details, the result being much better choices, faster development time, and ultimately a better product.

— 3 years ago
Advice for hiring engineering talent

(Originally posted on June 16th, 2008)

Recruiting software professionals for a new technology is difficult. Couple that with a general job-seeker’s market and a region short on technical talent, and you have an even more serious problem. Such was our experience with one of our New York City-based clients; the particular technology in question was Ruby on Rails (RoR).

As with every challenge, there are ways out, and it can be turned into a big opportunity. In fact, since most companies will face the same problem, the opportunity to solve it will be all the better.

Here are the approaches that proved successful for us.

1) Recruit people without explicit experience in the particular technology, but who are totally capable and eager to learn it

Almost all of the strong RoR developers around already had a job or enough contracts. What to do: stall the project or change the technology? Neither is the smartest thing to do. Our approach was to get promising people with a background in related technologies, in this case PHP and other object-oriented programming languages. These developers learned Rails quickly and were able to surpass our expectations. Interestingly enough, this principle runs counter to some people’s advice of only hiring candidates with at least one Rails project in their portfolio.

2) If faced with a choice, prefer someone with a lot of experience and skill in a similar technology over someone with a little experience in the same technology.

Unfortunately, some of our offshore consultants were in the second category - that is, they had some Rails experience, but their overall web development experience was low. Fortunately, we made the opposite choice when we recruited the on-site people. A developer with significant background in PHP, CSS/Javascript and databases proved to be an exceptional Rails developer in a short period of time.

Which brings us to the next point:

3) Do not underestimate the teaching power of the web

A lot of technical problems have already been solved, and chances are someone out there has posted a description of how to do it. A significant number of programming constructs (plugins, modules) have already been built. And for Rails, the web community is so good that often all it takes to uncover how to do something is a simple Google search. Knowledge of the specifics of a particular language has therefore become even less important, and someone who has a strong understanding of the principles is the best asset.

4) Recruit people with communication skills. Encourage and grow communication skills.

It is common to assume that IT professionals lack good communication skills, but that doesn’t need to be the case. In fact, particularly if the development methodology is intended to be Agile, the assumption is unacceptable. If there is a need, management can (and should) set up processes that will encourage better communication. We were able to get people with both technical abilities and excellent communication skills at the same time, and that was instrumental to the success of the projects.

5) Engage technology professionals in the hiring process

It is tempting to try to delegate the candidate search to recruiters, whose sole job is to look for people. Yet sometimes with new technologies it is the developers themselves who can be more successful at getting new hires. Through their memberships in various online communities, personal networks and knowledge of specific job boards, excellent referrals can be obtained.


In conclusion, not starting, or stalling, a project just because people with a particular skill set can’t seem to be found is not a solution. The lost opportunity can be a lot more costly. A good principle is to look for people showing the general traits of adaptability, passion for technology, general technical ability and work ethic.

— 3 years ago