
Changing IT Mindsets From Deployment To Adoption


I’ve always found it interesting how the various tools we previously thought of as the “right answer” are continually replaced by tools we believe to be “the next right answer”. We used to love e-mail back in the nineties – now we hold it in disregard. We now think micro-blogging and activity streams will solve the e-mail problem. I wonder how long it will take for us to complain about those tools and pontificate eloquently about the next right answer that should replace them.

The point? Almost every widely deployed technology today that has failed to meet expectations over time, at one point was revered as the answer to our problems (productivity suites, content management systems, “groupware”, search, portals, etc.). While this might be the natural cycle of things, we might also have ourselves to blame based on the way we think about technology delivery.

One of the tag lines I’ve used over the years is that Enterprise 2.0 strategists need to think of their projects in terms of adoption not deployment. What I wanted to achieve with the catchphrase was simple – to persuade people to think about their projects differently. We never seem to be happy with enterprise technology – this is especially true when it comes to collaboration tools – and might very well be the situation with tools associated with Enterprise 2.0 in a few years unless we alter our view on what it is that we’re “delivering” and what constitutes “being done”. We should not be so enamored by what we think is our current right answer from a technology perspective that we forget the non-technological things we need to enable so that people (and the organization at large) can realize and sustain the value derived from use of the tools.

Within many organizations, the plan-build-run philosophy still frames our view of IT – once a system is implemented (i.e., deployed), the project is “over” – resources are reallocated, budgets are closed out, systems go into some type of maintenance mode or await the next release cycle of new development. We then wait and watch for the results promised by the project (e.g., ROI). Often those results are based on metrics that examine cause-effect impacts and improved business outcomes. We want benefits to be self-evident quickly. We tend to struggle when project results are subjective, can only be inferred, or correlated to improvements that take more time to emerge than anticipated.

While the situation I describe above is somewhat of an over-simplification, and not applicable to every enterprise, it is a reasonable scenario. My perspective is based on talking to many IT organizations involved in collaboration-related projects since 1996.

When it comes to collaboration, knowledge management, and E2.0 efforts, “culture” is often cited as the reason results fail to meet expectations set when the project is approved. While some projects might acknowledge the need to support post-deployment activities early in the planning stages, strategists and project leaders have consistently told me that they were surprised (and not in a good way) at how they underestimated the effort necessary to gain “adoption” of their solution.

To make up for the shortsightedness, we respond (after the solution has been deployed and after the project has officially ended) with governance programs, user experience enhancements, change management processes, and community outreach efforts. We collectively hope that “better late than never” heroic efforts can rectify the original mistake. In my experience, it’s a toss-up as to whether you can salvage the project in a timely and effective manner at that point.

As long as we equate project delivery with the end of the project – our frustration over time with “the new right answer” is going to follow the same cycle as previous techno-centric approaches. In many ways, what strategists and project leaders were saying is that delivery of the technology (e.g., blog, wiki, social network site, activity stream, micro-blogging) actually represented the beginning of the project from an organizational perspective. Some take-aways I hope you consider:

  • Governance programs, user experience design, change management processes, and community outreach efforts must be treated as required components of the project. That means “time, money, and resources” must be forecasted upstream.
  • Project completion cannot be viewed as coincident with technology deployment. That means thinking of people-centric initiatives like Enterprise 2.0 as part of an overall transformation program, not a one-off project.
  • Metrics (and ROI) for people-centric solutions need to be recast. This remains a work-in-progress for the industry. We are in a state where we can find great stories of people working in new ways and, in some cases, with real impacts. However, we're still in search of the right measures and ways to value people's participation.

As an industry, we're still struggling with measuring E2.0 value. Some might call it a crock - some might say that the solution is to make all of E2.0 process-centric. Each argument has some level of truth. If we cannot define the business or organizational value, then we are guilty of arguing for something that is academic, conceptual, or a leap of faith (a la KM in the 90s). That does not make E2.0 wrong, simply difficult to "prove" (cause-effect). If we fail to include "social" within the context of actual work, then we limit how E2.0 can deliver real improvements. However, we should not hide behind process - or cede the value of participation outside a process context. All it means is that we're early into understanding the organizational dynamics in play when it comes to issues related to identity, social roles, formation of group structures, and social networks.

Note: The reason I am saying “as part of an overall transformation program” is to acknowledge that E2.0 is complementary to the real business and organizational initiatives an enterprise might have. E2.0 for the sake of E2.0 is not the point – that would be making the same mistake KM initiatives made in the nineties. E2.0 can help with strategic talent and learning initiatives, CRM and BPM efforts, and so on. It is additive, or a means to an end. It should not be positioned as an end unto itself.


I wholeheartedly agree with your supposition that the emphasis should be on adoption - which requires a totally different perspective from a solution where a product or tool is deployed. I am not sure if there is a parallel in the industry, where most things are product-centric. It raises the question, though, of how to practically facilitate the ideas you are espousing - governance, project completion aspects, and metrics. Let me throw out a few things; these are topics of broad discussion individually.

In the governance area, I believe we need a "starter" set of templates, processes, and tools to assist the customer. I have seen artifacts in the past that have been used to assist customers in establishing governance in the enterprise - and these need to be tailored toward ESS. Of course, one size does not fit all, but we need to jump-start the process.

In the project area, we perhaps need a new process methodology - an engagement model, if you will - that takes into account the full lifecycle of ESS engagements. This includes tasks, activities, milestones, and dependencies that are applicable to ESS engagements. The standardization of this would need to be enforced as much as practical as well.

And lastly, I believe it is possible to establish metrics in many cases in spite of the more nebulous nature of ESS. Admittedly, capturing those metrics is more difficult in some cases. In one of my posts, I list some of these depending on the type of project. Of course, this assumes that you can get a meaningful baseline and that the metrics are quantifiable. A help desk may have good metrics on call volume, time to respond, and such - while measuring innovation depends upon factors that are harder to quantify (how to quantify innovation, new products, etc. - and separate out causality factors).

In short, I think we can make meaningful progress in these areas - and, with experience, refine the governance artifacts and project methodology and better establish metrics.



As I was reading this, I was visualizing a project timeline where the technology deployment was a milestone but not the project endpoint. As I thought further I started thinking about the beginning of the project. I think you implicitly addressed that, but I think it is worth addressing in more detail.

If we think about the "project" from the beginning as the achievement of some business objective or meeting some business need, we can build a better foundation for the endgame you discuss. When we plan the project from the start with a holistic/systemic perspective, then the technology just becomes a single track of the project that fits into a larger context. When approached this way, I think we will find that some of the vexing problems we face, like metrics and behavioral change, become easier to solve, because we have a bigger picture of what is to be accomplished.


Hi Lee, I've also framed certain initiatives as being more "program-like" than "project-like", and I think E2.0 is one of those initiatives that should be treated as a program - a series of activities that should not be framed solely in the context of an IT project. I agree with you that the more we acknowledge the multiple dimensions that need to be addressed (not just the tooling), the better off we are. I'm not sure it makes success or metrics any easier, but at least we are better prepared.


Hi Don, Thanks for the feedback. Some thoughts:

1. In my experience, governance begins upstream and does not center on technology per se but on the agreed-upon practices of its usage. I believe the anchor point for effective governance involves people, policy, process, and community. The cornerstone of governance is to establish the degrees of control between various constituencies and local decision-rights for those being governed. Governance programs also provide a feedback loop to policy formation. I've come to believe that governance should not include enforcement - enforcement is best left to architecture boards, standards councils, etc. Governance should remain business and organizationally focused.

2. Templates, starter kits, etc. are all great ideas. Totally agree.

3. I'm not a big fan of the list AIIM came up with. Some of the items in the list I just cannot validate - and I've talked to a lot of E2.0 teams over the years. E-mail replacement would be a terrible business justification for a project. It could be a possible benefit, but not a justification. Portal replacement is also unlikely - how E2.0 can renovate/extend portal value is a more valid idea, but still technology-centric. Collaboration complexity is a bit nebulous. KM is a tainted term, and many E2.0 teams do not want to associate themselves with it. Innovation and ideation are perfect reasons for an E2.0 project. In fact, the E2.0 Adoption Council surveyed members a while back, and innovation was the first real business-focused reason for such initiatives. Customer support, SME, and communities are other reasons I've come across.

4. By metrics, I'm focusing on business metrics - how does E2.0 improve talent management, learning, hiring, etc.? How does E2.0 improve cross-selling/up-selling? We need metrics beyond technology usage - when I've talked to large enterprise organizations, it comes back to E2.0 being a discretionary spend because the ROI (and metrics) are "squishy". Lots of work to be done around metrics that point to business value.


Interesting the focus on metrics and ROI.

I am curious about being able to find an ROI study and the metrics employed to justify and evaluate the value of e-mail in the enterprise.

While we are at it, it would be good to see the ROI and metrics employed for the people who purchased VisiCalc in 1979 and subsequent spreadsheet products like Lotus' 123, and Microsoft's Excel.

I would be surprised if those could be found. If they were found, I would be even more surprised/amazed if the justification for a spreadsheet and e-mail wasn't done using a spreadsheet and distributed using e-mail - but I digress.

Why no ROI and metrics? The 'justification' was easily understood in that it automated and made generally available what had previously been a manual or highly specialized task: e-mail vs. snail mail/courier and/or phone calls; spreadsheet vs. calculator/pencil and paper, or a dedicated Formula Translation (aka FORTRAN) program.

I would suggest that the wait for the day Enterprise Social Collaboration has its own 'hard and fast', distinguishable ROI and distinct business value metrics will be a long one (read: years), and those waiting will have missed the opportunity to improve their enterprise's business performance.

Enterprise Social Software is an integral part of what is currently the best workable means to bring together the best of the last 30 years of point solutions (e-mail, wikis, blogs, instant messaging, self-serve tele-conferencing, etc. - I would even put knowledge management in here) and move the enterprise forward. As mentioned by others previously, it is not exclusively - and I would argue not primarily - a technology or cost issue: it is a business issue.

I think we are near the point where leading business executives understand the value and will start to ask "Why aren't we doing this?" as opposed to "Why should we do this?" - regardless of demonstrable ROI or proven metrics.



While no one does e-mail ROI today, and no one has for over a decade, "back in the day" organizations did work hard at trying to identify it. There's an "urban myth" out there that no one ever tried to do ROI on e-mail. It's simply not true - they tried, and failed. It was too difficult and subjective. I know this because I worked at Meta Group during that time, and my colleague Matt Cain (who still covers e-mail) took all those calls. Ditto on investments in office productivity tools (ok, so now I am dating myself). While I was at a large insurance company in Hartford, we did investigate it in terms of TCO (not ROI per se).

Often the argument for ROI is also used because you need to tell other projects why they are not getting the funding. For instance, if I give "social software Project ABC" a sum of funds, I need to tell three other projects why they did not get the funding - ROI (right or wrong) makes it seem like a business decision. Organizations need to allocate resources, and they use ROI as one way of assessing priorities - not saying it's right, just the way it often is in many large enterprises.

Metrics can be important and can be separated from an ROI discussion.

A lot of what social software is about relates to enabling organizational capabilities - I'm not sure we have the right metrics in place for that, or a clear idea of what the ROI is on scaling expertise, better business agility, etc.

Interestingly enough, Deloitte published a study you might find helpful:



I never subscribed to the urban legend of an ROI on e-mail never being tried; I was suggesting what you experienced - it was too subjective to be of value and never really existed, thus the final purchase decision had to be based on something other than ROI. TCO is great for comparing functionally similar items and for influencing a decision based on cost - as you intimate, it doesn't by itself provide an indication of ROI.

As a Marketing Representative (Sales), Program Manager, and Product Manager at IBM, I developed an appreciation for ROI and its usage as one of the inputs to the decision-making process. As you know, most businesses and executives do not make ROI the exclusive reason for pursuing different investments. ROI is part of the mix, of course, but clearly - and I offer e-mail and spreadsheets as simple proof points - ROI is demonstrably not essential to making sound business investments.

The Deloitte study is interesting - my cursory read indicates that many of its observations (apply to a specific issue, be strategic, act now) resonate with the pursuit of any new business approach or opportunity (and perhaps supporting technology) in an enterprise - they are not unique to ESS. Enterprise disruptors are not new. Like other disruptors, effective enterprise collaboration is, as you have noted, not a technology issue alone: it requires the discipline and methodical execution required of enterprise transformational endeavors. We can learn much from others who have done the same in the past.

You might find Kenneth Wentland's "Ford Flexes Back" worth looking at. Among its many insights are lessons we can apply about effective virtual enterprise collaboration and the executive leadership and decisiveness required to embrace and develop the disciplines around large-scale virtual collaboration across professions and organizations. There is an excerpt here ( )
