Getting the most out of your document capture solution – Multistream, color dropout for forms processing

Leveraging an investment in scanning hardware and software should always be a priority.  After all, these are typically not cheap investments, although the ROI can be outstanding if implemented properly.

In this blog I would like to share some little-known, yet extremely useful, features that can dramatically improve forms processing automation and accuracy.  I am occasionally asked about these features, and I believe that if more people knew they were available, it would tremendously improve efficiency in the capture process.

Multistream – Multiple versions of one captured image

The first feature I would like to explain is “Multistream”.  As the word indicates, this means that for each image captured, the scanner can output two or more versions of the image.  Why in the world would anyone want to do this, you ask?  Good question, and the answer is to improve forms processing data extraction accuracy.  Typically, when people use Multistream they will output a color version of the image and a bitonal (black and white) version.  The color version is stored for the purpose of retaining an electronic copy of the original document; this is the version that humans retrieve and view.  The bitonal version, however, is the one processed by capture technology such as OCR.  Bitonal images are preferred for OCR because color is unnecessary for a computer to interpret pixels and can actually decrease accuracy.

As you can see in the image below, the OMR (Optical Mark Recognition – checkboxes), ICR (Intelligent Character Recognition – handwritten characters) and OCR (Optical Character Recognition – machine-printed characters) areas are much cleaner on the bitonal image on the left, while the color image on the right is good for human viewing but not as good for capture and data extraction.
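Under the hood, the conversion from color to bitonal comes down to thresholding: every pixel becomes pure black or pure white.  Here is a minimal sketch in Python using plain pixel lists rather than a real scanner SDK; the threshold value and the tiny sample “scan” are purely illustrative:

```python
# A sketch of binarization: converting a grayscale image (a list of
# rows of 0-255 pixel values) into the bitonal image OCR engines prefer.

def to_bitonal(gray_pixels, threshold=128):
    """Map every pixel to pure black (0) or pure white (255)."""
    return [
        [0 if p < threshold else 255 for p in row]
        for row in gray_pixels
    ]

# A tiny 2x3 "scan": dark ink strokes on a light background.
scan = [
    [30, 200, 45],
    [220, 210, 15],
]
print(to_bitonal(scan))  # [[0, 255, 0], [255, 255, 0]]
```

Real scanners do this in hardware with adaptive thresholding, but the principle is the same: strip away the shades a computer does not need.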

Dropout Color – Remove form background color

Another useful feature, which can be used in conjunction with Multistream or on its own for certain types of forms, is called “Dropout Color”.  This means that the scanning hardware, the scanner driver or even the capture application can remove the form’s background color.  In the image below, the form color for the healthcare form is red.  This red color is a good way to guide the humans completing these forms to the areas where information should be filled in.  However, the color is unnecessary for a computer to read this information via OCR, ICR or OMR.  Therefore, we can “dropout” the color to expose only the information on the form that we really care about.
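As a rough software approximation of what the hardware or driver does, a red dropout can be sketched by whitening any pixel dominated by the red channel.  The pixel representation and the margin value below are illustrative, not taken from any particular scanner driver:

```python
# A sketch of red "dropout color": pixels that are strongly red (the
# form's background color) are replaced with white, while dark ink
# (low in all channels) is left untouched for OCR/ICR/OMR.

def drop_red(rgb_pixels, margin=60):
    """Whiten pixels whose red channel dominates green and blue."""
    out = []
    for (r, g, b) in rgb_pixels:
        if r - max(g, b) > margin:   # strongly red -> form background
            out.append((255, 255, 255))
        else:
            out.append((r, g, b))
    return out

pixels = [(230, 40, 50),    # red form line  -> dropped to white
          (20, 20, 20),     # black ink      -> kept
          (250, 250, 250)]  # white paper    -> kept
print(drop_red(pixels))
```

Production dropout is tuned to the exact form ink color, but the effect is the same: only the filled-in information survives.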

Forms Processing – Automatically extracting data from forms

Now, after using Multistream and/or Dropout Color, as you can see in the image below, you can expose all the data you wish to capture in a neat manner that a computer can better understand and interpret.  The combination of these advanced features can certainly help improve your data capture automation and accuracy levels.

Gaining value by using tools available to you

Enabling these features is quite simple, so I encourage everyone to consider whether these, or other features, might be available in your document capture solution to help improve productivity.  These are just a few examples of using available functions to enhance a process.  Within the entire capture process there are many techniques, functions and features that can be incorporated to make capture much more efficient.

What do you think?  Are you getting the most out of your capture solution or do you think that there are possibly areas of improvement had you known about capabilities such as Multistream or Color Dropout?

The logic of document capture

Indexing, Metadata, Keyword, SharePoint, Capture, Scanner, Documents, ECM, Content Management

What is wrong with the collection of words above?  Well, it’s a collection of closely related terms with no logical structure, so it is of no value to anyone reading it.  For these words to be valuable and readable in context, they need to be logically organized into a sentence.  The logic of document capture and Enterprise Content Management is much the same.  In this blog post, instead of going into the nuts and bolts of document capture, I thought it was more important to discuss two components critical to the overall success, or failure, of your content management strategy: taxonomy and metadata.  This is philosophy, not technology.

To break down document capture in its simplest form, think of it as the process of extracting information from a document and making that information available in the future.  The future could be immediate, where a scanned invoice, for example, immediately kicks off a payment process.  Or it could be two weeks from now, where a customer service agent needs to retrieve a signed airbill for a proof of delivery.  The point is that document retrieval is based on some unique keyword, or set of keywords, related to a particular document.  In the case of the invoice it could be the invoice number, and in the case of the airbill it could be the shipping tracking number.

If you do not consider a well thought-out strategy, then your organization could accomplish nothing more than taking a paper mess and simply converting it into an electronic mess.

Establish a well thought-out taxonomy

Taxonomy is defined as classifying organisms into groups based on similarities.  Why is taxonomy relevant for document capture?  For several reasons, including security, quicker access to information and retention policies.  If you work backwards in the methodology of how, and with what technology, to implement your document capture solution, a solid consensus on the end result is of paramount importance.  The end result is typically a high-quality scanned image conducive to data capture (OCR, ICR, OMR, bar code, etc.) plus the metadata itself.  So if your taxonomy is organized methodically, it should make your document capture strategy fairly obvious.

Let’s take security as a benefit of a well thought-out taxonomy strategy.  By segregating documents based on a logical taxonomy, organizations are afforded an additional level of comfort, knowing that one set of security policies can be applied to, for example, sensitive Human Resources documents, while everyone is allowed access to a general set of scanned documents, such as the café menu, which is clearly not information-sensitive.

Another benefit of a well thought-out taxonomy is quicker access to information for users.  Many content management applications and search engines use a ‘crawl’ method to check newly added content and add it to an index (database), which is then searchable.  As you can imagine, common sense and logic dictate that ‘crawling’ a narrower scope makes it much quicker to keep the index up-to-date, and access times can also be considerably lower because only the relevant data, rather than the entire database, must be searched.  This makes access to data quicker.

Lastly, with regards to retention policies, having your data well organized is a major benefit.  Imagine that an organization has all of its tax documents properly stored electronically, via a well thought-out taxonomy, in its content management system.  Then, easily and within corporate governance standards and policies, the organization can remove those images from its repository based on a retention schedule.  So, as illustrated, investing the time to develop a strong taxonomy is important for many reasons, including security, searchability and retention.

It is extremely important not to overlook this concept when planning out a document capture strategy.  A simple taxonomy might be organized as below:

  • Accounting
    • Accounts Receivable
      • Check
      • Statement
    • Accounts Payable
      • Invoice
      • Receipt
  • Human Resources
    • Applications
    • Resumes
    • W2 Forms
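To make the idea concrete, that sample taxonomy could be expressed in software as a nested structure that a capture system uses to route each document to its place in the hierarchy.  This is a hypothetical sketch; the helper function and structure are illustrative, not part of any particular product:

```python
# The sample taxonomy above as a nested structure, plus a helper that
# resolves a document type to its full taxonomy path.

TAXONOMY = {
    "Accounting": {
        "Accounts Receivable": ["Check", "Statement"],
        "Accounts Payable": ["Invoice", "Receipt"],
    },
    "Human Resources": ["Applications", "Resumes", "W2 Forms"],
}

def classify(doc_type):
    """Return the taxonomy path for a document type, or None if unknown."""
    for dept, sub in TAXONOMY.items():
        if isinstance(sub, list):
            if doc_type in sub:
                return [dept, doc_type]
        else:
            for group, types in sub.items():
                if doc_type in types:
                    return [dept, group, doc_type]
    return None

print(classify("Invoice"))  # ['Accounting', 'Accounts Payable', 'Invoice']
print(classify("Resumes"))  # ['Human Resources', 'Resumes']
```

Once the taxonomy lives in one structure like this, security policies, crawl scopes and retention schedules can all key off the same paths.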


Considering a well thought-out strategy might seem cumbersome in the initial stages of establishing your document capture strategy, but it can save organizations significant time, money and aggravation in the long run.  As a document capture best practice, it is important to establish a solid taxonomy for scanned documents and to re-evaluate that taxonomy as new documents are introduced within your organization.

 

Consider what information is important, and what is not

Creating Searchable PDFs is one form of document capture; however, it is not always an ideal document capture strategy.  While in certain situations creating Searchable PDF images of your scanned documents is the right approach, this technique often creates inefficiencies.  You might be thinking to yourself: how could creating a fully Searchable PDF, with all the words of the document indexed, be construed as inefficient?  Let me elaborate.  When creating a Searchable PDF, the scanning software does its best to recognize every single character and every single word on a page.  This might sound appealing, but let’s consider the possible results in real-world applications.  Imagine that an organization in the insurance business scans as few as 100 single-page documents and creates Searchable PDF documents.  Later, a user wants to retrieve a document, so they use the word “claim” in their search criteria.  As you can imagine, the user would most likely be presented with a long list of links to possible documents, but only one is the document they are looking for; the rest is “irrelevant search”.  This is because the entire page was indexed via the Searchable PDF method.  Alternatively, if your data capture strategy had included extracting only the “relevant search” terms that apply to a particular document, then you make the organization much more efficient by finding the requested data much quicker, on the first search.
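A toy illustration of the difference: a full-text index matches nearly every insurance document on a common word like “claim”, while a metadata index built from only the fields that matter returns just the one you want.  The document texts and the policy-number field below are invented for the example:

```python
# Full-text search vs. metadata search on three hypothetical scans.

docs = {
    "doc1.pdf": "claim form for policy 111 water damage claim",
    "doc2.pdf": "claim form for policy 222 fire damage claim",
    "doc3.pdf": "claim form for policy 333 theft claim",
}
# A "relevant search" index: only the policy number extracted per document.
metadata = {"doc1.pdf": "111", "doc2.pdf": "222", "doc3.pdf": "333"}

full_text_hits = [d for d, text in docs.items() if "claim" in text]
metadata_hits = [d for d, policy in metadata.items() if policy == "222"]

print(full_text_hits)  # every document matches "claim" -> irrelevant search
print(metadata_hits)   # only the one relevant document
```

At 100 documents the noise is an annoyance; at a million it is the difference between finding a record in seconds and not finding it at all.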

One of the other significant benefits of an integrated document capture/content management strategy is that oftentimes any metadata fields created, and rules applied, in the content management system can be brought forward and applied in the document capture system itself.  For example, if an organization’s policy dictates that, on a healthcare insurance form, the social security number metadata field is required and can only be nine numeric characters long, then these rules can be enforced directly in the document capture system.  This allows for great business continuity and consistency in your data capture process.
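A rule like the nine-digit social security number can be enforced with a few lines of validation logic at capture time.  The function name, field semantics and error messages here are a hypothetical sketch, not any vendor’s API:

```python
import re

# A sketch of enforcing a content-management metadata rule during
# capture: the SSN field is required and must be exactly nine digits.

def validate_ssn(value):
    """Return an error message, or None if the value passes the rule."""
    if not value:
        return "Required field is blank"
    if not re.fullmatch(r"\d{9}", value):
        return "Must be exactly nine numeric characters"
    return None

print(validate_ssn("123456789"))  # None -> valid, safe to export
print(validate_ssn("12345678A"))  # flagged for the verification step
```

Enforcing the rule at capture means bad values are caught while a human verifier is still looking at the image, not after export.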

An analogy I like to use: go to your favorite internet search engine and enter a vague term such as “taxonomy for document capture”, and you will get a long list of ‘hits’ that probably are not of interest, because you might be looking for a specific piece of information or a scanned image.  On the contrary, if the user enters a more specific term such as “aim document taxonomy”, then the search is narrowed down to a more relevant list of potential information.  This is an example of relevant search versus irrelevant search, and it is all related to applying metadata to web pages, electronic documents and, yes, especially scanned images.

Summary: Organized taxonomy + relevant metadata = Efficient process

In summary, my point is to carefully plan out your document capture process.  Pay close attention to developing an effective taxonomy for your documents.  Determine what information is important on a particular document and what is not.  Document capture technology has evolved to nearly magical proportions, but the truth is that organizations can still greatly improve their efficiency and content management effectiveness through careful planning; after all, there still is logic to document capture.

Do you have thoughts on the topic of document capture, taxonomy or classification?  Please share your comments.

The world’s largest scanning device event ever – Dreamforce 2012

If you had to select from the list below what the world’s largest gathering of scanning technology would be, what would be your guess?

    1. The AIIM conference
    2. The ARMA conference
    3. The CES tradeshow
    4. The Macworld conference
    5. None of the above

The answer is not as obvious as most of us would guess, such as the AIIM conference.  After all, AIIM is known as a leading organization in ‘image management’, so of course it would host the world’s largest collection of scanning devices ever.  The correct answer is “None of the above”.  I would strongly argue, and have plenty of evidence, that Salesforce.com’s recent Dreamforce 2012 conference in San Francisco was far-and-away the largest collection of scanning technology ever assembled at one conference.  Specifically, I’m referring to the number of camera-enabled devices at this conference creating images from smart phones instead of document-feed paper scanners.  There were 90,000 registered attendees, and each attendee probably averaged two devices, whether iPhones, Androids, iPads, Galaxys or whatever.  These devices were in abundance, that’s for sure!

Therefore, conservative estimates of around 180,000 camera-enabled mobile devices plus all the devices in the vendor’s booths themselves probably puts the number of “capture” devices at around 200,000!  This is a remarkable opportunity to leverage the fact that most of the devices these days include high-quality cameras.


 Of course I’m not talking about large production-type scanners typically seen at the annual AIIM conference where you would capture a stack of 100 or 500 pages at a single time, for example.  I’m talking about ‘transactional’ capture where the use case is to capture one, or just a few, documents at a time.

 

Education and awareness – Old habits die hard

Even with all these devices readily available to all attendees, and all this revolutionary software on display, I witnessed utter failure, not because any of these people or technologies were bad, but because people were not aware of the incredible advances in Mobile Data Capture.  Let me clearly explain what I mean by utter failure, with specific examples.

 

1.  Mobile Data Capture Use Case # 1:  Business Card with recognition on device

First, several people handed me their business cards.  Why?  Why not just take a picture of the card and automatically put it into Salesforce as a contact?  Yes, the technology does exist!

 

2.  Mobile Data Capture Use Case # 2:  Marketing materials with recognition hosted

The next utter failure was when I was handed some marketing materials.  What typically happens with these items?  That’s right; they often get filed right into the circular file cabinet (a.k.a. trash bin), never to be found again.  Why not just snap a photo with a smart phone, have the document turned into a fully Searchable PDF image and then store it in some system?  Then I can quickly, and easily, retrieve it in the future based on some keyword related to the material.  This functionality is very useful not only for retrieval purposes but also for general organizational purposes.  For example, at a typical tradeshow you will meet many people and be introduced to companies you probably hadn’t known of before.  In these cases you will most likely remember only something vague about the company, person and/or product, but not the actual name of the person, company or product.  In that situation, you can easily search for a term such as “consulting” to retrieve all the documents containing that particular word.

 
 

3.  Mobile Capture Use Case # 3:  Batching and document collections

Then, one of the last utter failures I would like to share is a personal story, but it illustrates that capture from mobile devices is not top-of-mind like it should be, because the technology is so new.  Like most of us returning to our offices after a business trip, I had acquired various documents during my travels, such as meal receipts, contracts or just environmental photos to save and share with colleagues.  While the document types themselves could be vastly different, the collection will most likely have something in common, such as the location or name of the event.  In my case the common thread was ‘Dreamforce 2012’.  So I whipped out my handy iPhone and snapped several photos at once to create a collection of documents.  This was a very different user experience from what I was used to, where I would take a picture of one image, upload it, take a picture of a second image, upload it, and repeat the process until I was finished.  That was simply a horrible experience, and I would delay getting this information saved electronically because I dreaded the time wasted on the activity.  The ability to capture many images at once allowed me to get these images uploaded quickly without much effort at all.  Next, since the documents were different sizes, I used the auto-crop feature to automatically resize the images to the proper size.  Then, to make my stored images really smart, I added ‘tags’ so that I could type a search term such as ‘biz card’ and find all the business cards stored on my phone.  I then had the option to send the images to a wide variety of popular cloud storage destinations, send them via e-mail or even print them.

 

Batch capture

Capture several items at once instead of one at a time.  This greatly saves time when gathering a collection of related images.

Enhance Image

Auto binarization, auto cropping, page rotation and other useful features to create excellent image quality.

Tags

Easily add tags, or metadata, to each image to make them searchable and better organized.  Custom tags can be added at any time.

Batch Collections

Your smart phone can now be a simple version of a mobile document management system with the ability to save collections of images on the phone itself.

 

So this begs the question: with this great capture technology literally at people’s fingertips, why do we seem so naïve about it?  I think there are several viable reasons, including, but not limited to, the following:

    • Awareness that this type of technology exists in the first place.  More education is needed.
    • As a society we are on “mobile application overload” so we have a difficult time weeding through all the available applications and try and find the most useful ones.  There’s an app for that!
    • We are still in the early days of mobile application development.  Companies rush to get an application to market first, then will gradually add business productivity capability such as mobile data capture.
    • Use case scenarios need to be clearly defined, and the return on investment needs to be definitively articulated.

 

Therefore, if, as an industry, we can provide more overall education and bring awareness to this type of technology, the greater the likelihood that everyone can benefit from the tremendous potential of Mobile Capture.  When we truly consider all the great possibilities of using mobile devices to contribute content, instead of purely consuming information, we can absolutely achieve the next major milestone in business efficiency.

Capture … with Confidence

Prelude:  I’ve included many screen prints in this post and there is a lot of detail that may be interesting to you.  Click the thumbnail images for a larger view.

I wrote a story the other day about Frankie-the-Frustrated worker and his frustration with the lack of automatic data entry in his daily work activities.  In Frankie’s case I admittedly way oversimplified the solution, to illustrate the point that technology such as advanced data capture is a reality, yet can still be easy to use.  In other words, we don’t have to sacrifice automation for a pleasant user experience, or vice-versa.  One of the nice AIIM commenters on the story rightfully pointed out that Frankie would soon be known as Frankie-the-FUDer: without the all-important “data verification and/or validation” step in the process, Frankie would soon be Feared because of the Uncertainty, as well as Doubt, in the accuracy of the data he was contributing to his organization’s business systems.  This made me consider that maybe many of us haven’t seen advanced capture software capabilities in action, or don’t even know what sort of capabilities are possible with modern technology.  Therefore, I would like to provide a bit of a deeper dive into what makes Data Capture solutions highly effective and give you very specific details, with many screen prints, so that hopefully we can help Frankie become the Frankie-the-Fabulous worker he desires to be.

There are several factors that contribute to a successful document capture solution.  While each vendor’s exact terminology might vary a bit, the truth of the matter is that the ‘process’ of data capture is quite similar.  If you carefully consider each step and how it can contribute to improving data accuracy and quality, you will recognize that there are quite a lot of moving parts to make this “magic” happen.  The key point I would absolutely like to stress before this deeper dive into the technology is this: so much of the process can be done automatically, totally transparent to the user.  I would like to detail a few techniques so that we can be aware of the technology available to make the user experience the best it can be.  Once the system is configured for production, all the user has to do is basically capture images and verify data, which translates directly into a very easy and simple experience for the users themselves.

 

The logic of Automatic Data Capture

The very first thing to do when designing an effective Data Capture solution has nothing to do with the technology itself.  An absolute, must-do, critical step that you will hear from all the experienced professionals in the capture business is to gather as many document samples as you possibly can.  Gather all the different types of documents you wish to capture, such as invoices, agreements, surveys or whatever, but gather as much volume and as many varieties as you can.  Also, do not just gather high-quality original documents that someone might have just printed on a pristine piece of paper from the office laser printer.  Gather the ones that have been in filing cabinets for years and the ones with coffee stains and wrinkles.  The idea is that you want documents that are going to represent a true production Data Capture environment.

 

Initial document analysis and index fields

After gathering as many documents as you can, the first step in configuring the Data Capture solution is to import the sample documents.  Scan them at 300 dots per inch (300 dpi), which is the optimal output resolution for automatic recognition accuracy.  Next, you will want to run an initial document analysis on your documents.  In this analysis the software makes its best guess at the structure of the documents.  You should not expect this analysis to be absolutely perfect, but in many cases this step can do a good portion of setting up your solution, something that typically took a lot of time and effort.  As seen in the screen print below (click the image to zoom), the software can automatically detect form fields such as “First and last name” and draw an extraction zone around that particular area.  The software can also detect groups such as “Company Business” and automatically create index fields for all the available options in the group (i.e. “IT”, “Healthcare”, “Education”, etc.).  After the initial pass you will want to check each field and apply some logic to improve the accuracy of the data captured, and there are many useful techniques, as you will see below.

[Screen print: index fields]

Useful tips and tricks to improve data capture accuracy

[Screen print: document type properties]

General

From the General tab in your data capture application you can provide a useful field name for each individual field from which you wish to extract data.  This configuration tab allows you to decide basic functionality, such as whether the field is Read Only or Cannot be blank.  You can also decide whether to Export field value, because sometimes you might wish to recognize some information, such as a line item amount, but not export the line item, just the overall total amount.  The most commonly used functionality is enabled by default.

Data Type

The Data Type configuration is an extremely valuable function that allows for field-level recognition accuracy.  For example, if the field is a Number-only field, then you can force the recognition to output only numbers.  Or if the field is an Amount of Money, then you can enforce output in the form of an amount.  You can also add custom dictionaries and other useful validation rules.

 

Recognition

This is the area where you fine-tune character-level accuracy.  In the Recognition tab you can select which type of recognition to perform on a certain field, whether Intelligent Character Recognition (ICR) for handwritten text or Optical Character Recognition (OCR) for machine-printed text, and even the font type.  The more information that is known about your documents, and the more of that logic you can apply in your capture system, the greater the overall accuracy will be.

Verification

While the pure processing speed of getting images captured and recognized is important, uploading accurate data is often the most important consideration in a data capture solution.  Therefore, in an effective data capture solution there is a “verification” step in the process where you can set certain character confidence thresholds.  If these thresholds are not met, a human will view and/or correct the data, if needed.
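The verification step can be sketched in a few lines: any character recognized below a confidence threshold is flagged so a human reviews the field.  The data structure and threshold value below are illustrative, not a real capture engine’s output format:

```python
# A sketch of confidence-based verification: recognition engines report
# a per-character confidence; characters below the threshold are flagged
# so the field is routed to a human verifier instead of straight export.

def needs_review(chars, threshold=0.85):
    """chars: list of (character, confidence) pairs for one field."""
    return [c for c, conf in chars if conf < threshold]

field = [("1", 0.99), ("2", 0.97), ("7", 0.62), ("5", 0.91)]
flagged = needs_review(field)
print(flagged)        # ['7'] -> only this character needs a human look
print(bool(flagged))  # True -> route the field to verification
```

The payoff is that humans look only at the uncertain characters, not at every field on every page.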

Rules

This is one of the most critical steps in the data capture process.  With the Rules configuration options, the Data Capture system starts to use logic, and lookups into other systems, to compare data fields for any contradictions in the data captured.  For example, just imagine if a Social Security Number was captured incorrectly by one digit.  The system can do a Database Check against a different system to verify the SSN based on a different field, such as Mailing Address.  If there is a mismatch, the user can easily and quickly correct the data before it is sent to the back-end repository.  Another great example is to read line item amounts from an invoice and then use the Check Sum option to validate that the total amount is equal to all the line items combined.  This is incredibly effective for catching potential errors BEFORE they are committed to a system.
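The Check Sum idea in particular is simple to sketch: line-item amounts from the invoice must add up to the captured total, which catches a single-digit recognition error before export.  The function and tolerance value here are illustrative:

```python
# A sketch of an invoice Check Sum rule: the recognized line items must
# sum to the recognized total (within a small tolerance for rounding).

def check_sum(line_items, total, tolerance=0.005):
    """Return True if the line items are consistent with the total."""
    return abs(sum(line_items) - total) <= tolerance

print(check_sum([19.99, 5.00, 12.50], 37.49))  # True  -> consistent
print(check_sum([19.99, 5.00, 12.50], 87.49))  # False -> flag for review
```

A failed check does not say which amount was misread, but it reliably says that something was, which is exactly what you want before the data reaches an accounting system.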

[Screen print: custom action]

Custom Action

When standard capabilities or functions just aren’t enough, or if your business process dictates customization, there are options to incorporate custom scripts.  User scripts are custom scripting rules triggered by the user when viewing a field during field verification or in the document editor.  The script is triggered by clicking to the right of the field value.  To make the creation or modification of scripts simple, there is a script editor available directly in the data capture configuration interface.


Putting it all together (Data Capture from the User Perspective)

Now that we’ve taken a look at a few of the ways to improve the quality of data in your Data Capture solution, hopefully you have a greater appreciation for how all the moving parts can make this type of system highly accurate.  These configurations are typically set up by system administrators or persons with specialized training.  However, what really drives mass appeal and high adoption rates of a particular technology is a pleasant user experience.  So what I would like to do is show, in a few screen prints, how simple all this advanced technology is to use from the user perspective.  Please note that the screen prints might vary depending on many factors, including the hardware capture device, the processing/verification user interface design and/or the ultimate storage destination.

  • Step # 1 – Capture images

o   This can be from a dedicated scanner, multifunction peripheral or even a mobile device with a camera.  The screen print below shows the simple desktop capture interface.  As you can see, I can ‘Load Images’, ‘Scan Images’ or ‘Import Images’, or the capture system can be configured to automatically process images from shared folders, FTP sites or other sources.  So you can imagine that the Data Capture solution can be set up in a way that processes images from any device at any time, again making the user experience of contributing images very easy and accessible from anywhere.

[Screen print: scan image]

 

  • Step # 2 – Verify data for accuracy

o   After the first step of capturing the images themselves, the images are run through all the recognition rules, validation steps and/or database lookups to provide the highest quality of data possible on the first pass.  But, as I said earlier, it is not always possible to achieve absolute perfection, for many reasons, so you will want the user to “verify” the results if the data did not meet a particular confidence threshold or there were other exceptions.  Please note that the user interface screen print below is from the desktop version of a verification station, but you can imagine that this could just as easily be optimized for other devices, such as touch screen interfaces or even mobile devices.

[Screen print: recognition and validation rules]

 

  • Step # 3 – Export to database

o   Lastly, after the user has checked that all the extracted data is accurate, they can simply export the quality data to the database.  Of course, these export results can then set off a whole series of workflow events, based on what the back-end system’s capabilities might be.

[Screen print: export to database]

 

Confident data capture for everyone

As I illustrated, Data Capture from the user perspective can be quite simple.  There are many additional techniques and tricks that you can use, but I wanted to cover some of the standard ways to achieve highly accurate Data Capture results.  The end result is beautifully accurate, as well as useful, data in your database.  This gives the organization a high level of confidence in adherence to business policy and enforcement of business rules, and the users themselves trusting the system to be accurate when they are looking for information helps to create overall efficiency.

[Screen print: field mapping]

In summary, Data Capture has progressed to the point that it can be nearly totally automated, but there are many variables involved that still make human “data verification and/or validation” necessary at certain times.  The quality of the data input into your system should be the priority, not the sheer volume.  With a little planning, and using modern tips and tricks to achieve highly accurate Data Capture results, you can realize the benefits of both accuracy and speed.  Then Frankie-the-Frustrated will truly have the adequate tools to become Frankie-the-Fabulous and ‘Capture…with Confidence’.

Capture: The ideal application for Cloud

As I was brainstorming on a topic for this blog, I was inspired by Bob Larrivee’s latest AIIM community blog entitled “It Came From The Cloud” (http://www.aiim.org/community/blogs/expert/It-Came-From-The-Cloud), where he asked some simple, yet thought-provoking questions.  This raises the question: why would anyone resist the obvious benefits of “cloud” (http://www.aiim.org/community/blogs/expert/A-cloudy-future-for-document-capture)?  I’m sure there are many legitimate concerns and issues, but for the purpose of this blog post I would like to focus on the concern of security.

These days the term “cloud”, as it relates to usage in the corporate enterprise, typically engenders strong feelings one way or the other.  Benefits such as quicker application deployment, reduced IT costs and the ability to offer a more feature-rich experience to workers are not often debated.  What is debated, and it is a reasonable discussion, is the viability of “the cloud” from a security standpoint.

Security: Technology versus Trust

These concerns are well founded and should be addressed, but we should draw a major distinction between the technology itself and whether a provider can be trusted with data.  Once we understand this distinction between technology and trust, the cloud should not be discounted as a legitimate option for the enterprise simply out of fear of the technology itself.

Below is a short list of security items that should be considered when contemplating a cloud strategy.  It is by no means an exhaustive list, but for each item, ask yourself this: is an individual business or a massive data center better equipped to handle it?  For those who seriously consider the question of whether on-premise or cloud is more secure, the conclusion, to me, is clear.

  • Private clouds – Dedicated servers and databases to only one organization
  • Physical access – Limit access to only those that might need to physically touch equipment
  • Data encryption – Encrypt data in motion and data at rest
  • Device authentication – Trust devices in addition to users
  • System updates and patches – Apply security updates as soon as possible
  • Secure disk wiping – Securely erase temporary data from disk drives
  • Network architecture – Databases behind firewalls and web data on front-end servers
  • Logging – Track all activity to detect intrusions
  • Policy/Governance – Consistently review policies and procedures for improvement
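As one concrete example from the list above, "secure disk wiping" means overwriting temporary data before deleting it, so deleted scan images cannot simply be undeleted.  The sketch below is a simplified illustration using only the Python standard library; a production-grade wipe would also have to account for journaling file systems and SSD wear leveling, and the file name is hypothetical.

```python
import os
import secrets

# Illustrative sketch of secure disk wiping: overwrite a temporary
# file with random bytes (several passes) before deleting it.
def wipe_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # overwrite with random data
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)

# Simulate a temporary scanned image left on disk after processing
with open("scan_temp.img", "wb") as f:
    f.write(b"temporary scanned image data")

wipe_and_delete("scan_temp.img")
print(os.path.exists("scan_temp.img"))  # False
```

A mass data center can automate and audit a routine like this across every machine, which is exactly the point of the comparison above.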

Conservative cloud adoption by Enterprise

While I certainly would not expect major enterprise organizations to jump in head-first and move all their data and applications to the cloud, what does make logical sense is for them to move transactional applications (versus storage applications) to the cloud.  Specifically, moving “Capture” to the cloud makes complete sense.  Why?  Capture processes images only temporarily, then stores the data wherever you’d like, including in a security-hardened ECM system.  In other words, the capture application does not store images or metadata in a database.  Capture is a processing activity, not storage and retrieval.
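This "processing, not storage" idea can be sketched as a function whose image data exists only in a temporary working area and whose only output is extracted metadata.  This is purely illustrative: the `extract_fields` stub stands in for real OCR, and the shape of the metadata is an assumption.

```python
import os
import tempfile

# Stand-in for OCR/recognition; a real engine would return field values.
def extract_fields(image_bytes):
    return {"pages": 1, "size_bytes": len(image_bytes)}

# Sketch of capture as a transactional activity: the image lives only
# in a temp file during processing, and only the metadata survives.
def capture(image_bytes):
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(image_bytes)
        tmp_path = tmp.name
    try:
        with open(tmp_path, "rb") as f:
            metadata = extract_fields(f.read())
    finally:
        os.remove(tmp_path)  # the capture service keeps no image copy
    return metadata

meta = capture(b"\x89fake-image-bytes")
print(meta)  # {'pages': 1, 'size_bytes': 17}
```

Because nothing persists in the capture service itself, the security exposure is limited to data in transit and in temporary processing, which is a much easier problem than securing a permanent image repository.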

One other observation about Cloud for the Enterprise: I can absolutely see a trend towards building massive infrastructure now in preparation for delivering robust applications eventually.  Having attended Cloud Connect 2012 in Santa Clara (http://www.cloudconnectevent.com/santaclara/), it was remarkable to see the level of interest among major IT providers and well-known Enterprise organizations.  Without a doubt, the infrastructure is being implemented now for what will be an onslaught of cloud services in the not-too-distant future.

Major adoption by Small and Medium-Sized Businesses (SMBs)

In contrast to the Enterprise, Small and Medium-Sized Businesses (SMBs) have to decide how to improve efficiency with limited or no IT resources.  For the SMB, the cloud offers opportunities like never before.  Why?  Because a shared resource makes sophisticated technology available to a greater audience: mass consumption by users decreases costs for the vendors, which allows them to make these advanced technologies available to the masses.  Also, from a security perspective, using cloud storage and capture as a rented service from providers allows SMB organizations to focus on their businesses instead of being burdened by maintaining technology.  When the choice is between continuing to process paperwork manually and using cloud technology to capture, store and retrieve with a small, but limited, risk, it’s clear that SMBs have chosen limited risk with great efficiency improvements.

Like never before, SMBs are empowered to create a mash-up of useful business applications without the high cost traditionally associated with doing so.  Clearly there is an undeniable trend towards Cloud Storage from providers such as Box, Evernote, Catch, Google Docs, Dropbox, etc., and Cloud Capture is a logical complementary technology to further improve efficiencies and decrease operational costs.

 

Next steps: Being indecisive is inefficient

With such overwhelming evidence that adopting cloud services makes sense, the next logical question is “what now?”.  Clearly security is, and should be, a major concern for the enterprise as well as the SMB, but with the enterprise the stakes are much greater.  The SMB inherently has an element of risk/reward that drives it to make business decisions more quickly.  The “access vs. security” balance is often discussed within the ECM industry, and the truth is that you have to balance making information available to users with making sure the data is protected in a responsible manner.  SMBs that do not have dedicated IT resources can utilize “the cloud” to improve business efficiency at minimal cost and trust that security is taken care of by their storage provider.

There are many wonderful solutions available right now for businesses of all sizes to benefit from “the cloud”.  For example, it is doable today for an organization to migrate e-mail, CRM, expense management, document management, its corporate web site and an accounting system 100% to the cloud, with known monthly operating expenses and no IT burden.  Also, these cloud applications are not cheesy, cheap applications; they are robust, Enterprise-ready applications that are now available to everyone, easy to use and secure.

What do you think about “the cloud”?  Is it a fad?  Will it be embraced by Enterprise?  Is it secure?