IT Management Perspectives

Recent Posts

  • Multiple File Upload to Google Drive Using Google Script (posted Jul 19, 2016)
  • Python: Scripting TSP data import to Quicken (posted Sep 22, 2017)
  • Veteran Hiring: selfless or selfish? (posted May 1, 2016)
  • Cyber Threat Report: Malvertising and Watering Holes (posted Dec 13, 2015)
  • Threat Report - Ransomware (posted Dec 13, 2015)
  • Information Architecture Techniques and Best Practices (posted Dec 13, 2015)
  • The Internet of Things in the Retail Industry (posted Dec 13, 2015)
  • IT Capital Planning: Enterprise Architecture and Exhibit 300 processes for the CDC and NNSA (posted Dec 13, 2015)
  • A Case Study on Effective IS Governance within a Department of Defense Organization (posted Dec 13, 2015)
  • Mitigating Botnet Information Security Risks through EA and the ITSA - Part 4 of 4 (posted Dec 13, 2015)



Multiple File Upload to Google Drive Using Google Script

posted Jul 19, 2016, 8:18 PM by Christopher Furton   [ updated Jul 19, 2016, 8:19 PM ]

This script and form were developed for a project facilitating photo upload by professional photographers for a non-profit organization.  It creates a folder in Google Drive named after the photographer name entered in the form, then uploads the files the user selects.  The script was adapted from other scripts available on the Internet and customized to suit my needs.  If you are the originator of the script, please let me know and I'll give proper credit!

This is the contents of Server.gs:

function doGet() {
  return HtmlService.createHtmlOutputFromFile('form')
    .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

function uploadFileToDrive(base64Data, fileName, dropbox) {
  try{
    var splitBase = base64Data.split(','),
        type = splitBase[0].split(';')[0].replace('data:','');

    var byteCharacters = Utilities.base64Decode(splitBase[1]);
    var ss = Utilities.newBlob(byteCharacters, type);
    ss.setName(fileName);

    //var dropbox = "WTELNSD2016"; // Folder Name
    var folder, folders = DriveApp.getFoldersByName(dropbox);

    if (folders.hasNext()) {
      folder = folders.next();
    } else {
      folder = DriveApp.createFolder(dropbox);
    }
    var file = folder.createFile(ss);

    return file.getName();
  }catch(e){
    return 'Error: ' + e.toString();
  }
}
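For reference, the data-URL handling at the top of uploadFileToDrive (splitting off the "data:<mime>;base64," header and decoding the payload) can be sketched outside Apps Script. The parse_data_url helper below is hypothetical, written in Python only to illustrate the same split logic:

```python
import base64

def parse_data_url(data_url):
    # Mirrors the splitBase/type steps in uploadFileToDrive:
    # "data:image/png;base64,QUJD" -> ("image/png", b"ABC")
    header, payload = data_url.split(',', 1)
    mime_type = header.split(';')[0].replace('data:', '')
    return mime_type, base64.b64decode(payload)

print(parse_data_url('data:image/png;base64,QUJD'))  # ('image/png', b'ABC')
```

Apps Script's Utilities.base64Decode plays the role of base64.b64decode here; the decoded bytes become a named blob that is written into the Drive folder.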

And this is the contents of form.html:

<body>
  <div id="formcontainer"><center>
<img src="https://drive.google.com/uc?id=0B1FVNNdar_tpaklWSW15Rk1ld1U"><br></center>
    <label for="myForm"></label>

    <br><br>


    <form id="myForm">
      <label for="myForm">Photographer Name:</label>
      <div>
        <input type="text" name="pName" id="pName2" placeholder="Enter your name or company here">
      </div>
 
      <br>


      <label for="myFile">Upload Pictures: <br> (hold Shift or Ctrl to select more than one):</label>
      <br>


      <input type="file" name="filename" id="myFile" multiple>

      <input type="button" value="Submit" onclick="iteratorFileUpload()">


    </form>
  </div>

  <div id="output"></div>
<div id="progressbar">
    <div class="progress-label"></div>
</div>

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/themes/smoothness/jquery-ui.css">
<script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js"></script>

<script>

var numUploads = {};
numUploads.done = 0;
numUploads.total = 0;

// Upload the files into a folder in Drive
// All files go to one folder, named after the Photographer Name field
function iteratorFileUpload() {
    var allFiles = document.getElementById('myFile').files;

    if (allFiles.length == 0) {
        alert('No file selected!');
    } else {
        //Show Progress Bar

        numUploads.total = allFiles.length;
        $('#progressbar').progressbar({
        value : false
        });//.append("<div class='caption'>37%</div>");
        $(".progress-label").html('Preparing files for upload');
        // Send each file at a time
        for (var i = 0; i < allFiles.length; i++) {
            console.log(i);
            sendFileToDrive(allFiles[i]);
        }
    }
}

function sendFileToDrive(file) {
    var reader = new FileReader();
    reader.onload = function (e) {
        var content = reader.result;
        console.log('Sending ' + file.name);
       
        var currFolder = document.getElementById('pName2').value;
        console.log('Folder name ' + currFolder);
        google.script.run.withSuccessHandler(updateProgressbar).uploadFileToDrive(content, file.name, currFolder);
    }
    reader.readAsDataURL(file);
}

function updateProgressbar( idUpdate ){
   console.log('Received: ' + idUpdate);
   numUploads.done++;
   var porc = Math.ceil((numUploads.done / numUploads.total)*100);
   $("#progressbar").progressbar({value: porc });
   $(".progress-label").text(numUploads.done +'/'+ numUploads.total);
   if( numUploads.done == numUploads.total ){
      // document.write() after page load would replace the whole page;
      // report completion in the output div instead
      document.getElementById('output').innerHTML = 'Upload Complete';
      //uploadsFinished();
      numUploads.done = 0;
   }
}
</script>

  <script>
    function fileUploaded(status) {
      document.getElementById('myForm').style.display = 'none';
      document.getElementById('output').innerHTML = status;
    }

  </script>

  <style>
    body {
      font-family: verdana;
      max-width: 400px;
      padding: 20px;
      margin: auto;
    }
    input {
      display: inline-block;
      width: 100%;
      padding: 5px 0px 5px 5px;
      margin-bottom: 10px;
      -webkit-box-sizing: border-box;
      -moz-box-sizing: border-box;
      box-sizing: border-box;
    }
    select {
      margin: 5px 0px 15px 0px;
    }
    input[type="submit"] {
      width: auto !important;
      display: block !important;
    }
    input[type="file"] {
      padding: 5px 0px 15px 0px !important;
    }
#progressbar{
    width: 100%;
    text-align: center;
    overflow: hidden;
    position: relative;
    vertical-align: middle;

}
.progress-label {
      float: left;
margin-top: 5px;
      font-weight: bold;
      text-shadow: 1px 1px 0 #fff;
          width: 100%;
    height: 100%;
    position: absolute;
    vertical-align: middle;
    }
  </style>
</body>

About the Author

Christopher Furton author bio picture
Christopher Furton

is an Information Technology Professional with over 12 years in the industry.  He attended The University of Michigan earning a B.S. in Computer Science and recently completed an M.S. in Information Management from Syracuse University.  His career includes managing small to medium size IT infrastructures, service desks, and IT operations.  Over the years, Christopher has specialized in Cyber Security while working within the Department of Defense and the United States Marine Corps.  His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management.  He holds active IT Certifications including the CISSP, CEH, ITIL Foundations, Security+ CE and Network+ CE.

Additional information is available on Christopher Furton's website.

Python: Scripting TSP data import to Quicken

posted May 11, 2016, 11:41 AM by Christopher Furton   [ updated Sep 22, 2017, 3:40 PM ]

Written by: Christopher Furton

One great thing about being a geek is having the ability to automate tasks that you simply don't want to do.  This post is a perfect example.  Every month, I import data from the Thrift Savings Plan (TSP) into Quicken, but sadly, the TSP can't download the data in a format that Quicken can import.  I would spend about 5 minutes per month retrieving share prices from the TSP website, modifying the format, and then importing it into Quicken.  That sounded like a good opportunity to automate.  I hadn't coded in about 10 years, so I figured I would give it a shot using Python.  About 8 hours later, I had all the needed applications installed and the code below accomplishing the task.

*** Don't forget to "pip install requests" (and "pip install lxml", which the script also imports) !!

If time permits, I hope to build a GUI for this and change out some of the hard-coded information for something more dynamic.  In the meantime, here it is:

## Create CSV file from the TSP.gov website for import into Quicken 2015
from lxml import html
import requests
import datetime

# Function to retrieve data from TSP website 
def retrieveData():
   page = requests.get('https://www.tsp.gov/InvestmentFunds/FundPerformance/index.html')
   tree = html.fromstring(page.content)
   dates = tree.xpath('//td[@class="leadingCell"]/text()')
   values = tree.xpath('//td[@class="packed"]/text()')
   funds = tree.xpath('//th[@class="packed"]/text()')
   return(dates,values,funds);
   
# Function to parse data and then export to file
def parseData(dates,values,funds):
   #Change dates from 'Apr 10, 2016' to '04/10/16'
   x=0
   while(x < len(dates)):
      date = dates[x]
      dates[x] = datetime.datetime.strptime(date,'%b %d, %Y').strftime('%m/%d/%y')
      x += 1 
 
   #Change fund names to match Quicken Ticker Symbol
   x=0
   while(x < len(funds)):
      fund = funds[x]
      if fund == 'L Income':
         funds[x]='TSPLIncome'
      elif fund == 'L 2020':
         funds[x]='TSPL2020'
      elif fund == 'L 2030':
         funds[x]='TSPL2030'
      elif fund == 'L 2040':
         funds[x]='TSPL2040'
      elif fund == 'L 2050':
         funds[x]='TSPL2050'
      elif fund == 'G Fund':
         funds[x]='TSPGFund'
      elif fund == 'F Fund':
         funds[x]='TSPFFund'
      elif fund == 'C Fund':
         funds[x]='TSPCFund'
      elif fund == 'S Fund':
         funds[x]='TSPSFund'
      elif fund == 'I Fund':
         funds[x]='TSPIFund'         
      x += 1
   
   #remove excess spaces and \n from share values
   x=0
   while(x < len(values)):
      temp = values[x]
      values[x] = temp[4:11]
      x += 1
   
   #Format into CSV rows and write to file
   fo = open("exportToQuicken.txt", "w")
   itemNum=0
   for date in dates:
      colNum=0
      while(colNum < len(funds)):
         fo.write(funds[colNum] + "," + date + "," + values[itemNum] + "\n")
         itemNum += 1
         colNum += 1 
   fo.close()
   return;

def main():
   dates, values, funds = retrieveData()
   parseData(dates,values,funds)
   return;

main()
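For reference, the two transformations at the heart of parseData can be exercised in isolation. The date conversion below matches the script above; the TICKERS dict is a sketch of a more compact, table-driven alternative to the elif chain (same mappings, not what the script actually uses):

```python
import datetime

def tsp_date_to_quicken(date_str):
    # 'Apr 10, 2016' -> '04/10/16', mirroring the script's strptime/strftime pair
    return datetime.datetime.strptime(date_str, '%b %d, %Y').strftime('%m/%d/%y')

# Table-driven equivalent of the fund-name elif chain
TICKERS = {
    'L Income': 'TSPLIncome', 'L 2020': 'TSPL2020', 'L 2030': 'TSPL2030',
    'L 2040': 'TSPL2040',     'L 2050': 'TSPL2050', 'G Fund': 'TSPGFund',
    'F Fund': 'TSPFFund',     'C Fund': 'TSPCFund', 'S Fund': 'TSPSFund',
    'I Fund': 'TSPIFund',
}

print(tsp_date_to_quicken('Apr 10, 2016'))  # 04/10/16
print(TICKERS.get('G Fund', 'G Fund'))      # TSPGFund
```

A dict lookup with .get(fund, fund) also leaves unrecognized fund names unchanged, which the elif chain does implicitly by falling through.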




Veteran Hiring: selfless or selfish?

posted May 1, 2016, 7:34 PM by Christopher Furton   [ updated May 1, 2016, 7:40 PM ]

Photo credit: Military.com

Written by: Christopher Furton

It seems as if nearly all companies proudly exclaim that they hire veterans – whether through official programs aimed to hire X number of Veterans per year, flocking to Hiring our Heroes career fairs, or through veteran affinity programs.  Many people have likely read several “top 10” lists with resounding reasons to hire those who have served.  That is great; those benefits are simply undeniable.  We know veterans have adaptability honed through proven experience in unstable environments, leadership skills far beyond what would be typical for someone early in their careers, and technical proficiency only seen through years of education and hands-on application.  However, that is not what this article is about. Instead, let’s dive further into the intent behind these programs and ask, “Why do we hire veterans and can we do more?”

Talent acquisition teams and recruiters likely dance with glee when that diamond candidate – the one with the perfect resume and interview skills – leaves military service and ventures out into Corporate America.  They see the benefits and the potential positives for the company and rush to bring the candidate into the fold.  If that candidate has an active security clearance, then even better!  No doubt this is good for the veteran, but the intent seems rather selfish.  It isn’t about helping that “perfect” veteran; it is about reaping the benefits.  So let’s take a deeper look at what it means to be a veteran today.

According to the “Swords to Plowshares” report from the Institute for Veteran Policy (2011):

  • When factoring in delayed onset, post-traumatic stress (PTS) rates are as high as 35 percent.

  • Of Global War on Terror veterans who need treatment for major depression, only 53 percent seek help.

  • Substance abuse/dependence often becomes a primary means of self-medicating for underlying untreated mental health issues.

  • Untreated psychological conditions or lack of economic support leads to a cycle of poverty, homelessness, drug use, theft and property crimes, arrest, and criminal conviction.

With the frightening increase in PTS, alcoholism, drug use, and other addictions, hiring a veteran may reveal that the diamond candidate carries a dreadful red flag – that item in a candidate’s past that seems like simply too much risk.  This red flag could come in many forms: domestic violence or criminal convictions, employment gaps caused by residential drug programs, or possibly homelessness.

At this point, you may be asking yourself why I’m trying to convince you NOT to hire veterans!  That of course is not my intent.  Instead, I urge hiring managers and human resource professionals to ask yourselves if you are doing everything to help THOSE veterans: that percentage of veterans who need an extended hand the most.  I like to refer to them as veterans with scars.  Specifically, I’m referring to those invisible mental scars that may only be visible through unhealthy decisions and choices.  At first it may look risky, but remember that some risks are worth taking.  Veterans do come with many benefits, but more and more have lived complex and difficult lives as a result of service to our country.

Veterans are not perfect and I doubt that anyone claims they should be.  With that said, there is so much opportunity that remains for companies to go the next step: selflessly hire veterans with scars.  Let’s change the flight response when learning of a red flag and seriously consider the candidate based on his or her merits.  You will likely get a dedicated and competent new employee while also extending a hand to someone who has sacrificed so much. 





Cyber Threat Report: Malvertising and Watering Holes

posted Oct 18, 2015, 8:18 PM by Christopher Furton   [ updated Dec 13, 2015, 10:32 AM ]

Written by: Christopher Furton

Cyber Threat Report: Malvertising and Watering Holes

Christopher_Furton_CyberSecurity_Threat_Brief_WateringHoles_Malvertising.pptx


Slide Text

Christopher Furton - Cybersecurity Threat Brief: Malvertising and Watering Holes

  1. The evolution of technology leans toward more and more web-based usage: HTML5 applications and Software-as-a-Service. Businesses are increasingly involved in social media, relying on Facebook, Twitter, and other social sites for customer interactions and for brand development and growth at reasonable cost. It is all about the Web, and it will continue that way.
  2. Malvertising (malicious advertising) uses legitimate advertising channels to propagate malicious ads. Victims may or may not have to click the ads, depending on the attack: clicked ads can redirect the victim to a malicious site, or a zero-day exploit (e.g., in Adobe Flash) can install malware without user action.
  3. Malvertising attacks are generally broad in nature and typically use known vulnerabilities. They leverage wide distribution of ads through legitimate ad networks to increase the likelihood of luring a victim. According to comScore data (source 1), 53 billion ads contained malicious content or redirected to malicious content.
  4. Techniques: leveraging rich content from Adobe Flash Player, Reader, etc.; iframe injection to trigger background installations; pop-up and banner ads through ad networks; and clickjacking - tricking a victim into clicking something other than what was intended.
  5. Mitigations: Patching - keep browsers (e.g., Firefox, IE, Chrome) up to date so known vulnerabilities can't be exploited. Vulnerability management - implement a scanning process for known vulnerabilities; identify and remediate. Monitor outbound traffic - whitelist where possible and block traffic to known bullet-proof hosts. Use ad-blocking software such as Ghostery or NoScript (keeping their implications in mind). Train users to hover before clicking. Configure X-Frame-Options and employ anti-clickjacking attributes.
  6. Watering holes - compromised trusted websites that contain malware. Trust relationships between sites are exploited to push malware to the user. These attacks often use zero-day vulnerabilities.
  7. Watering-hole attacks are generally narrow in nature and typically use unknown vulnerabilities. They are typically targeted and require significant intelligence resources - much more sophisticated than other attacks (i.e., smells like state-sponsored).
  8. They leverage application-layer protocols including TLS/SSL and HTTP, are often browser-specific due to unique vulnerabilities, and can exploit Application Programming Interfaces (APIs) such as ActiveX.
  9. Very little can be done to specifically mitigate watering-hole attacks. However: vulnerability management helps patch holes as soon as they are announced; monitoring outbound traffic can help identify whether an exploit has been successful; strong incident response identifies and reacts to minimize damage; network segmentation minimizes exposure; and overall high security awareness in the organization helps.
  10. Sources: 1 - http://www.mintel.com/blog/technology-market-news/malvertising-the-internets-billion-dollar-problem; 2 - Cyveillance (a QinetiQ company) - https://blog.cyveillance.com/when-good-sites-go-bad-malvertising-and-watering-holes-infographic/?utm_source=social&utm_medium=twitter&utm_content=post%204&utm_campaign=MWH. Infographic: https://blog.cyveillance.com/wp-content/uploads/Malvertise_info_6001.jpg
  11. Christopher Furton is an Information Technology Professional with over 12 years in the industry. He attended The University of Michigan earning a B.S. in Computer Science and completed a M.S. in Information Management from Syracuse University in 2015. His career includes managing small to medium size IT infrastructures, service desks, and IT operations. Over the years, Christopher has specialized in Cyber Security while working within the Department of Defense and the United States Marine Corps. His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management. He holds active IT Certifications including the CISSP, CEH, ITIL Foundations, Security+ CE and Network+ CE. He can be found on LinkedIn, Google+, and Twitter @IT_Mgmt_Chris. Additional information is available on Christopher Furton's website at http://christopher.furton.net.
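The X-Frame-Options advice on slide 5 can be made concrete with a minimal sketch. The helper name below is hypothetical, but the header names and values are the standard anti-clickjacking settings:

```python
def anti_clickjacking_headers():
    """Response headers that tell browsers not to render the page
    inside a frame on another origin (mitigates clickjacking)."""
    return {
        # Legacy header, still widely honored by browsers
        'X-Frame-Options': 'SAMEORIGIN',
        # Modern equivalent expressed via Content-Security-Policy
        'Content-Security-Policy': "frame-ancestors 'self'",
    }

print(anti_clickjacking_headers()['X-Frame-Options'])  # SAMEORIGIN
```

Whatever framework serves the pages, attaching both headers covers older browsers (X-Frame-Options) and current ones (frame-ancestors).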


Threat Report - Ransomware

posted Oct 2, 2015, 12:26 PM by Christopher Furton   [ updated Dec 13, 2015, 10:33 AM ]

Written by: Christopher Furton


Threat- Ransomware and Digital Extortion

Industry has experienced a 4,000 percent increase in crypto-ransomware attacks, while generic ransomware grew 113 percent in 2014 (Symantec Corporation, 2015, p. 7). Traditional ransomware attacks trick victims into paying a “fine” for accessing illegal or stolen content. This is typically done by a threat actor purporting to be a government official (e.g., an FBI agent) using official-looking banners and websites (see Figure 1). A victim can often escape this trap without paying any fees or fines. In contrast, a crypto-ransomware attack holds a victim’s files and other digital media hostage by encrypting the contents and offering to sell the victim the decryption key. These ransoms can range from $300 to $500 with no guarantee of successful decryption (Symantec Corporation, 2015, p. 7).

Windows environments are more typically affected by crypto-ransomware; however, Symantec reports seeing an increase in versions developed for other operating systems and mobile devices. Additionally, some crypto-ransomware is designed to attack network attached storage (NAS) devices and rack stations namely from Synology (McAfee Labs, 2015, p. 16).

A fairly new variant of crypto-ransomware named CTB-Locker is distributed through nested .zip files with a screen saver executable file. Transmission mediums include peer-to-peer networks, Internet Relay Chat, newsgroup postings, and email spam. Additional variants include CryptoWall, TorrentLocker, BandarChor, and Teslacrypt (McAfee Labs, 2015, p. 14).

Christopher Furton Ransomware Image

Figure 1 – Sample ransomware attempt

Risk Profile

Crypto-ransomware attacks increased dramatically, occurring up to 45 times more frequently in 2014 than in the prior year (Symantec Corporation, 2015, p. 7). For organizations that run predominantly Windows, this threat falls into a higher risk category. The potential business impact of a successful crypto-ransomware attack is devastating. Fortunately, the likelihood of a successful attack can be greatly reduced through mitigation techniques. Including ransomware in an organization's Enterprise Risk Management (ERM) program is advised, as is conducting a deep dive into existing security controls to ensure proper mitigation efforts are in place.

Currently, there is no general way to recover data encrypted in a crypto-ransomware attack without the decryption key. However, in some cases where law enforcement successfully shuts down a command-and-control server, recovery tools can be produced.

Information Security Controls 

Control: User Awareness Training

In Brief: Crypto-ransomware is often distributed through phishing attacks on users. According to McAfee Labs, at least one in every 10 such attacks is successful (McAfee Labs, 2015, p. 22).

NIST Special Publication 800-53 (rev 4) controls:

AT-1 – Security Awareness and Training Policy and Procedures
    This control outlines the higher governance for a Security Awareness Training program.
AT-2 – Security Awareness Training
    This control outlines training for new users and periodic re-training.

Council on Cyber Security – Critical Security Controls (V 5.1)

CSC 9-1 – Build training and awareness roadmap
    This control requires building a training and awareness roadmap based on a gap analysis of user behaviors.
CSC 9-2 – Deliver Training
    This control requires delivery of training by internal staff or external teachers.
CSC 9-3 – Online Security Awareness Program
    This control outlines five steps to having a successful online awareness training program.

Control: Data Backup

In Brief: Crypto-ransomware is ineffective if the organization can recover the data being held hostage with little impact to business productivity.

NIST Special Publication 800-53 (rev 4) controls:

CP-9 – Information System Backup
    This control outlines details of creating user-level, system-level, and security-related documentation back up.
CP-6 – Alternate Storage Site
    This control establishes a geographically distinct alternate storage site including necessary agreements to permit the storage and retrieval of backup information.
CP-10 – Information System Recovery and Reconstitution
    This control provides for recovery and reconstitution of the information system to a known state after a disruption, compromise, or failure.

Council on Cyber Security – Critical Security Controls (V 5.1)

CSC 8-1 – Data Recovery Capability - Backup
    This control requires backup of data at least weekly but more often for sensitive information.
CSC 8-2 – Data Recovery Capability - Restoration
    This control requires testing restoration capability of backed up data.
CSC 8-3 – Data Recovery Capability – Protection of Backup Media
    This control requires proper protections of backup data commensurate with the sensitivity contained on the media.
CSC 8-4 – Data Recovery Capability – Non-Addressability
    This control ensures that at least one backup destination is not continuously addressable through operating system calls. Note: this is especially important for crypto-ransomware mitigation.

Regulatory/Compliance and Best Practices

This information was compiled from the Unified Compliance Framework website at http://www.unifiedcompliance.com

User Awareness Training

Nearly all regulatory models require a level of User Awareness Training procedures including the following:
  • Local information security coordinators shall have a channel of communication with the information security function (e.g., via regular reporting of duties and results of activities). (CF.12.02.03f, The Standard of Good Practice for Information Security) 
  • Local information security coordinators should meet regularly with business owners (i.e., people in charge of particular business applications or processes) to review the status of Information Security in business applications and systems. (CF.12.02.07-1, The Standard of Good Practice for Information Security) 
  • Security-positive behavior should be encouraged by incorporating Information Security into regular day-to-day activities (e.g., by considering security requirements in planning decisions and budgeting activities, and including the consideration of information risk in business decisions, meetings, an… (CF.02.02.04d, The Standard of Good Practice for Information Security) 
  • Top management shall demonstrate leadership and commitment by supporting other management roles. (§ 5.1 ¶ 1(h), ISO 27001:2013, Information Technology - Security Techniques - Information Security Management Systems - Requirements, 2013) 
  • Verify that personnel assigned to the engagement are familiar with the applicable professional organizations, such as aicpa and the Financial Accounting Standards Board. (Ques. AT410, Reporting on Controls at a Service Organization Checklist, PRP §21,100) 
  • Decision-makers (including executive management; business unit heads; department heads; and owners of business applications, computer systems, networks, and systems under development) should be aware of the need to carry out information risk assessments for target environments within the organizatio… (SR.01.01.03, The Standard of Good Practice for Information Security) 
  • The security awareness program should be based on the results of one or more documented information risk assessments. (CF.02.02.01g, The Standard of Good Practice for Information Security) 
  • Managers are responsible for maintaining awareness of and complying with security policies, procedures and standards that are relevant to their area of responsibility. (IS-14, The Cloud Security Alliance Controls Matrix, Version 1.3) 
  • The organization should align the person's roles and responsibilities to the exact degree and content of the information security awareness and training. (Control: 0253, Australian Government Information Security Manual: Controls) 
  • Personnel in responsible positions should receive training for managing and using systems in their field of responsibility. (¶ 1, PE 009-8, Guide to Good Manufacturing Practice for Medicinal Products, Annex 11, 15 January 2009) 
  • Personnel shall be trained, as appropriate for their duties, in avoiding, detecting, mitigating, and disposing of suspect fraudulent and counterfeit parts. (§ 4.2.10.a, SAE AS6081, Fraudulent/Counterfeit Electronic Parts: Avoidance, Detection, Mitigation, and Disposition - Distributors) 
  • Personnel directly handling electronic parts shall be trained in ways to detect suspect fraudulent or counterfeit parts. (§ 4.2.10.b, SAE AS6081, Fraudulent/Counterfeit Electronic Parts: Avoidance, Detection, Mitigation, and Disposition - Distributors) 
  • Personnel who are responsible for detecting fraudulent or counterfeit parts with specialized technology and methods shall be trained to ensure their competence in its use. (§ 4.2.10.c, SAE AS6081, Fraudulent/Counterfeit Electronic Parts: Avoidance, Detection, Mitigation, and Disposition - Distributors) 
  • Personnel who are responsible for detecting fraudulent or counterfeit parts with radiographic inspection shall be trained and certified to NAS-410 National Aerospace Standard or its equivalent. (§ 4.2.10.c, SAE AS6081, Fraudulent/Counterfeit Electronic Parts: Avoidance, Detection, Mitigation, and Disposition - Distributors) 
  • Personnel are furnished specific training based on their roles and responsibilities. (Generally Accepted Privacy Principles and Criteria § 1.2.10, Appendix B: Trust Services Principles and Criteria for Security, Availability, Processing Integrity, Confidentiality, and Privacy, TSP Section 100 Principles and Criteria) 
  • The organization should provide specific training to personnel based on their roles and responsibilities. (Table Ref 1.2.10, Generally Accepted Privacy Principles (GAPP), CPA and CA Practitioner Version, August 2009) 
  • Is security training commensurate with levels of responsibilities and access? (§ E.4.4, Shared Assessments Standardized Information Gathering Questionnaire - E. Human Resource Security, 7.0) 
  • Do constituents responsible for Information Security undergo additional training? (§ E.4.5, Shared Assessments Standardized Information Gathering Questionnaire - E. Human Resource Security, 7.0) 
  • The training for System Administrators must include Public Key Infrastructure awareness. (§ 3.4.2.2 ¶ AC34.100, DISA Access Control STIG, Version 2, Release 3) 
  • The training for System Administrators must include how to configure the system for certificate-based logon. (§ 3.4.2.2 ¶ AC34.100, DISA Access Control STIG, Version 2, Release 3) 
  • The training for System Administrators must include how to configure the system for digital signatures. (§ 3.4.2.2 ¶ AC34.100, DISA Access Control STIG, Version 2, Release 3) 
  • The training for System Administrators must include how to configure the system to encrypt e-mail. (§ 3.4.2.2 ¶ AC34.100, DISA Access Control STIG, Version 2, Release 3) 
  • The training for System Administrators must include how to configure the system for web server certificates. (§ 3.4.2.2 ¶ AC34.100, DISA Access Control STIG, Version 2, Release 3) 
  • The information assurance officer must designate personnel who can override false rejections and ensure they have the proper training for implementing the fallback procedures and verifying a user's identity. (§ 4.5.2 ¶ BIO6040, DISA Access Control STIG, Version 2, Release 3) 
  • The Information Assurance training must include familiarizing users with their assigned responsibilities. (PRTN-1, DoD Instruction 8500.2 Information Assurance (IA) Implementation) 
  • Have key employees received training on network controls, application controls, and security controls? (IT - WLANS Q 4, Automated Integrated Regulatory Examination System (AIRES) IT Exam Questionnaires, version 073106A) 
  • Individuals who have been granted access to personally identifiable information should receive appropriate training and, where applicable, specific role-based training. (§ 4.1.2 ¶ 3, NIST SP 800-122 Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)) 
  • The organization should conduct training on how to interact with the media about security incidents. (§ 5.1 ¶ 3, NIST SP 800-122 Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)) 
  • The organization should determine what the content of the security training will be based on the roles and responsibilities and the organizational requirements. (SG.AT-3 Supplemental Guidance, NISTIR 7628 Guidelines for Smart Grid Cyber Security: Vol. 1, Smart Grid Cyber Security Strategy, Architecture, and High-Level Requirements, August 2010) 
  • The security engineering principles must include the ongoing secure development training requirements for smart grid system developers. (SG.SA-8 Requirement 1, NISTIR 7628 Guidelines for Smart Grid Cyber Security: Vol. 1, Smart Grid Cyber Security Strategy, Architecture, and High-Level Requirements, August 2010) 
  • The organization provides role-based security training to personnel with assigned security roles and responsibilities. (AT-3, Security and Privacy Controls for Federal Information Systems and Organizations, NIST SP 800-53, Revision 4, Low/Moderate/High Impact Baselines) 
  • The organization provides role-based security training to personnel with assigned security roles and responsibilities before authorizing access to the information system or performing assigned duties. (AT-3a., Security and Privacy Controls for Federal Information Systems and Organizations, NIST SP 800-53, Revision 4, Low/Moderate/High Impact Baselines) 
  • The organization provides role-based security training to personnel with assigned security roles and responsibilities when required by information system changes. (AT-3b., Security and Privacy Controls for Federal Information Systems and Organizations, NIST SP 800-53, Revision 4, Low/Moderate/High Impact Baselines) 
  • The organization provides role-based security training to personnel with assigned security roles and responsibilities {organizationally documented frequency} thereafter. (AT-3c., Security and Privacy Controls for Federal Information Systems and Organizations, NIST SP 800-53, Revision 4, Low/Moderate/High Impact Baselines) 

Data Backup

Nearly all regulatory models require a level of data backup procedures including the following:
  • O29.2: To ensure the quality of programs and determine the time intervals for saving backup copies, the organization shall establish a generation management method by considering how much time it takes to recover damaged programs and the impact during that downtime. O34.2: The organization shall rou… (O29.2, O34.2, FISC Security Guidelines on Computer Systems for Banking and Related Financial Institutions, 7th Edition) 
  • Technical and organizational instructions will be issued to ensure data back-ups are conducted at least weekly. (Annex B.18, Italy Personal Data Protection Code) 
  • The frequency for storing the backup data at a safe storage location should be based on an analysis of the risk to the data. (¶ 19.6 Bullet 1, Good Practices For Computerized systems In Regulated GXP Environments) 
  • Business continuity plans should identify data back up frequency. (§ 5.2 (Business Continuity) ¶ 3, IIA Global Technology Audit Guide (GTAG) 7: Information Technology Outsourcing) 
  • There should be documented standards / procedures for performing back-ups, which cover back-up cycles. (CF.07.05.02b, The Standard of Good Practice for Information Security) 
  • There should be documented standards / procedures for performing back-ups, which cover methods for performing back-ups (including validation, labelling and storage). (CF.07.05.02c, The Standard of Good Practice for Information Security) 
  • There should be documented standards / procedures for performing back-ups, which cover back-up cycles. (CF.07.05.02b, The Standard of Good Practice for Information Security, 2013) 
  • There should be documented standards / procedures for performing back-ups, which cover methods for performing back-ups (including validation, labelling and storage). (CF.07.05.02c, The Standard of Good Practice for Information Security, 2013) 
  • Backup arrangements should take into account legal, regulatory, and contractual requirements (e.g., the handling of personally identifiable information, document retention, and customer information). (CF.07.05.04, The Standard of Good Practice for Information Security, 2013) 
  • Backups should be created as soon as there are indicators that a security-related incident has occurred. New (unused) media should be used to back up the system to prevent juries from being convinced that the "evidence is faulty" because it could have been present prior to the incident. Backing up the i… (Action 3.4.1, SANS Computer Security Incident Handling, Version 2.3.1) 
  • Business Continuity Planning. An organization should implement safeguards to protect business, especially critical business processes, from the effects of major failures or disasters and to minimize the damage caused by such events, an effective business continuity, including contingency planning/di… (¶ 8.1.6(4), ISO 13335-4 Information technology - Guidelines for the management of IT Security - Part 4: Selection of safeguards, 2000) 
  • The backup system should include a regular backup schedule, providing routine and urgent access to the backup tapes, multiple copies on different media, and dispersed storage locations. (§ 4.3.7.3 ¶ 1(a), ISO 15489-2: 2001, Information and Documentation: Records management: Part 2: Guidelines) 
  • The type and frequency of backups should be determined based on the business needs, the security requirements of the information, and the criticality of the information to the organization. (§ 10.5.1, ISO 27002 Code of practice for information security management, 2005) 
  • CSR 5.4.1: The contingency plan must specify what critical data is and how often it is backed up. CSR 5.4.5: The organization must create backup files on a prescribed basis and store enough off-site to avoid a disruption if the current files are damaged or lost. CSR 5.4.6: The organization must per… (CSR 5.4.1, CSR 5.4.5, CSR 5.4.6, Pub 100-17 Medicare Business Partners Systems Security, Transmittal 7, Appendix A: CMS Core Security Requirements CSR, March 17, 2006) 
  • Government data backups must be performed by remote users on a regular basis. (§ 3.3, DISA Secure Remote Computing Security Technical Implementation Guide, Version 1 Release 2) 
  • The organization must conduct backups at least weekly. (CODB-1, DoD Instruction 8500.2 Information Assurance (IA) Implementation) 
  • The organization must conduct backups daily. (CODB-2, DoD Instruction 8500.2 Information Assurance (IA) Implementation) 
  • A redundant secondary system must be used to maintain the data backup. (CODB-3, DoD Instruction 8500.2 Information Assurance (IA) Implementation) 
  • The frequency of the backups must be determined by the Information System Security Manager (ISSM). (§ 8-603.a, NISPOM - National Industrial Security Program Operating Manual (DoD 5220.22-M) February 26, 2006, February 28, 2006) 
  • The continuity plan should include the back-up schedule and method for all vital records. The frequency of backups should be adjusted based on the volume of data processed and the amount of data that may need to be recreated. (Pg 30, Pg G-7, Pg G-12, Pg G-15, FFIEC IT Examination Handbook - Business Continuity Planning, March 2008) 
  • The organization should develop written standards documenting the methodology used to back up the system. (Pg 30, Exam Tier I Obj 6.1, Exam Tier I Obj 6.4, FFIEC IT Examination Handbook - Operations, July 2004) 
  • The service provider shall determine how to verify the Information System backup and how often to verify it. (Column F: CP-9, FedRAMP Baseline Security Controls) 
  • The joint authorization board must approve and accept the verification procedures and the time period for the Information System backups. (Column F: CP-9, FedRAMP Baseline Security Controls) 
  • Does management schedule the backup and retention of data? (IT - Business Continuity Q 15, Automated Integrated Regulatory Examination System (AIRES) IT Exam Questionnaires, version 073106A) 
  • System data should be backed up on a regular basis, and a policy should be developed specifying the back-up frequency based on data criticality and the frequency new data is introduced into the system. The method used for backing up the data should be based on the system and data integrity and avail… (§ 3.4.2, § 5.1.2 ¶ 3 thru 5, Contingency Planning Guide for Information Technology Systems, NIST SP 800-34, Rev. 1 (Draft)) 
  • Information Security should ensure that electronic mail data is periodically backed up and stored offsite. Information systems data or functions should be classified as critical data if the unavailability of the information would completely interrupt the business from functioning (i.e., the process … (ATCS-265, ATCS-826, Archer Control Table) 

References

  • McAfee Labs. (2015). Threats Report. Santa Clara: Intel Security. Retrieved from http://www.intelsecurity.com 
  • Symantec Corporation. (2015). Internet Security Threat Report. Mountain View: Symantec Corporation. Retrieved from http://www.symantec.com/threatreport 
  • Unified Compliance Framework. http://unifiedcompliance.com 

About the Author

Christopher Furton author bio picture
Christopher Furton

is an Information Technology professional with over 12 years in the industry.  He attended the University of Michigan, earning a B.S. in Computer Science, and recently completed an M.S. in Information Management from Syracuse University.  His career includes managing small to medium size IT infrastructures, service desks, and IT operations.  Over the years, Christopher has specialized in cyber security while working within the Department of Defense and the United States Marine Corps.  His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management.  He holds active IT certifications including the CISSP, CEH, ITIL Foundations, Security+CE, and Network+CE.


Information Architecture Techniques and Best Practices

posted Jun 24, 2015, 3:54 PM by Christopher Furton   [ updated Dec 13, 2015, 10:33 AM ]

Written by: Christopher Furton

Abstract

Developing information structures, such as websites or systems, involves a complex set of processes with the goal of making information usable, findable, and organized. Information Architecture tools, techniques, and best practices provide the building blocks to achieving the end state. With hundreds and possibly thousands of tools and techniques available, this paper explores five specific options: card sorting, free-listing, perspective-based inspection, personas, and content value analysis. These five techniques span the breadth of the information architecture project and provide insight into the constantly evolving and developing information architecture field.

Information Architecture Techniques and Best Practices


This paper explores the tools, techniques, and best practices for Information Architecture. However, it would be impossible to discuss every possible tool or technique. Instead, this paper focuses on five different techniques: card sorting, free-listing, perspective-based inspection, personas, and content value analysis. These five techniques were chosen as a subset of the hundreds of options because, when combined, they span the entire length of a project lifecycle.

The paper is split into two major sections. The first provides a brief overview of information architecture as background and to develop context. The second section provides information on the five tools and techniques. Lastly, the figure provides a chronological view of the information structure design and development project showing the recommended relationship between each technique and the associated phase of the project.

Information Architecture Overview

Information Architecture is a constantly evolving discipline, transposing elements of traditional architecture into the digital frontier. According to Morville & Rosenfeld’s (2006) definition, IA is composed of four elements:
1) The structural design of shared information elements.
2) The combination of organization, labeling, searching, and navigation systems within web sites and intranets.
3) The art and science of shaping information products and experiences to support usability and findability.
4) An emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape. (p. 4)
The field of information architecture is still relatively young and developing. As Nathaniel Davis (2010) described, “information architects have an opportunity to begin an era of methodical practice that enables a discipline as well as the transference of acquired knowledge to address the real qualitative effects of information.” As the field of study and practice grows, tools, techniques, and best practices surface to aid architects in accomplishing their groundbreaking efforts. Ultimately, the architect should focus efforts on the user experience when making decisions about how to organize and present the content of a site (Chapman, 2010).

Tools & Techniques Analysis

This section will explore several tools and techniques available to information architects. Each sub-section will consist of a description of the technique, review advantages and disadvantages, and provide insight into the best usage of the technique during the design and development process.

Card Sorting

The first technique discussed is card sorting. Card sorting is a “user-centered design method for increasing a system’s findability. The process involves sorting a series of cards, each labeled with a piece of content or functionality, into groups that make sense to users or participants” (Spencer & Warfel, 2004). Depending on the method of card sorting, the participants may or may not be given a set of pre-defined groups into which to place the content cards. If groups are provided, the method is considered closed; conversely, if no groups are provided, the method is considered open (Spencer & Warfel, 2004).
Executing the card sort technique involves three simple steps. First, the practitioner creates cards, where each card identifies a single piece of existing or desired content. Second, the participants place the cards into groups based on similarities. Additionally, groups can be based on statistical factors derived from cluster analysis and tree graphs. Some cards may fit into multiple groups, whereas others don’t seem to fit into any. The practitioner may duplicate cards that fit into multiple groups or review the labels of those that don’t fit at all. Lastly, the participants determine names for each of the newly created groups by identifying keywords, commonalities, or recurring themes (Ooi, 2011).
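The cluster analysis mentioned above typically starts from a card-by-card co-occurrence count: how often two cards landed in the same group across participants. A minimal sketch, with hypothetical card names and groupings:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same
    group, across all participants' open card sorts."""
    counts = defaultdict(int)
    for groups in sorts:                 # one participant's sort
        for group in groups.values():    # cards in one named group
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Hypothetical results from two participants sorting four cards
sorts = [
    {"Accounts": ["login", "signup"], "Help": ["faq", "contact"]},
    {"Users":    ["login", "signup", "faq"], "Misc": ["contact"]},
]

pairs = cooccurrence(sorts)
# "login" and "signup" were grouped together by both participants,
# making them the strongest candidates for a shared category.
print(pairs[("login", "signup")])  # → 2
```

Pairs with high counts become candidate categories; feeding the matrix into hierarchical clustering produces the tree graphs the technique describes.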

This technique has several advantages and disadvantages. The advantages relate to the ease of use and low cost of execution. Since the technique is performed using the real users, the resulting information structure does not rely on those less intimate with the content such as designers or architects. However, one significant criticism is that the technique is inherently content-centric. This can cause problems as the resulting information structure does not take into consideration the users’ tasks (Spencer & Warfel, 2004). Combining the card sorting process with a non-content centric technique such as a needs analysis or task analysis can ensure user views are considered in the design and development effort.

The card sorting technique is best used early in the design or redesign of a site. The technique is not intended to be an evaluation technique. Once the content inventory and assessments are conducted, the practitioner will have the prerequisite information needed (Spencer & Warfel, 2004). If new content is going to be added to the structure, the open card sorting technique can be used on the new content to establish appropriate groupings. See figure 1 for a diagram depicting where this technique fits into the overall design and development process.

Free-listing

The next technique discussed is the free-listing technique. The purpose of free-listing is to help the architect gather data about topics by asking people to list all the related items that they can think of. The data gathered can then be analyzed and used to explore user categorizations (Sinha, 2003). For example, the architect can ask a group of users to list all the tasks that they perform on a project. The resulting lists are compared and analyzed, noting the frequency and order of answers, which provides a sense of the “coherency” of the domain. Once complete, the architect can create a frequency diagram showing the items collected most frequently (Wilson, Method 3 of 100: Freelisting, 2011).
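The frequency analysis step can be sketched in a few lines; the responses below are hypothetical. Each item is counted once per participant, so the tally reflects how many people mentioned it rather than how often it was repeated:

```python
from collections import Counter

def freelist_frequency(lists):
    """Tally how many participants mentioned each item at least
    once. Items named by many participants suggest a coherent
    domain; rarely named items may be idiosyncratic."""
    counts = Counter()
    for items in lists:
        # normalize and deduplicate within a single participant
        counts.update({i.strip().lower() for i in items})
    return counts

# Hypothetical free-lists from three participants asked to name
# tasks they perform on a project
responses = [
    ["schedule meetings", "write reports", "email"],
    ["email", "write reports"],
    ["write reports", "budgeting"],
]

for item, n in freelist_frequency(responses).most_common(2):
    print(f"{n}x {item}")
# → 3x write reports
# → 2x email
```

The `most_common` output maps directly onto the frequency diagram the architect would draw from the collected lists.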

Free-listing has advantages and disadvantages. One advantage of free-listing is that it is highly cost effective. Large amounts of data can be collected quickly without the need to train participants or facilitators (Wilson, Methods - Free Listing, 2009). Additionally, the technique can be used in many different ways such as Internet based, face-to-face interviews, or in a group setting. However, one disadvantage is that analyzing for patterns can be time consuming especially with large sample sizes (Wilson, Method 3 of 100: Freelisting, 2011).

The free-listing technique is best used early in the design process. The technique can be combined with brainstorming and card sorting to help form categories of content (Sinha, 2003). Also, the technique can be used when time with the user communities is short. The process can take a few minutes and still collect a significant amount of information. For example, using this technique at a large meeting with hundreds of users can provide a substantial amount of data in just minutes if all contributors write their responses down on paper. See figure 1 for a diagram depicting where this technique fits into the overall design and development process.

Perspective-Based Inspection

The next technique discussed is the perspective-based inspection. This technique can be performed by the design and architecture staff without the user involvement. Each member of the staff is given a perspective in which they assume while reviewing a website or information structure. The resulting observations can provide a wide range of critiques that would not have normally been identified. The following perspectives were identified by Chauncey Wilson (Wilson, Method 10 of 100: Perspective-Based Inspection, 2011):
  • Consistency czar – this person looks for any consistency issues with the product.
  • Disabled user – this person would role play a person with specific disabilities and note where the product is not accessible.
  • Psychologist – this person would look for violations of psychological principles related to memory, learning, attention, fatigue, interruption, persuasion, etc.
  • Super power user – this person would look at issues of efficiency, shortcuts, and aspects of the product to support the very expert and frequent user.
  • Error expert – this person would focus on areas where users might make errors and in places where the system could support error prevention.
  • Artist – this person would look for problems with aesthetics like clutter, poor use of color, graphics that look amateurish, and ugly icons.
  • The Keyboarder – this person does not use the mouse and evaluates a product for complete keyboard access. (p. 1)
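One practical step in running such an inspection is distributing the perspectives across the review team. A minimal sketch, assuming a simple round-robin assignment (the inspector names are hypothetical):

```python
from itertools import cycle

# Wilson's seven perspectives, as listed above.
PERSPECTIVES = [
    "Consistency czar", "Disabled user", "Psychologist",
    "Super power user", "Error expert", "Artist", "The Keyboarder",
]

def assign_perspectives(inspectors):
    """Rotate through the perspectives so each inspector reviews
    the product from a distinct point of view; with more inspectors
    than perspectives, perspectives repeat."""
    return dict(zip(inspectors, cycle(PERSPECTIVES)))

team = ["Ana", "Ben", "Chao"]
print(assign_perspectives(team))
# {'Ana': 'Consistency czar', 'Ben': 'Disabled user', 'Chao': 'Psychologist'}
```

Matching inspectors to perspectives they can plausibly inhabit (rather than assigning at random) mitigates the role-playing weakness discussed below.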
The use of perspectives in usability inspection has proved beneficial; inspectors using perspectives find more problems than those who do not. An experiment conducted at the University of Maryland showed a 30% improvement in detection of usability problems by perspective inspectors compared to heuristic inspectors (Zhang, Basili, & Shneiderman, 1998).

As with many of the techniques discussed so far, this technique has the advantage of being relatively inexpensive. Additionally, this technique can be implemented at different sites using a variety of online tools. However, a significant disadvantage lies in the inspector’s ability to assume a perspective that he or she may know very little about (Wilson, Method 10 of 100: Perspective-Based Inspection, 2011). For example, it may be difficult for an inspector to assume the perspective of “The Keyboarder” if that person normally uses a mouse. Some people are not as adept at role-playing activities as others.

Perspective-based inspections can be used later in the design process to identify usability problems. Once a prototype is developed, these inspections will broaden the problem-finding ability of inspectors or testers (Wilson, Method 10 of 100: Perspective-Based Inspection, 2011). This technique can also be used earlier in the design process by assigning perspectives during review of architecture products or content maps. See figure 1 for a diagram depicting where this technique fits into the overall design and development process.

Personas

The next technique can assist architects by ensuring the information architecture and design meets the demands of the expected typical user. This technique involves creation of personas based on the user research conducted early in the design and development. Use of personas falls within a user-centered design process and provides a link between development efforts and the user’s behavior, attitudes and needs. Personas can be a fun way to communicate user research to the designers in a simple and accessible manner which will ensure the research is taken into account throughout the project (Gray, 2010).

Using personas involves creating made-up examples of people that fit each of the major segments of the site’s user base. Typically, a site would have three to five personas to avoid overwhelming the development and design team (Gray, 2010). Too many personas may result in several blurring into one, losing the effectiveness of the technique. The information that goes into each persona can be unique to each organization or project. According to Gray (2010), the following information may be helpful for the design/development team:
  • Pictures – makes the persona feel real and helps designers/developers empathize with the persona.
  • A description – demographic information including job, their technology, and the pressures or situation they are in when they use the system or site.
  • Needs, goals, & features – either a straight list or a table which helps define a priority list for features before development starts.
  • A ‘quote’ from the persona – this gives the persona more personality while also summarizing their needs and goals into a single sentence.
  • Frustrations – these are the problems that the user encounters in using the system or website and focuses the designers/developers to the biggest issues
  • Ideal features – these features may be impossible to produce but help designers and developers think of alternative approaches.
  • ‘Need to know’ – this is information the persona needs when trying to accomplish tasks on the system or site.
  • Behaviors – these are the typical behaviors the persona exhibits that directly relate to the needs or requirements of the system or site.
  • Scenarios – these are sample use cases that the persona might do and can clarify how the system will be used (p. 1).
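Gray's field list maps naturally onto a simple record type that a team could keep alongside its user research. A minimal sketch (the persona content below is invented for illustration, not drawn from the source):

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One made-up but research-grounded user; fields mirror the
    elements Gray (2010) suggests (picture omitted)."""
    name: str
    description: str              # job, technology, situation of use
    quote: str                    # one-sentence summary of needs and goals
    needs_and_goals: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)
    ideal_features: list = field(default_factory=list)
    need_to_know: list = field(default_factory=list)
    behaviors: list = field(default_factory=list)
    scenarios: list = field(default_factory=list)

# Hypothetical persona for a photo-upload site.
maria = Persona(
    name="Maria the volunteer photographer",
    description="Freelance photographer; uploads event photos from a laptop",
    quote="I just want to drop my photos somewhere and be done.",
    needs_and_goals=["bulk upload", "clear confirmation of success"],
    frustrations=["slow uploads", "unclear folder structure"],
)
print(maria.name)
```

Keeping personas in a structured form like this makes it easy to generate the posters mentioned below or to cross-reference features against persona needs before development starts.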
The key to successful use of personas is sustained adoption during design and development. Some organizations find it useful to display the personas on posters within the workspace to help keep the development team focused on the user-centric design process (Gray, 2010).

The personas technique offers several advantages and disadvantages. The main advantage is that personas provide a vehicle to represent user research in a tangible and workable medium. Conversely, one of the disadvantages is that the very realism that makes personas effective may complicate and potentially distract the designer with extra fluff and mere artistic concoctions (Constantine, n.d.).

The persona exists throughout the lifecycle of the design and development project. The personas should be created early in the research phases and exist throughout the testing and ‘go live’ events. See figure 1 for a diagram depicting where this technique fits into the overall design and development process.

Content Value Analysis

The final technique discussed in this paper is a Content Value Analysis (CVA). The role of a CVA is to provide quantitative feedback on content value for a current system or website. This technique aims to provide a mechanism for analyzing and assigning a value based on heuristics to a subset of the content. Furthermore, this technique then uses statistics to back the notion that the arrived upon value is representative of the entire collection of content (Walsh, 2008).

To accomplish a content value analysis, Walsh (2008) developed an activity flowchart outlining a six step process:
1) Scope the site – for accurate results, the size of the site to be assessed should be determined based off number of pages.
2) Select the sample size – a statistical sampling table defining the Acceptable Quality Level (AQL) should help the practitioner determine the sample size required for statistical relevancy. Military Standard 105E provides a statistical sampling table which can be used in CVA calculations.
3) Select the pages to assess – to select pages, Walsh recommends using a random set of pages selected through an online tool at random.org.
4) Select the heuristics to be used – value indicators are assigned to each page from the sample set. One set of heuristics described by Walsh is a 0-to-1 scale based on content value. For example, if the page has obsolete or no content, the practitioner should assign a value of 0. If the page has good content, the practitioner should assign a value of 1.
5) Record the findings and scores – the practitioner should record each page visited and the corresponding score.
6) Analyze the findings for patterns and produce a report – the practitioner should then develop charts or graphs to represent the value of the current content. For example, 48% of pages have content that is irrelevant or outdated.
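Steps 2 through 6 above can be sketched in a few lines. The page identifiers, sample size, and 0/1 scoring rule below are hypothetical stand-ins; a real CVA would take the sample size from an AQL sampling table and the scores from a human reviewer:

```python
import random

def content_value_analysis(page_ids, sample_size, score_page, seed=None):
    """Score a random sample of pages 0 or 1 and report the share of
    low-value content, per Walsh's CVA steps (simplified sketch)."""
    rng = random.Random(seed)                 # seeded for reproducibility
    sample = rng.sample(page_ids, sample_size)
    scores = {page: score_page(page) for page in sample}
    low_value = sum(1 for s in scores.values() if s == 0)
    return scores, low_value / sample_size

# Hypothetical 500-page site; pretend pages with ids divisible by 3
# have obsolete content (a real reviewer would judge each page).
pages = list(range(500))
scores, pct_low = content_value_analysis(
    pages, sample_size=50,
    score_page=lambda p: 0 if p % 3 == 0 else 1, seed=42)
print(f"{pct_low:.0%} of sampled pages have obsolete or no content")
```

The resulting percentage is the kind of headline figure step 6 calls for (e.g., "48% of pages have content that is irrelevant or outdated").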

The content value analysis has advantages and disadvantages. Specifically, the CVA has the ability to provide quantitative feedback to the site owner about the content using statistical modeling. This technique is relatively inexpensive to implement and requires only one person. However, the quality of the heuristic can be debated which still leaves an element of human opinion in the analysis. For example, one practitioner may feel that a page is irrelevant while another feels that the same page has adequate content.

The ideal use of this technique is before the start of a website renovation effort or as a check and balance during the final testing stages. Using this technique early will establish a quality baseline as well as identify whether a significant content problem exists. Conducting the same technique during final testing will measure the growth in content quality from the initial baseline. Unfortunately, identifying content problems late in the project may create substantial costs to fix compared to finding the problem early.

Conclusion

The evolving field of information architecture is bursting with endless tools and techniques to help architects develop effective solutions. Some tools are user-based while some are content-based, but all tools are aimed at providing an architecture solution that keeps up with technology. As technology changes, the tools and techniques used by architects will continue to adapt. This paper only scratches the surface on the variety of tools and techniques available to create a positive user experience.

The card sorting technique describes a method to cluster content into logical groupings based on user input. Free-listing provides a mechanism to gather ideas and brainstorm (which could be a tool within itself) by asking people to list as many aspects as they can about a topic. Perspective-based inspection describes a method of evaluating a prototype site by role playing a certain perspective. Similarly, the personas technique involves usage of realistic potential examples of customers or visitors to a site. Lastly, the content value analysis allows architects to obtain quantitative measures of content effectiveness based on sample sizing and statistical analysis.

Along with descriptions of several tools, this paper reviewed advantages and disadvantages for each technique and offered suggestions on when to use each technique. Ultimately, there is not a regulated or mandated usage of any tool set; rather, the tools can be used as needed dependent on the situation at hand. Similar to a plumber’s toolbox, the information architect’s tool box contains hundreds of techniques which can help organizations develop usable, findable, and organized information.

References

  • Chapman, C. (2010, October 18). Information Architecture 101: Techniques and Best Practices. Retrieved March 18, 2012, from Six Revisions: Useful Information for Web Developers & Designers: http://sixrevisions.com/usabilityaccessibility/information-architecture-101-techniques-and-best-practices/
  • Constantine, L. (n.d.). Users, Roles, and Personas. Constantine & Lockwood, Ltd.
  • Davis, N. (2010). Information Architecture, Black Holes and Discipline; On Developing a Framework for a Practice of Information Architecture. Bulletin of the American Society for Information Science and Technology, 25-29.
  • Gray, A. (2010, April). Personas - the definitive guide. Retrieved March 18, 2012, from Webcredible: http://www.webcredible.co.uk/user-friendly-resources/web-usability/personas.shtml
  • Malone, E. (2003, September). The Information Architecture Institute. Retrieved March 18, 2012, from Learning IA - Process Maps: http://aifia.org/tools/download/ExperienceDesignFlow_bw.pdf
  • Morville, P., & Rosenfeld, L. (2006). Information Architecture for the World Wide Web. Sebastopol, CA: O'Reilly Media Inc.
  • Ooi, Y. (2011, February). Methods for analysing card sort results. Retrieved March 18, 2012, from Webcredible: http://www.webcredible.co.uk/user-friendly-resources/ucd/methods-card-sorting.shtml
  • Sinha, R. (2003, February 24). Beyond cardsorting: Free-listing methods to explore user categorizations. Retrieved March 16, 2012, from boxesandarrows: http://www.boxesandarrows.com/view/beyond_cardsorting_free_listing_methods_to_explore_user_categorizations
  • Spencer, D., & Warfel, T. (2004, April 7). Card sorting: a definitive guide. Retrieved March 16, 2012, from boxesandarrows: http://www.boxesandarrows.com/view/card_sorting_a_definitive_guide
  • Walsh, P. C. (2008, October 8). Content Value Analysis for Intranets. Part 2 - a methodology. Retrieved March 18, 2012, from ManIA: http://patrickcwalsh.wordpress.com/2008/10/08/content-value-analysis-for-intranets-part-2-a-methodology/
  • Wilson, C. (2009, June). Methods - Free Listing. Retrieved March 16, 2012, from Preview of the Usability Body of Knowledge: http://www.usabilitybok.org/methods/free-listing
  • Wilson, C. (2011, March 24). Method 10 of 100: Perspective-Based Inspection. Retrieved March 16, 2012, from Designing the User Experience at Autodesk: http://dux.typepad.com/dux/2011/03/method-10-of-100-perspective-based-inspection.html
  • Wilson, C. (2011, January 13). Method 3 of 100: Freelisting. Retrieved March 16, 2012, from Designing the User Experience at Autodesk: http://dux.typepad.com/dux/2011/01/this-is-the-third-in-a-series-of-100-short-articles-about-ux-design-and-evaluation-methods-todays-method-is-called-freeli.html
  • Zhang, Z., Basili, V., & Shneiderman, B. (1998). An empirical study of perspective-based usability inspection. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting (pp. 1346-1350). College Park, MD: University of Maryland.

About the Author

Christopher Furton

is an Information Technology professional with over 12 years in the industry. He attended the University of Michigan, earning a B.S. in Computer Science, and recently completed an M.S. in Information Management from Syracuse University. His career includes managing small to medium size IT infrastructures, service desks, and IT operations. Over the years, Christopher has specialized in cyber security while working within the Department of Defense and the United States Marine Corps. His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management. He holds active IT certifications including the CISSP, CEH, ITIL Foundations, Security+CE, and Network+CE.


The Internet of Things in the Retail Industry

posted Jun 3, 2015, 4:37 PM by Christopher Furton   [ updated Dec 13, 2015, 10:34 AM ]

Written by: Christopher Furton

Overview

The Internet of Things (IoT) is described as a paradigm where everything of value will communicate in a networked form with one another. This involves development of sensor network technologies that ultimately form a new standard in which information and communication is embedded within our environments. Objects, such as refrigerators and coffee cups, interact with other objects within a designated geographical range, all with the intention of improving the lifestyles of people (Yan, Zhang, Yang, & Ning, 2008, Ch. 13, para. 1).

To accomplish this, three criteria are important: sensors and actuators, connectivity, and people and processes. First, sensors and actuators create a “digital nervous system” through the use of Global Positioning System (GPS) data, cameras, microphones, temperature sensors, pressure sensors, and many more. Next, connectivity moves that data across networks such as Personal Area Networks (PAN), Local Area Networks (LAN), or Wide Area Networks (WAN) using a variety of different protocols and architectures including cellular 3G/4G LTE, WiFi, Bluetooth, Near Field Communication (NFC), and many others. This connected data is combined into bi-directional systems where people and processes can utilize it to make better decisions, both human and automated (Harbor Research & Postscapes, 2015).

From a consumer products perspective, the Internet of Things offers potential for new product lines to turn everyday homes into “smart homes” by developing supporting architectural equipment as well as end-point devices like smart lightbulbs (Entertainment Closeup, 2015, p. 1). Over the past several years, smart consumer products have hit retail product portfolios putting home automation and the Internet of Things “safely poised on the brink of entry into the mainstream market” (Koyfman, 2014, p. 30).

Consumer products already on the market can adaptively adjust a home’s thermostat based off usage patterns and occupancy, continuously capture health information like calorie expenditure and temperature 24 hours a day, and remotely turn off appliances using a cell phone by terminating outlet power. However, this is just the beginning, as Internet of Things devices are spreading throughout industries, affecting home consumers, transport mobility, health care, building infrastructures, and industry practices (Harbor Research & Postscapes, 2015).

As the Internet of Things edges close to the mainstream in the consumer products market, it also has substantial potential value in backend operations. Technologies like Radio-Frequency Identification (RFID) allow objects to use radio waves to transfer information to readers without direct line of sight, potentially driving supply chain efficiency (Bardaki, Kourouthanassis, & Pramatari, 2012, p. 233). Through innovation, companies can leverage IoT technologies to outperform competitors and gain competitive advantage.

Industry Applicability


The Internet of Things technologies will likely affect all industries, with transformational effect throughout retail, manufacturing, finance and insurance, and information services. According to Readdy (2014), retail has the second largest potential for gain from IoT technologies at US$1.6 trillion, behind manufacturing at US$3.9 trillion. Information services and finance and insurance are tied for third and fourth with US$1.3 trillion of potential gain (p. 3). In total, estimates are upwards of US$14.4 trillion throughout all industries. These gains are anticipated through improvements in customer experience (US$3.7 trillion), innovation (US$3.0 trillion), supply chain and logistics (US$2.7 trillion), employee productivity (US$2.5 trillion), and asset utilization (US$2.5 trillion) (Readdy, 2014, p. 3).

With retail poised to benefit upwards of US$1.6 trillion, early adoption offers potential to achieve competitive advantage. Particularly, retail companies can benefit from stock-out prevention (i.e., preventing empty shelves) enabled by connected and intelligent supply chains. Furthermore, the IoT opens the door for innovative use of technologies for predicting customer behavior and trends by performing big data analytics on data collected from video surveillance cameras, social media, Internet browsing, and mobile devices. Although the manufacturing industry is predicted to gain the most from IoT, retailers have the potential to build strong business cases for enhanced revenue, increased efficiencies, and improved asset management (Readdy, 2014, p. 2).

Strategy Implications for Retailers


The Internet of Things will affect strategy through six key areas: energy, security, smarter analytics, new revenue streams, productivity, and travel. The first strategic area, energy, is similar to the concepts discussed in the Overview regarding consumer products. Through the use of connected devices that regulate lighting and temperature, retail stores as well as distribution centers and backend offices will reduce energy expenditure (Anderle, 2015). This “greening” of facilities utilizes Information Technology strategy in the form of the IoT to intelligently reduce energy expenditure without compromising productivity or business functionality.

The second strategic area, security, will primarily affect retail organizations’ Loss Prevention (LP) or Asset Protection (AP) divisions. Because these LP/AP divisions are responsible for physically securing property with locking mechanisms and closed-circuit television (CCTV), as well as for employee and customer safety, the IoT will have significant impact. These functions exist at store level, distribution level, and corporate level alike. Smart locks will keep track of who is in the buildings at any particular time, smart doorbells will inform employees of who is trying to gain access, and smart surveillance systems will potentially save substantial man-hours reviewing video footage for specific events (Target Corporation, n.d.) (Harbor Research & Postscapes, 2015).

The third strategic area, smarter analytics, is one of the most important areas in which IoT offers significant benefits to retailers. Essentially, more smart devices mean more data collection for analytics that, with the right people and processes, can improve strategy and customer experience (Anderle, 2015). Historically, e-commerce sites have been able to leverage analytics to figure out where items should be positioned on web sites to catch the eyes of shoppers and suggest follow-up purchases. With the IoT, retail stores will be able to track customer movement throughout the store, analyze pauses in movement, and collect data points for analysis. RetailNext, a comprehensive in-store analytics company, already offers a service that provides 10,000 data points per store visitor. This video-based service collects 57 petabytes per year from 300 million shoppers at 50 retail chains (Groenfeldt, 2012).

In addition to smarter tracking of customers through stores for marketing purposes, the IoT analytics enables multi-channel functionality in brick and mortar stores. Through cross channel integration of back-end systems of product information, inventory, promotions, Customer Relationship Management (CRM) along with smarter physical shopping aisles, retail companies can achieve in-aisle consumer interaction through the use of mobile devices (Wonnagy, 2011). For example, based off Internet browsing, a retailer learns a consumer is interested in purchasing a new coffee brewer. The CRM system keeps track of this data and provides it to the physical store when the customer’s cell phone passes a sensor in the entry doors. Based off product information, inventory, and promotions, a push notification is sent to the consumer’s device with an advertisement and directions to the appropriate aisle number.
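The coffee-brewer scenario above can be sketched as a small decision function that joins CRM interest data, inventory, and promotions into a push-notification payload. All data structures and names here are hypothetical illustrations of the flow, not an actual retail API:

```python
def in_aisle_offer(customer, crm, inventory, promotions):
    """When a known customer's phone is sensed at the entry door,
    combine CRM browsing interest, stock on hand, and promotions
    into a push notification payload (or None if nothing applies)."""
    interest = crm.get(customer)          # product browsed online
    if interest is None:
        return None
    stock = inventory.get(interest)
    if not stock or stock["on_hand"] == 0:
        return None                       # nothing to direct them to
    discount = promotions.get(interest, 0)
    return {
        "to": customer,
        "message": f"{interest} is in aisle {stock['aisle']}"
                   + (f" - {discount}% off today" if discount else ""),
    }

# Hypothetical back-end state for one store.
crm = {"cust-17": "coffee brewer"}
inventory = {"coffee brewer": {"on_hand": 4, "aisle": 12}}
promotions = {"coffee brewer": 10}
print(in_aisle_offer("cust-17", crm, inventory, promotions))
# {'to': 'cust-17', 'message': 'coffee brewer is in aisle 12 - 10% off today'}
```

The point of the sketch is the cross-channel join: each input system (CRM, inventory, promotions) already exists in most retailers; the IoT sensor at the door is what triggers the join at the moment the customer walks in.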

The fourth strategic area, new revenue streams, moves the IoT beyond cost reduction into a profit center driving sales and profitability. First, retailers can profit from added product lines directly related to IoT technology consumer products. According to forecasts, the sale of connected devices and related services could result in US$2.5 trillion in revenues by 2020. Forecasts also predict the number of smart devices to exceed 50 billion and machine-to-machine connections, the fundamental backbone of the IoT, to grow to 18 billion, up from two billion in 2011. This growth, combined with declining sensor cost, increases in computing and processing power, low-cost data storage, and widespread high-bandwidth connectivity, positions retail companies to exploit substantial revenue growth by adding IoT consumer product lines to existing portfolios (Readdy, 2014, pp. 3-4).

The fifth strategic area, productivity, offers the benefit of increased efficiency to reduce costs. Currently, two thirds of organizations that have IoT solutions report having achieved 28 per cent cost reduction in daily operations (Inside Retail, 2015). Specifically, retailers can benefit through better supply chain management, inventory, logistics, and fleet management. Currently, bar code and RFID technologies let retailers monitor inventory levels, but IoT technologies will increase the data coming in to these monitoring systems. This provides better insight to products moving through the supply chain leading to improved efficiencies and leaner inventories (Sankaran, 2014, p. 1).
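The inventory-monitoring idea above can be sketched as a threshold check over a batch of shelf-reader events. The event format, SKUs, and threshold below are hypothetical, and a real system would also compare reads against a product catalog so that SKUs with zero reads are flagged too:

```python
def stockout_alerts(rfid_reads, min_level):
    """Flag SKUs whose shelf-tag count has fallen below a reorder
    threshold, from a batch of RFID reader events."""
    counts = {}
    for event in rfid_reads:
        counts[event["sku"]] = counts.get(event["sku"], 0) + 1
    return sorted(sku for sku, n in counts.items() if n < min_level)

# Hypothetical reader batch: 12 tags seen for A100, only 2 for B200.
reads = [{"sku": "A100"}] * 12 + [{"sku": "B200"}] * 2
print(stockout_alerts(reads, min_level=5))  # ['B200']
```

This is the kind of continuous, per-shelf signal that bar codes cannot provide, since bar codes are only scanned at checkout rather than read passively in place.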

According to research by The Economist Intelligence Unit (2015), companies see improving productivity as the key immediate benefit to IoT despite long-term expectations for revenue growth (p. 5). Productivity benefits include improved overall employee production, optimized utilization of assets, reduced operational expenses, improved Internet oversight and control, and enhanced worker safety (The Economist Intelligence Unit, 2015, pp. 8-9).

The sixth strategic area, travel, affects retail logistical fleets as well as corporate travel via commercial airlines. Updating retail logistical fleets with smart technologies can increase safety and potentially avoid costly accidents. IoT technology may allow trucks to interact with other vehicles on the roadways through predictive algorithms and models, providing best escape avenues for drivers in emergency situations or identifying potentially dangerous drivers. Current technologies include self-braking vehicles, but it is anticipated that the IoT will develop with vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) systems. For example, Ford’s safe car technology converts intersections into “smart intersections” that can predict when a driver is going to run a red light and transmit warnings to surrounding vehicles (Bertolucci, 2013, p. 1).

In addition to upgrading retail fleets, IoT has strategic benefits for retail companies through increased efficiency in commercial airline travel. Despite the growth of virtual collaboration, travel is a necessary evil of business that will not go away. Fortunately, the IoT has potential to improve the airline industry through better flight planning and operational changes that will have secondary effects on retail executive travel (Readdy, 2014, pp. 2,4).

Potential Threats and Challenges


Although the Internet of Things can be exploited to improve revenue streams, reduce costs, and enable innovation leading to competitive advantage, there are four challenges that must be considered. First, lack of standards and industry-wide agreement on protocols may introduce interoperability problems down the road. Second, cybersecurity concerns over network-enabled sensors and devices must be considered to prevent misuse or abuse.

Next, existing infrastructure must be able to support the increased bandwidth resource utilization caused by IoT devices. Lastly, retail culture in respect to employees and customers must support the increased data gathering and potential privacy concerns arising from Big Data initiatives including IoT.

First, with so many stakeholders in the IoT, achieving widespread use of standards will take some time. Protocol wars will likely emerge, especially as legacy IoT device companies try to protect their proprietary systems. Open system proponents will likely push for industry standards to encourage better systems integration (Kocher, 2014, p. 1). Having universal standards would also reduce risk regarding security concerns.

Second, security concerns must be considered. The increase in devices leads to more decentralized entry points for malware. Devices will likely be placed in physically accessible areas which could be subject to tampering and exploitation. The increase in software, middleware, programmer interfaces, and machine-to-machine communications results in additional complexity and security requirements. These can be addressed with internal policies, but likely commercial products that leverage a policy-driven approach and provisioning will become available (Kocher, 2014, p. 1).

Weak information and telecommunication infrastructure is another concern when considering IoT initiatives. According to a report from the Economist Intelligence Unit (2015), 44% of companies surveyed identified poor information and telecommunication infrastructure as one of the most significant obstacles to developing the IoT (p. 9). As the number of devices increases, there may also be a shift in the use patterns of network bandwidth demand. Estimates show an increase in global peak traffic (per hour) from 2,823Gb in 2012 to 16,215Gb in 2020 (Zhuang, Cappos, Rappaport, & McGeer, 2013, Table 3). The increased processing power to perform analytics, storage space to maintain databases, and communication pathways supporting wireless solutions, cloud, and mobile computing must be in place to support an IoT initiative (Zhuang, Cappos, Rappaport, & McGeer, 2013, pp. 10-11).

Lastly, privacy concerns must be considered and cultural changes made that support the collection and analysis of significant amounts of consumer information. Retail customers may object to being tracked throughout the retail brick and mortar stores despite similar current practices in e-commerce. Trust must be developed between consumer and retailer so that information can be gathered without fear of abuse or compromise (Kocher, 2014, p. 1).

Exploiting Opportunity – The Path Forward


Despite the challenges presented by the Internet of Things, retail companies are uniquely positioned to develop business strategies that leverage this cutting-edge technology. Making the move towards an IoT ecosystem requires careful consideration of the fit within business, organizational, and information systems strategy. As described in Pearlson & Saunders (2013), the company will need to ensure the organizational strategy is in alignment with the business and information systems strategies, creating the Information Systems Strategy Triangle (pp. 23-24).

First, business strategy needs to be assessed to determine the specific areas in which IoT technology can help achieve desired results. For example, as a retailer, there are several strategic functions where IoT can be beneficial -- namely, supply chain management, marketing (analytics), multi-channel sales, and improved in-store checkout process. Despite the temptation to integrate IoT into all these functions simultaneously, a “rip and replace” mentality is not advised. Because retailers will not be able to create use cases from scratch, they will need to integrate the new technologies with current systems, data, and infrastructure investments. This is best done by ensuring alignment of IoT with existing company strategies. Furthermore, the company needs to avoid treating IoT as a technology experiment by building a technical solution in search of a problem. The IoT needs to meet business objectives and be directly linked to business strategy (Kocher, 2014, p. 2).

One method of doing this would be to develop a phased deployment of IoT technologies, focusing first on small scale opportunities that enable quick wins without jeopardizing existing processes. For example, a retailer might begin with a marketing/analytics strategy in which IoT sensors placed strategically in pilot stores increase customer touch-points to improve available analytic information on customer behavior. The same IoT architecture could be further expanded over time to include multi-channel sales. The next big step would be phasing out barcodes and replacing them with RFID, leading to better supply chain management options with an end-state goal of eliminating slow moving checkout lines where cashiers scan barcodes (Bardaki, Kourouthanassis, & Pramatari, 2012).

This is a significant undertaking that requires modifications to information systems strategy so that alignment with business strategy is achieved. The IoT ecosystem will need to encompass existing data structures and create many new ones. For retail organizations, that means linking loyalty card programs with Internet browsing activity and feeding brick-and-mortar customer touch-points into these data warehouse repositories. In addition, incorporating "Green IS" principles into the phased IoT implementation can help reduce costly energy consumption.

Lastly, to achieve alignment under the Information Systems Strategy Triangle model (Pearlson & Saunders, 2013, pp. 23-24), the retail company will need to evaluate its organizational hierarchy and assess manpower requirements for a large-scale innovation initiative such as IoT. People and processes are critical to an IoT initiative, which places great emphasis on the need for organizational strategy alignment. An increase in qualified information technology professionals, as well as experienced marketing analysts and logisticians, will be required to properly fulfill the IS strategy with the end goal of a successful business strategy and competitive advantage.

Conclusion

The Internet of Things will have a profound impact on businesses and consumers in the near future. Most industries will develop innovative ways to leverage the power of the IoT through the use of sensors and actuators, extensive connectivity, and intelligent people and processes. In particular, companies within the retail industry are positioned to gain significant benefits from early adoption, impacting several strategic areas: energy, security, smarter analytics, new revenue streams, productivity, and travel. More specifically, creative use of IoT can enable retailers to increase revenue through additional product lines, decrease costs through lean processes and green initiatives, and enable technical innovation through next-generation supply chains and cashier-less RFID checkouts.

Despite these opportunities, the IoT presents several challenges that must be considered. A lack of industry standards, weak cybersecurity posture, inefficient or nonexistent infrastructure, and privacy concerns must all be addressed when planning an IoT initiative. Nevertheless, the rewards outweigh the risks, so the IoT opportunity should still be explored. A phased implementation, coordinated across business, organizational, and information systems strategy, allows the retailer to adjust to consumer demands and build trust, expand existing infrastructure, and ensure security measures are sufficient, positioning the company to profit in both the short and long term.

References

  • Anderle, M. (2015, April 8). Infographic: How the Internet of Things is transforming the workplace. Tech Page One. Retrieved April 10, 2015, from https://techpageone.dell.com/business/infographic-internet-things-transforming-workplace/
  • Bardaki, C., Kourouthanassis, P., & Pramatari, K. (2012). Deploying RFID-Enabled Services in the Retail Supply Chain: Lessons Learned toward the Internet of Things. Information Systems Management, 233-245. doi:10.1080/10580530.2012.687317
  • Bertolucci, J. (2013, June 10). Big Data: When Cars Can Talk. InformationWeek. Retrieved April 13, 2015, from http://www.informationweek.com/big-data/big-data-analytics/big-data-when-cars-can-talk/d/d-id/1110305?
  • Brookson, C. (2014, October 13). Internet of Things: a Force for Good or Evil? ITU Telecom World. Retrieved April 15, 2015, from http://ituworldblog.itu.int/internet-of-things-a-force-for-good-or-evil/
  • Entertainment Closeup. (2015, January 09). Misfit Introduces New Smart Bulb. Entertainment Close-Up. Retrieved April 11, 2015, from http://search.proquest.com.libezproxy2.syr.edu/docview/1643279179?accountid=14214
  • Groenfeldt, T. (2012, August 03). E-Commerce Style Big Data Analytics Meet Brick and Mortar Retailers. Forbes.
  • Harbor Research & Postscapes. (2015, February 09). What Exactly Is The "Internet of Things"? Retrieved from Postscapes.com: https://s3.amazonaws.com/postscapes/IoT-Harbor-Postscapes-Infographic.pdf
  • Inside Retail. (2015, March 02). Internet of Things for retailers. Inside Retail. Retrieved April 13, 2015, from http://www.insideretail.com.au/blog/2015/03/02/internet-things-retailers/
  • Kocher, C. (2014, November 17). The Internet of Things: Challenges and Opportunities. Sand Hill. Retrieved April 15, 2015, from http://sandhill.com/article/the-internet-of-things-challenges-and-opportunities/
  • Koyfman, S. (2014, Nov/Dec). Can smart homes calculate consumer needs? Home Channel News, pp. 30-31.
  • Pearlson, K., & Saunders, C. (2013). Managing & Using Information Systems: A Strategic Approach (5th ed.). Hoboken: John Wiley & Sons, Inc.
  • Readdy, A. S. (2014). Reaping the Benefits of the Internet of Things. Teaneck: Cognizant Reports. Retrieved April 13, 2014, from http://www.cognizant.com/InsightsWhitepapers/Reaping-the-Benefits-of-the-Internet-of-Things.pdf
  • Sankaran, A. (2014, February 3). Internet of Things: The next major disruptor for retail. The Future of Commerce. Retrieved April 13, 2014, from http://www.the-future-of-commerce.com/2014/02/03/internet-of-things-retail-disruption/
  • Target Corporation. (n.d.). Assets Protection & Loss Prevention. Retrieved April 12, 2014, from Target Career: https://corporate.target.com/careers/career-areas/assets-protection-loss-prevention
  • The Economist Intelligence Unit. (2015). CEO Briefing 2015: From Productivity to Outcomes. Accenture Strategy. Retrieved April 13, 2015, from http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-CEO-Briefing-2015-Productivity-Outcomes-Internet-Things.pdf
  • Wonnagy, A. (2011, December 21). Five Retail Trends Driving Wi-Fi. Retrieved from Revolution Wi-Fi: http://revolutionwifi.blogspot.com/2011/12/5-retail-trends-driving-wi-fi.html
  • Yan, L., Zhang, Y., Yang, T. L., & Ning, H. (2008). The Internet of Things: From RFID to the Next Generation Pervasive Networked Systems. Auerbach Publications. Retrieved April 11, 2015, from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=26480
  • Zhuang, Y., Cappos, J., Rappaport, T., & McGeer, R. (2013). Future Internet Bandwidth Trends: An Investigation on Current and Future Disruptive Technologies. New York: Polytechnic Institute of NYU. Retrieved from http://www.nycmedialab.org/wp-content/uploads/2014/04/tr-cse-2013-04-1.pdf

About the Author

Christopher Furton

is an Information Technology professional with over 12 years in the industry.  He attended the University of Michigan, earning a B.S. in Computer Science, and recently completed an M.S. in Information Management at Syracuse University.  His career includes managing small- to medium-size IT infrastructures, service desks, and IT operations.  Over the years, Christopher has specialized in cyber security while working within the Department of Defense and the United States Marine Corps.  His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management.  He holds active IT certifications including the CISSP, CEH, ITIL Foundations, Security+CE, and Network+CE.


IT Capital Planning: Enterprise Architecture and Exhibit 300 processes for the CDC and NNSA

posted May 15, 2015, 8:00 AM by Christopher Furton   [ updated Dec 13, 2015, 10:34 AM ]

Written by: Christopher Furton

Abstract

This paper explores two topics relating to IT capital planning and compares the corresponding processes in place at two federal agencies. The Enterprise Architecture and Exhibit 300 business case processes are reviewed for the Centers for Disease Control and Prevention (CDC) and the National Nuclear Security Administration (NNSA). The findings are that both agencies have programs in place to address Enterprise Architecture and the Exhibit 300; however, the amount of information made public varies, which prevents a fully level comparison. Regardless, this paper explores the agencies’ programs, highlighting the positive aspects and growth opportunities of each while evaluating the overall IT capital planning posture.

IT Capital Planning: Enterprise Architecture and Exhibit 300 processes for the CDC and NNSA
    This paper will explore two federal agencies in relation to two capital planning topics. The first agency, the Centers for Disease Control and Prevention (CDC), is an agency within the Department of Health and Human Services (DHHS). The CDC “strives to improve the quality of people’s health in the U.S. and worldwide through an integrated health protection goal approach” (Centers for Disease Control and Prevention, 2008b). The second agency, the National Nuclear Security Administration (NNSA), is an agency within the Department of Energy (DoE). The NNSA implements programs supporting major national security endeavors involving nuclear weapons, nuclear nonproliferation, and safe and effective nuclear propulsion for the U.S. Navy (U.S. Department of Energy, 2011a).

    The CDC and the NNSA are nearly equivalent in financial size, with similar budget requests (U.S. Department of Health and Human Services, 2011; U.S. Department of Energy, 2011a). This allows for an appropriate comparison between the agencies on the topics of Enterprise Architecture (EA) and Exhibit 300 submission trends. This paper will begin each topic area with a brief overview of the topic and then provide detailed information about how each agency approaches the concepts. Each topic area ends with a section comparing and contrasting the two agencies, giving insight into the effectiveness and efficiencies discovered. After both topic areas, the conclusion evaluates both agencies overall and provides recommendations for increased performance.

    To adequately analyze each agency’s EA and Exhibit 300 processes, occasional review of higher level policy within the Department of Health and Human Services and the Department of Energy may be necessary. References to the agency’s higher-level organizations will be kept minimal; however, by nature of both topics, some elements are inherited from senior organizations and are directly applicable and must be considered.

Topic Area 1: Overview of Enterprise Architecture

The Clinger-Cohen Act of 1996 tasked the Chief Information Officer (CIO) of each agency with developing an information technology architecture. To meet this requirement, each agency must develop an architecture that aligns with its cabinet-level department’s architecture and the top-level Federal Enterprise Architecture. To achieve this, the Federal CIO Council chose a segmented approach that allows departments and agencies to develop architectural segments that integrate into the larger enterprise architecture framework. This method is designed to facilitate the effective and efficient coordination of the Federal Government’s lines of business across the elements of enterprise architecture (The Chief Information Officers Council, 1999).

    The Federal Enterprise Architecture (FEA) encompasses the Federal Government’s approach to enterprise architecture providing a framework for cross-agency investment analysis, management, and use. This is accomplished by providing five inter-related reference models incorporated into the Consolidated Reference Model (CRM). The reference models are:
    (1) Performance Reference Model (PRM)
    (2) Business Reference Model (BRM)
    (3) Service Component Reference Model (SRM)
    (4) Data Reference Model (DRM)
    (5) Technical Reference Model (TRM).
Using these reference models, agencies can identify gaps, redundancies, and opportunities for collaboration across three general profiles: Geospatial Profile, Records Management Profile, and Security and Privacy Profile (FEA-SPP Working Group, 2010).

    To achieve these goals and requirements, each agency develops an EA program to drive improvements in agency mission performance and other measurement areas. The program helps organize and clarify strategic goals, investments, business solutions, and measurable performance improvements. This forms the results-oriented Performance Improvement Lifecycle, comprising three phases: architect, invest, and implement. This simple value chain links EA with IT investment management and project execution. According to the Office of Management and Budget (2008a), the chief architects and staff support business stakeholders through continuous performance improvement to:
  •  “identify and prioritize enterprise segments and opportunities to improve mission performance, linked to agency goals and objectives;
  •  plan a course of action to close performance gaps, using common or shared information assets and information technology assets;
  •  allocate agency resources supporting program management and project execution;
  •  measure and assess performance to verify and report results; and
  •  assess feedback on program performance to enhance architecture, investment, and implementation decisions” (p. 4).
    To help agencies with Enterprise Architecture compliance, the Office of Management and Budget developed the EA Assessment Framework version 3.0 (EAAFv3). The EAAFv3 provides a mechanism for agencies to conduct internal diagnostics as well as for oversight at the OMB level. The OMB reviews agency self-assessments and provides detailed feedback on each criterion and an overall final assessment rating. The assessment focuses on three capability areas: completion of the enterprise architecture, use of the EA to drive improved decision-making, and results achieved in improving the agency’s program effectiveness. Each capability area is graded on a color scale where red is bad, yellow is moderate, and green is good (Office of Management and Budget, 2008a).
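The three-area, color-coded grading described above can be sketched in code. This is only an illustration: the capability areas and the red/yellow/green scale come from the EAAFv3 description, but the numeric scores and thresholds below are hypothetical and do not reflect OMB's actual scoring rubric.

```python
# Hypothetical sketch of the EAAFv3 color-scale grading; the thresholds
# are assumptions for illustration, not OMB's published criteria.

CAPABILITY_AREAS = ("completion", "use", "results")

def color_rating(score: float) -> str:
    """Map an assumed 0-5 capability score to a color rating."""
    if score >= 4.0:
        return "green"   # good
    if score >= 2.5:
        return "yellow"  # moderate
    return "red"         # bad

def assess(scores: dict) -> dict:
    """Rate each capability area of an agency self-assessment."""
    return {area: color_rating(scores[area]) for area in CAPABILITY_AREAS}

# Example: an agency strong on completion, weaker on demonstrated results.
ratings = assess({"completion": 4.2, "use": 3.0, "results": 1.8})
```

The point of the sketch is the shape of the assessment, not the numbers: each capability area is rated independently, and the per-area colors (plus an overall rating) form the OMB feedback.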

    To aid the analysis of the federal agencies in this paper, a more granular view of Enterprise Architecture is needed. The FEA framework partitions a given architecture into business, data, applications, and technology architectures. The current state of an agency across these four partitions is referred to as the current, or as-is, view. The agencies develop target architectures, or to-be views, based on vision, principles, and overall strategic direction. Progress towards the to-be view is governed by standards and transitional processes. The transitional processes contain capital IT investment planning and decision making, while the standards contain security, data, application, and technology standards (The Chief Information Officers Council, 1999). Figure 1 provides a graphical representation of the Federal Enterprise Architecture Framework.

Figure 1: Federal Enterprise Architecture Framework (FEAF)


    In summary, the Federal Enterprise Architecture Framework focuses on the business requirements of an agency. It defines the current state of the agency, and architects produce artifacts based on standards and processes, ultimately transitioning the agency to the future state. The overall EA framework is developed using segments, where each lower-level bureau, agency, or department owns a slice of the overall architecture. Governance bodies such as the Office of Management and Budget provide oversight by reviewing agency-conducted self-assessments and providing feedback on overall progress based on completion, use, and results obtained from the EA. The end-state goal of EA is to earn value by promoting interoperability, sharing resources, reducing costs, and supporting capital IT investment planning (The Chief Information Officers Council, 1999).

Centers for Disease Control and Prevention – Enterprise Architecture

    In order to discuss the Centers for Disease Control and Prevention’s Enterprise Architecture program and progress, a brief discussion about the Department of Health and Human Services program is needed. Because EA forms a hierarchical structure, the DHHS framework maps directly to the Federal Enterprise Architecture. This is accomplished through an eight layered model as depicted in figure 2 (Centers for Disease Control and Prevention, 2007):
    1) Strategy layer – described from the HHS Strategic Plan
    2) Stakeholder layer – identifies partners and recipients of HHS products and services
    3) Business layer – business operations of HHS that support the mission
    4) Data layer – information that HHS needs to perform the business operations
    5) Systems & Services layer – automated systems and services that create or manipulate data or information
    6) Technology layer – hardware, software, networks, and standards comprising the technology infrastructure
    7) Workforce layer – job categories associated with performing critical HHS work
    8) Facilities layer – geographical distribution and locations where work is performed.

Figure 2: Federal Enterprise Architecture Framework at Health and Human Services


    The eight-layer HHS model encompasses the five reference models of the FEA. Specifically, the Performance Reference Model’s uniquely tailored performance indicators map directly to the HHS Strategy layer. The FEA Business Reference Model’s lines of business map to the HHS Business layer. The Service Component Reference Model’s service domains and types map to the HHS Systems & Services layer. The Data Reference Model’s business-focused data standardization maps to the HHS Data layer. Lastly, the Technical Reference Model’s component interfaces map to the HHS Technology layer (Centers for Disease Control and Prevention, 2007). This mapping of the performance- and business-driven FEA reference models to the HHS architecture ensures that HHS and its subordinate agencies are compliant with overall EA objectives. See figure 2 for a visual representation.

    At the CDC level of HHS, the Unified Process (UP) is the overarching framework and methodology that contains the tools to help project managers follow best practices. The CDC Unified Process contains process guides for enterprise architecture as well as other essential elements such as capital planning, performance measurement, and strategic planning. The Unified Process also contains Practice Guides that provide guidance to teams about key project management practices as well as templates and checklists to facilitate consistency and reduce re-work. The Unified Process can be applied across the agency to address informatics projects as well as general construction and campaign projects to increase efficiency and effectiveness of project management processes (Centers for Disease Control and Prevention, 2012c).

    The process guide for EA outlines the process for CDC when developing or updating information systems. Specifically, the process guide initiates an Enterprise Architecture Review process in the Initiating Phase of all projects which helps ensure that project sponsors work with the EA activity team prior to approving a project. This critical process helps sponsors and project managers ensure that the purpose of their projects properly align with the enterprise architecture and the overall CDC strategic plan. The process guide also draws the correlations between enterprise architecture and other information technology processes, discusses timing of EA during the overall project lifecycle, and associated costs in time and dollars (Centers for Disease Control and Prevention, 2012c).

    To support execution of the EA Process Guide and compliance with the HHS and FEA architectures, the CDC developed the Enterprise Architecture Development Method (ADM). This method outlines six steps: initiation, baseline analysis, target analysis, transition planning, transformation, and compliance. These steps progress from early activities within the initiation step toward creation of a current view, transition to a future view, and end at compliance and continuous monitoring (Decker & Fitzpatrick, 2007).

    As part of the overall system lifecycle, the CDC has adapted the OMB prescribed Performance Improvement Lifecycle (PIL) for linking goals to IT results. Using the phases of Architect, Invest, and Implement, the CDC has a solid process in place to validate and provide “assurances that a project or investment is addressing specific capability gaps and providing intended performance improvements” (Centers for Disease Control and Prevention, 2009).

    As of October 2008, the CDC was well on track to meet the requirements set forth for segmented architecture. The CDC has “all ready linked all major, tactical, and supporting investments to an EA defined segment” (Centers for Disease Control and Prevention, 2008a). However, no public record of an evaluation of the CDC Unified Process on EA is available from the Government Accountability Office (GAO) or the HHS Office of the Inspector General (OIG).

    One significant contributing factor to the success the CDC has achieved is the structure and dedication of resources. The CDC has established an Enterprise Architecture Program Management Office (PMO) that oversees and provides guidance to CDC program managers. In addition, the EA PMO established the Enterprise Architecture Review Board (EARB), responsible for verifying that project proposals meet the requirements outlined by the EA. The CDC has established EA architecture teams throughout the organization with contracted expertise in supporting segment architecture. Additionally, the CDC has developed EA working groups (AWGs) composed of members from across the CDC components that focus on specific EA-related topics. For example, the topic of open source software in system development was addressed by an AWG, resulting in specific policy for the use of open source software to reduce cost. Lastly, the CDC has developed standing AWGs for security and for software management; these continuous working groups address those two concerns and their impact on the overall enterprise architecture (Centers for Disease Control and Prevention, 2007).

National Nuclear Security Administration – Enterprise Architecture

    As with the CDC, in order to properly evaluate the National Nuclear Security Administration’s Enterprise Architecture program, some discussion of their higher organization will be needed. The Department of Energy’s Information Resource Management (IRM) Strategic Plan identifies EA as the fourth objective of the second strategic goal. Specifically, the IRM, developed in 2009, has an objective of developing and maintaining an EA that “allows the Department’s Senior Leaders to make informed decisions when managing Information Technology” (U.S. Department of Energy, 2009).

    In April 2011, the DOE Office of the Chief Information Officer published the DOE Enterprise Architecture Enterprise Transition Plan. This plan describes the DOE’s high-priority initiatives for migrating the department to the target (or to-be) architecture. To accomplish this transition, the DOE established the EA Governance Framework, containing processes and oversight elements at the departmental and staff office levels. The specific entities involved in overseeing the development of the EA are the IT Council (ITC) and the Architecture Review Board (ARB). The review board comprises senior members of each of the Program and Staff Offices and oversees the EA work groups (U.S. Department of Energy, 2009). The DOE also appoints a Chief Architect and establishes the Enterprise Architecture Working Group to coordinate integration of the respective program and staff office architectures into the DOE EA (U.S. Department of Energy, 2008).

    The DOE has established baseline architectures based on the Strategy, Business, Services, Data, and Technology layers. Furthermore, EA baseline segments are identified that align with the FEA reference models. The 16 segments identified by the DOE include core mission segments, business service segments, and enterprise service segments. At the center of the EA program is the Business Intelligence Tool and Enterprise Repository, a web-based application that provides EA and portfolio management reporting and analytical functions to support stakeholders and governing bodies. It captures EA data such as application inventories, segment data, and dependencies across systems (U.S. Department of Energy, 2009).

    At the NNSA level, the CIO implementation plan published in 2012 defines a strategic goal for “enhancing our capabilities” by applying “an enterprise architecture approach to redesign the NNSA unclassified networks into one integrated NSE network” (National Nuclear Security Administration, 2011a). Another strategic goal is the “development of enterprise governance process for IT investments” by “establishing a process to identify clear roles and responsibilities for IT projects and ensure alignment with the NNSA Enterprise Architecture” (National Nuclear Security Administration, 2011a).

    The NNSA has limited information publicly available about its current progress with Enterprise Architecture. However, the NNSA has contracted with Federated IT, which develops and maintains its IT strategy and develops its Enterprise Architecture (Federated IT).

Compare and Contrast of Enterprise Architecture

    Based on the information made available to the public, the Centers for Disease Control and Prevention has established an effective program for managing capital expenses, including Enterprise Architecture. In contrast, the National Nuclear Security Administration’s program is in its infancy and has a long way to go before being truly effective and efficient. Minimally, both agencies have at least acknowledged the need for and value of an EA program in the context of IT capital planning.

What did the CDC do right?

    The CDC fully understands EA and has taken sufficient steps toward integrating it into its overall capital planning practices. Specifically, the Unified Process is the key to success for the CDC (Centers for Disease Control and Prevention, 2012c). By having a centralized function for integrating EA, program management, budgeting, and Capital Planning and Investment Control, the CDC has demonstrated a level of maturity beyond that of the NNSA. In terms of EA, the CDC and the DHHS have established direct linkages to the FEA without reinventing the wheel (Centers for Disease Control and Prevention, 2007). The CDC’s Architecture Development Method further emphasizes the role that the CDC plays in the overall DHHS architecture. The ADM focuses on providing a method to develop segmented architecture supporting the current view (Centers for Disease Control and Prevention, 2007). This aspect appears to be missing from the NNSA program. Lastly, the DHHS has committed $57.5 million toward enterprise architecture (United States Government, 2012), just over three times as much as the NNSA.

CDC growth opportunities

    In terms of Enterprise Architecture, the CDC is on the right track. Adequate financial commitment and emphasis have been placed on EA within the IT Strategic Plan (Centers for Disease Control and Prevention, 2008b). With the great progress already made, the agency can now focus on continual process improvement. Additionally, the CDC architecture defines two layers that have not been further developed: the Workforce layer and the Facilities layer (Centers for Disease Control and Prevention, 2007). This unique addition by the DHHS offers an interesting mix of human capital with IT capital planning, which may prove useful to the CDC in the future.

What did the NNSA do right?

    According to the Federal IT Dashboard, the DOE has committed $18.6 million toward Enterprise Architecture (United States Government, 2012). Although the specific investments for those dollars could not be located within the dashboard, the commitment of financial resources at the department level matches the IT strategy EA goals declared by both the DOE (U.S. Department of Energy, 2009) and the NNSA (National Nuclear Security Administration, 2011a). The CDC’s and the NNSA’s overall annual budgets are nearly identical, at $11.2 billion (U.S. Department of Health and Human Services, 2011) and $11.0 billion (U.S. Department of Energy, 2011c) respectively; however, the amount the NNSA contributed to EA is substantially less than the CDC’s. Lastly, the DOE has assigned the CIO responsibility for the EA program (U.S. Department of Energy, 2008). Assigning responsibility is essential to accomplishing the task, and the DOE is on the right path.

NNSA growth opportunities

    The NNSA is planning to take the leap into enterprise architecture with its 2012-2016 strategy. Strategic goal 1-1’s objectives include applying an enterprise architecture approach to the current unclassified systems, and goal 3-4 focuses on enterprise governance, establishing clear roles and responsibilities for IT projects, and alignment with the NNSA EA (National Nuclear Security Administration, 2011a). The agency has acknowledged that its “current IT investments within NNSA are loosely controlled at best” (National Nuclear Security Administration, 2011a) and has made development of an EA a goal for the next five years.

    The DOE Inspector General conducted an audit of the overall DOE Enterprise Architecture program in 2005 with negative results. The IG noted that “the Department had not fully defined its current or future information technology requirements, essential elements if an architecture is to be an effective tool in managing technology investments” (U.S. Department of Energy, 2005). Furthermore, the NNSA did not provide comments on the draft IG report, which does not reflect well upon the agency (U.S. Department of Energy, 2005).

    Lastly, the NNSA received scrutiny from the GAO in February of 2011 in an audit on Nuclear Weapons. Among the findings, the GAO found that the “NNSA lacks complete data on (1) the condition and value of its existing infrastructure, (2) cost estimates and completion dates for planned capital improvement projects, (3) shared use facilities within the enterprise, and (4) critical human capital skills” (U.S. General Accounting Office, 2011). This finding further emphasizes the need for the NNSA to establish and utilize an Enterprise Architecture Program.

Topic Area 2: Overview of Exhibit 300s

    The Exhibit 300 is an essential part of the overall IT capital planning process. The Exhibit 300 “is designed to coordinate OMB’s collection of agency information for its reports to the Congress required” by federal law and “to ensure the business case for investments are made and tied to the mission statements, long-term goals and objectives, and annual performance plans” (Office of Management and Budget, 2008b). The Exhibit 300 is a one-stop shop for IT management issues pertaining to IT investments and is submitted annually along with an agency’s budget submission. The publicly releasable Exhibit 300 must be posted to the agency website within two weeks of the release of the President’s Budget (Office of Management and Budget, 2008b).

    In the overall IT capital planning and control process, the Exhibit 300 works together with the agency’s Enterprise Architecture program and the Exhibit 53. The Exhibit 53, which will not be discussed in this paper, is a tool for reporting the funding of the portfolio. The Exhibit 300 is further broken down into two documents, the 300A and the 300B. Exhibit 300A provides detailed justifications for major IT investments, while Exhibit 300B manages the execution of those investments through the life cycle. “By integrating the disciplines of architecture, investment management, and project implementation, these programs provide the foundation for sound IT management practices, end-to-end governance of IT capital assets, and the alignment of IT investments with an agency’s strategic goals” (Office of Management and Budget, 2011).

The Exhibit 300A is broken down into four sections:
    1. Section A: Overview
    2. Section B: Investment Detail
    3. Section C: Summary of Funding (Budget Authority for Capital Assets)
    4. Section D: Acquisition/Contract Strategy (All Capital Assets)

    Section A provides a general overview of the investment with a name and unique investment identifier. Section B further breaks down the investment, requiring a brief summary, a description of how it addresses existing performance gaps, and its relation to specific legislation. This section also includes progress toward major milestones and accomplishments as well as anticipated out-year budget requests. Lastly, Section B includes contact information for those associated with the investment. Section C is a table listing financial figures for previous years, the current year, and future years. Finally, Section D outlines contract information associated with the investment, including statuses, effective dates, descriptions, and earned value management data (Office of Management and Budget, 2011).

The Exhibit 300B is broken down into six sections:
     1. Section A: General Information
     2. Section B.1: Projects
     3. Section B.2: Activities
     4. Section B.3: Project Risk
     5. Section C.1: Operational Performance Information
     6. Section C.2: Operational Risk

    Section A provides a general overview of the investment with a name and unique identifier. Section B focuses on project execution data: B.1 lists project and Program Manager information, B.2 lists all activities occurring during the current fiscal year in terms of key deliverables and earned value achieved, and B.3 includes risk assessments for the investment, including probability, impact, and mitigation plans. Section C focuses on operational data: C.1 lists operational performance metrics as either results-specific or technology-specific, and C.2 lists all significant open operational risks, including risk impact and mitigation plans (Office of Management and Budget, 2011).

Centers for Disease Control and Prevention – Exhibit 300s

    To better understand the Centers for Disease Control and Prevention’s exhibit 300 content, the current year (FY12) Exhibit 300 and the future year (FY13) Exhibit 300A&B documents for two major IT initiatives were reviewed. The first initiative reviewed is the CDC Information Technology Infrastructure, with total FY2012 spending of $78.5M. The second is the CDC National Electronic Disease Surveillance System, with total FY2012 spending of $12.5M.

    The CDC Information Technology Infrastructure initiative was reviewed by comparing the FY11 Exhibit 300 (Centers for Disease Control and Prevention, 2012a) with the 2009-2012 CDC IT Strategic Plan (Centers for Disease Control and Prevention, 2008b). Direct correlations were found between the strategic plan and the exhibit 300 on several topics. First, the CDC defined a strategic goal within the Information Technology Foundation as objective 2: “provide office-based and mobile computing power and data storage capacity to support CDC’s mission and operations” (Centers for Disease Control and Prevention, 2008b). This goal is reflected in the Exhibit 300 by acknowledging the need for a mobile CDC workforce and the accomplishment of implementing CDCMail, a private cloud model mail system (Centers for Disease Control and Prevention, 2012a).

    In addition to mobile computing, the CDC exhibit 300 has another correlation to the Strategic Plan within Goal Area 5 – Collaborative Work and Innovation. Objective 1 of this goal is to “accelerate innovation in public health by leveraging information technology and processes that support collaborative work” (Centers for Disease Control and Prevention, 2008b). The Exhibit 300 reflects that goal with the accomplishment of establishing a videoconferencing system connecting eight CDC locations (Centers for Disease Control and Prevention, 2012a).

    The FY 2012 Exhibit 300B for the CDC’s Information Technology Infrastructure initiative provided excellent insight on performance with the addition of Key Performance Indicators and Performance Metrics. All performance metrics for the initiative were met or exceeded, including cost per user ($5822/user), number of expanded service offerings (+5), number of users supported per IT staff member (78), and service level agreement timeliness (100%) (Centers for Disease Control and Prevention, 2011a). As with the FY11 Exhibit 300, the FY12 Exhibit 300A&B has several correlations to the overall IT strategy, including customer-centric IT services, improving service delivery, and providing scientists with needed tools via service offerings (Centers for Disease Control and Prevention, 2008b).
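Read mechanically, a 300B performance table is a comparison of reported actuals against targets, where the direction of "better" depends on the metric. A minimal Python sketch: the metric names and actual values come from the exhibit figures above, but the target values are illustrative assumptions, since the published baselines are not reproduced here.

```python
# Hypothetical sketch of reading an Exhibit 300B performance table.
# Actual values are from the FY12 exhibit discussed above; the targets
# are assumed for illustration and are NOT the published baselines.

METRICS = [
    # (name, target, actual, lower_is_better)
    ("cost per user ($)",          6000, 5822, True),
    ("expanded service offerings",    3,    5, False),
    ("users per IT staff member",    70,   78, False),
    ("SLA timeliness (%)",          100,  100, False),
]

def metric_met(target, actual, lower_is_better):
    """A metric is 'met or exceeded' when the actual lands on the
    favorable side of the target for that metric's direction."""
    return actual <= target if lower_is_better else actual >= target

results = {name: metric_met(t, a, lo) for name, t, a, lo in METRICS}
print(results)
```

Under these assumed targets, every metric evaluates as met or exceeded, matching the exhibit's reported result.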

    The next CDC initiative evaluated is the National Electronic Disease Surveillance System (NEDSS). The FY11 Exhibit 300 and the FY12 Exhibit 300A&B both emphasize the role that this project has in relation to the overall CDC mission and vision. Furthermore, the NEDSS system directly contributes to the core mission to “enable CDC to effectively share knowledge…and deliver health information and interventions using customer-centered… strategies to protect and promote the health of diverse populations” (Centers for Disease Control and Prevention, 2008b). This system also aligns with the CDC Director’s strategic priority to “strengthen epidemiology and surveillance capability” (Centers for Disease Control and Prevention, 2012b).
    Despite the overwhelmingly positive picture painted by all of the Exhibit 300s analyzed, the FY12 Exhibit 300A&B shows potentially poor performance based on earned value management within the NEDSS system. Specifically, comparing planned work completed with actual work completed shows several activities that have not accomplished the anticipated amount of work. However, in every instance the actual cost was significantly lower than the planned cost (Centers for Disease Control and Prevention, 2011b). Spending that lags the plan while the work also lags suggests schedule slippage rather than efficiency, and this data should raise concern among CDC leadership.
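The pattern described, less work completed than planned at less than planned cost, is exactly what the standard earned value indices surface. A hypothetical sketch (the figures are invented for illustration, not NEDSS data):

```python
# Standard earned value management indices. PV (planned value) is the
# budgeted cost of work scheduled, EV (earned value) the budgeted cost of
# work actually completed, and AC the actual cost incurred.

def evm_indices(pv, ev, ac):
    spi = ev / pv   # schedule performance index: < 1 means behind schedule
    cpi = ev / ac   # cost performance index: > 1 means under budget
    return spi, cpi

# Hypothetical activity: 60% of planned work done, half the planned cost spent.
spi, cpi = evm_indices(pv=100_000, ev=60_000, ac=50_000)
print(f"SPI={spi:.2f}, CPI={cpi:.2f}")
```

Here SPI is 0.6 and CPI is 1.2: the activity looks healthy on cost alone, which is why an under-spend reported without the schedule index can mask a slipping project.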

National Nuclear Security Administration – Exhibit 300s

    To better understand the National Nuclear Security Administration’s Exhibit 300 content, two current FY12 Exhibit 300A&B documents were retrieved from the Federal IT Dashboard (United States Government, 2012) and compared to the agency’s 2012-2016 IT strategic planning documents. The first initiative reviewed was the NNSA’s Enterprise Resource Planning (ERP) system, with total FY2012 spending of $8.3 million. The second was the Advanced Simulation and Computing program named Sequoia, with total FY2012 spending of $101.9 million.

    The Exhibit 300A&B for the ERP initiative described a steady-state system that has already achieved a 1.5-year return on investment. The document directly referenced the Department of Energy strategic goal of building a “fully integrated resource management strategy that supports mission needs” as well as the NNSA goal to “create an efficient, effective, and less risk-averse enterprise through simplified business and management processes” (National Nuclear Security Administration, 2011c). The Exhibit 300 defines six gaps that were closed by the ERP initiative: Year 2000 (Y2K) issues, manual workflows, manual business processes and operations, data redundancy and inconsistency, cost reporting shortfalls, and costly technical migrations of legacy systems.

    When comparing this initiative to the NNSA 2012-2016 CIO Implementation Plan (National Nuclear Security Administration, 2011a), none of the plan’s strategic goals directly reference the initiative. However, the initiative could play a minor, indirect role in achieving some of those goals. For example, strategic goal 1-5 is to “Improve Business Processes” (National Nuclear Security Administration, 2011a). Because the ERP initiative aims to reduce manual workflows by creating electronic or automated processes, it could improve overall business processes and contribute to accomplishing that goal.
    The next Exhibit 300A&B reviewed is the NNSA initiative for the Advanced Simulation and Computing (ASC) program Sequoia. This initiative is an “uncertainty quantification and weapon science resource” utilized by several weapons laboratories (National Nuclear Security Administration, 2011b). Because of the classified nature of this system, very little information was provided. However, the Exhibit 300A&B provided general information showing that the project manager is qualified to hold the position, that an analysis of alternatives has been conducted, and that the system is not compatible with cloud computing. Furthermore, the Exhibit 300A&B showed a recent update to the project’s risk register. In terms of project performance, the initiative is currently on schedule and under budget.

    The Sequoia initiative’s performance metric attributes table shows that the program is substantially meeting its performance indicators. For example, the “intended use” indicator had a target monthly usage of 20% while actual results were 74% usage. Additionally, the “percent time available” metric achieved 99% uptime against a 75% target. Both of these performance indicators emphasize the success of the program.

    In terms of strategic planning and Sequoia’s overall role in the capital planning process, the program has earned a “green” rating, depicting a successful program. The initiative fits into the agency’s overall strategic plan by contributing to strategic goal 1-5: improving business processes. The Exhibit 300A&B briefly mentions the role of Enterprise Architecture, but detailed information was removed due to classification (National Nuclear Security Administration, 2011b).

    Despite the quality of the Exhibit 300A&B documents, the NNSA has identified very few major IT investments. Within the Federal IT Dashboard (United States Government, 2012), the NNSA reports only four investments: two simulation investments, one classified network infrastructure investment, and the ERP investment. Yet the FY2012 budget document identifies a $1 billion IT capital investment portfolio and nearly $500 million in other annual IT expenditures (U.S. Department of Energy, 2011b, p. 30). Initial review therefore suggests that significant IT capital investments are not being reported in the Federal IT Dashboard.

Compare and Contrast of Exhibit 300s

    Reviewing both agencies’ exhibit 300s and assessing their overall contribution to the IT capital planning process was challenging due to insufficient information. Both agencies removed significant information from the public versions of the exhibits. None of the reviewed exhibit 300s contained detailed information about how the initiative fits into the Enterprise Architecture or about future budget year projections. With the exception of the redactions, however, both agencies had substantial content in their business cases and supporting documentation.

    One aspect worth noting is the variance in quantity and quality of information provided between investment initiatives. Within the CDC, the most expensive capital investment was the IT infrastructure project, yet the corresponding business case was less substantive than those of the other initiatives. Similarly, the NNSA’s most expensive initiative was the Sequoia program, but its business case was also less thorough than that of the low-cost ERP program. When reviewing the business cases for quality, it is the author’s opinion that both agencies provided only the minimal data required by law.

Conclusion

    In conclusion, the Centers for Disease Control and Prevention and the National Nuclear Security Administration both have programs in place to address IT capital planning. However, the agencies’ programs cannot be fairly compared with each other to determine which is better. Due to the nature of the NNSA, very little information has been made publicly available about its enterprise architecture program. In contrast, the CDC has made its entire investment control program, known as the Unified Process (UP), available to the public. Ample information shows that both agencies are addressing enterprise architecture despite the lack of detailed public information.

    The review of exhibit 300 documents from both agencies also revealed a similar trend. Both agencies prepare the exhibits as required by law but the details of those documents are removed before being uploaded to the public dashboard repository. This prevents thorough analysis. However, both agencies reference the overall capital planning process including Capital Planning and Investment Control (CPIC) and Enterprise Architecture.
    This paper provided insight into the overall capital planning process employed by two federal government agencies. A brief overview of Enterprise Architecture and the Business Case was presented along with a description of how each agency addressed the requirements. When possible, comparisons between agencies provided insight into best practices and identified some shortcomings in processes. While each agency approached the two elements of capital planning in different ways, both agencies successfully achieved the end-state goal of matching strategic goals to IT capital investments.

References
  • Centers for Disease Control and Prevention. (2007, June 22). Enterprise Architecture at CDC. Retrieved March 10, 2012, from Centers for Disease Control and Prevention Homepage: http://www2a.cdc.gov/cdcup/library/presentations/ea/default.asp
  • Centers for Disease Control and Prevention. (2008a, October). EA Segment Architecture. Retrieved March 12, 2012, from Project Management Newsletter: http://www2a.cdc.gov/cdcup/library/newsletter/CDC_UP_Newsletter_v2_i10.pdf
  • Centers for Disease Control and Prevention. (2008b). Information Technology Strategic Plan FY 2009-2012. Washington, DC: Office of the Chief Information Officer.
  • Centers for Disease Control and Prevention. (2009, January). Enterprise Architecture Alignment with EPLC. Retrieved March 12, 2012, from Project Management Newsletter: http://www2.cdc.gov/cdcup/library/newsletter/CDC_UP_Newsletter_v3_i1.pdf
  • Centers for Disease Control and Prevention. (2011a, February 28). CDC Information Technology Infrastructure - FY 12 Exhibit 300A&B. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/investment?buscid=258
  • Centers for Disease Control and Prevention. (2011b, February 28). National Electronic Disease Surveillance System - FY12 Exhibit 300. Retrieved March 22, 2012, from Federal IT Dashboard: National Electronic Disease Surveillance System - Current Exhibit 300
  • Centers for Disease Control and Prevention. (2012a, February 22). CDC Information Technology Infrastructure - Current Exhibit 300. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/investment?buscid=258
  • Centers for Disease Control and Prevention. (2012b, February 24). National Electronic Disease Surveillance System - Current Exhibit 300. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/investment?buscid=262
  • Centers for Disease Control and Prevention. (2012c). About the CDC Unified Process. Retrieved March 10, 2012, from Centers for Disease Control and Prevention Homepage: http://www2.cdc.gov/cdcup/library/other/about_up.htm
  • Decker, A., & Fitzpatrick, J. (2007). How the CDC Enterprise Architecture Development Methodology Can Help You. CDC.
  • FEA-SPP Working Group. (2010). Federal Enterprise Architecture Security and Privacy Profile. Washington, DC: Government Printing Office.
  • Federated IT. (n.d.). Project Profile - Department of Energy (DOE) National Nuclear Security Administration (NNSA). Retrieved March 21, 2012, from Federated IT Corporate Site: http://federatedit.com/services/information-assurance/nnsa
  • National Nuclear Security Administration. (2011a). Implementation Plan 2012-2016. Office of the Chief Information Officer.
  • National Nuclear Security Administration. (2011b, February 28). NNSA ASC LLNL Sequoia Platform - FY12 Exhibit 300. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/investment?buscid=571
  • National Nuclear Security Administration. (2011c, February 28). NNSA Y12 ERP - FY12 Exhibit 300. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/investment?buscid=575
  • Office of Management and Budget. (2008a). Improving Agency Performance Using Information and Information Technology.
  • Office of Management and Budget. (2008b). Planning, Budgeting, Acquisition, and Management of Capital Assets. In Circular No. A-11. Washington, DC: Government Printing Office.
  • Office of Management and Budget. (2011). FY13 Guidance on Exhibit 300 - Planning, budgeting, acquisition, and management of IT Capital Assets. Washington, DC: Government Printing Office.
  • The Chief Information Officers Council. (1999). Federal Enterprise Architecture Framework. Washington, DC: Government Printing Office.
  • U.S. Department of Energy. (2005). Audit Report: Development and Implementation of the Department's Enterprise Architecture. Inspector General. Washington, DC: Government Printing Office.
  • U.S. Department of Energy. (2008). Order 200.1A - Information Technology Management. Washington, DC: Office of the Chief Information Officer.
  • U.S. Department of Energy. (2009). Information Resources Management Strategic Plan 2009-2011. Washington, DC: Office of the Chief Information Officer.
  • U.S. Department of Energy. (2011a). Department of Energy FY 2012 Congressional Budget Request - National Nuclear Security Administration. Washington, DC: Government Printing Office.
  • U.S. Department of Energy. (2011b). FY 2012 Congressional Budget Request - National Nuclear Security Administration. Washington, DC: Government Printing Office.
  • U.S. Department of Energy. (2011c). FY 2013 Summary Control Table by Organization. Washington, DC: Government Printing Office.
  • U.S. Department of Health and Human Services. (2011). FY2012 Centers for Disease Control and Prevention Justification for Estimates for Appropriations Committees. Washington, DC: Government Printing Office.
  • U.S. General Accounting Office. (2011). NNSA Needs More Comprehensive Infrastructure and Workforce Data to Improve Enterprise Decision-making. Washington, DC: General Accounting Office.
  • United States Government. (2012). Portfolio. Retrieved March 22, 2012, from Federal IT Dashboard: http://www.itdashboard.gov/

About the Author

Christopher Furton

is an Information Technology Professional with over 12 years in the industry. He attended the University of Michigan, earning a B.S. in Computer Science, and recently completed an M.S. in Information Management from Syracuse University. His career includes managing small- to medium-size IT infrastructures, service desks, and IT operations. Over the years, Christopher has specialized in cyber security while working within the Department of Defense and the United States Marine Corps. His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management. He holds active IT certifications including the CISSP, CEH, ITIL Foundations, Security+CE, and Network+CE.


A Case Study on Effective IS Governance within a Department of Defense Organization

posted May 14, 2015, 4:59 PM by Christopher Furton   [ updated Dec 13, 2015, 10:35 AM ]

Written by Christopher Furton

A Case Study on Effective IS Governance within a Department of Defense Organization
Abstract
    This case study develops influencing factors that should be considered when building an effective information security governance program within a Department of Defense weapons system test and evaluation organization. The influencing factors are then incorporated into an existing governance framework developed by A. Da Veiga and J. H. P. Eloff (2007). The result is a unique framework tailored to the organization, which can be used as the foundation for building a holistic information security program.

A Case Study on Effective IS Governance within a Department of Defense Organization
    With the advancements of technology and the Internet, security of information has become a substantial concern for many companies, non-profits, government agencies, and educational institutions. Although abundant information is available to help organizations establish information security programs, the challenge remains of how to structure those programs to maximize benefits and reduce risk. Information Technology (IT) security governance plays a critical role in integrating business models with IT security models. Through the use of an IT security governance framework, organizations can develop appropriate information security components and align them with their overall strategic, business, and technical objectives and goals (Veiga & Eloff, 2007).
    The IT Security governance framework, along with the selected information security components, lives within the context of the Information Technology Security Architecture (ITSA) Framework developed by Bernard and Ho (2008). The intent of the IT security governance framework is to further explore the first layer of the ITSA Framework and provide organizations with a launching point in developing their security governance programs. Additionally, an important aspect of the IT security governance framework is that each organization can develop a unique framework that fits their needs.
    This case study paper will further explore IT security governance frameworks by assessing influencing factors that must be considered and developing a unique framework applicable to a Department of Defense weapons development program office within the United States Marine Corps. The organization, referred to as Program Office, specializes in testing and evaluation of prototype weapon systems and relies heavily on information systems for office applications, test data collection, report creation, and historical information archiving. The resulting unique framework will set the stage for implementation of an effective Information Security (IS) governance program.

Discussion: Influencing Factors

    As a Federal Government organization, the Program Office must consider several high-level factors when creating its unique framework. Specifically, Congress has passed federal legislation defining legally accepted behavior and requiring organizations to take specific steps to protect personal information. The Gramm-Leach-Bliley (GLB) Act “requires financial institutions to protect the confidentiality and integrity of the personal information of consumers” (Khoo, Harris, & Hartman, 2010). Additional legislation, such as the Sarbanes-Oxley Act (SOX), focuses on corporate financial reporting, where it supports “a simple premise: good corporate governance and ethical business practices are now required by law” (Khoo, Harris, & Hartman, 2010).
    A substantial piece of federal legislation, the Federal Information Security Management Act (FISMA), is an influencing factor that must be considered for the IS governance program. As discussed by Ely (2010), FISMA can be overwhelming and confusing to many organizations. With its emphasis on mandatory monitoring, reporting, control testing, and many other areas, FISMA compliance has spawned a “cottage industry” in which expensive contractors offer assistance meeting the demands of the legislation and supporting audits. FISMA compliance is an overarching requirement that the Program Office must consider when developing its IS governance framework.
    In response to FISMA, the National Institute of Standards and Technology (NIST) developed a series of publications that will be important to the Program Office’s IS governance program. Federal Information Processing Standards Publications (FIPS PUBS) are issued by NIST and require Federal organizations to comply with outlined measures. For example, FIPS PUB 199 requires organizations to categorize their information systems in terms of the potential impact should certain events occur. The impact level is assessed against each security objective from FISMA – confidentiality, integrity, and availability (National Institute of Standards and Technology, 2004). Since the Program Office must be compliant with FISMA, the governance program will need to treat NIST’s FIPS publications as a key component.
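The FIPS PUB 199 scheme can be sketched in a few lines: each security objective receives an impact level, and, per the high-water-mark convention applied when rolling objectives up to a system-level category, the overall level is the highest of the three. The example system below is an assumption for illustration:

```python
# Sketch of FIPS PUB 199 security categorization. Each of the three FISMA
# security objectives gets an impact level; the system-level category is
# the high-water mark (highest) of the three. The example inputs are
# hypothetical, not from any Program Office system.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """Return the high-water-mark impact level across the objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.get)

# e.g. SC = {(confidentiality, moderate), (integrity, high), (availability, low)}
print(categorize("moderate", "high", "low"))  # -> high
```

A single high-impact objective therefore drives the categorization of the whole system, which is what makes the per-objective assessment step consequential.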
    In addition to federal legislation, the IS governance framework should consider industry standards and accepted best practices for information security. While several organizations develop standards, one in particular should be included in the Program Office’s IS governance framework. The International Organization for Standardization (ISO) develops international standards enabling “a consensus to be reached on solutions that meet both the requirements of business and the broader needs of society” (International Organization for Standardization, 2011). Because the Program Office directly supports commercial weapons development programs, including ISO standards in the IS governance framework will increase interoperability between commercial developers and the government test and evaluation entity.
    In addition to international standards, the Program Office governance framework will include many of the information security components identified by A. Da Veiga and J. H. P. Eloff (2007). Table 1 lists the security components that will be considered for implementation in the framework. Many of those security components were adapted from ISO standards, NIST publications, and maturity models.
Table 1. Information Security Components


    Another relevant consideration for development of the governance program is the role of continual process improvement and organizational process maturity. Several tools already exist which can help the organization achieve gains. The Information Technology Infrastructure Library (ITIL) model offers a set of tools that the Program Office can leverage to implement best practices and promote higher service quality levels and cost optimization (MEGA international Ltd, 2005). Although a complete implementation of ITIL is often complicated, many of the components of ITIL will support the overall governance program and contribute to effective continual process improvement and organizational maturity.
    In addition to ITIL, the Open Group Information Security Management Maturity Model (O-ISM3) helps organizations align Information Security Management (ISM) systems with the business mission and compliance needs. The O-ISM3 “delivers a process-based approach to information security management, and enables continuous improvement through the use of key security metrics” (The Open Group, 2011). The O-ISM3 is one of many maturity models; research by Roberto Saco (2008) estimates that between 100 and 200 different models exist today. Despite the varied approaches to measuring maturity, the best option is a combined strategy that takes pieces of different models, with the goal of developing a unique model built specifically for the Program Office. No single model offers exactly what the Program Office needs.

Program Office Unique Framework

    In developing the governance framework for the Program Office, the influencing factors discussed above were evaluated and organized into an overall implementable governance program. This framework is simply the starting point for the Program Office in implementing a broader set of governance programs. Over time, the governance framework will need frequent review and revision to keep current with business goals and environmental variables.
    The Program Office’s security governance framework is based on the work of Veiga & Eloff (2007). Adaptations were made to address the influencing factors mentioned earlier in this paper as they relate to the unique circumstances of the Program Office. The resulting framework consists of six pillars ranging from strategic-level influences down to the technical level. A diagram is provided as Figure 1.
Figure 1. Information Security Governance Framework


    The first of the six pillars is Leadership, within the strategic realm of the organization. For the IS governance framework to work at the Program Office, senior leadership (Colonels and Lieutenant Colonels) must support the initiatives and drive the program from the top down. The Commanding Officer needs to sponsor the program, giving it the highest-level endorsement. In addition, the Chief Information Officer (CIO) must subscribe to and make the effort to develop an Information Technology governance program, as the IS governance framework will interact with the IT program. Finally, the organization shall make use of business case documentation to drive capital decisions. Risk assessments and mitigation shall take place as part of the standard Risk Management Program (RMP) already in existence at the Program Office.
    The second of the six pillars is the Security Management & Organization. This pillar falls within the managerial and operational realm and considers the high level concerns such as overall structure of the organization as it relates to information security. Laws such as FISMA and the Digital Millennium Copyright Act are included in the Legal & Regulatory component of this pillar.
    The third of the six pillars is the IT Security Policies. This pillar falls within the managerial and operational realm where requirements from Department of Defense and United States Marine Corps Orders are considered. Best practices and guidelines are considered from various bodies of work including ITIL, FIPS, and the Defense Information Systems Agency’s (DISA) Security Technical Implementation Guides (STIG). The bulk of the Program Office’s information security policies fall within this pillar of the overall IS governance framework.
    The fourth of the six pillars is the Security Program Management. Monitoring the actions of employees and ensuring that technology is working as expected is an important part of the overall information security program. Monitoring allows for identification of anomalies and effective response to incidents (Veiga & Eloff, 2007). As discussed in the Influencing Factors section, FISMA contains continuous monitoring requirements and this pillar includes the needed compliance actions.
    The fifth of the six pillars is the User Security Management. This pillar is the last of the managerial and operational pillars and considers user-related security components. User awareness helps develop a pro-security culture where the human aspect is considered and not just the technical controls. According to Experian’s Chief Information Security Officer, “The human element is the largest security risk in any organization” (Kaplan, 2010). The emphasis of this pillar is to reduce the impact of the user on the overall security program by means of training, trust building, ethical conduct, and emphasis on privacy protection.
    Lastly, the sixth of the six pillars is the Technology Protection and Operations. This pillar is the only technical level of the IS governance framework. This pillar focuses on keeping track of capital equipment, managing incidents, and addressing identified risks (Veiga & Eloff, 2007). Additionally, this pillar includes physical controls needed to safeguard equipment from loss and theft.
In addition to the components identified for each pillar, two components stretch across all levels (strategic, operational, and technical) of the framework. Configuration management and business continuity planning apply to every level of the framework and must be considered before any changes are implemented. The Program Office’s Business Continuity Plan (BCP) falls within this component and is influenced by several other pillars of the framework, namely the policies requiring the plan and the need to continue operations based on mission requirements.

Conclusion
In conclusion, every organization has unique influences that should be considered when developing an information security governance program or framework. For the Program Office, the basic framework developed by Veiga and Eloff (2007) was adapted to meet the unique considerations imposed by federal legislation and Defense Department policies. Frameworks are broad and vague by nature and require further detail and customization to be implemented successfully in a given organization. This case study identified the influencing factors and produced a more specific framework ready for implementation. With ongoing adjustments throughout the life of the governance program, this framework can provide effective information security governance within the Program Office.

References

  • Bernard, S., & Ho, S. M. (2008). Enterprise Architecture as Context and Method for Designing and Implementing Information Security and Data Privacy Controls in Government Agencies.
  • Ely, A. (2010). 10 Steps To Ace A FISMA Audit. Information Week, 38-42.
  • International Organization for Standardization. (2011). About ISO. Retrieved February 8, 2012, from ISO - International Standards for Business, Government and Society: http://www.iso.org/iso/about.htm
  • Kaplan, D. (2010, February 1). Weakest link: End-user education. Retrieved February 10, 2012, from SC Magazine: http://www.scmagazine.com/weakest-link-end-user-education/article/161685/
  • Khoo, B., Harris, P., & Hartman, S. (2010). Information Security Governance of Enterprise Information Systems: An Approach To Legislative Compliant. International Journal of Management & Information Systems, 49-55.
  • MEGA International Ltd. (2005, November 9). MEGA International: MEGA ITIL accelerator helps IT departments optimise cost-effective quality of service, support and delivery. Coventry, UK.
  • National Institute of Standards and Technology. (2004, February). Standards for Security Categorization of Federal Information and Information Systems. Gaithersburg, MD.
  • Saco, R. (2008). Maturity Models: Inject New Life. Industrial Management, 11-15.
  • The Open Group. (2011, Apr 11). The Open Group Releases Maturity Model for Information Security Management: O-ISM3 Framework Ensures Security Management Processes Operate at a Level Consistent with Business Requirements. New York, NY, US.
  • Veiga, A. D., & Eloff, J. H. (2007). An Information Security Governance Framework. Information Systems Management, 361-372.

About the Author

Christopher Furton author bio picture
Christopher Furton

is an Information Technology Professional with over 12 years in the industry.  He attended the University of Michigan, earning a B.S. in Computer Science, and recently completed an M.S. in Information Management at Syracuse University.  His career includes managing small to medium-size IT infrastructures, service desks, and IT operations.  Over the years, Christopher has specialized in cyber security while working within the Department of Defense and the United States Marine Corps. His research topics include vulnerability management, cyber security governance, privacy, and cyber risk management.  He holds active IT certifications including the CISSP, CEH, ITIL Foundations, Security+CE, and Network+CE.


Mitigating Botnet Information Security Risks through EA and the ITSA - Part 4 of 4

posted Apr 1, 2015, 1:55 PM by Christopher Furton   [ updated Dec 13, 2015, 10:35 AM ]

Written by: Christopher Furton
Mitigating Botnet Information Security Risks through Enterprise Architecture (EA) and the Information Technology Security Architecture (ITSA)

Part 4 of 4

Part IV – Conclusion

In conclusion, botnet activity is a substantial threat to the enterprise environment. With evolving capabilities, botmasters will continue to stay at the cutting edge of technology and devise new ways to avoid detection. Part I of this paper discussed the evolution of botnets from the days of Internet Relay Chat to modern social media. Propagation techniques have evolved to stay ahead of security professionals, and some advanced botnets are specifically designed to attack an intended target within the enterprise environment. Lastly, Part I briefly described some of the malicious activities botmasters use botnets for, including distributed denial of service and for-profit activities. Throughout Part I, 19 risk area topics were identified that relate directly to botnet activity. If unmitigated, these risk areas can result in botnet infection and subsequent damages.

Part II of this paper introduced a method to mitigate the risk area topics by implementing the Enterprise Architecture and Information Technology Security Architecture models. Through the layers of these models, it was shown that many of the botnet risks can be mitigated by implementing a holistic approach to information security.

Lastly, Part III of this paper provided a case study where a nation-state uses part of the business continuity planning process of the Information Technology Security Architecture to mitigate a distributed denial of service attack.




