John Boyer (IBM): Spectrum Control Base Edition blog by the IBM Flash Centers of Competency

The IBM Flash Centers of Competency have posted a very good blog about our latest SCB version.

You can read the blog here

John Boyer (IBM): MQ Problem Determination: Resource problems

This article covers resource usage by MQ processes, determining and resolving problems related to insufficient resources, user resource limit configuration, and tuning of resource limits.


Basic checks before tuning MQ or kernel parameters

  • Check whether the number of active connections is within the expected limit.
    • For example, if the system is tuned to allow 2000 connections with a 3000 user-process limit, then growth beyond 3000 processes indicates an increase in the number of connections. This could be caused either by the addition of new applications or by a connection leak.
    • Commands:
      • Number of MQ processes (all UNIX except Linux): ps -elf|egrep "amq|run"|wc -l
      • Number of MQ processes (Linux): ps -eLf|egrep "amq|run"|wc -l
      • Number of connections (all UNIX/Linux): echo "dis conn(*) all" | runmqsc <qmgr name>|grep EXTCONN|wc -l
      • Shared memory usage (all UNIX/Linux): ipcs -ma
      • Shared memory usage with project details (Solaris): ipcs -mJ
  • If the number of connections is higher than the expected limit then check the source of the connections
  • If the shared memory usage is very high, check the following
    • number of topics
    • number of open queue handles
  • What resources need to be checked and tuned from MQ perspective?
    • Max user processes and number of threads (Linux only)
    • data segment
    • stack segment
    • file size
    • open file handles
    • shared memory limits
    • thread limits (e.g. threads-max on Linux)
  • The mqconfig script is useful for checking the current resource usage and configuration
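As a rough sketch, the process-count check above can be scripted; the pattern "amq|run" and the limit of 3000 are illustrative values taken from the example earlier, not fixed thresholds:

```shell
#!/bin/sh
# Hypothetical helper: count running processes whose command name matches a
# pattern, and warn when the count exceeds an expected limit.
count_procs() {
    # -e: every process; -o comm=: command name only, no header line
    ps -eo comm= | grep -Ec "$1"
}

limit=3000
current=$(count_procs "amq|run")
if [ "$current" -gt "$limit" ]; then
    echo "WARNING: $current matching processes exceeds the limit of $limit"
else
    echo "OK: $current matching processes (limit $limit)"
fi
```

Run periodically, a script like this gives an early indication of a connection leak before the process limit is actually hit.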

Note: Some of the resources listed above need to be tuned at the user level and some at the OS level.

Note: The list above is not complete, but it should cover the most common resource issues reported by MQ.

Note: On Linux, tuning is also required at the thread level, as each thread is a light-weight process (LWP).

Problem in creating threads or processes from MQ or application

Failure in xcsExecProgram and xcsCreateThread


Probe Ids and Components

XY348010 from xtmStartTimerThread, from an MQ process (e.g. amqzlaa0) or an application

XC037008 from xcsExecProgram with xecP_E_PROC_LIMIT from amqzxma0.

XC035040 from xcsCreateThread

XC037007 from xcsExecProgram with xecP_E_NO_RESOURCE

The probe IDs might differ; check for the error codes xecP_E_PROC_LIMIT and xecP_E_NO_RESOURCE.


Resolving the problem on Linux, AIX and HP-UX

  • MQ sets the error code xecP_E_PROC_LIMIT when pthread_create or fork fails with EAGAIN.
  • MQ sets the error code xecP_E_NO_RESOURCE when pthread_create or fork fails with ENOMEM.



Check and increase the per-user process limit and the stack resource limits using the ulimit command


  • Check and increase the stack and data resource limits for the user (e.g. mqm, mbadmin) who starts the queue manager and the MQ application.
  • The resource limits can be increased using ulimit or by changing the resource limit configuration file.
  • Note: Changes made with ulimit are temporary. Modify /etc/security/limits (AIX) or /etc/security/limits.conf (Linux) to make the changes permanent. The configuration file differs between operating systems.
  • Note: It is also worth checking whether the system is running short of resources (memory and CPU).
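For example, the current limits can be inspected and raised for the current shell session; this is a temporary change, as noted above, and the open-files limit is used here purely as an illustration:

```shell
#!/bin/sh
# Inspect the soft limits that a queue manager started from this shell
# would inherit, then raise the soft open-files limit up to the hard limit.
# The change is temporary: it applies only to this shell and its children.
ulimit -Sa

hard_nofile=$(ulimit -Hn)
if [ "$hard_nofile" != "unlimited" ]; then
    ulimit -Sn "$hard_nofile"
fi
echo "soft open-files limit is now $(ulimit -Sn)"
```

The same pattern applies to the stack (-s) and data (-d) limits; only the hard limit itself requires editing the configuration file and, in some cases, a new login session.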


Additional configuration required on Linux:

Review and increase the /proc/sys/kernel/threads-max kernel parameter.
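For example (Linux only; the value shown is illustrative, and changing the parameter requires root):

```shell
#!/bin/sh
# Display the current system-wide thread limit (Linux).
cat /proc/sys/kernel/threads-max

# To raise it temporarily (as root):
#   sysctl -w kernel.threads-max=256000
# To persist across reboots, add to /etc/sysctl.conf:
#   kernel.threads-max = 256000
```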

Additional configuration required on Solaris:

Check and increase the stack (process.max-stack-size) and data resource limits for the project using the projadd or projmod command.
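As a sketch, the project limits can be raised with projmod; the project name user.mqm and the sizes are illustrative, not recommendations:

```shell
# Raise the stack and data limits for an existing project used by MQ (Solaris)
projmod -s -K "process.max-stack-size=(privileged,512MB,deny)" user.mqm
projmod -s -K "process.max-data-size=(privileged,4GB,deny)" user.mqm

# Verify the resulting project configuration
projects -l user.mqm
```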

Problem in creating shared memory

Error: shmget fails with errno 28 (ENOSPC)

Probe Id          :- XY132002
Component         :- xstCreateExtent
ProjectID         :- 0
Probe Description :- AMQ6119: An internal WebSphere MQ error has occurred
  (Failed to get memory segment: shmget(0x00000000, 2547712) [rc=-1
  errno=28] No space left on device)
FDCSequenceNumber :- 0
Arith1            :- 18446744073709551615 (0xffffffffffffffff)
Arith2            :- 28 (0x1c)
Comment1          :- Failed to get memory segment: shmget(0x00000000,
  2547712) [rc=-1 errno=28] No space left on device
Comment2          :- No space left on device


MQM Function Stack







shmget fails with errno 22 (EINVAL)

Operating System  :- SunOS 5.10
Probe Id          :- XY132002
Application Name  :- MQM
Component         :- xstCreateExtent
Program Name      :- amqzxma0
Major Errorcode   :- xecP_E_NO_RESOURCE
Probe Description :- AMQ6024: Insufficient resources are available to
  complete a system request.
FDCSequenceNumber :- 0
Arith1            :- 18446744073709551615 (0xffffffffffffffff)
Arith2            :- 22 (0x16)
Comment1          :- Failed to get memory segment: shmget(0x00000000,
  9904128) [rc=-1 errno=22] Invalid argument
Comment2          :- Invalid argument
Comment3          :- Configure kernel (for example, shmmax) to allow a
  shared memory segment of at least 9904128 bytes


MQM Function Stack










Resolving the problem on Solaris

  • Increase the shared memory resource limit (project.max-shm-memory) for the project used by MQ.
  • Finding the project ID associated with the MQ processes and applications:
    • Using the ps command: ps -eo user,pid,uid,projid,args|egrep "mq|PROJID" together with the "projects -l" command
    • Using the "Project Id" attribute in the FDC header together with the "projects -l" command
    • Using the "ipcs -J" and "projects -l" commands
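For example, assuming the FDC above (a segment of at least 9904128 bytes was requested) and a project named group.mqm, the limit could be raised as follows; the project name and the 4GB value are illustrative:

```shell
# Locate the project used by the MQ processes (Solaris)
ps -eo user,pid,uid,projid,args | egrep "mq|PROJID"
projects -l

# Raise the shared memory limit for that project
projmod -s -K "project.max-shm-memory=(privileged,4GB,deny)" group.mqm
```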

Unexpected Process Termination and/or Queue Manager Crash

Errors: Processes ending unexpectedly, followed by FDCs from amqzxma0

Example FDCs:

Date/Time         :- Mon May 02 2016 01:00:58 CEST
Host Name         :-
LVLS              :-
Product Long Name :- WebSphere MQ for Linux (x86-64 platform)
Probe Id          :- XC723010
Component         :- xprChildTermHandler
Build Date        :- Oct 17 2015
Build Level       :- p800-004-151017
Program Name      :- amqzxma0
Addressing mode   :- 64-bit
Major Errorcode   :- xecP_E_USER_TERM
Minor Errorcode   :- OK
Probe Description :- AMQ6125: An internal WebSphere MQ error has occurred.

Possible Causes and Solutions:

  • User killed the process
    • Check if the user killed any process
  • MQ process ended because of a memory exception
    • Check whether the MQ process ended with FDC with "Component         :- xehExceptionHandler"
    • Apply the fix for known issues fixed in this area.
  • The OS killed the process because of high memory usage by the process
    • Check whether the MQ process consumed a lot of memory
    • Check whether the OS killed the process by reviewing the OS system log (example: OOM killer on Linux - Jan 2 01:00:57 ibmtest kernel: amqrmppa invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0)
    • Apply the fix for known memory leak issues

Commands and config file

  • ulimit -a  (display user limits)
  • ulimit -Ha (display user hard limits)
  • ulimit -Sa (display user soft limits)
  • ulimit -<paramflag> <value>, where paramflag is the flag for the resource name (e.g. s for stack). Changes made by ulimit are temporary.
  • /etc/security/limits.conf or /etc/security/limits - permanent resource limit configuration file
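A permanent configuration in /etc/security/limits.conf (Linux) might look like the fragment below; the user name mqm and all values are illustrative and should be sized for your workload:

```shell
# /etc/security/limits.conf - resource limits for the mqm user
mqm    soft    nproc     4096
mqm    hard    nproc     4096
mqm    soft    nofile    10240
mqm    hard    nofile    10240
mqm    soft    stack     10240
mqm    hard    stack     10240
```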

Difference in user limits used by a process vs configured

The user limits used by a process might differ from the configured limits. This typically happens when the process is started by a different user, by user scripts, or by HA scripts. It is important to check which user starts the queue manager and to set the appropriate resource limits for that user.
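On Linux, the limits actually in effect for a running process can be read from /proc, which makes it easy to compare them with the configured values. A minimal sketch, using the current shell's own PID purely as an example; substitute the PID of amqzxma0 or the application process:

```shell
#!/bin/sh
# Show the limits a running process actually inherited (Linux).
# $$ is this shell's own PID, used here only as an example.
pid=$$
grep -E "Max (processes|open files|stack size|data size)" "/proc/$pid/limits"
```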


Hopefully, this blog entry is helpful in understanding and resolving the resource problems reported by MQ.


John Boyer (IBM): The Japanese-subtitled video "You can run modern workloads on AIX" has been published.





We hope you will take a look when you have a spare moment. (smile)


ProgrammableWeb: Do Native Mobile Apps Finally Have One Foot in the Grave?

A hot new thing in the mobile world is Progressive Web Apps (PWAs) and it’s possible they might spell the end for the humble native mobile app. Google has just effectively green-lighted PWAs to be the future of Android. Mobile expert Henrik Joreteg over at his blog explains. 

John Boyer (IBM): Tapping Into China’s eCommerce Addiction

China has embraced ecommerce with incredible zeal. Though still relatively new, Chinese ecommerce accounts for nearly half of global online retail sales, and it’s steadily growing.

According to, Chinese online sales activity came to more than $581 billion in 2015, up more than a third from 2014. Volume has long since eclipsed that of the U.S. by a healthy margin and is expected to continue growing at a rate of 20 percent annually through 2020.

With all of that potential headroom, one of the tenets of your growth strategy has to be tapping into China’s ecommerce addiction.

All of this growth is being driven by the nation’s increasing incomes, higher education and the more sophisticated consumption patterns these changes have brought about. Chinese online consumers are also especially appreciative of products typically not found on store shelves there, such as luxury goods from Europe and the U.S.

As good as all of this sounds, it gets even better. China imposes fewer licensing requirements upon online businesses, compared to its traditional brick and mortar retailers. Goods clear customs more quickly for ecommerce enterprises over there as well.

All of these factors make China an increasingly attractive market for international ecommerce. Further paving the way, the Chinese government has indicated it is open to foreign companies having full ownership of ecommerce business in the region.

China’s Minister of Industry and Information Technology is quoted by Reuters as having said; "Permitting full foreign ownership supports our country's ecommerce development, encourages it and brings in active participation of foreign investment."

As you might imagine, getting started in China can be somewhat tricky if you’re trying to enter one of the large cross-border marketplaces. The barriers to entry are quite high, with the largest online marketplaces requiring $100 million in annual revenues to get started. The good news is they have pre-existing traffic and provide back-end assistance including logistics. Plus, you’re almost guaranteed to make money.

However, if you’re just starting out and capitalization is something of a hurdle, you’ll likely be better served by working with one of the WeChat cross-border platforms. WeChat is easily the largest social platform in the world. More than that though, it is a marketing juggernaut.

These platforms enable you to collect payments, request translations when needed and they’ll provide a design consistent with Chinese UX guidelines. Further, they can be integrated with most existing enterprise ecommerce software to accept international payments. The downside of going this route is you’ll have to generate your own traffic, but it is still the most efficient way to get started if you don’t have exceptionally deep pockets.

Either way, to be successful, your platform will need to enable Chinese payment methods, provide a Chinese login service and be capable of optimal speed in China. Mobile is huge over there, so you’ll also need to make sure your platform is mobile-friendly. Of equal importance is ensuring you follow Chinese design and UX guidelines so that visitors trust your site.

To learn more, Thomas Graziania, CEO of Cross-border WeChat Shop Platforms, offers some other very useful information for those interested in doing ecommerce business in China.

Yes, tapping into China’s ecommerce addiction does require one to jump through quite a few hoops. But for access to a market with the potential of China’s, it’s a worthwhile endeavor.


ProgrammableWeb: HazardHub Announces New API for Hazard Risk Data

HazardHub, a supplier of geospatial risk data, announced the release of its geospatial API. For the first time, geographic risk data is available via a real-time API for inclusion in clients’ internal systems.

ProgrammableWeb: New Startup Seeks to Disrupt CAD/CAM Industry with Smart APIs

A new tech company is hoping to shake up the CAD and manufacturing worlds with its SaaS-based APIs. The 10-employee company offers APIs that present an alternative to offerings from established players such as 3D Systems and Autodesk.

Shelley Powers (Burningbird): The web site changes and the Transition Plan

Several people have tweeted about how the climate change page is no longer posted to the web site. What they’re not aware of is that this change was planned starting last October.

First of all, the site reflects whoever is the occupant of the White House. Unlike the EPA or Department of Labor web sites, we shouldn’t be surprised to see sweeping changes during this transition.

The National Archives and Records Administration has archived the Obama White House web pages, as well as Barack and Michelle Obama’s official POTUS and FLOTUS Twitter accounts. So the pages aren’t gone. What you see now is what Trump’s team has put together during the transition. The pages specific to the tenant are going to be different.

In addition, the non-profit has preserved the Obama web pages, in addition to all government web pages. Yes, including the climate change page.

(If you’re feeling generous, could use a donation to help with expenses.)

This web site change is part of the transition, and not unexpected. We should be concerned when we see pages disappear from sites like the EPA and the Department of Labor once Trump’s cabinet members have taken over those departments.


The post The web site changes and the Transition Plan appeared first on Burningbird.

John Boyer (IBM): How Artificial Intelligence is depicted in movies

This week, I was reminded that back in 2011, Watson beat two human players, Ken Jennings and Brad Rutter, on the TV game show "Jeopardy!" In his final response, Ken wrote "I for one welcome our new computer overlords." With IBM investing heavily in Cognitive Solutions, should people be worried, or welcome the new technology?

Back in 1950, Isaac Asimov proposed the "Three Laws of Robotics":

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Let's take a look at how Artificial Intelligence has been represented in the movies over the past few decades. I have put these in chronological order when they were initially released in the United States.

(FCC Disclosure and Spoiler Alert: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for cognitive solutions made by IBM. While IBM may have been involved or featured in some of these movies, I have no financial interest in them. I have seen them all and highly recommend them. I am hoping that you have all seen these, or at least familiar enough with their plot lines that I am not spoiling them for you.)
2001: A Space Odyssey

Back in 1968, Stanley Kubrick and Arthur C. Clarke made a masterpiece movie about a mysterious monolith floating near Jupiter. To investigate, a crew of human beings takes a spaceship managed by a sentient computer named HAL 9000.

(Many people thought HAL was a subtle reference to IBM. Stanley Kubrick clarifies:

"By the way, just to show you how interpretation can sometimes be bewildering: A cryptographer went to see the film, and he said, 'Oh. I get it. Each letter of HAL's name is one letter ahead of IBM. The H is one letter in front of I, the A is one letter in front of B, and the L is one letter in front of M.'

Now this is a pure coincidence, because HAL's name is an acronym of heuristic and algorithmic, the two methods of computer programming. An almost inconceivable coincidence. It would have taken a cryptographer to have noticed that."

Source: The Making of 2001: A Space Odyssey, Eye Magazine Interview, Modern Library, pp. 249)

The problem arises when HAL-9000 refuses commands from the astronauts. The astronauts are not in control, HAL-9000 was given separate orders from ground control back on earth, and it has determined it would be more successful without the crew.


Westworld

In 1973, Michael Crichton wrote and directed this movie about an amusement park with three uniquely themed areas: Medieval World, Roman World, and Westworld. Robots are used to staff the parks to make them more realistic, interacting with the guests in character appropriate for each time period.

A malfunction spreads like a computer virus among the robots, causing them to harm or kill the park's guests. Yul Brynner played a robot called simply "the Gunslinger". Equipped with fast reflexes and infrared vision, the Gunslinger proves especially deadly!

(Michael Crichton also wrote "Jurassic Park", which had a similar story line involving dinosaurs with catastrophic results!)

Last year, HBO launched a TV series called "Westworld", based on the same themes covered in this movie. The first season of 10 episodes just finished, and the next season is scheduled for 2018.

Blade Runner

Directed by Ridley Scott, this 1982 movie stars Harrison Ford as Rick Deckard, a law enforcement officer. Rick is tasked to hunt down and "retire" four cognitive androids called "replicants" that have killed some humans and are now in search of their creator, Dr. Eldon Tyrell.

(I enjoy the euphemisms used in these movies. Terms like kill, murder or assassinate apply to humans but not machines. The word "retire" in this movie refers to destruction of the robots. As we say in IBM, "retirement is not something you do, it is something done to you!")

Destroying machines does not carry the same emotional toll as killing humans, but this movie explores that empathy. A sequel called "Blade Runner 2049" will be released later this year.


WarGames

In 1983, Matthew Broderick plays David, a young high school student who hacks into the U.S. Military's War Operation Plan Response (WOPR) computer. The WOPR was designed to run various strategic games, including war game simulations, learning as it goes. David decides to initiate the game "Global Thermonuclear War", and the military responds as if the threats were real.

Can the computer learn that the only way to win a war is not to wage it in the first place? And if a computer can learn this, can our human leaders learn this too?


The Terminator

In this series of movies, a franchise spanning from 1984 to 2009, the U.S. Military builds a defense grid computer called Skynet. After cognitive learning at an alarming rate, Skynet becomes self-aware and decides to launch missiles, starting a nuclear war that kills over 3 billion people.

Arnold Schwarzenegger plays the Terminator model T-800, a cognitive solution in human form designed by Skynet to finish the job and kill the remainder of humanity.

I, Robot

In this 2004 movie, Will Smith plays Del Spooner, a technophobic cop who investigates a crime committed by a cognitive robot.

(Many people associate the title with author Isaac Asimov. A short story called "I, Robot" written by Earl and Otto Binder was published in the January 1939 issue of 'Amazing Stories', well before the unrelated and more well-known book 'I, Robot' (1950), a collection of short stories, by Asimov.

Asimov admitted to being heavily influenced by the Binder short story. The title of Asimov's collection was changed to "I, Robot" by the publisher, against Asimov's wishes. Source: IMDB)

Del Spooner uncovers a bigger threat to humanity, not just a single malfunctioning robot, but rather the Virtual Interactive Kinesthetic Interface, or simply VIKI for short, a cognitive solution that controls all robots. VIKI interprets Asimov's three laws in a manner not originally intended.

Ex Machina

In this 2015 movie, Domhnall Gleeson plays Caleb, a 26-year-old programmer at the world's largest internet company. Caleb wins a competition to spend a week at a private mountain retreat. However, when Caleb arrives, he discovers that he must interact with Ava, the world's first true artificial intelligence, a beautiful robot played by Alicia Vikander.

(The title derives from the Latin phrase "deus ex machina," meaning "a god from the machine," a phrase that originated in Greek tragedies. Source: IMDB)

Nathan, the reclusive CEO of this company, relishes this opportunity to have Caleb participate in this experiment, explaining how Artificial Intelligence (AI) will transform the world.

(The three main characters all have appropriate biblical names. Ava is a form of Eve, the first woman; Nathan was a prophet in the court of David; and Caleb was a spy sent by Moses to evaluate the Promised Land. Source: IMDB)

The premise is based in part on the famous Turing Test, developed by Alan Turing, which is designed to test a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Movies that depict the bad guys as a particular nationality, ethnicity or religion may be offensive to some movie audiences. Instead, having dinosaurs, monsters, aliens or robots provides a villain that all people can fear equally. This helps movie makers reach a more global audience!

Of course, if robots, androids and other forms of Artificial Intelligence did exactly what humans expect them to, we would not have the tense, thrilling action movies to watch on the big screen.

This is not a complete list of movies. Enter in the comments below your favorite movie that features Artificial Intelligence and why it is your favorite!


John Boyer (IBM): z/VM now supports OpenStack Newton

The PTFs for z/VM's OpenStack Newton support are expected to close by end of day January 20 US time ("today", for much of the world).  Those PTFs will update z/VM's Cloud Manager Appliance (CMA) and other SMAPI servers, so there are several of them (details below).

If you're already running z/VM 6.4

  • Congratulations, you're eligible to run Newton!  Once you do, you get:
    • Ubuntu 16.04 guest deployment support - boot from volume has been delayed, but should be coming soon.  All other functions are ready.
    • Substantially faster guest deployment times
    • Substantially faster CMA restart time when running OpenStack services
    • CMA configuration wizard - much simpler way to configure CMA, either new/from scratch or for incremental changes like adding another LVM disk
    • Simpler CMA health monitoring (geeky details further down, for those interested)
    • Guest consoles are readable through the OpenStack horizon dashboard GUI
  • Existing 6.4 orders will have Liberty "in the box"/on DVD.  New orders will have Newton.  There is no supported way to down-grade when ordering or after receipt.
  • Starting now, Liberty will receive only security fixes; Liberty fix pack 3 should be the last of any functional fixes. 
  • You have 6 months to upgrade to Newton; after July 2017, you should expect no further Liberty updates, not even security fixes. This is aligned with the expectations set in the Statement of Direction included in the z/VM 6.4 announcement materials.

If you're currently running z/VM 6.3

  • Sorry, no Newton for you. z/VM's Newton support runs only on z/VM 6.4.
  • Gentle reminder, only 11 months left before 6.3 goes out of service!  Hope you are building your migration plan.
  • You should expect only security updates, no functional updates, for OpenStack Liberty from here on out.  Unlike z/VM 6.4 though, you'll get those security updates until the end of 2017 when 6.3's support ends.
  • In order to migrate to Newton,

Ordering Newton

  • VM65893 updates CMA.  The installation instructions are in the CMA140 FILE on MAINT 400 once you apply the APAR.  Follow them, or you won't be running the code afterward. Promise!
  • VM65954 updates other SMAPI servers and adds new files for the CMA configuration wizard.
  • VM65955 updates SMAPI's caching server, LOHCOST
  • As I'm writing this blog post, the APARs are orderable but the PTFs are not closed.  That just means it may take a few hours to receive the PTFs, based on when you order and whether or not they're closed when you do order them.
  • As usual, the z/VM OpenStack maintenance page will have a service list file that you can download and run the SERVICE command against to get the complete order list, including pre-requisites.
  • VM65970 will come "soon" to provide help files for the new configuration wizard.  No need to wait for this one to get started ordering the others though.

Newton Publications

  • PDF copies of the publications (Enabling Newton, SMAPI 6.4) are now available on the publibz site and (z/VM Migration Guide excerpts specific to CMA Newton) on the wiki.  The full Migration Guide that incorporates the excerpts will come to publibz later; best current guess, early 2Q2017.
  • The Knowledge Center publications will not be available immediately; best current guess, early 2Q2017 for those.
  • There were some documentation updates that missed the deadline for the PDFs. We're using the wiki to make the really important (as in: not just editorial) ones available immediately.  Fair warning, it will likely be several months before the PDFs are updated.
  • Selected clarifications on the configuration wizard are also on the documentation updates page.

Geek department

  • Checksums for all the VM65893 files are on the wiki, so if you're a tin foil hat wearer like me, you can verify that they're unscathed after all the download, upload, and unpacking commands.  FTP is notoriously finicky; perfectly innocent-looking variations from CMA140's instructions have resulted in the wrong bits coming out the other end of the process, with no visible errors before that.
  • If you're running a production-like environment, have a look at our tuning wiki page.  We've learned things from other clients as well as our regression test system.
  • Less obvious changes included in Newton:
    • HTTPS is required by default now when calling OpenStack APIs.  In Liberty (starting with FP2), you had to run a procedure to secure them.
    • CMA uses IUCV instead of ssh to manage guests by default.  Much more on this in the Migration Guide.
    • The chef client has been removed, so CMA tends to use noticeably less virtual storage and CPU to run equivalent work.  Scripts are installed to handle optional functions that used to require chef.
    • Logging and serviceability improvements.
    • Installation verification tests (IVP) are run when the CMA is first started, and are scheduled to run on an on-going basis so you have more time to react to conditions like DirMaint storage pools filling up, and any other "slow creep" problems.  The xCAT GUI allows you to control the periodic runs.
    • When an OpenStack-requested guest deployment fails because CMA was unable to connect to the guests before OpenStack times out, a CP MSG is sent to the notify userid to facilitate automated or live monitoring.

Edited 15:30 ET 2017-01-20: PTFs closed, added VM65970

John Boyer (IBM): WebSphere eXtreme Scale profile creation fails after upgrading to (or) later



An error occurs in WebSphere eXtreme Scale (WXS) profile creation after upgrading to (or) later. The following error message is received when creating a new profile after the WebSphere eXtreme Scale product is upgraded to or later in a WebSphere Application Server (WAS) V8.5.5.x integrated setup.

<message>java.lang.UnsupportedClassVersionError: JVMCFRE003 bad
major version;class=com/ibm/websphere/models/config/catalogservice/impl/CatalogservicePackageImpl, offset=6

WebSphere eXtreme Scale (or) later doesn't support Java 6. Thus, if a profile is using Java 6, no server with WXS installed will start; this is why a profile that is set to use Java 6 fails at creation. Such a profile would be an invalid configuration and isn't allowed. The only way to get profile creation to work is to create profiles that use Java 7, Java 7.1 or Java 8. By default, WebSphere Application Server 8.5.5.x ships with Java 6, and a new profile uses the default Java version. To resolve the issue, the default Java version for WAS must be changed so that new profiles use a Java version other than Java 6.

This can be done using a command similar to this one:

Example: WAS_HOME\bin> managesdk.bat -setNewProfileDefault -sdkName 1.7_64

This developerWorks article has more about managing the SDKs:  
System administration in WebSphere Application Server V8.5, Part 4: Using pluggable SDK 7 to enable WebSphere Application Server for Java 7

Here are a few details from the above article above:  
Enable new profiles to use SDK 7  
After enabling new profiles to use SDK 7, all profiles you subsequently create are automatically initially enabled to use SDK 7.  

Before beginning, list the available SDKs to determine the SDK name of an available SDK 7 installation, then use the managesdk -setNewProfileDefault command with the -sdkName option to enable new profiles to use SDK 7. The example in Listing 3 assumes that the SDK name of an available SDK 7 installation is 1.7_32.

Listing 3:
C:\wasInstalls\v85\Base\bin>managesdk.bat -setNewProfileDefault -sdkName 1.7_32
CWSDK1022I: New profile creation will now use SDK name 1.7_32.
CWSDK1001I: Successfully performed the requested managesdk task.
Note that I set up a WAS 8.5.5 environment with WXS 8.6.1, had the default JDK set to Java 6, and was able to recreate the profile creation failure. Once I changed the default JDK, the issue was resolved. So, this solution should work for you.
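Before switching the default, the available SDK names can be listed with the same managesdk command (shown here for UNIX/Linux; WAS_HOME stands for your installation root, and the output varies by installation):

```shell
# List the SDKs available in this WebSphere installation
WAS_HOME/bin/managesdk.sh -listAvailable

# Show the SDK that new profiles currently default to
WAS_HOME/bin/managesdk.sh -getNewProfileDefault
```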

In a nutshell, to use WXS or higher, you must use Java 7 or later. This is a limitation of the WXS product and is working as designed.
This is documented in the following WXS 8.6.1 product documentation and also on the supported software page :  
WebSphere eXtreme Scale 8.6.1 Detailed System Requirements  
Note: Go to the "Supported software" tab, then check the "Java SDK" section.
WebSphere eXtreme Scale 8.6.1 Hardware and software requirements





Shelley Powers (Burningbird): Power to the People and Saturday’s March

I don’t join “movements”. I’ve seen them co-opted too many times.

I saw this with Blogher, which was supposed to be a movement to give attention and voice to women writers. But three people turned it into a profit-making venture and ruined everything.

We also saw this with Occupy and Black Lives Matter.

Now we’re seeing it with the Women’s March, as one of the self-appointed leaders used the event to slam Hillary Clinton by deliberately leaving her name off a list of women who have led the way in this fight. This, even though the list started off with an unattributed Hillary Clinton quote.

The inevitable problems that typically occur with any “movement” have surfaced, and some have talked about not marching. However, what we have to remember is that though some people seek to co-opt a “movement”, they can’t steal the power and the passion that started it.

I hope people, all people, march tomorrow…not for the Women’s March, the movement, but for your own passion. Whatever led you to want to march isn’t gone.

As for me, I have all my feelers out and ready to expose any and all actions Trump, his cabinet, and this Congress do, starting with today’s signed Executive orders. That’s how I march: across the page.

Power to the people.

The post Power to the People and Saturday’s March appeared first on Burningbird.

John Boyer (IBM)BMW and IBM Watson: creating the car of the future



IBM has announced a new collaboration with the BMW Group to jointly explore IBM Watson's cognitive capabilities, with the goal of personalizing the driving experience and creating more intuitive driver-assistance systems.

As part of the agreement, the BMW Group will place a team of researchers and engineers at IBM's global Watson Internet of Things (IoT) headquarters in Munich, Germany. The companies will work together to improve the intelligent-assistance functions of BMW automobiles.

IBM recently invested 200 million dollars to make its new Munich center one of the most advanced facilities for collaborative innovation, part of a global 3-billion-dollar investment to apply cognitive computing to the Internet of Things. BMW, which is also headquartered in the Bavarian capital, is one of the first companies to work in one of the collaborative spaces at IBM's new Watson IoT headquarters.

To advance its automotive research and demonstrate the possibilities of IBM Watson IoT to its clients, IBM will place four BMW i8 hybrid sports cars at its Munich Watson IoT headquarters. The solution prototypes, which will run on the IBM Bluemix cloud platform, will help show how IBM Watson can enable new conversational interfaces between vehicles and drivers.

Watson's learning capabilities allow the cars' systems to learn from drivers' preferences, needs, and driving habits, personalizing the driving experience and improving comfort and safety. Watson will also be trained on the vehicle manual so that drivers can ask questions about the car in natural language while driving. In addition, it will incorporate data from The Weather Company (an IBM business), as well as real-time contextual updates on the route, traffic, and vehicle status, to enrich the driving experience and make recommendations to the driver.


"A New Relationship – People and Cars," by the IBM Institute for Business Value

According to a study by the IBM Institute for Business Value, "A New Relationship – People and Cars," vehicles are becoming part of the Internet of Things (IoT) as new mobility options transform consumers' lives and expectations. Today's cars are evolving from a means of transportation into a new kind of moving data center, with on-board sensors and computers that capture information about the car, the driver, the passengers, and the surroundings.

According to the IBM study, cars increasingly feature:

  1. Self-healing: Vehicles that can diagnose and fix themselves, and even fix other vehicles with problems.
  2. Self-socializing: Vehicles that connect with other vehicles and with their surroundings.
  3. Self-learning: Vehicles with cognitive capability that learn continuously and give advice based on the behavior of the driver, passengers, and other vehicles.
  4. Self-driving: Vehicles that are moving from limited automation to full autonomy.
  5. Self-configuring: Vehicles that adapt to the driver's personal preferences, from seat height and position to preferred destinations.
  6. Self-integrating: Like other smart devices, these vehicles will be part of the IoT, connecting traffic, weather, and mobility information as they move.


We invite you to our Facebook page to leave your comments and opinions.

John Boyer (IBM)Blockchain: Evolving the Future


Blockchain, the technology platform underlying Bitcoin, is gaining popularity every day, largely because the need for transparent, reliable, and secure value chains keeps growing as new business opportunities emerge.

This chain of blocks is a shared database that works as a ledger for recording purchase-and-sale operations or any other transaction. By using cryptographic keys and being distributed across many participants, Blockchain offers security advantages against tampering and fraud. Modifying one of the copies would achieve nothing, because the change would have to be made in all of the copies, since the ledger is open and public.

Analysts believe Blockchain can radically transform not only the financial world but a multitude of sectors, such as energy companies, telecommunications, public administration, logistics, transportation, and media, to name a few.

With Blockchain there are no secrets in the business network. If someone is dishonest, everyone else in the chain will know it instantly. This "self-policing" can reduce the need to rely on the current level of regulation and sanctions to monitor and control the flow of commercial transactions.

Because each transaction builds on a previous transaction, any corruption is evident and everyone is aware of it. Blockchain thus builds trust into the technology, and this can enable the large part of the world's population without access to banks and transaction systems to exchange value and assets without depending on an intermediary.

It is worth noting that all of the above adds great value to Blockchain technology; according to the World Economic Forum, 80% of the world's banks are already working on Blockchain projects. CLS, the largest multi-currency cash settlement system, is implementing this technology across its international transactions. The Bank of Tokyo developed a smart-contract prototype for business transactions. Another example is China's UnionPay, which uses Blockchain for loyalty programs operating across multiple banks.

In Mexico specifically, many players in the financial industry are exploring this technology, above all in the Fintech segment.

IBM is driving a complete ecosystem of industry players working together through the Linux Foundation's Hyperledger Project, whose purpose is to develop Blockchain-based applications specifically focused on business. The Hyperledger Project comprises a large number of companies enthusiastic about Blockchain and its many possible applications.

According to IBM data, applying Blockchain to global supply chains could generate more than USD $100 billion in annual efficiencies.





John Boyer (IBM)The Importance of Data in the Cognitive Era


Today, each of us interacts with companies daily in one way or another. Whether through a tweet, a purchase, a call to the service center, or an email, behind these interactions lies what is without a doubt the heart of any business and its success: data.

The importance of this data is nothing new; in fact, most people agree that it is a company's most valuable asset. What is new is the size of this mountain of data, which can no longer be measured in terabytes, petabytes, exabytes, zettabytes, or even yottabytes. Very soon you will hear a new term for measuring it: the brontobyte.

The term brontobyte is new to many, so let's put it in perspective. A brontobyte is roughly equivalent to a billion times all the grains of sand on the planet. That is an enormous amount of information, and the reaction of many may be to turn a blind eye, or to run and hide, and carry on with business as usual. But the reality is that marketing professionals will have no choice but to tap into this data in order to engage customers and meet their needs.

At the center of this engagement will be cognitive solutions, and right now marketing professionals are increasing their investment in them. Why? Because cognitive technology has the power to pull in data from an unimaginable number of sources, ranging from social sentiment expressed on networks to micro and macro trends, weather, local and international events, economic news, and more.

The power and importance of data is no longer a secret, but the sheer volume of this intelligence, and the way we distill the details and put them to use, are changing dramatically as more companies adopt cognitive technologies in their operations.

And the good news is that the big winners will be you and me: the consumers.




John Boyer (IBM)IBM and Panasonic together on Smart Home


Panasonic is working with IBM to use the Watson cognitive computing platform to improve its current Smart Home offering. Cognitive computing represents a new era in computing, in which systems understand the world the same way humans do: through senses, learning, and experience.

Panasonic's goal is to transform the services it provides through Watson's learning and natural-language capabilities, delivering the greater peace of mind of knowing that homes are safe and smart. At the same time, Panasonic plans to create new products and service innovations by integrating its best sensor technology and Smart Home devices with IBM's Watson IoT platform.


One of the main focus areas is home security, in which Panasonic's security cameras and its door, window, motion, and broken-glass sensors will be coupled with Watson's cognitive computing capabilities. This means the security system would not react if the neighbors' children are playing soccer, avoiding a false alarm, but it would automatically alert the police if an intruder tries to climb a fence to get into the house.


"With a wide variety of sensors, we have the opportunity to transform our relationship with the physical world. We are giving objects eyes and ears so they can better perceive us and interact with us," says Harriet Green, General Manager of IBM Watson IoT.



John Boyer (IBM)IBM: The 5 Technologies of the Future


IBM unveiled five innovative scientific milestones with the potential to change the way people will work, live, and interact over the next five years.

  • With Artificial Intelligence, our words will be a window into our mental health.
  • Hyperimaging and Artificial Intelligence will give us superhero vision.
  • Macroscopes will help us understand Earth's complexity in infinite detail.
  • Medical labs "on a chip" will serve as health detectives, tracing disease at the nanoscale.
  • Smart sensors will detect environmental pollution at the speed of light.

For more than seven decades, IBM Research has defined the future of information technology, with more than 3,000 researchers in 12 laboratories on six continents. IBM Research scientists have produced six Nobel Prizes, ten U.S. National Medals of Technology, five U.S. National Medals of Science, and six Turing Awards.


For more detailed information on the impact of these five innovations, visit:



John Boyer (IBM)Taking your first steps toward cloud - with Blueprints

In their recent webcast, IBMers Chad Holliday and Steve Barbieri both discussed and provided a demonstration on how “cloud blueprints” can be used to develop infrastructure and application layers across different cloud environments.

The use case is fairly typical:  Traditional enterprises are looking to hybrid cloud to deal with “multi-speed IT” issues – these are your legacy applications (systems of record) working with your front-end, systems of engagement (typically in the cloud).  And ‘cloud blueprints’ can help organizations with legacy applications develop their infrastructure and application layers across different cloud environments – allowing you to continuously deliver applications into the cloud.

Many questions were asked during this informative webcast.  We’ve consolidated the main questions and appropriate answers below for your continued learning!

You can watch a full replay of the webcast here, which includes a demonstration of the UrbanCode Deploy Blueprint Designer.

In addition, you can flip through their presentation on SlideShare.


A Q&A Session with Chad Holliday and Steve Barbieri

Are you seeing Blueprints becoming more and more common as we head into 2017? - and if so, why is that?

Blueprints are being used more and more as customers start to make the move to the cloud.  Full-stack blueprints are an important tool for bringing together application deployments and dynamic infrastructure in the cloud.

You mentioned the integration with VRA -- is that new?   And can you talk a bit more about that?   

Sure.  VMware vRealize Automation (vRA) is VMware's infrastructure modeling tool.  We provide the ability to integrate (build and request catalog items, etc.) via the blueprint designer, and tie in the ARA support from UCD to provide the 'full-stack' support that provisions applications (from UCD) into infrastructure on VMware (managed by vRA).  This brings two market-leading technologies together.

Is the built-in Git integration located on premise or in the cloud?

You can use the built-in Git server that comes with the blueprint designer, or manage repos from any Git server (GitHub, or elsewhere).

Can the Blueprint source be integrated with RTC or just Git?

Just Git.

Can I pull services (for example a monitoring service, auto scaling service) provided by Cloud providers (like AWS) and add them to my blueprint?

We do have support for ASGs (Auto Scaling groups) in AWS, so you can create and manage them in a blueprint to some degree.

Can UrbanCode Deploy handle containerized application management?   

UrbanCode Deploy has a rich set of plugins to manage Docker and containerized applications.  Managing containers can absolutely be used as part of a full-stack blueprint.  The next webinar on January 25th with Michael Elder (How do you deliver your applications to the cloud) will likely cover the UrbanCode Deploy / Container support in more detail, so please tune into that as well.

I'd like the ability to get a complete hyperlink to a given blueprint, so that it's easier to share its references in documents, collaboration tools (Slack, etc) - is that possible?

The blueprint has an output with the URL to the blueprint.  This is handled via the design server and isn't a direct link to the file.

How can I compare the capabilities of UrbanCode Deploy Blueprints vs ICO?  If I have UrbanCode Deploy, do I not need ICO?

ICO can be used in conjunction with the blueprint designer.  You can attach your ICO to UrbanCode Deploy blueprint designer to provision infra via ICO and apps via UCD.  You can also request the provision of a heat blueprint from the blueprint designer via a workflow in ICO. 

Can I use Blueprint Designer to provision my local VCenter instead of cloud?   

Yes, you can provision to your local vCenter.

Are there generic common Blueprints available/shared? Or is it always "build your own"?   

The Blueprint Designer does not ship any generic/common blueprints - so it's build your own.  Providing some out-of-the-box blueprints is something we are considering in our roadmap.

Everywhere you mention "Chef" can I replace with "Puppet"?   

No, sorry, we don't currently support Puppet.

On your Blueprint reply - how about a community to share Blueprints?

This is a great suggestion.  We are currently moving more of the UCD plugins to the community, so supporting some common blueprints in a similar fashion makes a lot of sense.

How are secret parameters handled?   

Parameters have a "hidden" property in the blueprint.  Those show up as **** in the provision dialog.  Also, if you look at the environment details, the values of these parameters are masked as ****.
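Since the blueprint designer works with Heat blueprints (as mentioned in the ICO answer above), a hidden parameter can be sketched as a HOT fragment like the following (hypothetical template and parameter names; `hidden: true` is the standard HOT attribute that masks the value):

```yaml
# Hypothetical fragment of a Heat Orchestration Template (HOT):
# a password parameter marked hidden so its value is masked in
# the provision dialog and in the environment details.
heat_template_version: 2013-05-23

parameters:
  db_password:
    type: string
    hidden: true
    description: Database admin password (shown as **** once provisioned)
```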

How does UrbanCode compare to technologies like Desired State or Ansible?   

UrbanCode Deploy, including the blueprint designer, has a lot of advantages over its competition, including tools like Ansible.  In this webinar, we focused on the ability to perform cloud-portable full-stack provisioning, which is one of the areas where UCD exceeds the competition.  There are others as well :-)   (check out Ovum's Decision Matrix: Selecting a DevOps Release Management Solution)

Is there monitoring around provisioning errors or manual changes?   

We will capture and report provisioning errors during full-stack deployment.  We don't currently manage or detect manual changes or other config drift on the environment.

Is the IP public button in SL by default?   

By default, the public IP address button is turned off.  In SL, you can either attach your virtual machine to both a back-end and front-end vlan (network), or attach to a back-end vlan and use the public IP button to request a front-end vlan.  Skipping the front-end vlan step will only serve out an IP from the back-end vlan.

When deploying VMware infrastructure, is VIO required?   

No, it is not required.  VIO (VMware Integrated OpenStack) is supported, as the blueprint designer talks to OpenStack APIs such as those provided by VIO.  However, the blueprint designer can also interact with traditional vCenter environments without VIO.

When do we expect support to integrate into Storage and Networking layer?

We do have networking and storage support available on most cloud platforms.  You can hook up your VMs to many types of networks, and add and modify storage as needed.


Please watch a full replay of the webcast here, which includes a demonstration of the UrbanCode Deploy Blueprint Designer.


Looking for additional resources on Cloud, Blueprints, and UrbanCode Deploy?   Here are a few to get you started!




John Boyer (IBM)New in Maximo: API_FILE Cron Task

Ever wonder what the API_FILE Cron task is? If yes, then I can give a bit of information about this specific Cron task.


API_FILE is an out-of-the-box cron task that was introduced in Maximo together with the new Work Center. You can learn more about the Maximo Work Center here. The API_FILE cron task is used for exporting data sets to Watson in the Business Analyst Work Center.


Here's how the API_FILE cron task looks in the Cron Task Setup application:

[Image: API_FILE Cron Task]

BMXAA6359E - A runtime error for host API_FILE.api_file:null occurred. Increase the amount of memory that is available to the Java Virtual Machine (JVM).


If you are encountering the error above in your Maximo server logs (SystemOut.log) and you are not using the Maximo Work Center, you can simply disable the API_FILE cron task to make the error go away.


That would be all. Thanks for reading!

John Boyer (IBM)z/VSE service news: LE APAR

New Language Environment APARs for z/VSE have been released. Details are below.



Users affected: COBOL/VSE and/or VS COBOL II customers running with LE/VSE enabled COBOL programs.  
Problem description: LE/COBOL termination processing may experience z/VSE cancel code 20 and interruption code 10.   



Sysroute of PI66469 and PI71676.

You can always get the latest service news from our z/VSE Service and Support web page - here.


Have a good weekend!


John Boyer (IBM)Fillo API: Read and write data from Excel using SQL queries

To read from or update an Excel file, use the methods below. The worksheet (tab) name serves as the table name in the SQL query.


import com.codoid.products.exception.FilloException;
import com.codoid.products.fillo.Connection;
import com.codoid.products.fillo.Fillo;
import com.codoid.products.fillo.Recordset;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.ArrayList;
import java.util.List;

public class ExcelQueryUtil {

    private static final Logger logger = LoggerFactory.getLogger(ExcelQueryUtil.class);

    public static List<List<String>> executeQuery(String fileName, String query)
            throws FilloException {
        return executeExcelQuery(fileName, query);
    }

    public static String executeUpdateQuery(String fileName, String query)
            throws FilloException {
        Fillo fillo = new Fillo();
        Connection connection = fillo.getConnection(fileName);
        try {
            connection.executeUpdate(query);
        } finally {
            connection.close();
        }
        return "Update was Successful";
    }

    /**
     * This method treats Excel as a database: you can get data from the
     * Excel file just using a basic query. Each row of the result set is
     * returned as a List of String values, in the column order of the query.
     * Dec 14, 2016
     * @author kadhikari
     */
    public static List<List<String>> executeExcelQuery(String fileName, String query)
            throws FilloException {
        List<List<String>> listOfLists = new ArrayList<>();

        Fillo fillo = new Fillo();
        Connection connection = fillo.getConnection(fileName);
        query = query.toUpperCase();
        Recordset recordset = connection.executeQuery(query);
        logger.debug("total number of rows returned " + recordset.getCount());

        // Columns to extract: all field names for SELECT *, otherwise the
        // comma-separated list between SELECT and FROM.
        ArrayList<String> fieldNames = recordset.getFieldNames();
        logger.debug("Total data columns " + fieldNames);
        String[] columns = query.contains("*")
                ? fieldNames.toArray(new String[0])
                : columnsSplit(query);

        while ( {
            List<String> row = new ArrayList<>();
            for (String column : columns) {
                row.add(recordset.getField(column.trim()));
            }
            listOfLists.add(row);
        }

        recordset.close();
        connection.close();
        return listOfLists;
    }

    /**
     * Splits the column list out of a "SELECT col1,col2 FROM ..." query;
     * the start index 7 skips the "SELECT " prefix.
     * Dec 14, 2016
     * @author kadhikari
     */
    private static String[] columnsSplit(String query) {
        int start = 7;
        int end = query.indexOf(" FROM");
        String columnsWithComma = query.substring(start, end);
        return columnsWithComma.split(",");
    }
}
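
The SELECT-list parsing used by columnsSplit can be exercised on its own; here is a minimal, Fillo-free sketch of that step (hypothetical class name):

```java
public class ColumnsSplitDemo {

    // Extract column names from a "SELECT a,b,c FROM Sheet1" query:
    // index 7 skips the "SELECT " prefix, and the slice runs up to " FROM".
    public static String[] columnsSplit(String query) {
        String upper = query.toUpperCase();
        int start = 7;
        int end = upper.indexOf(" FROM");
        String[] cols = upper.substring(start, end).split(",");
        for (int i = 0; i < cols.length; i++) {
            cols[i] = cols[i].trim();
        }
        return cols;
    }

    public static void main(String[] args) {
        String[] cols = columnsSplit("select id, name ,salary from Employees");
        System.out.println(String.join("|", cols)); // ID|NAME|SALARY
    }
}
```

Note that this simple slicing assumes a plain SELECT list; aliases or nested queries would need a real SQL parser.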


John Boyer (IBM)IBM Watson IoT Support Lifecycle Resources





End of Support (EOS) Announcements

IBM provides advance notification of End of Support (EOS) dates, allowing customers reasonable time to complete software upgrades or to refresh application products. EOS announcements are made in April and September.



Announcement Letters

Announcement letter dates are U.S. only. Information for other country announcements is available on the IBM Offering Information page. Select the date to view the announcement letter. Note that some product versions may not have online announcement letters.


View all IBM Software EOS announcements for 2016 and 2017.

IBM Software End of Support (EOS)



 This section describes some of the standard and enhanced IBM Software Support Lifecycle Policies and common questions. Additional details and answers to commonly asked questions regarding the Support Lifecycle Policy can be found on our Frequently Asked Questions page.


Q: What are the major Support Lifecycle milestones?

A: The major Support Lifecycle milestones are:

  • General availability (GA) - Refers to the date that a new version or release of the product is available to all users.  A product version/release is not published to the Support Lifecycle web site until the GA date.
  • End of Marketing (EOM) - Refers to the effective date on which a version/release (and associated part number) ceases to be available and can no longer be ordered via standard price lists.
  • End of Support (EOS) - Refers to the last date on which IBM will deliver standard technical support for a given version/release of a product.
  • End of Life (EOL) - Refers to the effective date on which a Software product, an Appliance or a Hardware platform reaches the end of its useful life.


Q: How do you determine if your installed software is still supported?

A: Search by product name or keyword using the Support Lifecycle Search tool.  You can also view a list of IBM Software products that will reach EOS in 2016 and 2017 via the IBM Software End of Support page.


Q: What happens when EOS is announced?

A: Often, there is a newer version of the software available for download.  In most cases, you’ll have sufficient time to plan for and install the latest version.  For more information on the lifecycle stages, including EOS, view this short YouTube video on the IBM Product Lifecycle and EOS.


Q: What is the standard version format for IBM Software products?

A: The full product version is expressed by a four-digit code known as the IBM Version, Release, Modification and Fix Level structure, or VRMF.  View this Technote for additional information and description of each element.  You may also find this Glossary of product support and maintenance terms helpful.
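
The VRMF structure described above can be sketched with a short example (hypothetical helper, not an IBM tool): a version string such as splits into its Version, Release, Modification, and Fix levels.

```java
public class Vrmf {

    // Split an IBM-style V.R.M.F version string into its four numeric levels.
    public static int[] parse(String vrmf) {
        String[] parts = vrmf.split("\\.");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Expected V.R.M.F, got: " + vrmf);
        }
        int[] levels = new int[4];
        for (int i = 0; i < 4; i++) {
            levels[i] = Integer.parseInt(parts[i]);
        }
        return levels;
    }

    public static void main(String[] args) {
        int[] v = parse("");
        System.out.printf("Version=%d Release=%d Modification=%d Fix=%d%n",
                v[0], v[1], v[2], v[3]);
    }
}
```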


Q: How can you connect with Watson IoT on social media?

A: You can follow us on Twitter - or subscribe to our IBM Watson IoT Support channel on YouTube.


Q: Where can you find more information on IBM Support policies?

A: You can view and download the IBM Support Handbook(s) that are relevant to the product(s) you use.





For more information on IBM software support topics, check out the following resources:


John Boyer (IBM)APM 8: Installing DB2 in a non-default location

When you install the APM server, you may want to install the server in /opt/apm and install DB2 in a separate location, e.g. /db2.
The recommended approach is to use a remote database: install it on a different system, in any file system you want, and do not modify the files provided on the installation media.
However, if needed, the following instructions can be used.

1) Edit <install_dir>/
Change db2.installdir parameter to specify where DB2 will be installed.

2) Edit <install_dir>/files/db2wse.rsp.
Change FILE to match the value of db2.installdir set in step 1.
Change apm.DFTDBPATH to specify the location where databases should be installed.
Add apm.DIAGPATH parameter to specify the location of DB2 diagnostic logs.

3) Create the required directories.
Create directory specified by FILE above.
Create directory specified by apm.DFTDBPATH above.
Create directory specified by apm.DIAGPATH above.

4) Create the required DB2 instance owner ID.

5) Change the created directories to be owned by the DB2 instance owner with permission of 775.

6) Run

The following sample commands can perform all of the instructions above.

# Assumes props, rsp, and db2Dir have been set to the properties file,
# the db2wse.rsp response file, and the target DB2 install directory.
db2Owner=`grep "apm.USER" ${props} 2>/dev/null | cut -d "=" -f 2-`
db2Group=`grep "apm.GROUP" ${props} 2>/dev/null | cut -d "=" -f 2-`
db2Pass=`grep "db2apm.password" ${props} 2>/dev/null | cut -d "=" -f 2-`

sed -i "s#db2.installdir=.*#db2.installdir=${db2Dir}#" ${props}

sed -i "s#@INSTALL_DIR@#${db2Dir}#" ${rsp}
echo "apm.DIAGPATH=${db2Dir}/db2dump" >> ${rsp}

mkdir ${db2Dir}
mkdir ${db2Dir}/DB ${db2Dir}/db2dump

# Remove any pre-existing instance owner and group, then recreate them.
userdel -r ${db2Owner}
groupdel ${db2Group}
groupadd ${db2Group}
useradd -g ${db2Group} -b "${db2Dir}" -m ${db2Owner}
/bin/echo "${db2Pass}" | passwd ${db2Owner} --stdin

chown -R ${db2Owner}:${db2Group} ${db2Dir}
chmod -R 775 ${db2Dir}





John Boyer (IBM)Descubra o tesouro escondido nos dados usando análise de grafos

Ana Paula Appel

Marisa Affonso Vasconcelos

Nos últimos anos, tem-se observado uma crescente demanda por técnicas e ferramentas para a análise de grandes volumes de dados, o chamado Big Data. O principal objetivo dessas técnicas é prover insights e detectar padrões e correlações presentes nesses dados que auxiliem em processos de tomada de decisão, que podem ser desde tornar uma campanha de marketing mais efetiva até detecção de perdas devido a fraudes.

Dentre as técnicas que vêm mais se destacando na área de análise de dados é a análise de redes complexas que permite a modelagem não só de entidades, mas também do relacionamento entre seus vários tipos. Exemplos de relacionamentos entre entidades são encontrados em links que conectam páginas na web, entre clientes através de suas transações financeiras e podem modelar até relacionamento entre pessoas em redes sociais. Todos esses tipos de relacionamentos podem ser modelados como redes complexas, particularmente como arestas de um grafo que ligam entidades ou nós, que podem representar pessoas, processos, equipamentos, documentos, entre outros.

A teoria dos grafos surgiu em 1736 com o primeiro artigo de Leonhard Euler, quando ele solucionou o problema das sete pontes de Königsberg usando grafos. O problema consistia em determinar se era possível passear pela cidade usando uma única vez cada uma das sete pontes e retornando ao ponto de partida. Outro estudo essencial na área de grafos foi o experimento feito por Stanley Milgram, em 1967, que demonstrou que a sociedade que vivemos é um tipo de rede small world, na qual são necessários no máximo seis conexões de amizade para que duas pessoas quaisquer estejam ligadas. image

Somente em 1998, os pesquisadores Steven Strogatz e Duncan Watts puderam generalizar esse fenômeno, propondo um método para a construção desse tipo de rede e o identificando como pertencente à classe de grafos aleatórios. No ano seguinte, os pesquisadores Albert-László Barabási e Réka Albert identificaram as chamadas redes sem escala (scale-free), que capturam propriedades de redes do mundo real, como as de telecomunicações, as de proteínas, as sociais, dentre outras.

This discovery showed that real-world networks are, in general, composed of many nodes with few connections (low degree) and a few nodes with many connections (high degree). These networks also exhibit special growth processes, in which a node can gain new connections over time, and preferential attachment, in which the more connected a node is, the higher the probability that it will receive new connections.

Today, graphs are widely used to model data, whether structured or not, in order to discover some kind of implicit pattern. In this scenario there are three major areas: modeling real-world networks from empirical data, analyzing the temporal evolution of a graph, and understanding the dynamics of how information propagates through these networks.

One application of graph mining is modeling health insurance data, more specifically reimbursement data for medical appointments. In this scenario, the graph's nodes represent physicians, and an edge between two nodes indicates that those physicians have patients in common, that is, that some patient consulted both physician A and physician B (Figure). This approach makes it possible to understand the flow of patients between physicians and to identify which physicians are referred to by other physicians, or which physician, if dropped from the health plan, would have a large impact both on the patients and on the available network of physicians.

Another example application is fraud detection in the financial sector. Each node represents a person or a company, and an edge indicates that a financial transaction took place between them. A new transaction between two nodes that were never connected before, and that are far apart in terms of the number of edges between them in the network, may indicate an illegal transaction such as theft or credit card cloning. Suspicious patterns in the transaction graph, such as cycles (fraud ring detection), collusion between entities, or the creation of new edges within short time intervals, are also alerts for possible fraud. Many methods and systems detect the presence of these patterns in graphs in order to alert institutions to possible fraud.

These are just some of the applications of graph analysis. New analyses, the identification of influential entities, and implicit relationships bring new views of the data that were not possible before with traditional analyses.

To learn more

Ana Paula Appel and Marisa Affonso Vasconcelos are researchers at IBM Research Brazil working in data mining. The Mini Paper Series is a biweekly publication of TLC-BR; to subscribe and receive future issues electronically, send an e-mail to

Download the PDF version of this mini paper by clicking here.

John Boyer (IBM)AIX 7 SSD/Flash Cache Best Practice

AIX 7 SSD/Flash Cache Best Practice

I have been asked for this a couple of times and the answers that spring to mind are

  • Ha ha ha ha ha ha! Oh, you are serious!
  • Consultant standard answer number 1: "It depends"
  • Hmmm! Tricky!

I guess the problem is this: the questioner really wants a prediction of what will happen and whether it will be cost efficient. And those questions have no answer.

If you look the topic up on Google you will get hits for the official announcement and a few cut'n'pastes of it, which amount to only a paragraph. So below is my collection of information that might help you.


If you come across other information and recommendations PLEASE COMMENT BELOW so we all benefit.


Best Practice

  1. Don't bother trying this on a small Virtual Machine
    If you have, say, 1 or 2 CPUs, 4 GB of memory and 20 GB of disks, you should get instant performance gains just by adding memory and caching that way. Very simple to try. Very low risk. No need for SSD/Flash. Relatively inexpensive.
  2. Have more warm disk data than you can hold in RAM
    If you have more data in regular I/O (not just sitting coldly in an archive on the disk) than you could possibly afford in memory, then the SSD/Flash cache can boost data access, i.e. multiple TB of warm data.
  3. Don't Cache a Flash
    If you are already using SAN-based Flash for your data, the SSD/Flash cache is unlikely to help further - IMHO.
  4. Read the excellent article from performance guru Manoj Kumar for technical background and details
    1. Downloadable PDF file
    2. Online Web version !/wiki/Power%20Systems/page/Integrated%20Server%20Based%20Caching%20of%20SAN%20Based%20Data
  5. Add 4 GB extra RAM.
    The cache management algorithm needs to keep a history of read accesses and that needs memory - don't starve your Kernel.
    The documents I have read say 4 GB minimum virtual machine size = that would be bonkers!
    That 4 GB minimum size actually means the developers did not test this feature with stupidly sized micro partitions.  My Laptop here has 16 GB RAM!
  6. No TIPping = No Testing In Production
    Test it before production use with a reproducible workload (benchmark) so you know for sure you are getting a performance gain and how much.
    This also will justify the costs and time invested.
  7. Have your SSD set up for high performance.
    Don't add the SSD (or Flash) on an adapter that is already swamped with I/O - if the cache is not super fast there is not much point.
  8. There are somewhat unexpected limitations on the numbers of disks and caches - I hope these will be improved in later releases.
    Only one cache pool.
    Only one cache partition.
    The command syntax suggests more could be possible but in practice it will give you an error if you try it.
  9. Don't ask for a prediction - if we could do that I would play the Lotto, become a billionaire and then the President of the USA!
  10. Cache Size recommendation - ha ha ha ha ha ha ha ha!
    This creates a read cache.
    You can imagine how that operates as well as me.
    It is extremely unlikely you can measure how often your workloads re-reads disk blocks nor the working set size of the re-reads.
    If you can then you know how much disk I/O speeds will improve.
  11. Example from Jaqui Lynch (friendly performance guru who sparked off the need for this Best Practice)
    She is thinking of using four SSDs (700 GB each) with 20 TB of data on the file systems (unknown re-read rate or working set size).
    Cache to data ratio = 14%   (4 x 700)/20,000*100
    My guess is that is pretty healthy and they have a good chance of a large performance gain.
    If the ratio is just a handful of percent it still might be OK, if the working set is a lot less than the full 20 TB.
    For example, just one SSD at 700 GB = 3.5%, but if the hot part of the data used in any one day is 5 TB then we get back to 14%.
  12. The Knowledge Centre (non American spelling) manual pages are here
    and the cache_mgt AIX/VIOS command can be found here
  13. Oldest AIX versions supported 
    AIX 7.1 TL4 SP2,
    AIX 7.2 TL0 SP0
    But please have the most up to date AIX possible and the latest firmware, VIOS and HMC = Good Practice for best performance.
  14. SAN Offload
    If your SAN is already busy you get the double win:win - the cache gives a performance boost AND less pressure on the SAN which means the non-cache disk I/O goes faster.
  15. RAS: Cache Failure
    Some are concerned the SSD/Flash is not redundant i.e. AIX does not mirror the cache to two devices to handle the case of a device failure.
    This is true.  You should think of the cache as a turbo-charger and the server sized to survive the workload without the cache.
    You may be able to provide hardware based mirroring or RAID5 in the SAS RAID adapter (assuming that is in the configuration) - I have not checked this option. Feedback welcome.
    The Cache is read only, which means it can instantly be switched off on your command or on a problem.  The next disk read I/O goes back to the real disks, which are always the master copy.  The cache is never used to stage a write I/O - so there are no cache-flushing issues: on a write, the old cached copy is invalidated and the I/O goes to the disks.  This makes an instant cache stop 100% safe.
  16. RAS Cache Offline
    You should design your server so that it will provide a satisfactory service without the Cache. Think of a car with a turbocharger. When the Turbo is working I can cruise along at 150 MPH (ignoring any legal issues). If the Turbo stops then I am reduced to 70 MPH but I can still get home at the end of the day.  The Turbo is a "great to have" feature but I can survive without it.
  17. RAS Cache Redundancy
    I have system designers saying that the Cache must be available 100% of the time or they are not going to meet their Service Level Agreements, so they demand it is fully redundant and IBM must address this immediately.  All I can suggest is that they call their IBM representative and fill in a Request for Enhancement - we will need client demand to make this a high priority.
  18. RAS: Machine Evacuation - LPM is IMHO Mandatory
    You can use Live Partition Mobility only if you are using the virtual I/O via the VIOS to the SSD/Flash.
    If you have direct SSD/Flash access via a physical adapter LPM is not available, obviously. But you could in an emergency switch off the cache, remove the Cache Devices from AIX, remove the adapter from the LPAR and then LPM. But then you have no cache unless you have the same hardware on the LPM target machine.
  19. VIOS based Cache 
    Good news: this then allows LPM (by cache removal), but the bad news is it will be slower due to the virtual I/O layer between VM and VIOS (although that is in normal daily use on most servers). I have no indicative numbers so don't ask.
  20. RAS VIOS based Cache
    The cache will only be on one VIOS. There is no dual path to an SSD on one VIOS. If you are using the SSD Cache via a VIOS and you have to shut down that VIOS, the cache is switched off.
  21. As the cache is mostly read, it will not benefit much from SSDs attached by SAS adapters with write cache.
  22. On Scale-out POWER8 machines you can use the SSD slots in the System Unit or SSDs placed in an EXP24S Drawer.
  23. As the AIX SSD/Flash cache algorithm decides what to cache based on history, it takes time to select the blocks.  From my simple tests this took 5 or more minutes before the performance accelerated.  With larger, more complex disk I/O patterns that change over the time of day, for example RDBMS access, you should allow a suitable time. Especially for larger caches this could be hours, or even over 24 hours if you have different workloads throughout the day like morning analysis, afternoon order processing and various batch runs overnight.
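The cache-to-data arithmetic in item 11 is easy to script if you want to try your own sizes (the figures below are the ones from Jaqui's example):

```shell
# Cache-to-data ratio: four 700 GB SSDs against 20 TB of file system data
cache_gb=$((4 * 700))
data_gb=20000
awk -v c="$cache_gb" -v d="$data_gb" \
    'BEGIN { printf "Cache covers %.1f%% of the data\n", c / d * 100 }'
# prints: Cache covers 14.0% of the data
```

Swap in your own working-set size for data_gb if you know it; as item 11 notes, the working set is the number that really matters.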



At the Power Technical University in Rome, Nicolas Sapin (Oracle & AIX IT specialist, IBM) presented his results of AIX caching with an Oracle RDBMS: he achieved well over 3 times the transaction rate, with SQL statements taking a quarter of the time. He pointed out that at around 30%+ cache size the AIX cache was approaching the performance of the whole database on a Flash Disk Unit - which of course costs more. Note: this suggests that the hot data was roughly 30% of the database. Sessions and shared experience like this are typical of the Technical Universities - don't miss them.


My thanks to the various AIX Performance developers and designers for taking questions, delivering presentations and supporting the testing:
- You know who you are, and they are typically reluctant to be named publicly (or the phone would never stop ringing and they can't get on with the next new exciting feature for me to play with).



  • If you are giving AIX 7 SSD/Flash Cache a try, please let me know how it went.
  • Also the basic configuration, summary of the workload type and the results of your tests.

John Boyer (IBM)Pipe of the Day - The INTERNAL option on IMMCMD

When you're developing asynchronous pipelines with feedback loops (yes, some plumbers do that) it can be a challenge to figure out why a certain segment of the pipeline will not gracefully terminate when it is done. Using PIPMOD STOP may help terminate all waiting asynchronous stages, but it's often too disruptive when you have a server virtual machine that needs to run all the time.

A popular trick to sort out the state of your pipeline is to use the jeremy stage, which goes through all control blocks and reports for each stage and pipeline segment whether it has terminated or is waiting for input or output. These are the same diagnostics used by CMS Pipelines to debug a pipeline stall (when you set the configuration variable accordingly).

The jeremy stage is triggered by an input record, and a practical approach is to add a small pipeline segment as shown below. An immediate command can be used to get the display when you believe the pipeline is hanging.

\ immcmd jeremy | jeremy | cons

Unfortunately, after you have fixed your problem in the original pipeline, the immcmd stage itself makes the pipeline wait for another event. This can be very misleading and make it hard to notice that the problem has been resolved. Even though a display like the one below immediately shows that the only thing still running is in fact the section with the jeremy stage, it is likely to confuse you when you're working on a problem.

5785-RAC CMS Pipelines   level 1.1.12 sublevel 13 ("000D"x)  
   From z/VM 6.4.0                                           
Pipeline specification 1          commit 0                   
   Pipeline 1                                                
      literal GET /               returned.                  
      restful cloudant txt        returned.                  
      cons                        returned.                  
   Pipeline 2                                                
      immcmd j                    wait.out.                  
         record on output 0:                               ""
      jeremy                      ready.                     
      cons                        wait.locate.               

The trick is to use the INTERNAL option, which means that CMS Pipelines will terminate the pipeline when the immcmd stage is the only one left waiting. This is great for exactly this kind of debugging.

\ immcmd jeremy internal | jeremy | cons

This does not mean it is always easy to diagnose an asynchronous pipeline that does not gracefully terminate when it is done, but at least you have a better start at finding out what happened. In many cases the way to resolve such a situation is by making sure end-of-file is propagated through the pipeline correctly. In some cases it is necessary to add a gate to close things.

John Boyer (IBM)What you should know about landing pages

Normally, when landing pages come up in conversation, not many people understand what they are, which is why today we will talk about them.

Without further ado, it is time for you to learn what these pages are, what their functions are, why it is important to have them, and much more.

What is a landing page and what does it do?

A landing page is the page given to users of our site as a place of arrival after they click on one of our sponsored links.

We can therefore define the landing page as the main page our website uses to receive customers, but it can also be a page specially designed for a service or product, and it is not required to link to your official website.

Now that you understand what a landing page is, it is time to talk about its function. The main objective of these pages is to convert our users or visitors into customers of our business. That is why good landing pages are so valuable: with them you can give visitors special attention.

It should be noted that landing pages are tools that can be customized for the segment of our audience that we want to reach.

Why are landing pages important, and what types are there?

Put simply, these pages are important because it is through them that we keep the attention and interest of users. We first catch their attention with a link or an ad on the network; whether the landing page that receives them holds their interest determines whether or not they continue toward conversion.

So it is clear that these pages are very important for an online business: in seeking to turn a visitor into a customer, what we really want is to make a sale or to improve traffic and site visits.

Now it is time to look at the types of landing page, followed by some tips for making a good one:

  • Landing page on the main website.
  • Landing pages as microsites, made in order to drive a call to action.
  • Landing pages as unique, standalone pages, designed to collect user data or to encourage a download.
  • Landing pages on Facebook, ideal for attracting new customers or building on existing ones. Their uses are varied.

Finally, as for the guidelines to follow to have a good landing page, these are some of the most important:


  • Create simple, clean design and unique content pages.
  • Make note of your company's identity and logo.

It should draw attention with a good title, one that is attractive and very clear, and contain a subtitle that describes the offer well.

It should be very illustrative, so use good graphics.

And do not forget to highlight the call to action.

Did you like the article? What else would you like to know about landing pages? Leave me your comment.

For your success

As a note, if you want to migrate your website to a reliable hosting provider, check out our unlimited reseller hosting plans.



John Boyer (IBM)Basic DevOps Principles and its 5 Unsung Tools

What is DevOps

  • The term “DevOps” typically refers to the emerging professional movement that advocates a collaborative working relationship between Development and IT Operations, resulting in a fast flow of planned work (i.e., high deploy rates), while simultaneously increasing the reliability, stability, resilience and security of the production environment.
  • Our DevOps online training enlightens you on how DevOps differs from Agile. One tenet of the Agile development process is to deliver working software in smaller and more frequent increments, as opposed to the “big bang” approach of the waterfall method. This is most evident in the Agile goal of having potentially shippable features at the end of each sprint. DevOps extends and completes the continuous integration and release process by ensuring the code is production ready and provides value to the customer.
  • Many people view DevOps as a backlash against ITIL (IT Infrastructure Library) or ITSM (IT Service Management). DevOps training informs you that ITIL and ITSM are still the best codifications of the business processes that underpin IT Operations, and they actually describe many of the capabilities IT Operations needs in order to support a DevOps-style work stream.
  • The goal of DevOps is not just to increase the rate of change, but to successfully deploy features into production without causing chaos and disrupting other services, while quickly detecting and correcting incidents when they occur. This brings in the ITIL disciplines of service design, incident and problem management.

Basic DevOps Principles

DevOps training experts have outlined a few basic principles to guide you:

  • Referring to the DevOps Cookbook, there are Three Ways. The First Way emphasizes the performance of the entire system, as opposed to the performance of a specific silo of work or department. This can be as large as a division (e.g., Development or IT Operations) or as small as an individual contributor (e.g., a developer or a system administrator).
  • The Second Way is about creating right-to-left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so necessary corrections can be made continually.
  • The Third Way is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from success and failure; and the understanding that repetition and practice are the prerequisite to mastery.


5 Unsung Tools of DevOps

Most of us always try to optimize our work, so we are constantly looking for new and improved tools. Plus, playing with new tools is fun.

The tools we use play a critical role in how effective we are. In today’s ever-changing world of technology, we tend to focus on the latest and greatest solutions and overlook the simple tools that are available. Constant improvement of tools is an important aspect of the DevOps movement, but improvement doesn’t always warrant replacement.

Companies of all shapes and sizes are adopting DevOps principles today.

So here are five tools that we use almost every day. They either provide insight into or control over the environment around us while requiring minimal installation and configuration. They are not the flashiest tools, but they are time tested and just work.

  • RANCID: A suite of utilities that enables automated retention of your device configurations in revision control.
  • Cacti: A round robin database-based statistics graphing tool primarily targeted at network equipment using SNMP (Simple Network Management Protocol).
  • lldpd: An implementation of LLDP, one of the most underutilized, yet extremely useful, networking protocols, which shows you exactly which switch port a server is plugged into.
  • IPerf: A network testing tool designed to measure the throughput between two points and run as a client/server pair.
  • mussh (MUltihost SSH Wrapper): A shell script wrapper around SSH that allows you to execute the same command across multiple hosts, either in sequence or in parallel.

The thing these tools all have in common is that they give developers better access to their systems, so they can reach machines, gather logs and run any relevant processes.
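Of the five, iperf is the quickest to try. A minimal client/server throughput test looks like this (iperf2 option syntax; the host name is a placeholder):

```shell
# On the receiving host: run iperf as a server (listens on TCP port 5001 by default)
iperf -s

# On the sending host: push traffic at the server for 10 seconds and report throughput
iperf -c server.example.com -t 10
```

The client prints the measured bandwidth when the interval ends; run the test in both directions if you suspect an asymmetric path.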


John Boyer (IBM)Add an option for Title Case

Along with the option to change the input to UPPERCASE format, have an option for Title Case, in which we can capitalize the first letter of each word.


It would be great if anyone could share the JavaScript function to be added on the application onstart event for securejs = "true".

John Boyer (IBM)j2pg high CPU usage on AIX

Recently I've come across an odd issue at two different customers. I thought I'd share the experience, in case others also come across this strange behaviour.


In both cases they reported j2pg high CPU usage.


Similar to this...



And, in both cases, we discovered many /usr/sbin/update processes running. Unexpectedly.


When we stopped these processes, j2pg's CPU consumption dropped to nominal levels.


The j2pg process is responsible for, among other things, flushing data to disk and is called by the syncd process.


The /usr/sbin/update command is just a script that calls sync in a loop. Its purpose is to "periodically update the super block ... execute a sync subroutine every 30 seconds. This action ensures the file system is up-to-date in the event of a system crash".


# cat /usr/sbin/update

while true
do
        sync
        sleep 30
done &

exit 0


Because of the large number of /usr/sbin/update (sync) processes (in some cases over 150 of them), j2pg was constantly kept busy, assisting with flushing data to disk.


It appears that the application team (in both cases) was attempting to perform some sort of SQL "update" but due to an issue with their shell environment/PATH setting they were calling /usr/sbin/update instead of the intended update (or UPDATE) command. And yes, a non-root user can call /usr/sbin/update - no problem. So, in the "ps -ef" output we found processes that looked similar to this:


fred 50791260 1 0 Jan 09 - 0:04 /usr/bin/bsh /usr/sbin/update prd_ctl_q_and_p.af_data_obj set as_of_dt=2016-12-09 00:00:00.000000 where DATA_OBJ_NM like %LOANIQ%20161209%


# ls -ltr /usr/sbin/update

-r-xr-xr-x    1 bin      bin             943 Aug 17 2011  /usr/sbin/update

The application teams were directed to fix their scripts to prevent them calling /usr/sbin/update and instead call the correct command.
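A check along these lines makes the mix-up easy to spot: ask the shell which update it will resolve before the script runs (run it as the application user, since root may have a different PATH):

```shell
# Show the first "update" found on the current PATH
command -v update

# In ksh (the usual AIX shell), describe what "update" resolves to;
# bash users can run "type -a update" to list every match in search order
whence -v update
```

If /usr/sbin/update wins, either call the intended command by its full path or put its directory ahead of /usr/sbin in PATH.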


And here’s some information (more than you’ll probably ever need to know) about j2pg on AIX.


"j2pg - Kernel process integral to processing JFS2 I/O requests.

The kernel thread is responsible for managing I/O in JFS2 filesystems,
so it is normal to see it running when there is a lot of I/O or syncd activity.

And we could see that j2pg runs syncHashList() very often. The sync is
done in syncHashList(). In syncHashList(), all inodes are extracted from the hash
list, and whether each inode needs to be synchronized or not is then judged
by iSyncNeeded().

** note that a sync() call will cause the system to scan *all* the
memory currently used for file caching to see which pages are dirty
and have to be synced to disk

Therefore, the cause of this j2pg spike is determined by the
two calls that were being made (iSyncNeeded ---> syncHashList). What is
going on here is a flush/sync of the JFS2 metadata to disk. Apparently
some program went recursively through the filesystem accessing files,
forcing the inode access timestamps to change. These changes would have
to be propagated to the disk.

Here are a few reasons why j2pg would be active and consume high CPU:
1. If there are several processes issuing sync, the j2pg process will be
very active using CPU resources.
2. If there is file system corruption, j2pg will use more CPU.
3. If the storage cannot write data out fast enough, the j2pg process
will use a high amount of CPU resources.

j2pg will get started for any JFS2 directory activity.
Another event that can cause j2pg activity is syncd.
If the system experiences a lot of JFS2 directory activity, the j2pg process
will also be active handling the I/O.
Since syncd flushes I/O from real memory to disk, any JFS2 directories
with files in the buffer will also be hit."

"Checking the syncd...

From the data, we see:
$ grep -c sync psb.elfk
351 << this is high
$ grep sync psb.elfk | grep -c oracle
348 << syncd called by the Oracle user only

It appears that the number of sync calls, which cause j2pg
to run, is causing the spikes.

We see:
/usr/sbin/syncd 60

j2pg is responsible for flushing data to disk and is
usually called by the syncd process. If you have a
large number of sync processes running on the system,
that would explain the high CPU for j2pg.

The syncd setting determines the frequency with which
the I/O disk-write buffers are flushed.

The AIX default value for syncd, as set in /sbin/rc.boot,
is 60. It is recommended to change this value to 10.

This causes the syncd process to run more often
and not allow dirty file pages to accumulate,
so it runs more frequently but for shorter periods of
time. If you wish to make this permanent, edit
the /sbin/rc.boot file and change the 60 to 10.

You may consider mounting all of the
non-rootvg file systems with the 'noatime' option.
This can be done without any outage:

However, selecting non-peak production hours is better:

Use the commands below, for example:
# mount -o remount,noatime /oracle
Then use chfs to make it persistent:
# chfs -a options=noatime /oracle

- noatime -
Turns off access-time updates. Using this option can
improve performance on file systems where a large
number of files are read frequently and seldom updated.
If you use the option, the last access time for a file
cannot be determined. If neither atime nor noatime is
specified, atime is the default value."


John Boyer (IBM)[IBM i News No. 82] An In-Depth Look at the Latest IBM i 7.3


[IBM i News No. 82] An In-Depth Look at the Latest IBM i 7.3

Delivery date: January 18, 2017




1.  Top News


☆ Introducing articles on the latest IBM i 7.3

[iCafe article] An in-depth look at IBM i 7.3: a new release that has evolved further with open source support, DB2 enhancements and stronger security

[e-BELLNET.com article] What are IBM i 7.3 temporal tables?


2.  Product Information


☆ IBM i backup for the cloud era: introducing IBM Cloud Storage Solution for i

IBM Cloud Storage Solution for i is a new licensed program that provides APIs allowing IBM i customers to connect to public or private clouds. The product can be used for data backup and archiving. As of December 2016, Bluemix (formerly SoftLayer) is supported as the public cloud service. Within a private cloud environment, it can be run against an FTP server.


3.  Events


☆ The 4th OSS Study Session on February 10

The Open Source Consortium - IBM i has held study sessions for IBM i engineers every year. As in previous years, five sessions are planned for this year as well, and we look forward to your participation.


◎ Date and time: Friday, February 10, 15:30-17:30 (reception opens at 15:00)

◎ Venue: IBM Innovation Center, 6F, 19-21 Nihonbashi-Hakozakicho, Chuo-ku, Tokyo 103-8510

◎ Theme: Why not use them?! New features in IBM i 7.3: DB2 enhancements

◎ Contents: IBM i 7.3 makes record history management (temporal tables) possible. This session focuses on explaining the newly enhanced features of DB2 for i.

◎ Capacity: 30 people


☆ March 2-7: iCafe presents IBM i World 2017 / IBM i TECH Seminar Spring 2017

The "IBM i TECH Seminar", an event for engineers that delivers the latest IBM i and Power Systems information in an easy-to-understand way, will be held in Tokyo, Nagoya and Osaka. Under the title "Reassessing the value of IBM i as the core business system platform", sessions will introduce the integration of IBM i with Watson Analytics, which is currently drawing great attention, and clearly explain the latest development environments needed when maintaining and extending the platform. We will also introduce the latest solutions for protecting core business data, one of a company's most important assets.

◎ Date and time: Nagoya: Thursday, March 2, 14:00-17:00 (reception opens at 13:30)

      Osaka: Friday, March 3, 14:00-17:00 (reception opens at 13:30)

      Tokyo: Tuesday, March 7, 14:00-17:00 (reception opens at 13:30)


      Osaka: IBM Japan Osaka office

      Tokyo: IBM Japan Hakozaki office





☆ "IBM i Modernization & Security" seminar: modernization options and responses to new security risks

IBM i keeps evolving day by day by incorporating open technologies. This seminar introduces the best choices among the many modernization approaches, and the security measures needed to address increasingly diverse uses.

◎ Date and time: Friday, February 17, 13:30-17:30 (reception opens at 13:00)

◎ Venue: IBM Japan head office



4.  Training


☆ Introduction to Using IBM i 7.3: a future opened up by the new IBM i

This e-learning course is aimed at those currently using IBM i and those considering adopting it. After getting an overview of IBM i, you will learn the features and functions of version 7.3, such as the security collection, database enhancements and temporal tables.








John Boyer (IBM)Agent Deployment (ITM in Plain English)



This blog entry presents an overview of the complete agent deployment process.




ITM Agent Depot Location


Referred to as either an agent depot or a deployment depot, this is an installation directory on the monitoring server from which you deploy agents and maintenance packages across your environment. Depots are usually located and maintained independently on the hub monitoring server and on one or more remote monitoring servers. The default location of the depot is:


Windows: <install_dir>\CMS\Depot

Linux and UNIX: <install_dir>/tables/<tems_name>/depot


To change the default location, edit the KBBENV monitoring server configuration file located in the <install_dir>/CMS directory on Windows systems, and the <install_dir>/tables/tems_name directory on UNIX and Linux systems. Locate and edit the DEPOTHOME variable. If it does not exist, add it to the file.
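As a sketch, checking and overriding the depot location on a UNIX monitoring server might look like this (the install directory, TEMS name and shared path are placeholders, not defaults to copy):

```shell
# Placeholder paths: adjust the install directory and TEMS name for your site
KBBENV=/opt/IBM/ITM/tables/HUB_TEMS/KBBENV

# Is DEPOTHOME already set?
grep '^DEPOTHOME' "$KBBENV" || echo "DEPOTHOME not set; the default depot location applies"

# Add it if missing (edit the existing line instead if one is already there)
echo 'DEPOTHOME=/sharedfs/itm/depot' >> "$KBBENV"
```

Restart the monitoring server afterwards so it picks up the changed variable.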


When you remotely deploy to a server, the depot you use belongs to the monitoring server to which the operating system agent reports. However, a hub monitoring server and one or more remote monitoring servers can use a single depot. You just need to locate the depot in a central location that is accessible by all of the monitoring servers. See Locating and sizing the remote deployment depot for more information.


NOTE: You cannot create an agent depot on a z/OS monitoring server. There is no command to create a depot; the location of the depot is either the default location or one specified by the DEPOTHOME environment variable in the KBBENV file.


Populating the ITM Agent Depot


You can populate the ITM Depot during an installation (see Populating the agent depot from the installation image) or by using the tacmd addBundles command (see Populating the agent depot with the tacmd addBundles command). A bundle is the agent installation image and any prerequisites.


To view available bundles that you can load into a depot from the installation media or image, use tacmd listBundles. To list the types of bundles available in either the local or remote agent depot, use tacmd viewDepot.


NOTE: You do not install agents remotely directly from the installation media. You copy bundles of installation code from the installation media into the depot. You must run tacmd addBundles locally on a monitoring server containing a depot.
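Put together, populating a depot from mounted media might look like this sketch (the image path is a placeholder, and the "lz" Linux OS agent product code is an assumption; use the codes that tacmd listBundles reports for your media):

```shell
# List the bundles available on the installation image
tacmd listBundles -i /mnt/itm_install/unix

# Copy one bundle (here "lz", assumed to be the Linux OS agent) into the local depot
tacmd addBundles -i /mnt/itm_install/unix -t lz

# Confirm what the depot now contains
tacmd viewDepot
```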


Command reference: tacmd addBundles; tacmd listBundles; tacmd viewDepot.
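A hedged sketch of how these depot commands fit together on a monitoring server that hosts the depot; the media path and the lz product code for the Linux OS agent are illustrative, not values from this article:

```shell
tacmd login -s tems01 -u sysadmin              # authenticate first
tacmd listBundles -i /mnt/itm_media/unix       # bundles available on the installation media
tacmd addBundles -i /mnt/itm_media/unix -t lz  # copy the Linux OS agent bundle into the depot
tacmd viewDepot                                # confirm what the depot now contains
```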


Agent Deployment


The command to use to remotely deploy operating system agents is tacmd createNode. You must deploy operating system agents to a target destination before remotely deploying application agents to that server. Operating system agents monitor operating system performance, but also include the required infrastructure for remote deployment and maintenance.


This command uses the following protocols:


Server message block (SMB), used primarily for Windows servers

Secure shell (SSH), only SSH version 2

Remote execution (REXEC)

Remote shell (RSH)


By default, it tries all available protocols until one succeeds. You can also specify which protocol to use.


This command also runs a prerequisite check automatically. To verify that the prerequisite checker runs, issue the tacmd createNode command using the EXECPREREQCHECK=Y and COLLECTALL=Y options. When using these two options, the prerequisite checker writes its results to /opt/IBM/ITM/logs/checkprereq_results. A prerequisite check can also be run locally as a stand-alone check (a tool needs to be copied from the installation media for the appropriate platform to the target computer), or remotely by using the tacmd checkPrereq command.
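A sketch of a deployment that forces the prerequisite checker to run and collect its results, using the options named above; the host name and credentials are illustrative:

```shell
tacmd createNode -h linuxhost01 -u root -w secret \
      -o EXECPREREQCHECK=Y COLLECTALL=Y
# Results land in /opt/IBM/ITM/logs/checkprereq_results (per the text above).
```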


NOTE: Although you can install an operating system agent remotely, you cannot control it remotely via the command line or via the Tivoli Enterprise Portal client.


The command to use to remotely deploy application agents (non-OS agents) is tacmd addSystem. To use this deployment method, you must first deploy an operating system agent on the target server. You can also use the Tivoli Enterprise Portal client to remotely deploy application agents by right-clicking the server instance and selecting "Add Managed System". Unlike OS agents, you can control application agents remotely via the command line or via the Tivoli Enterprise Portal client.


Command reference: tacmd createNode; tacmd checkPrereq; tacmd addSystem.


Useful Deployment Commands


See the IBM Tivoli Monitoring Command Reference for the full syntax and description of all the commands.



Depot management

Deployment and configuration

Agent management
NOTE: Before taking any action (create, modify, delete, or view), an operating system user must first authenticate to the monitoring server by issuing the tacmd login command.


Example Deployment Process


This is a summary of steps used to install and manage the Tivoli Log Agent on a remote Linux computer using tacmd commands for deployment.


  1. login: Log in to the Tivoli Enterprise Monitoring Server.

  2. viewDepot: Show the agents that you can install from the depot.

  3. listBundles: Show the details of one or more bundles on the installation media.

  4. addBundles: Add one or more bundles to the local deployment depot.

  5. listSystems: Show a list of agents, managed systems, or both.

  6. createNode: Deploy (install) an OS agent to a target remote computer.

  7. listSystems: List the current systems, showing the new node that you created.

  8. viewNode: Show the characteristics of a node.

  9. addSystem: Deploy a monitoring agent and prerequisite software to a node. The computer must already have an OS agent installed.

  10. viewNode: Show the characteristics of the node to verify that the agent is installed.

  11. stopAgent: Stop the agent.

  12. startAgent: Start the agent. You can also use the restartAgent command.
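The steps above can be sketched as a single command sequence; the host name, credentials, and the lo product code used here for the Tivoli Log Agent are illustrative assumptions:

```shell
tacmd login -s tems01 -u sysadmin                  # 1. log in
tacmd viewDepot                                    # 2. agents installable from the depot
tacmd listBundles -i /mnt/itm_media/unix           # 3. bundles on the media
tacmd addBundles -i /mnt/itm_media/unix -t lo      # 4. add the Log Agent bundle
tacmd listSystems                                  # 5. current agents and managed systems
tacmd createNode -h linuxhost01 -u root -w secret  # 6. deploy the OS agent
tacmd listSystems                                  # 7. the new node appears
tacmd viewNode -n linuxhost01                      # 8. node characteristics
tacmd addSystem -n linuxhost01:LZ -t lo            # 9. deploy the Log Agent to the node
tacmd viewNode -n linuxhost01                      # 10. verify the agent is installed
tacmd stopAgent -n linuxhost01:LZ -t lo            # 11. stop the agent
tacmd startAgent -n linuxhost01:LZ -t lo           # 12. start (or use restartAgent)
tacmd getDeployStatus                              # check asynchronous deployment transactions
```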


Checking Deployment Status


Every deployment command (including checkPrereq) has a transaction ID. This is the long number displayed when a deployment command is successfully executed. You can use this number with the command tacmd getDeployStatus to check the progress of any deployment command. You can also view this information in the Tivoli Enterprise Portal client by right clicking Enterprise in the Navigator Physical view and selecting Workspace > Deployment Status Summary.


Command reference: tacmd getDeployStatus; tacmd clearDeployStatus.


Deployment Troubleshooting


You can add options to the tacmd commands to increase the logging in the debug logs.




Prevent the tacmd addSystem command from removing the temporary directory on the target workstation.




Setting RAS1 trace levels


On Windows systems:


On the computer where the Tivoli Enterprise Monitoring Server is installed, select Start > Programs > IBM Tivoli Monitoring > Manage Tivoli Enterprise Monitoring Services.

Right-click the Tivoli Enterprise Monitoring Server service.

Select Advanced > Edit Trace Parms to display the Trace Parameters window.

Type (UNIT:kdy all) in the Enter RAS1 Filters field.

Accept the defaults for the rest of the fields.

Click OK to set the new trace options.

Click Yes to recycle the service.


On Linux systems, set the following variable in $CANDLEHOME/config/lz.ini:




On UNIX systems other than Linux, set the following variable in $CANDLEHOME/config/ux.ini:
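The variable itself was stripped from the original post. In both ini files it is presumably the standard ITM RAS1 trace setting matching the (UNIT:kdy all) filter used on Windows above, something like:

```
KBB_RAS1=ERROR (UNIT:kdy ALL)
```

After editing the ini file, recycle the monitoring server for the setting to take effect.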




For dynamic tracing, please see Dynamically modify trace settings for an IBM Tivoli Monitoring component.



ProgrammableWeb: Daily API RoundUp: Dead Man's Snitch, NAB, EventUpon, Astronomer, Nestio, Sync Ninja

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web Services: Introducing the AWS IoT Button Enterprise Program

The AWS IoT Button first made its appearance on the IoT scene in October of 2015 at AWS re:Invent with the introduction of the AWS IoT service.  That year all re:Invent attendees received the AWS IoT Button, providing them the opportunity to get hands-on with AWS IoT.  Since that time, the AWS IoT Button has been made broadly available to anyone interested in the clickable IoT device.

During this past AWS re:Invent 2016 conference, the AWS IoT button was launched into the enterprise with the AWS IoT Button Enterprise Program.  This program is intended to help businesses to offer new services or improve existing products at the click of a physical button.  With the AWS IoT Button Enterprise Program, enterprises can use a programmable AWS IoT Button to increase customer engagement, expand applications and offer new innovations to customers by simplifying the user experience.  By harnessing the power of IoT, businesses can respond to customer demand for their products and services in real-time while providing a direct line of communication for customers, all via a simple device.



AWS IoT Button Enterprise Program

Let’s discuss how the new AWS IoT Button Enterprise Program works.  Businesses start by placing a bulk order of the AWS IoT buttons and providing a custom label for the branding of the buttons.  Amazon manufactures the buttons and pre-provisions the IoT button devices by giving each a certificate and unique private key to grant access to AWS IoT and ensure secure communication with the AWS cloud.  This allows for easier configuration and helps customers more easily get started with the programming of the IoT button device.

Businesses then design and build their IoT solution around the button devices and create companion applications for them.  The AWS IoT Button Enterprise Program provides businesses some complimentary assistance directly from AWS to ensure a successful deployment.  The deployed devices then only need to be configured with Wi-Fi at user locations in order to function.



For enterprises, there are several use cases that would benefit from the implementation of an IoT button solution. Here are some ideas:

  • Reordering services or custom products such as pizza or medical supplies
  • Requesting a callback from a customer service agent
  • Retail operations such as a call for assistance button in stores or restaurants
  • Inventory systems for capturing products amounts for inventory
  • Healthcare applications such as alert or notification systems for the disabled or elderly
  • Interface with Smart Home systems to turn devices on and off such as turning off outside lights or opening the garage door
  • Guest check-in/check-out systems


AWS IoT Button

At the heart of the AWS IoT Button Enterprise Program is the AWS IoT Button.  The AWS IoT Button is a 2.4 GHz Wi-Fi device with WPA2-PSK enabled that has three click types: Single click, Double click, and Long press.  Note that a Long press click type is sent if the button is pressed for 1.5 seconds or longer.  The IoT button has a small LED light with color patterns that indicate the status of the IoT button.  A blinking white light signifies that the IoT button is connecting to Wi-Fi and getting an IP address, while a blinking blue light signifies that the button is in wireless access point (AP) mode.  The data payload that is sent from the device when pressed contains the device serial number, the battery voltage, and the click type.

Currently, there are three ways to get started building your AWS IoT button solution.  The first option is to use the AWS IoT Button companion mobile app.  The mobile app will create the required AWS IoT resources, including the TLS 1.2 certificates, and create an AWS IoT rule tied to AWS Lambda.  Additionally, it will enable the IoT button device, via AWS IoT, to be an event source that invokes a new AWS Lambda function of your choosing from the Lambda blueprints.  You can download the aforementioned mobile apps for Android and iOS below.


The second option is to use the AWS Lambda Blueprint Wizard as an easy way to start using your AWS IoT Button. Like the mobile app, the wizard will create the required AWS IoT resources for you and add an event source to your button that invokes a new Lambda function.

The third option is to follow the step by step tutorial in the AWS IoT getting started guide and leverage the AWS IoT console to create these resources manually.

Once you have configured your IoT button successfully and created a simple one-click solution using one of the aforementioned getting started guides, you should be ready to start building your own custom IoT button solution.   Using a click of a button, your business will be able to build new services for customers, offer new features for existing services, and automate business processes to operate more efficiently.

The basic technical flow of an AWS IoT button solution is as follows:

  • A button is clicked and a secure connection is established with AWS IoT using TLS 1.2
  • The button data payload is sent to the AWS IoT Device Gateway
  • The rules engine evaluates received messages (JSON) published into AWS IoT and performs actions or triggers AWS services based on defined business rules
  • The triggered AWS service executes, or the action is performed
  • The device state can be read, stored, and set with Device Shadows
  • Mobile and web apps can receive and update data based upon the action

Now that you have general knowledge about the AWS IoT button, we should jump into a technical walk-through of building an AWS IoT button solution.


AWS IoT Button Solution Walkthrough

We will dive more deeply into building an AWS IoT Button solution with a quick example of a use case for providing one-click customer service options for a business.

To get started, I will go to the AWS IoT console, register my IoT button as a Thing, and create a Thing type.  In the console, I select the Registry and then Things options in the console menu.

The name of my IoT thing in this example will be TEW-AWSIoTButton.  If you want to categorize your IoT things, you can create a Thing type and assign a type to similar IoT ‘things’.  I will categorize my IoT thing, TEW-AWSIoTButton, as an IoTButton thing type with a One-click-device attribute key and select the Create thing button.

After my AWS IoT button device, TEW-AWSIoTButton, is registered in the Thing Registry, the next step is to acquire the required X.509 certificate and keys.  I will have AWS IoT generate the certificate for this device, but the service also allows you to use your own certificates.  Authenticating the connection with X.509 certificates helps protect the data exchange between your device and the AWS IoT service.

When the certificates are generated with AWS IoT, it is important that you download and save all of the files created since the public and private keys will not be available after you leave the download page. Additionally, do not forget to download the root CA for AWS IoT from the link provided on the page with your generated certificates.

The newly created certificate will be inactive; therefore, it is vital that you activate the certificate prior to use.  AWS IoT authenticates certificates using the TLS protocol’s client authentication mode.  The certificates enable asymmetric keys to be used with devices, and the AWS IoT service validates the certificate’s status and the AWS account against a registry of certificates.  The service will challenge for proof of ownership of the private key corresponding to the public key contained in the certificate.  The final step in securing the AWS IoT connection to my IoT button is to create and/or attach an IAM policy for authorization.

I will choose the Attach a policy button and then select the Create a Policy option in order to build a specific policy for my IoT button.  In the Name field of the new IoT policy, I will enter IoTButtonPolicy as the name of this new policy. Since the AWS IoT Button device only supports button presses, our AWS IoT button policy only needs publish permissions.  For this reason, this policy will only allow the iot:Publish action.


For the Resource ARN of the IoT policy, AWS IoT buttons typically follow the format pattern arn:aws:iot:TheRegion:AWSAccountNumber:topic/iotbutton/ButtonSerialNumber.  This means that the Resource ARN for this IoT button policy will be:

I should note that if you are creating a policy for an IoT device that is not an AWS IoT button, the Resource ARN format pattern would be as follows: arn:aws:iot:TheRegion:AWSAccountNumber:topic/YourTopic/OptionalSubTopic/

The created policy for our AWS IoT Button, IoTButtonPolicy, looks as follows:

The next step is to return to the AWS IoT console dashboard, select Security and then Certificates menu options.  I will choose the certificate created in the aforementioned steps.

Then on the selected certificate page, I will select the Actions dropdown on the far right top corner.  In order to add the IoTButtonPolicy IAM policy to the certificate, I will click the Attach policy option.


I will repeat all of the steps mentioned above but this time I will add the TEW-AWSIoTButton thing by selecting the Attach thing option.

All that is left is to add the certificate and private key to the physical AWS IoT button and connect the AWS IoT Button to Wi-Fi in order to have the IoT button be fully functional.

Important to note: For businesses that have signed up to participate in the AWS IoT Button Enterprise Program, all of the aforementioned steps (button logo branding, AWS IoT thing creation, certificate and key creation, and adding certificates to buttons) are completed for them by Amazon and AWS.  Again, this is to help make it easier for enterprises to hit the ground running in the development of their desired AWS IoT button solution.

Now, going back to the AWS IoT button used in our example, I will connect the button to Wi-Fi by holding the button until the LED blinks blue; this means that the device has gone into wireless access point (AP) mode.

In order to provide internet connectivity to the IoT button and start configuring the device’s connection to AWS IoT, I will connect to the button’s Wi-Fi network which should start with Button ConfigureMe. The first time the connection is made to the button’s Wi-Fi, a password will be required.  Enter the last 8 characters of the device serial number shown on the back of the physical AWS IoT button device.

The AWS IoT button is now configured and ready to build a system around it. The next step will be to add the actions that will be performed when the IoT button is pressed.  This brings us to the AWS IoT Rules engine, which is used to analyze the IoT device data payload coming from the MQTT topic stream and/or Device Shadow, and trigger AWS Services actions.  We will set up rules to perform varying actions when different types of button presses are detected.

Our AWS IoT button solution will be a simple one: we will set up two AWS IoT rules to respond to the IoT button being clicked and the button’s payload being sent to AWS IoT.  In our scenario, a single button click will represent a request being sent by a customer to a fictional organization’s customer service agent.  A double click, however, will represent that a text will be sent containing the customer’s fictional current account status.

The first AWS IoT rule created will receive the IoT button payload and connect directly to Amazon SNS to send an email only if the rule condition is fulfilled that the button click type is SINGLE. The second AWS IoT rule created will invoke a Lambda function that will send a text message containing customer account status only if the rule condition is fulfilled that the button click type is DOUBLE.

In order to create the AWS IoT rule that will send an email to subscribers of an SNS topic for requesting a customer service agent’s help, we will go to Amazon SNS and create a SNS topic.

I will create an email subscription to the topic with the fictional subscribed customer service email, which in this case is just my email address.  Of course, this could be several customer service representatives that are subscribed to the topic in order to receive emails for customer assistance requests.

Now returning to the AWS IoT console, I will select the Rules menu and choose the Create rule option. I first provide a name and description for the rule.

Next, I select the SQL version to be used for the AWS IoT rules engine.  I select the latest SQL version; if I did not set a version, the default version of 2015-10-08 would be used. The rules engine uses a SQL-like syntax with statements containing SELECT, FROM, and WHERE clauses.  I want to return a literal string for the message, which is not part of the IoT button data payload.  I also want to return the button serial number as the accountnum, which is likewise not part of the payload.  Since the latest version, 2016-03-23, supports literal objects, I will be able to send a custom payload to Amazon SNS.

Having created the rule, all that is left is to add a rule action to perform when the rule is triggered.  As I mentioned above, an email should be sent to customer service representatives when this rule is triggered by a single IoT button press.  Therefore, my rule action will be Send a message as an SNS push notification to the SNS topic that I created, which sends an email to our fictional customer service reps (aka me). Remember that an IAM role is required to provide access to SNS resources; if you are using the console, you have the option to create a new role or update an existing role to provide the correct permissions.  Also, since I am sending a custom message and pushing to SNS, I select the Message format type RAW.
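As a side note, the same kind of single-click rule can also be created from the command line.  This is a hedged sketch, not the console flow the walkthrough uses: it assumes the standard CreateTopicRule payload shape, and the rule name, topic and role ARNs, and the literal message are all placeholders.  The SQL mirrors the iotbutton/ButtonSerialNumber topic format and the SINGLE click type discussed above.

```shell
# Illustrative only: every name and ARN below is a placeholder.
cat > rule-payload.json <<'EOF'
{
  "sql": "SELECT 'Customer assistance requested' AS message, serialNumber AS accountnum FROM 'iotbutton/+' WHERE clickType = 'SINGLE'",
  "awsIotSqlVersion": "2016-03-23",
  "ruleDisabled": false,
  "actions": [
    {
      "sns": {
        "targetArn": "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic",
        "roleArn": "arn:aws:iam::xxxxxxxxxxxx:role/iot-sns-publish-role",
        "messageFormat": "RAW"
      }
    }
  ]
}
EOF
aws iot create-topic-rule --rule-name IoTButtonSingleClick \
    --topic-rule-payload file://rule-payload.json
```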

Our rule has been created, now all that is left is for us to test that an email is successfully sent when the AWS IoT button is pressed once, and therefore the data payload has a click type of SINGLE.

A single press of our AWS IoT Button and the custom message is published to the SNS Topic, and the email shown below was sent to the subscribed customer service agents email addresses; in this example, to my email address.


Next we will create the AWS IoT rule that sends a text via Lambda and an SNS topic, for the scenario in which a customer requests their account status by pressing the IoT Button twice.  We will start by creating an AWS IoT rule with an AWS Lambda action.  To create this IoT rule, we first need to create a Lambda function and an SNS topic with a text-based SNS subscription.

First, I will go to the Amazon SNS console and create an SNS topic. After the topic is created, I will create an SNS text subscription for the topic and add a number that will receive the text messages. I will then copy the SNS topic ARN for use in my Lambda function. Please note that I am creating this SNS topic in a different region than the previously created SNS topic, in order to use a region with support for sending SMS via SNS. In the Lambda function, I will need to ensure the correct region for the SNS topic is used by passing the region as a parameter to the constructor of the SNS object. The created SNS topic, aws-iot-button-topic-text, is shown below.


We now go to the AWS Lambda console and create a Lambda function with an AWS IoT trigger, an IoT Type of IoT Button, and a requested Device Serial Number equal to the serial number on the back of our AWS IoT Button. There is no need to generate the certificate and keys in this step because the AWS IoT button is already configured with certificates and keys for secure communication with AWS IoT.

The next step is to create the Lambda function, IoTNotifyByText, with the following code, which receives the IoT button data payload and creates a message to publish to Amazon SNS.

'use strict';

console.log('Loading function');
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Render the incoming event as a formatted JSON string
    var iotPayload = JSON.stringify(event, null, 2);
    // Create a text message from the IoT payload
    var snsMessage = "Attention: Customer Info for Account #: " + event.accountnum +
        " Account Status: In Good Standing " +
        "Balance is: 1234.56";
    // Log the payload and the SNS message string for CloudWatch Logs
    console.log("Received AWS IoT payload:", iotPayload);
    console.log("Message to send: " + snsMessage);
    // Populate the parameters for the SNS publish operation
    // - Message : message text
    // - TopicArn : the ARN of the Amazon SNS topic
    var params = {
        Message: snsMessage,
        TopicArn: "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic-text"
    };
    sns.publish(params, context.done);
};

All that is left is to alter the AWS IoT rule automatically created when we created the Lambda function with an AWS IoT trigger. We go to the AWS IoT console and select the Rules menu option. We then find and select the IoT button rule created by Lambda, which usually has a name suffixed with the IoT button device serial number.


Once the rule is selected, we will choose the Edit option beside the Rule query statement section.

We change the Select statement to return the serial number as the accountnum and click the Update button to save the changes to the AWS IoT rule.

Time to Test. I click the IoT button twice and wait for the green LED light to appear, confirming a successful connection was made and a message was published to AWS IoT. After a few seconds, a text message is received on my phone with the fictitious customer account information.


This was a simple example of how a business could leverage the AWS IoT Button in order to build business solutions for its customers.  With the new AWS IoT Button Enterprise Program, which helps businesses obtain the quantities of AWS IoT buttons needed and provides AWS IoT service pre-provisioning and deployment support, businesses can now easily get started in building their own customized IoT solutions.

Available Now

The original 1st generation of the AWS IoT button is currently available, and the 2nd generation AWS IoT button will be generally available in February.  The main difference between the IoT buttons is the amount of battery life and/or clicks available for the button.  Please note that right now if you purchase the original AWS IoT button, you will receive $20 in AWS credits when you register.

Businesses can sign up today for the AWS IoT Button Enterprise Program currently in Limited Preview. This program is designed to enable businesses to expand their existing applications or build new IoT capabilities with the cloud and a click of an IoT button device.  You can read more about the AWS IoT button and learn more about building solutions with a programmable IoT button on the AWS IoT Button product page.  You can also dive deeper into the AWS IoT service by visiting the AWS IoT developer guide, the AWS IoT Device SDK documentation, and/or the AWS Internet of Things Blog.



John Boyer (IBM): Spectrum Control URL List

Check out my latest publication:


Similar to the TPC URL List, this serves as a quick reference for URLs to access various components of the product without having to navigate through the product.  While this is most useful to support personnel, system and storage administrators who wish to advance their skills and shortcut certain tasks can get value from this list as well.



John Boyer (IBM): Analyzing How OVS Implements VLAN Isolation - Play with OpenStack in 5 Minutes a Day (140)

In the previous section we finished setting up the OVS VLAN environment. The current topology is as follows:

cirros-vm1 runs on the control node and belongs to vlan100.
cirros-vm2 runs on the compute node and belongs to vlan100.
cirros-vm3 runs on the compute node and belongs to vlan101.

Today we analyze in detail how OVS isolates vlan100 from vlan101.
Unlike the Linux Bridge driver, the Open vSwitch driver does not isolate VLANs through VLAN interfaces such as eth1.100 and eth1.101.
Instead, all instances connect to the same bridge, br-int,
and Open vSwitch uses flow rules to control how traffic entering and leaving br-int is forwarded, thereby isolating the VLANs from each other.

Specifically, as packets enter or leave br-int, flow rules can modify, add, or strip their VLAN tags.
Neutron is responsible for creating these flow rules and configuring them on the Open vSwitch bridges such as br-int and br-eth1.

Let's now examine the current flow rules.

The command to view flow rules is ovs-ofctl dump-flows <bridge>.
First, look at the flow rules of br-eth1 on the compute node:

br-eth1 has four rules configured. Each rule has quite a few attributes; the important ones are:

priority: the rule's priority. The larger the value, the higher the priority; Open vSwitch applies rules from highest to lowest priority.

in_port: the inbound port number. Each port has an internal number in Open vSwitch,
which you can view with the command ovs-ofctl show <bridge>.
For example, on br-eth1:

eth1 is port 1; phy-br-eth1 is port 2.

dl_vlan: the packet's original VLAN ID.


The first two rules on br-eth1 are the VLAN-related ones; let's analyze them in detail.

priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:100,NORMAL
priority=4,in_port=2,dl_vlan=5 actions=mod_vlan_vid:101,NORMAL

The first rule means: for packets received on br-eth1's port phy-br-eth1 (in_port=2), if the VLAN ID is 1 (dl_vlan=1),
change the VLAN ID to 100 (actions=mod_vlan_vid:100).

From the topology above we know that phy-br-eth1 connects to br-int, so phy-br-eth1's inbound packets are actually the data that instances send through br-int toward the physical NIC.

How should we understand changing VLAN ID 1 to VLAN ID 100? See the output of ovs-vsctl show on the compute node below:

br-int isolates ports with tags, which can be regarded as internal VLAN IDs.
Packets entering from qvo4139d09b-30 (corresponding to cirros-vm2, vlan100) are tagged with VLAN 1.
Packets entering from qvo98582dc9-db (corresponding to cirros-vm3, vlan101) are tagged with VLAN 5.

Because the VLAN IDs inside br-int are not the same as the VLAN IDs on the physical network, br-eth1 must translate the VLAN when it receives packets from br-int.
Neutron maintains the mapping between the VLAN IDs and configures the translation rules as flow rules.

Having understood the flow rules of br-eth1, let's analyze those of br-int.


priority=3,in_port=1,dl_vlan=100 actions=mod_vlan_vid:1,NORMAL
priority=3,in_port=1,dl_vlan=101 actions=mod_vlan_vid:5,NORMAL

Port 1 is int-br-eth1, so these two rules mean:
1. For packets received from the physical NIC, if the VLAN ID is 100, change it to internal VLAN 1.
2. For packets received from the physical NIC, if the VLAN ID is 101, change it to internal VLAN 5.

In short, packets are isolated by VLAN 100 and VLAN 101 on the physical network, and by internal VLAN 1 and VLAN 5 inside the compute node's OVS br-int.

The flow rules on the control node are very similar; they are left for you to analyze.
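To reproduce this analysis yourself, the commands used in this section are summarized below (bridge names follow the topology above):

```shell
ovs-ofctl show br-eth1        # map port names (eth1, phy-br-eth1) to OpenFlow port numbers
ovs-ofctl dump-flows br-eth1  # outbound translation: internal VLAN 1/5 -> physical VLAN 100/101
ovs-ofctl dump-flows br-int   # inbound translation: physical VLAN 100/101 -> internal VLAN 1/5
ovs-vsctl show                # per-port tags (the internal VLAN IDs) on br-int
```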

With this we have completed our study of Neutron OVS VLAN. In the next section we will begin discussing routing in an OVS environment.

John Boyer (IBM): Strange Crashes/Segmentation Faults in DB2

This is one of the more interesting issues we recently debugged in the lab, where we performed a deep-dive analysis of the core file, the related DB2 source code, and the diagnostic data. Here are some key points of this strange issue.

The initial symptom was that whenever a connection was made to the database, the DB2 instance crashed.
When we tried to understand the scope of the issue, we noticed that there were more problems on this machine (i.e. the scope was not limited to one specific DB2 instance/database):
- Any db2 instance/database creation fails with SQL1224N
- Any new install of DB2 fails with a segmentation fault
- Any attempt to connect to an existing database crashes, regardless of the instance in the box
- Any attempt to restore a database in the box crashes
- Any attempt to capture a DB2 trace is incomplete, and formatting the trace dump crashes

db2diag.log shows a 'Memory validation failure' error.
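A quick way to pull these records out of the diagnostic log is a plain grep. The default path below is an assumption about the instance layout; override DIAGLOG if your diagpath differs.

```shell
# Sketch: list 'Memory validation failure' records with surrounding context.
# DIAGLOG defaults to the usual per-instance diagnostic path (an assumption).
DIAGLOG="${DIAGLOG:-$HOME/sqllib/db2dump/db2diag.log}"
if [ -f "$DIAGLOG" ]; then
    grep -n -B 2 -A 6 'Memory validation failure' "$DIAGLOG"
else
    echo "no db2diag.log found at $DIAGLOG"
fi
```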

2016-12-22- E556974486E1023       LEVEL: Critical
PID     : 3195                 TID : 47134241974592  PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000            DB   : SAMPLE
APPHDL  : 0-9                  APPID: *LOCAL.DB2.161222142900
AUTHID  : DB2INST1             HOSTNAME: db2machine
EDUID   : 33                   EDUNAME: db2taskd (SAMPLE) 0
FUNCTION: DB2 UDB, SQO Memory Management, sqloDiagnoseFreeBlockFailure, probe:10
MESSAGE : ADM14001C  An unexpected and critical error has occurred: "Panic".
          The instance may have been shutdown as a result. "Automatic" FODC
          (First Occurrence Data Capture) has been invoked and diagnostic
          information has been recorded in directory
          9.289067_0000/". Please look in this directory for detailed evidence
          about what happened and contact IBM support if necessary to diagnose the problem

2016-12-22- E556975510E2580       LEVEL: Severe
PID     : 3195                 TID : 47134241974592  PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000            DB   : SAMPLE
APPHDL  : 0-9                  APPID: *LOCAL.DB2.161222142900
AUTHID  : DB2INST1             HOSTNAME: db2machine
EDUID   : 33                   EDUNAME: db2taskd (SAMPLE) 0
FUNCTION: DB2 UDB, SQO Memory Management, sqloDiagnoseFreeBlockFailure, probe:999
MESSAGE : Memory validation failure, diagnostic file dumped.
DATA #1 : String, 28 bytes
Corrupt pool free tree node.
DATA #2 : File name, 39 bytes
CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
  [0] 0x00002ADE33A674C4 _ZN13SQLO_MEM_POOL32diagnoseMemoryCorruptionAndCrashEmPKcb + 0x284
  [1] 0x00002ADE34A29D9B _ZN13SQLO_MEM_POOL10MemTreeGetEmmPP17SqloChunkSubgroupPj + 0x46B
  [2] 0x00002ADE34A2A9D3 _ZN13SQLO_MEM_POOL19allocateMemoryBlockEmmjmPP17SqloChunkSubgroupPjP12SMemLogEvent + 0x53
  [3] 0x00002ADE34A280A1 sqlogmblkEx + 0xA21
  [5] 0x00002ADE31D8D5D8 _Z25sqliLoadIDXCBFromRootPageP8sqeAgentP16SQLB_OBJECT_DESCP8SQLD_TCBtPP9SQLD_IXCBP9SQLB_PAGEj + 0x188
  [6] 0x00002ADE31D8CE39 _Z8sqliindxP8sqeAgentP16SQLB_OBJECT_DESCP8SQLD_TCBjjPP9SQLD_IXCBt + 0x1B9
  [7] 0x00002ADE2DDF317D /SAMPLE/home/db2inst1/sqllib/lib64/ + 0x135E17D
  [8] 0x00002ADE2DDEA7AB _Z11sqldLoadTCBP8sqeAgentP8SQLD_TCBi + 0xB8B
  [9] 0x00002ADE3499197C _Z10sqldFixTCBP8sqeAgentiiiiPP8SQLD_TCBjj + 0x52C
  [10] 0x00002ADE3496B70A _Z19sqldLockTableFixTCBP8sqeAgenttthmiiimmiPciS1_iP14SQLP_LOCK_INFOPP8SQLD_TCBjj + 0x16A
  [11] 0x00002ADE34986C92 _Z12sqldScanOpenP8sqeAgentP14SQLD_SCANINFO1P14SQLD_SCANINFO2PPv + 0x9D2
  [12] 0x00002ADE31B103E7 _ZN16sqlrlCatalogScan4openEv + 0x467
  [13] 0x00002ADE2DA394CE _ZN9ABPDaemon29distributeToSingleDBPartitionEsbRm + 0x3FE
  [14] 0x00002ADE2DA38AD4 _ZN9ABPDaemon27distributeToAllDBPartitionsEv + 0x424
  [15] 0x00002ADE2DA37F60 _ZN9ABPDaemon4mainEv + 0x270
  [16] 0x00002ADE2DA4153D _Z19abpDaemonEntryPointP8sqeAgent + 0x6D
  [17] 0x00002ADE3085D10C _Z26sqleIndCoordProcessRequestP8sqeAgent + 0x127C
  [18] 0x00002ADE3086B896 _ZN8sqeAgent6RunEDUEv + 0x2B6
  [19] 0x00002ADE31E0DCA4 _ZN9sqzEDUObj9EDUDriverEv + 0xF4
  [20] 0x00002ADE31661617 sqloEDUEntry + 0x2F7
  [21] 0x0000003DB3C0683D /lib64/ + 0x683D
  [22] 0x0000003DB30D4FCD clone + 0x6D

Two trap files in FODC_Panic_2016-12-22- show the following stack trace.

-----FUNC-ADDR---- ------FUNCTION + OFFSET------
0x00002ADE393C6625 _Z25ossDumpStackTraceInternalmR11OSSTrapFileiP7siginfoPvmm + 0x0385
0x00002ADE393C622C ossDumpStackTraceV98 + 0x002c
0x00002ADE393C132D _ZN11OSSTrapFile6dumpExEmiP7siginfoPvm + 0x00fd
0x00002ADE33A195CF sqlo_trce + 0x03ef
0x00002ADE33A6EBFF sqloEDUCodeTrapHandler + 0x025f
0x0000003DB3C0ECA0 address: 0x0000003DB3C0ECA0 ; dladdress: 0x0000003DB3C00000 ; offset in lib: 0x000000000000ECA0 ;
0x00002ADE33A623D0 sqloCrashOnCriticalMemoryValidationFailure + 0x0020
0x00002ADE33A674CD _ZN13SQLO_MEM_POOL32diagnoseMemoryCorruptionAndCrashEmPKcb + 0x028d
0x00002ADE34A29D9B _ZN13SQLO_MEM_POOL10MemTreeGetEmmPP17SqloChunkSubgroupPj + 0x046b
0x00002ADE34A2A9D3 _ZN13SQLO_MEM_POOL19allocateMemoryBlockEmmjmPP17SqloChunkSubgroupPjP12SMemLogEvent + 0x0053
0x00002ADE34A280A1 sqlogmblkEx + 0x0a21
0x00002ADE31D8D5D8 _Z25sqliLoadIDXCBFromRootPageP8sqeAgentP16SQLB_OBJECT_DESCP8SQLD_TCBtPP9SQLD_IXCBP9SQLB_PAGEj + 0x0188
0x00002ADE31D8CE39 _Z8sqliindxP8sqeAgentP16SQLB_OBJECT_DESCP8SQLD_TCBjjPP9SQLD_IXCBt + 0x01b9
0x00002ADE2DDF317D address: 0x00002ADE2DDF317D ; dladdress: 0x00002ADE2CA95000 ; offset in lib: 0x000000000135E17D ;
0x00002ADE2DDEA7AB _Z11sqldLoadTCBP8sqeAgentP8SQLD_TCBi + 0x0b8b
0x00002ADE3499197C _Z10sqldFixTCBP8sqeAgentiiiiPP8SQLD_TCBjj + 0x052c
0x00002ADE3496B70A _Z19sqldLockTableFixTCBP8sqeAgenttthmiiimmiPciS1_iP14SQLP_LOCK_INFOPP8SQLD_TCBjj + 0x016a
0x00002ADE34986C92 _Z12sqldScanOpenP8sqeAgentP14SQLD_SCANINFO1P14SQLD_SCANINFO2PPv + 0x09d2
0x00002ADE31B103E7 _ZN16sqlrlCatalogScan4openEv + 0x0467
0x00002ADE2DA394CE _ZN9ABPDaemon29distributeToSingleDBPartitionEsbRm + 0x03fe
0x00002ADE2DA38AD4 _ZN9ABPDaemon27distributeToAllDBPartitionsEv + 0x0424
0x00002ADE2DA37F60 _ZN9ABPDaemon4mainEv + 0x0270
0x00002ADE2DA4153D _Z19abpDaemonEntryPointP8sqeAgent + 0x006d
0x00002ADE3085D10C _Z26sqleIndCoordProcessRequestP8sqeAgent + 0x127c
0x00002ADE3086B896 _ZN8sqeAgent6RunEDUEv + 0x02b6
0x00002ADE31E0DCA4 _ZN9sqzEDUObj9EDUDriverEv + 0x00f4
0x00002ADE31661617 sqloEDUEntry + 0x02f7
0x0000003DB3C0683D address: 0x0000003DB3C0683D ; dladdress: 0x0000003DB3C00000 ; offset in lib: 0x000000000000683D ;
0x0000003DB30D4FCD clone + 0x006d

-----FUNC-ADDR---- ------FUNCTION + OFFSET------
0x00002ADE393C6625 _Z25ossDumpStackTraceInternalmR11OSSTrapFileiP7siginfoPvmm + 0x0385
0x00002ADE393C622C ossDumpStackTraceV98 + 0x002c
0x00002ADE393C132D _ZN11OSSTrapFile6dumpExEmiP7siginfoPvm + 0x00fd
0x00002ADE33A195CF sqlo_trce + 0x03ef
0x00002ADE33A6EBFF sqloEDUCodeTrapHandler + 0x025f
0x0000003DB3C0ECA0 address: 0x0000003DB3C0ECA0 ; dladdress: 0x0000003DB3C00000 ; offset in lib: 0x000000000000ECA0 ;
0x00000000004258FE __intel_ssse3_rep_memcpy + 0x19ee
0x00002ADE2DDF0E41 address: 0x00002ADE2DDF0E41 ; dladdress: 0x00002ADE2CA95000 ; offset in lib: 0x000000000135BE41 ;
0x00002ADE2DDEA339 _Z11sqldLoadTCBP8sqeAgentP8SQLD_TCBi + 0x0719
0x00002ADE3499197C _Z10sqldFixTCBP8sqeAgentiiiiPP8SQLD_TCBjj + 0x052c
0x00002ADE3496B70A _Z19sqldLockTableFixTCBP8sqeAgenttthmiiimmiPciS1_iP14SQLP_LOCK_INFOPP8SQLD_TCBjj + 0x016a
0x00002ADE34986C92 _Z12sqldScanOpenP8sqeAgentP14SQLD_SCANINFO1P14SQLD_SCANINFO2PPv + 0x09d2
0x00002ADE31B103E7 _ZN16sqlrlCatalogScan4openEv + 0x0467
0x00002ADE30C68A2C _ZN13sqm_evmon_mgr18getAutostartEvmonsEP14SQLP_LOCK_INFOPj + 0x022c
0x00002ADE30C67F5F _ZN13sqm_evmon_mgr15autostartEvmonsEv + 0x027f
0x00002ADE30C433B9 _Z11sqlm_a_initP8sqeAgent + 0x0469
0x00002ADE30890E95 _ZN14sqeApplication20InitEngineComponentsEcP8sqeAgentP8SQLE_BWAP5sqlcaP22SQLESRSU_STATUS_VECTORc + 0x0975
0x00002ADE3088EBAD _ZN14sqeApplication13AppStartUsingEP8SQLE_BWAP8sqeAgentccP5sqlcaPc + 0x075d
0x00002ADE30885A69 _ZN14sqeApplication13AppLocalStartEP14db2UCinterface + 0x0579
0x00002ADE30A8D970 _Z11sqlelostWrpP14db2UCinterface + 0x0040
0x00002ADE30A8C845 _Z14sqleUCengnInitP14db2UCinterfacet + 0x06f5
0x00002ADE30A8B1F1 sqleUCagentConnect + 0x04b1
0x00002ADE30B97AF6 _Z18sqljsConnectAttachP13sqljsDrdaAsCbP14db2UCinterface + 0x00b6
0x00002ADE30B5B289 _Z16sqljs_ddm_accsecP14db2UCinterfaceP13sqljDDMObject + 0x03b9
0x00002ADE30B50648 _Z17sqljsParseConnectP13sqljsDrdaAsCbP13sqljDDMObjectP14db2UCinterface + 0x0058
0x00002ADE349A0D77 _Z10sqljsParseP13sqljsDrdaAsCbP14db2UCinterfaceP8sqeAgentb + 0x0377
0x00002ADE30B4A8E4 address: 0x00002ADE30B4A8E4 ; dladdress: 0x00002ADE2CA95000 ; offset in lib: 0x00000000040B58E4 ;
0x00002ADE30B48EC9 address: 0x00002ADE30B48EC9 ; dladdress: 0x00002ADE2CA95000 ; offset in lib: 0x00000000040B3EC9 ;
0x00002ADE30B45F69 address: 0x00002ADE30B45F69 ; dladdress: 0x00002ADE2CA95000 ; offset in lib: 0x00000000040B0F69 ;
0x00002ADE30B45B5B _Z17sqljsDrdaAsDriverP18SQLCC_INITSTRUCT_T + 0x00eb
0x00002ADE3086BE91 _ZN8sqeAgent6RunEDUEv + 0x08b1
0x00002ADE31E0DCA4 _ZN9sqzEDUObj9EDUDriverEv + 0x00f4
0x00002ADE31661617 sqloEDUEntry + 0x02f7
0x0000003DB3C0683D address: 0x0000003DB3C0683D ; dladdress: 0x0000003DB3C00000 ; offset in lib: 0x000000000000683D ;
0x0000003DB30D4FCD clone + 0x006d


Investigation of the core file and memory diagnostic data shows that the problem is in __intel_ssse3_rep_memcpy().
However, based on the core file, we can see that the copy requests from DB2 are always correct (i.e. we have a valid source, destination, and length).
Further research shows that the memcpy problem happens when the processor's cache size is incorrectly registered as 0 KB.

If you encounter such a situation, check the cache size in /proc/cpuinfo, for example:

$ egrep "cache size" /proc/cpuinfo
cache size      : 0 KB
cache size      : 0 KB
cache size      : 0 KB
cache size      : 0 KB

If any of them is 0 KB, this is wrong, and it makes the Intel memcpy behave incorrectly because it affects the routine's internal block size; a cache size of 0 can result in the copy overrunning the expected end of the destination. To the best of our knowledge, this means that the Intel processor hardware incorrectly reported the processor cache size to the Linux operating system.
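This check is easy to script. The sketch below is illustrative only (the function name is ours, not part of any IBM or DB2 tooling): it parses /proc/cpuinfo-style text and flags any processor that reports a 0 KB cache.

```python
import re

def zero_cache_cpus(cpuinfo_text):
    """Return the list of cache-size values reported as 0 KB."""
    sizes = re.findall(r"^cache size\s*:\s*(\d+)\s*KB", cpuinfo_text, re.MULTILINE)
    return [int(s) for s in sizes if int(s) == 0]

# Sample text in the same format as /proc/cpuinfo; on a real system you
# would read open("/proc/cpuinfo").read() instead.
sample = """cache size      : 0 KB
cache size      : 20480 KB
cache size      : 0 KB
"""

bad = zero_cache_cpus(sample)
if bad:
    print("WARNING: %d processor(s) report a 0 KB cache size" % len(bad))
```

On a healthy system the returned list is empty; any entry at all means the reported cache size needs hardware or OS investigation.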

Recommended actions are:
- Shut down the Linux OS, power the hardware off and back on, boot Linux, and check whether the cache size in /proc/cpuinfo now shows a correct (positive) value. If it is still 0, engage hardware and/or Linux support to fix it.

- If Linux is running in a virtual machine (for example, under VMware or KVM), restart everything, including a power off/on of the hardware, boot the hypervisor, the VM, and Linux, and check whether the cache size in /proc/cpuinfo now shows a correct (positive) value. If it is still 0, engage hardware and/or Linux support to fix it.

The last piece of the puzzle: why does nothing else on the system indicate a problem, only DB2?

>>> For performance reasons, DB2 uses the Intel fast memcpy library embedded in the db2sysc binary instead of the built-in memcpy in the Linux OS.
When the Intel fast memcpy behaves incorrectly, i.e. copies beyond the expected end of the destination, memory corruption occurs elsewhere in DB2 across a variety of operations. Non-DB2 processes and products rarely use the Intel fast memcpy and typically use the built-in Linux memcpy, which is why you see the problem only with DB2.


Shashank Kharche



John Boyer (IBM)Interim Fix for Maximo Asset Management Build 014 now available

The Interim Fix for Maximo Asset Management Build 014 is now available.
IF014 ( is cumulative of all prior Interim Fixes for Maximo Asset Management
Here is the location to download this interim fix: 

John Boyer (IBM)Interim Fix for Maximo Asset Management Build 015 now available

The Interim Fix for Maximo Asset Management Build 015 is now available.
IF015 ( is cumulative of all prior Interim Fixes for Maximo Asset Management
Here is the location to download this interim fix:

John Boyer (IBM)Interim Fix for Maximo Integration Framework (MIF) Build 002 now available

The Interim Fix for Maximo Integration Framework (MIF) Build 002 is now available.
IF002 ( is cumulative of all prior Interim Fixes for Maximo Integration Framework (MIF)
Here is the location to download this interim fix:

John Boyer (IBM)What to expect at IBM Connect - Webcast Replay

In case you missed it, here's the replay of What to Expect at IBM Connect, where we give a little more insight into the things you can expect to see and do.  There's an amazing curriculum with something for everyone.

View online:


Some highlights

First off, the new venue in San Francisco:

Moscone West is as central as it gets in San Francisco. This three-level exhibition hall is walking distance to hotels, restaurants, museums, attractions, public transit and more.

Take a virtual tour of the space here: http://portal.sftravel.com/moscone-virtual-tour/


Solution Expo:

The place for conversations with all of the IBM Connect sponsors. Stop by their booths and learn about their products and solutions, many of which complement IBM's Collaboration Solutions portfolio and can help you get the most out of your IBM products.  There will also be some fun "activations" where you can interact with IBM Watson, including an escape room! Think you're smart enough to get out? Form a team and give it a try!

There will also be a dedicated networking area for IBM Business Partners, a dedicated demo area for Notes/Domino, Verse, Connections, Box, and Cisco, and an engagement theater where you can listen to some amazing speakers (in addition to the break out sessions).



There are 3 regular tracks, plus special sessions:

Emerging Technologies: You want to learn about cool new tech? We've got you covered.

Strategy and Business: Where are we? Where are we going? What are the industry trends? We've got answers.

Development, Design, and Tools: All about the products. Notes, Domino, Verse, Connections, Bluemix, Webex, Watson Workspace, Spark, Box, and more!

Special Sessions: Your favourite sessions from past events are back! Nerd Girls, Ask the Tech Team, Gurupalooza, and more!


Here's how your week at a glance looks!

Who's speaking at Connect?

You'll hear real life stories of how companies are using IBM's products to accelerate innovation!

Here's just a sampling of the companies you'll hear from:







Canal Barge

Cassa di Risparmio di Cento

Commonwealth Bank



Constance Hotels



Fiducia & GAD IT AG

Grupo Familia

Hendricks Regional Health

Janalakshmi Financial

Mears Group

Memorial Hermann






SES / Silverside

Sierra Nevada


Superior Group





thyssenkrupp AG


Unimed BH


USC Marshall

School of Business

Waters Corp

Build your experience:

Use Watson to help you build your personal curriculum:

Session Expert is now live!


Spend time with IBM Executives and get answers to your questions!

Ask your IBM rep to help you schedule a meeting


Executive Meeting Center is located at the Moscone West – 3rd Floor

Tuesday, Feb 21          8:00 am – 5:00 pm

Wednesday, Feb 22     8:00 am – 5:00 pm 

Thursday, Feb 23         8:00 am – 12:00 pm 


Keep up with what’s happening at the conference

Follow @IBMConnect

Official IBM Connect Hashtag:  #IBMConnect


Get nerdy on Monday and join the Hackathon:




John Boyer (IBM)Pascal Compilers, Voltage Thresholds and Vending Machines

Last week, fellow IBM blogger Barry Whyte pointed out that my recent post on [Cognitive University for Watson Systems SmartSeller] was my 1,000th blog post. After 10 years of blogging, I have reached the 1,000 mark!

(As IBM is focused on its transformation from a "Systems, Software and Services" company to a "Cognitive Solutions and Cloud Platform" company, it seems appropriate to highlight my 1,000th blog post on the concept of cognitive solutions.)

A lot of people ask me to explain what exactly does IBM mean by "cognitive", which is a fair question. Let's start with the [Dictionary definition]:

  1. of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.
  2. of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.

What exactly does IBM mean by Cognitive? IBM has taken this definition, and focused on four key strategic areas:


In the summer of 1981, I debugged a "Pascal" compiler at the University of Texas at Austin. I wasn't told that was what I was doing. Rather, I was tasked with writing sample Pascal programs that would demonstrate the features and capabilities of the language.

Every day, I would come up with a concept of a program, punch up the cards, run it through the CDC hopper, and verify that it would work properly. If I didn't have it working by lunch, I would take it to the "help desk", they would look it over, and tell me how to fix it after I got back.

Most of the time, it was a mistake in my software. A few times, however, it was a flaw in the compiler itself. My programs were basically test cases, and the Pascal Compiler development team was fixing or enhancing the compiler code every time I had a problem.

Compilers basically work by parsing the program text, looking for fixed keywords that are entered in a specifically prescribed order to make sense. Other keywords may represent data types, variables, constants or pre-defined macros.

But compilers are not cognitive. Cognitive solutions can understand natural language, and have to handle all the ambiguity of words not being in the correct order, or different words having different meanings.


As an Electrical Engineer, I had to take many classes on classical analog signal processing. In fact, all computers have some amount of analog components, where threshold processing is used to differentiate a zero (0) from a one (1).

For example, if a "zero" value was represented by 1 volt, and a "one" value by 5 volts, then you can set a threshold at 3 volts. Any voltage less than 3 would be considered a "zero" value, and anything 3 volts or greater a "one" value.
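As a toy illustration of that fixed-threshold logic (my own sketch, not production signal-processing code), the comparison is a one-liner:

```python
def digitize(voltage, threshold=3.0):
    """Fixed-threshold logic: below 3 V reads as a 'zero', at or above as a 'one'."""
    return 1 if voltage >= threshold else 0

# A 1 V signal and a 5 V signal land on opposite sides of the 3 V threshold.
readings = [1.0, 2.9, 3.0, 5.0]
bits = [digitize(v) for v in readings]
print(bits)  # -> [0, 0, 1, 1]
```

The threshold here is a hard-coded constant, which is exactly the point of the contrast that follows.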

But threshold processing is not cognitive. Cognitive solutions also use thresholds, but their thresholds are dynamically determined, through advanced analytics and statistical mathematical models, and may adjust up and down as needed, based on machine learning over time.


IBM Research is proud to have developed the world's most advanced caching algorithms for its storage systems. Cache memory is very fast, but also very expensive, so offered in limited quantities. Caching algorithms decide which blocks of data should remain in cache, and which should be kicked out.

Ideally, a block in read cache would be kicked out precisely after the last time it was read, with little or no expectation for being read again anytime soon. Likewise, a block in write cache would be destaged to persistent storage precisely after the last time it was updated, with little or no expectation for being updated again anytime soon.

The traditional approach is "Least Recently Used" or [LRU]. Cache entries that were read or updated recently are placed at the top of the list, and the least recently referenced sit at the bottom. When space is needed in cache, the entries at the bottom of the list are kicked out.
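A minimal LRU policy can be sketched with an ordered dictionary. This is an illustration of the general technique only, not IBM's storage-system implementation:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # oldest entry first, most recent last

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # a hit moves the entry to the "top"
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # kick out the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" becomes most recently used
cache.put("c", 3)  # cache is full, so the least recently used entry ("b") is evicted
```

Note that the eviction decision depends only on the current access order; nothing here learns from the workload over time.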

IBM's [Adaptive Cache Algorithm outperforms LRU]. For example, on a workstation disk drive workload, at 16MB cache, LRU delivers a hit ratio of 4.24 percent while ARC achieves a hit ratio of 23.82 percent, and, for a SPC1 benchmark, at 4GB cache, LRU delivers a hit ratio of 9.19 percent while ARC achieves a hit ratio of 20 percent.

But caching algorithms, including IBM's Adaptive Cache, are not cognitive. These algorithms respond programmatically based on the current state of the cache. Cognitive solutions learn, and improve with usage. This is often referred to as "Machine Learning".


The human-computer interface (HCI) has much room for improvement in a variety of areas.

Take for example a snack vending machine. In college, we had assignments to simulate the computing logic of these. We had to interact with the buyer, receive coins entered into the slot--nickels, dimes and quarters representing 5, 10 and 25 cents--determine a total monetary balance, and then dispense snacks of various prices and return an appropriate amount of change, if any. There is even a [greedy algorithm] designed to optimize how the change is returned.
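The change-returning step can be sketched with the classic greedy algorithm. A caveat worth stating: greedy change-making is only guaranteed optimal for canonical coin systems such as 25/10/5 cents, not for arbitrary denominations, and this sketch is mine, not from any particular textbook assignment:

```python
def make_change(amount_cents, denominations=(25, 10, 5)):
    """Greedy change-making: always dispense the largest coin that still fits."""
    change = []
    for coin in denominations:
        count, amount_cents = divmod(amount_cents, coin)
        change.extend([coin] * count)
    if amount_cents:
        raise ValueError("cannot make exact change with these denominations")
    return change

print(make_change(65))  # -> [25, 25, 10, 5]
```

However the denominations are chosen, the logic is fixed in advance; every buyer gets the same behavior.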

But vending machines are not cognitive. Like the caching algorithms, vending machines interact based on fixed programmatic logic, treating all buyers in the same manner. Cognitive solutions can interact with different users in different ways, customized to their needs, and these interactions can improve over time, based on machine learning.

IBM is exploring the use of Cognitive Solutions in a variety of different industries, from Healthcare to Retail, Financial Services to Manufacturing, and more.

technorati tags: , , , , , , , , , , ,

John Boyer (IBM)Is your world FLAT? If not, then CONCAT!

Despite the quirky title, we will NOT talk about the CONCAT function in extended rule logic. CONCAT works best when combining 2 pieces of data. In this blog entry we are going to discuss two (2) ways we can take a repeating structure of like segments and turn it into a single occurring field. We will FLATTEN the group.

There are of course other ways of doing this, but hopefully this will help give you some ideas.


I want to combine the Child NTE records into one field of its Parent segment.
BUT ONLY if the NTE has a qualifier of "LIN".
I then want to store the DET sequence number into a variable which will be
shown in the output as "SEE LINES 001, 005".

There are 4 NTE segments spread across 3 DET segments, but only 3 of these NTE segments have a qualifier of "LIN", so the NTE*XXX will not be in the output.
Since DET*001 has 2 NTE*LIN segments, I want them combined into one field. To do this I will store the data into a temporary field and add a " " space between each piece of data for readability.



Part 1, Combine the Sequence numbers

Since we also want to see which DET sequence numbers have the "NTE*LIN", we store the sequence number into a variable (current_PO1) the first time we see this condition. On Begin of the DETAIL_GRP we set the variable = ""
so it is emptied before each new DET segment is read by the translator.
On the element #LINE_NUMBER we store the current data into this variable. Later, On End of the NOTES segment, we store this variable into another variable, which we will use
to populate the output element with all the other #LINE_NUMBER values, but only when we have the "NTE*LIN".

IF #QUALIFIER = "LIN" THEN                                                             

   . . . //other rules before this section
   xNTECHK = xNTECHK + "," + current_PO1; //running var of all positive #LINE_NUMBER


Since this would come out as "SEE LINES ,001,005", we do not want the leading ",".
So we add a little check where, On End of the input, we use all the data except the first ",".
The first thing we run On End of the NOTES segment is
xLINLoopCnt = xLINLoopCnt + 1; //keep track of the length of this variable
                               //since we may have trouble doing a len of the variable to get the true length, we use:
If xLINLoopCnt = 1 then
     note_var_cnt = note_var_cnt + 4;

// we use 4 because each time we write into this variable it will be ",nnn" or
// 4 characters. After everything is entered into this variable

On End of the INPUT side we use
xNTECHK = right(xNTECHK,note_var_cnt-1);

// which will take the length -1 of the variable starting at the right, or end, of
// the variable and store the result back into it.

// Also the If/Then where we keep track of the length we are adding to the variable can be accomplished if we store into a temp field. Then we could use this rule to take everything except the first character
// xNTECHK = right(#TEMP_FIELD, (len(#TEMP_FIELD)-1));
// We can use the right and len functions.
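The comma-joining and leading-comma trimming above can be sketched outside the map. The following Python mirrors the extended-rule logic (the data structure and function name are hypothetical; this is not Gentran code):

```python
def see_lines(det_groups):
    """Collect DET sequence numbers whose NTE children include qualifier LIN."""
    joined = ""
    for seq, qualifiers in det_groups:
        if "LIN" in qualifiers:
            joined = joined + "," + seq  # same running-variable pattern as xNTECHK
    return "See Lines " + joined[1:]     # drop the leading ","

# DET*001 has two NTE*LIN, DET*004 only an NTE*XXX, DET*005 one NTE*LIN.
dets = [("001", ["LIN", "LIN"]), ("004", ["XXX"]), ("005", ["LIN"])]
print(see_lines(dets))  # -> See Lines 001,005
```

The `joined[1:]` slice plays the same role as the `right(xNTECHK, note_var_cnt-1)` rule: it strips the first comma that the append loop always produces.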


Part 2, Flattening the world

Here is the first part of the rule where we check the data of each NTE to look for the "LIN" trigger.

// this section of the rule, which is on end of the NOTES repeating segment, will move the current #LINE_NOTE into the temp #LINE_NOTE:2 (in the temp segment TEMP_NOTES)

//This is used any time you want to take a repeating structure's data and flatten it out into one output record. The key for this rule is that ALL repeating data is under 1 Parent Group.


O_HDR*991-E03085*See Lines 001,005~

You will see that the O_DET*001 has the values from the 2 NTE*LIN segments combined into one sentence in the 3rd field.
Also, the DET*004, which had an NTE segment in the input file, is not in the output of the O_DET*004 because its qualifier is XXX, not LIN.


The rules as a whole

  STRING[80] xNTECHK;
  xNTECHK = "";
  On Begin
  INTEGER xLINLoopCnt;
  INTEGER note_var_cnt;
  xLINLoopCnt = 0;
  note_var_cnt = 0;
  On End
  xNTECHK = right(xNTECHK,note_var_cnt-1);
  On Begin
  STRING[4] current_PO1;
  current_PO1 = "";
  xLINLoopCnt = 0;
  current_PO1 = #LINE_NUMBER;
  On End
  xLINLoopCnt = xLINLoopCnt + 1;
  IF xLINLoopCnt = 1 then
     note_var_cnt = note_var_cnt + 4;
  xNTECHK = xNTECHK + "," + current_PO1;
  #ATTENTION = "See Lines" + " " + xNTECHK;



I have multiple occurrences of the parent LX group. Each has at least 1 L5 segment.
I want to concat the L5 segments from the last LX group only. These are the notes
of the document, and I would like them on a single output record. They are highlighted below.


L5*1*POWER SUPPLY 25CTNS ON 1PLT*061490-00*N^
L1*3*2080*PS*3981****FUE****LTL FUEL ADJUSTMENT^
L5*5*59.29 CUBIC FEET^
L5*5*+++++ ATTENTION +++++^
L5*5*888 555 5500^



integer lx_cnt, l5_cnt,tmp_cnt;

tmp_cnt = 1; //counter to stop the while do loop; represents the current iteration of L5

lx_cnt = count($0400_LX[*]); //store the total count of all LX groups used as counter only for last LX
l5_cnt = count($L5[lx_cnt][*]); //store the total count of L5 segments in the last LX

while tmp_cnt <= l5_cnt do
$INPUT.#0093:86 = $INPUT.#0093:86 + " " + $L5[lx_cnt][tmp_cnt].#0079;
tmp_cnt = tmp_cnt + 1;

This while do loop example can be used to parse any repeating structure and flatten it out.
If you wanted to flatten every L5, you would not need the count of the LX groups; you would just
need a running count rule like lx_cnt = lx_cnt + 1 to move through each LX group.
We use <= in the while do rule so we get ALL the L5 segments. If we used just <, we would miss
the very last L5 of each LX.
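The same last-group flatten can be sketched in Python. The list-of-lists layout stands in for the LX/L5 hierarchy (it is an illustrative data structure, not the Gentran object model):

```python
def flatten_last_lx(lx_groups):
    """Concatenate the L5 note fields of the last LX group into one string."""
    last_lx = lx_groups[-1]  # like count($0400_LX[*]): pick out the final LX group
    notes = ""
    for l5 in last_lx:       # the loop visits every L5, matching the <= comparison
        notes = notes + " " + l5
    return notes

lx_groups = [
    ["POWER SUPPLY 25CTNS ON 1PLT"],
    ["59.29 CUBIC FEET", "+++++ ATTENTION +++++", "888 555 5500"],
]
print(flatten_last_lx(lx_groups))
```

As in the map rule, each note is prefixed with a single space before being appended, so the flattened record reads as one sentence.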


The output


ProgrammableWebOracle Buys Apiary to Bolster API Roster

Oracle today said it has agreed to acquire Apiary, the company behind the APIFlow framework. Oracle says the deal will help it offer developers the most complete, cloud-based API creation and management platform in the market.

John Boyer (IBM)Maximo Asset Management Interim Fix 015 released

The Interim Fix (IFIX) is available at Fix Central.


As with all IFIXes, MAM 7602 IFIX 015 is cumulative and includes all fixes provided with Maximo Asset Management Interim Fix 014, with these additions:



Application Name



Start Center

In the Chart Options dialog box, the Display By field lists all fields in the RSCONFIG table instead of only the fields that are associated with your layout.



In the Organizations application, when you add a site while other users are logged in to Maximo, the system hangs.


Purchase Requisitions

In the Purchase Requisition application, when a requisition is at an approved status, you can still update the fields on the requisition.


System User Interface

The Time Out dialog box displays behind some application windows.


Preventive Maintenance

In the Preventive Maintenance application, when you enter a meter reading for an asset that has a status of OPERATING and multiple meter-based PM records against it, not all PM work orders are generated.


To install the IFIX, see Installing an Interim Fix in Maximo 7.6.

John Boyer (IBM)ITM Agent Insight: Unable to upgrade Tivoli Common Reporting V3.1.2.0

This blog is intended to help you understand and walk you through the steps when you are unable to upgrade Tivoli Common Reporting V3.1.2.0 from Jazz for Service Management V1.1.2.0 as a root user.

1. Download Tivoli Common Reporting V3.1.2.0 from Fixcentral from the following location: 

        3.1.2-TIV-JazzSM-TCR-COGNOS- <Operating_System> .zip 

2. Extract 3.1.2-TIV-JazzSM-TCR-COGNOS-<Operating_System>.zip to the folder <tempdir> 

3. After the extraction, folder structure is as displayed in the image: 



                               Note: The BI folder is empty 

4. Copy Cognos 10.2 base image from existing Tivoli Common Reporting V3.1 or V3.1.0.1 folder to the location as detailed in the extract : 

For example, for UNIX use the following command:

cp -rf <TCR31 or TCR31.0.1 Extracted 

5. If the Reporting Services Environment entry does not exist in Installation Manager > Uninstall during upgrade, follow these steps:

- Copy the attached file to the file system. For example: C:\Users\Administrator\Downloads\Reporting_patch\
- Extract the zip. For example: C:\Users\Administrator\Downloads\Reporting_patch\ReportingService
- Launch Installation Manager
- Go to File --> Preferences --> Add Repository
- Provide the extracted location. For example: C:\Users\Administrator\Downloads\Reporting_patch\ReportingService

- Click Test Connection and verify that the repository is connected and in green status.
- Click OK
- Click Install
- Select Reporting Services Environment-->Version
- Click Next
- Accept the License agreement and click Next
- Use the existing package group ( default option selected) --> Click Next
- Leave the options selected by default. Click Next
- Provide the Websphere Application Server credentials and Click Validate
- On successful validation click Next
- Provide the existing content store DB credentials and click Next.
(Note: A database already exists; you do not need to create one. The installation does not fail even if you create a new DB, but a new DB is not used when you are performing an upgrade. Select Existing Database and choose the database that was used for installing the previous version. It is highly recommended to take a manual backup of the current content store DB before starting the upgrade. Once the content store is upgraded/connected to the newer version of Cognos, the schema gets updated and cannot be used in case a rollback is required.)
- Click Install
- After successful installation of Reporting Services Environment, Click Finish.

To continue with upgrading Tivoli Common Reporting:

- Add the new Jazz for Service Management repository link. For example : C:\Users\Administrator\Downloads\112\JazzSMFPRepository\disk1\diskTag.inf
- Click Upgrade option in Installation Manager.
- Refer for steps to upgrade Tivoli Common Reporting

6. You can now upgrade Tivoli Common Reporting either through IBM Installation Manager or launchpad 

Note: You can download IBM Cognos from Fixcentral or from PPA (Passport Advantage). PPA includes both the IBM Cognos V10.2 base and V10.2 Fix Pack 4, whereas Fixcentral contains only IBM Cognos V10.2 Fix Pack 4.







John Boyer (IBM)Adding Portal 9 to an 8.5 Portal installation

It is easy to add Portal 9 to an existing Portal 8.5 installation. In this blog article we will talk about the steps to do this.



- Download WebSphere Portal 9 package (Server, Enable, Extend or other offering depending on what is currently being installed with version 8.5).

- Stop the Portal process.

- Ensure that you have the Portal and WebSphere Application Server password.


Installation with Installation Manager User Interface:

1. Start the IBM Installation Manager.

2. Add the v9 repositories - e.g. in my case Portal Server and Enable:


3. In the main view select Install and select the Portal 9 packages:


4. Validate the license and the packages selected for installation, and enter the WebSphere Application Server and Portal administrative user ID and password.

5. Select Install.

6. Once the installation is complete a screen like the following will be displayed:



Installation via command line:

A response file can be used to trigger the install of version 9. A sample file for installing Portal Server and Enable 9 could look like the following:


<?xml version='1.0' encoding='UTF-8'?>
<agent-input>
    <server>
        <repository location='/opt/IBM/install/WP90_Portal'/>
        <repository location='/opt/IBM/install/WP90_Enable'/>
    </server>

    <install>
        <offering profile="IBM WebSphere Portal Server V8.5" id="" features="portal9.upsell"/>
        <offering profile="IBM WebSphere Portal Server V8.5" id="" features="ext9.upsell"/>
    </install>

    <profile id="IBM WebSphere Portal Server V8.5" installLocation="/opt/IBM/WebSphere/PortalServer">
        <!-- Specify your current WebSphere Application Server administrative user password (WasAdminPwd) -->
        <!--   This value must be encrypted.                                                              -->
        <!--   Use the command 'imutilsc encryptString mypassword' to return an encrypted string.         -->
        <data key='user.p9.was.password,' value='4taxKp1Gj5q5aCi+LjSKHQ=='/>
        <!-- Specify your current Portal administrative user password (PortalAdminPwd)            -->
        <!--   This value must be encrypted.                                                      -->
        <!--   Use the command 'imutilsc encryptString mypassword' to return an encrypted string. -->
        <data key='user.p9.wp.password,' value='4taxKp1Gj5q5aCi+LjSKHQ=='/>
    </profile>

    <preference name="" value="/opt/IBM/IMShared"/>
    <preference name="offering.service.repositories.areUsed" value="false"/>
    <preference name="" value="false"/>
</agent-input>


Tags to be adjusted are the repository location, the installLocation, and the Portal and WebSphere Application Server passwords (note that the passwords need to be encrypted via the command <InstallationManager>/eclipse/tools/imutilsc encryptString mypassword).


The installation can be triggered via:

<InstallationManager>/eclipse/IBMIM -silent -nosplash -acceptLicense -keyring /root/iim.keyring -input /opt/IBM/install/server-enable9-install.xml -ShowVerboseProgress

server-enable9-install.xml is the response file displayed above.
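For scripted or repeated installs, the invocation above can be assembled programmatically. The sketch below (Python, for illustration only) builds the argv list; the Installation Manager root and file paths are the ones assumed in this example:

```python
# Minimal sketch: assemble the silent-install command line shown above.
# The Installation Manager root and file paths are assumptions from this post.
def build_silent_install_cmd(im_home, response_file, keyring):
    """Return the argv list for a silent IBM Installation Manager run."""
    return [
        f"{im_home}/eclipse/IBMIM",
        "-silent", "-nosplash", "-acceptLicense",
        "-keyring", keyring,
        "-input", response_file,
        "-ShowVerboseProgress",
    ]

cmd = build_silent_install_cmd(
    "/opt/IBM/InstallationManager",                  # assumed IM install root
    "/opt/IBM/install/server-enable9-install.xml",   # response file from above
    "/root/iim.keyring",
)
print(" ".join(cmd))
```

The list form can be passed straight to `subprocess.run` without worrying about shell quoting.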


After a successful install a message like this will be shown in the command line:

Installed to the /opt/IBM/WebSphere/PortalServer directory.
Installed to the /opt/IBM/WebSphere/PortalServer directory.


Cluster considerations:

As with cumulative fixes, Portal install would first be performed on the primary and then on the secondary node(s). If the Deployment Manager is remote then it would need to be running during the Portal install.


Post Steps:

- The underlying WebSphere Application Server will stay at version 8.5.5.x and can later be updated to version 9. This is optional though. For details check:

- The Watson Content Hub integration can be enabled via a configuration task - for details check:

John Boyer (IBM)IBM Performance Management WebSphere Applications Agent Interim Fix 5 is now available

1.  IBM Monitoring: Downloads and drivers

- TITLE: IBM Performance Management WebSphere Applications Agent Interim Fix 5
- URL:
- ABSTRACT: This is a cumulative interim fix for the Monitoring Agent for WebSphere Applications provided with the Performance Management family of products. It upgrades the WebSphere Applications agent from an earlier supported version to the interim fix 5 level.

2.  IBM Monitoring: Fixes

- TITLE: IBM Performance Management WebSphere Applications Agent Interim Fix 5
- URL:
- ABSTRACT: This is a cumulative interim fix for the Monitoring Agent for WebSphere Applications provided with the Performance Management family of products. It upgrades the WebSphere Applications agent from an earlier supported version to the interim fix 5 level.



Tutorials Point


Subscribe and follow us for all the latest information directly on your social feeds:











Check out all our other posts and updates:

Academy Blogs:
Academy Videos:
Academy Google+:
Academy Twitter :


John Boyer (IBM)Resolving CRIMA1014W The location for the shared resources directory cannot be modified when installing APM/UI 7.7.x

I was attempting to install the APMUI 7.7.x. The customer had previously tried to install the APMUI and left the system in a bit of disarray. I am using new directories but I am getting an error from the Installation Manager. The error message is "CRIMA1014W The location for the shared resources directory cannot be modified: old /opt/bpm/apmui7cm1/../im/eclipsecache; new /opt/bpm/IBM/IMShared". The directory "/opt/bpm/apmui7cm1" no longer exists. I believe that is what the customer tried to use. I am trying to use

The first thing I discovered was that IBM Installation Manager has a command-line tool called "imcl". I did a "find" on it to discover where IM was installed.
dbbpmadmin@xxxxxxx:~> find . -name imcl
After that I listed the installed packages and saw apmui.
dbbpmadmin@xxxxxxxx:~> ./im/eclipse/tools/imcl listInstalledPackages
I uninstalled the apmui package.
dbbpmadmin@xxxxxx:~> ./im/eclipse/tools/imcl uninstall
CRIMA1077E ERROR: The following errors were generated while              
  CRIMA1077E ERROR:   File /export/gcs1/bpm/apmui7cm1/bin/server not     
  Explanation: Installation Manager cannot locate the file that is       
required for the installation.An issue has occurred with the package     
that cannot be resolved by Installation Manager.                         
  User Action: Identify the package that has the issue. Contact IBM      
  CRIMA1077E ERROR:   File /export/gcs1/bpm/apmui7cm1/bin/server not     
  Explanation: Installation Manager cannot locate the file that is       
required for the installation.An issue has occurred with the package     
that cannot be resolved by Installation Manager.                         
  User Action: Identify the package that has the issue. Contact IBM      
  CRIMA1077E ERROR:   File /export/gcs1/bpm/apmui7cm1/tools/pdcollector.
sh not found.                                                            
  Explanation: Installation Manager cannot locate the file that is       
required for the installation.An issue has occurred with the package     
that cannot be resolved by Installation Manager.                         
  User Action: Identify the package that has the issue. Contact IBM      
  CRIMA1077E ERROR:   File                                               
/export/gcs1/bpm/apmui7cm1/SCR/XMLtoolkit/_uninst/uninstall not found.   
  Explanation: Installation Manager cannot locate the file that is       
required for the installation.An issue has occurred with the package     
that cannot be resolved by Installation Manager.                         
  User Action: Identify the package that has the issue. Contact IBM      
  CRIMA1077E ERROR:   File /export/gcs1/bpm/apmui7cm1/bin/server not     
  Explanation: Installation Manager cannot locate the file that is       
required for the installation.An issue has occurred with the package     
that cannot be resolved by Installation Manager.                         
  User Action: Identify the package that has the issue. Contact IBM      
Uninstalled from the /opt/bpm/apmui7cm1 directory.
I still got the same error message when I tried to install APMUI, but then I finally realized that /opt/bpm/apmui7cm1/../im/eclipsecache is really /opt/bpm/im/eclipsecache.
I then changed the line in the install file from:
<preference name=''
to:
<preference name=''
and the install worked.
/opt/bpm/im/eclipsecache is the cache from the already installed IM and you cannot change it.
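The realization above can be checked mechanically: resolving the ".." segment shows why Installation Manager considered the two shared-resource locations to be in conflict. A small sketch (Python, for illustration; the paths are the ones quoted in the error message):

```python
import posixpath

# Sketch: normalize the "old" path from CRIMA1014W to see what it really is.
old = "/opt/bpm/apmui7cm1/../im/eclipsecache"   # path quoted by the error
new = "/opt/bpm/IBM/IMShared"                   # location I tried to use

resolved_old = posixpath.normpath(old)          # -> /opt/bpm/im/eclipsecache
print(resolved_old)
print(resolved_old == posixpath.normpath(new))  # False: the locations differ
```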





John Boyer (IBM)Shadow IT Detection with IBM QRadar SIEM

Shadow IT refers to the information technology solutions used inside an organization without the explicit approval of the organization. In recent years, the advent of cloud computing has made it easier for employees to circumvent the IT department and use a variety of cloud applications without the knowledge or approval of the organization. Despite the high visibility of recent data breaches, most employees still choose to use cloud services to be able to do their job more efficiently. In a study conducted by IBM Security, it was found that 1 in every 3 Fortune 1000 employees regularly saves and shares company data to third-party cloud-based platforms that are not explicitly approved by their organization [1]. This figure is expected to increase as the workplace demographic starts to change and millennials, who are greater users of cloud applications [2], make up more and more of the workforce.


Read the full paper: QRadarShadowIT.pdf

John Boyer (IBM)OTMA problems

We came across an IMS OTMA problem and it took us a few minutes to sort out the problem.

We had


For SENSE CODE=00300001, this is IMS OTMA code 0030:

OTMA has detected a message flood condition for an OTMA member or for all the members. The number of input messages waiting to be processed exceeds the currently defined maximum number allowed. To protect the IMS™ system, OTMA rejects the new input messages until the flood condition is relieved.
The 0001 qualifier means that the flood condition for a specific OTMA member was detected. No new input transactions can be accepted from this member.

This means that there were transactions queued up in IMS - and IMS pushed back to MQ.
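The decoding described above is a simple split of the eight-digit sense code into two four-digit halves, as this sketch (Python, for illustration only) shows:

```python
# Sketch: split an MQ/IMS sense code such as 00300001 into the OTMA
# sense code (first four digits) and its qualifier (last four digits).
def split_otma_sense(sense_code: str):
    """Return (otma_code, qualifier) for an eight-digit sense code."""
    return sense_code[:4], sense_code[4:]

code, qualifier = split_otma_sense("00300001")
print(code, qualifier)  # 0030 = message flood, 0001 = specific member flooded
```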




John Boyer (IBM)ITM Nuggets: ITM 6.3 Fix Pack 7 is here...... (Live links for download and APAR information)


ITM 6.3 Fix Pack 7 has been released!  It's time to plan that upgrade to get all the new features and fixes.

Here is all the info you need to download, review and plan your next upgrade. Happy downloading!!!

LIVE and direct Download links, APAR and packages all listed below!


All of the links in the tables below are live. They will take you directly to the download location of the package you require, or to the official IBM technote for one of the APARs included in this fix pack (if you wish to know more about the APAR).




Link to Fix Pack 7 Readme:

Link to 6.3 FP07 Readme



What Each Package Contains

NOTE: All of the APAR hyperlinks are in the process of being published as I type, and all will be live by the release date.

If anyone needs specific APAR information before the link is live, post a message below and I will obtain the data for you.


Product/Component Name


File Name

IBM Tivoli Monitoring Base

64-bit Windows

IBM Tivoli Monitoring Base

32-bit Windows

IBM Tivoli Monitoring Base



IBM Tivoli Monitoring Base

64-bit Linux(R) on AMD64 and Intel(R) EM64T systems


IBM Tivoli Monitoring Base

32-bit Linux(R) on AMD and Intel(R) systems


IBM Tivoli Monitoring Base

Linux(R) on System z


IBM Tivoli Monitoring Base Agents



IBM Tivoli Monitoring Tools



IBM Tivoli Monitoring Agent Reports



IBM Tivoli Monitoring Tivoli Performance Analyzer Reports and Domain Definitions







List of APARs and Fixes included in the fix pack


Full list of APARs in Readme


Command Line Interface APARs



The HUB monitoring server may crash during shutdown if the monitoring server SOAP service did not complete start-up. The HUB monitoring server does not shut down gracefully and just hangs. This affects the FTO environment: since the HUB is not shutting down gracefully, agents and remote TEMSs do not switch to the other FTO HUB.



When using tacmd createsit to clone another situation, the values of the MAP tags are not preserved.



The maximum timeout that an administrator can specify with the "-t" option in the "tacmd login" command may be too high for some customer security policies. This APAR introduces the ability to reduce the maximum timeout to only 15 minutes in a centralized way.



The tacmd commands getfile, putfile and executecommand need to log their actions into the TEMS audit log file. Specifically the following info will be logged when this APAR is applied: 
username, executed command, result, source and target servers.



tacmd commands that manage situations are not processing all the situations after the upgrade of the ITCAM agent affinities. With the ITCAM Agent for WebSphere Applications installed, there are errors in the tacmd commands that work with situations. In particular, for situations that have an affinity like &IBM.CAM7_WAS or &IBM.CAM_WAS, issuing any of the following commands returns an error on the situation definition:
tacmd bulkimportsit
tacmd bulkexportsit
tacmd listsit
tacmd createsit
tacmd editsit
tacmd viewsit
tacmd deletesit



Running the "tacmd viewuser" CLI command returns a java.lang.NullPointerException 
./tacmd viewuser -i <USER_ID> 
Exception in thread "main" java.lang.NullPointerException 



On AIX, when the ue (tacmd) and portal server components are installed in the same CANDLE_HOME, the TACMD EXPORTWORKSPACES command does not search the local portal server directories where the JAR resource files are stored. This causes a memory leak on the local or remote HUB TEMS server because of continuous requests from the TACMD client to the TEMS HTTP server on port 1920 to fetch these JAR resource files, instead of storing them locally just once. The TACMD EXPORTWORKSPACES command itself completes successfully because the JAR resource files are always retrieved from the cache if they are not available locally on the system from which the command is issued.



When using tacmd executecommand with a command longer than 64 characters, or tacmd getfile and tacmd putfile with a source or target file name longer than 64 characters, the command crashes.
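Until the fix is applied, a pre-flight length check can sidestep the crash described above. A minimal sketch (Python; the helper name is hypothetical, and the 64-character limit is taken from the APAR text):

```python
# Sketch: guard arguments before passing them to tacmd executecommand,
# tacmd getfile or tacmd putfile. The 64-character limit comes from the
# APAR description above; the helper name is hypothetical.
MAX_TACMD_ARG = 64

def within_tacmd_limit(arg: str) -> bool:
    """Return True if the command or file path is safe to pass to tacmd."""
    return len(arg) <= MAX_TACMD_ARG

print(within_tacmd_limit("/opt/IBM/ITM/bin/cinfo -r"))  # short enough: True
print(within_tacmd_limit("x" * 80))                     # too long: False
```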


IBM Tivoli Enterprise Monitoring Installation APARs



For remote deploy update agent, additional disk space is required in CANDLEHOME. The Monitoring Agent for UNIX OS prereq check configuration file needs to be updated to more accurately reflect actual disk space requirements. 



When configuring the Portal Server as a non-root user while it is running, the following message is incorrectly displayed:



The OS agent will not connect using IP.SPIPE after running "SetPerm -a" or "secureMain lock" on HP-UX. IP.SPIPE or any ssl based protocol is not initialized on HP-UX when the agent binary is owned by root with the SUID bit turned on and the agent is launched from a non-root ID.



IV65616 added RHEL7 support for regular components. That APAR neglected to cover System Monitor Agents (SMA).



For a pristine install, if the installation image does not contain the file kcirunas.cfg, the install process ends with the message " failure". There is no indication as to the nature of the failure.



Performing monitoring server seeding on a remote monitoring server causes the SQL to be processed at the HUB monitoring server instead, restarting the MS_OFFLINE situation and causing new alerts for numerous known offline systems.



On some Windows machines, the installer goes into an endless loop reading dummy_files_list.txt. The operating system fails to signal end of file while the installer is reading dummy_files_list.txt. This causes the installer to continually read without ever stopping. Eventually, the install log is filled to capacity and the install process is aborted, or the customer cancels the install.



For some locales that do not recognize daylight saving time, the reported current time in the KinCInfo header may be off by one hour. Locale-specific time derivation is performed by an IBM Tivoli Monitoring call to NLS code, which may fail for locales that do not recognize daylight saving time.



The Java Attach API is a mechanism provided by the Java Runtime Environment (JRE). It is designed to allow applications to connect to a running Java Virtual Machine (JVM). The interface is described here:



Windows OS Agent (KNT) does not start after upgrade to 6.3.0 FP5. File PSAPI.DLL in the Windows directory, the ITMHOME\InstallITM, ITMHOME\CMS, and ITMHOME\TMAITM6 directories is not compatible with Windows 2008 and above.



On Windows, the IBM Tivoli Monitoring (ITM) installer updates the PATH value to include ITM directories. Before doing so, ITM computes the maximum PATH length and aborts the install if the updated PATH will be more than Microsoft supports. When this happens, ITM displays a popup with the following message:



Tivoli Enterprise Monitoring Server queries for CT_Affinity using product code return an incomplete list of affinities. This can cause various tacmd commands to not process correctly; for example, tacmd listsit or tacmd bulkexportsit do not list/export all defined situations for some product codes.



The customer's security policy requires that the Tivoli Monitoring server / AIX password encryption algorithm be changeable from 'crypt' to 'sha256' for user validation at the Tivoli Monitoring server.



In a FTO environment, the Acting hub monitoring enterprise server is always enabled for receiving and processing SOAP requests. Submitting SOAP requests to a Mirror hub monitoring enterprise server is not permitted; this restriction is a safeguard to prevent Mirror hub monitoring enterprise server from getting out of sync with the Acting hub monitoring enterprise server in an FTO environment.



Submitting multiple event map updates consecutively without delay can result in the HUB Tivoli Enterprise Monitoring Server shutting down. This problem is not likely to occur when only a small number of event map updates are made. This problem will not occur if event forwarding is not used. 



These 'false' messages are benign. In spite of the 'false' error messages, 



The new situation take action command, ZOSWTO, will cause a multi-line WTO to be issued on the z/OS monitoring server that an agent is connected to or on a z/OS agent. The message will be produced when the situation is true or false. The message ID will always be KO41041I. Any data that follows the command name, ZOSWTO, will be present in the multi-line WTO. When the situation is false the values of substitution variables will be NA. The format of the message will be as follows:


IBM Tivoli Enterprise Portal Client APARs



When using the portal client through WebSEAL, clicking the "Logout" link does not display the WebSEAL logout page. This only occurs when using the browser version of the portal client.



The Tivoli portal client contained some JavaScript used to scrub URIs of HTML tags. This JavaScript function and the regular expressions used to do the scrubbing were found to be vulnerable to attacks.



As of Java 8u60, the Java Web Start process started throwing parse errors when processing the tep.jnlp file. Oracle introduced a new XML parser in the Web Start process.



Currently, when a Tivoli Portal user has "Take Action" authority, the dialog used for passing arguments with an action (the 'Edit Argument Values' pop-up) allows the inclusion of additional characters in the text field. With this APAR fix, the administrator is able to list the characters that are not allowed in this field.



A security vulnerability exists between the Tivoli Enterprise Portal client and the Tivoli Enterprise Portal Server in regard to user authorization. It was found that a malicious attacker could, in principle, modify the information being transferred between the client and server in such a manner as to modify the user's authorization assignments associated with their user profile.



When a user selects a row of data in a table view and then selects the "Take Action" menu item, it lists all of their systems.



The Tivoli Monitoring 6.3.x Tivoli Enterprise Portal browser client stopped functioning when launched using the Firefox browser from a Windows OS client.



More recent Java runtime releases from Oracle have maintenance levels that are greater than 99. An example would be Java 8u111 (release 1.8.0_111). One of the components packaged with the Tivoli Enterprise Portal requires that the maintenance level be < 100 in order to function properly. This component is responsible for the rendering of many graphical views in the Portal, including the Policy Workflow Editor, Graphic Views in workspaces, and Situation Formula display. If the Java runtime maintenance level is > 99, then access to these features in the Portal will fail, and the Portal client can become unstable.
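A quick way to spot an affected runtime is to extract the maintenance (update) level from the release string, as in this sketch (Python, for illustration only; the parsing assumes release strings of the form quoted above, such as 1.8.0_111):

```python
import re

# Sketch: detect Java runtimes whose maintenance level exceeds 99, which
# the APAR above says breaks graphical views in the Portal client.
def maintenance_level(version: str) -> int:
    """Extract the update number from a release string like '1.8.0_111'."""
    match = re.search(r"_(\d+)$", version)
    return int(match.group(1)) if match else 0

level = maintenance_level("1.8.0_111")
print(level, level > 99)  # 111 True: this runtime is affected
```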


IBM Tivoli Enterprise Portal Server APARs



The payload returned by the IBM Tivoli Monitoring Dashboard Data Provider for metric requests includes metadata information used by the dashboard widgets to calculate the displayed value. For percentage type attributes, this additional metadata includes the maximum range that the value can attain for the attribute. In some scenarios, that maximum range value was not computed correctly by the Dashboard Data Provider. This resulted in the displayed value on the dashboard being formatted incorrectly, where the decimal position was off by a factor of 10 or more.



Tivoli Enterprise Portal Server crashes after Windows Server reboot. The portal server is started automatically when Windows starts, but when the first portal client attempts to connect, the portal server crashes and generates a crash dump.



This display issue will occur under the following conditions:
1) The customer is viewing a dashboard constructed and displayed using DASH.
2) The dashboard displays timestamp attributes where the value originally provided by the monitoring agent contains all zero digits.



When an agent is offline and the portal server disconnects from the HUB Tivoli Enterprise Monitoring Server, followed by the TEPS connecting to the HUB Tivoli Enterprise Monitoring Server again, the child items associated with the offline agent are no longer greyed out in the navigator tree. Clicking on the child items results in the workspace pane displaying the message
"KFWITM454E Request failed due to offline managed system(s)".



This CPU utilization problem can sometimes be observed when the Hybrid Gateway is used to retrieve agent metric information from agents at different version levels.



Portal server randomly crashes after applying 6.3.0 FP6.



The Visibroker libraries have been uplifted to address CVE-2016-6304 in OpenSSL when the OCSP stapling feature is used. The portal server does not explicitly use the feature, but we are uplifting the libraries to cover the latest vulnerability.


Summarization and Pruning Agent APARs



Summarization and Pruning agent fails to create partitioned tables for various agent attribute groups. 



The Summarization and Pruning agent, as part of its normal purge processing, drops tables from the database server. The DB2 database server performs table drop functions asynchronously. If the Summarization and Pruning agent is terminated (detached) by the user before the DB2 database server has completed its drop processing, the tables are left in an indeterminate state by DB2. When the Summarization and Pruning agent is restarted, the tables that were to have been dropped reappear, and their status does not make it possible to clean up the table status conflict. As a result, the Summarization and Pruning agent does not start (hangs).



The HTTP and HTTPS protocols are not supported by the KSY Agent to connect to the Portal Server.  See this Technote for more details.


Warehouse Proxy Agent APARs



When starting the WPA agent, any variables defined in the config/hd.environment file are ignored.



Monthly partitions are not created if the Summarization and Pruning Agent is dormant (remains in a stopped state) for a time that exceeds the user configuration value of "Number of future partitions to maintain".



Let's connect!

To follow my social updates on IBM software, please feel free to connect with me by clicking on the images below:




Find all my other blogs here:

LINK ------------->   Full Index of My Blogs   <------------- LINK







Jeremy Keith (Adactio)Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There’s lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through Web Page Test and spot areas for improvement, rest assured we’re on it.

John Boyer (IBM)News about Linux on z Systems

As you know, the z/VSE connectors integrate best with Linux on z Systems solutions, e.g. by using HiperSockets.


Today I want to point you to new information for Linux on z Systems:

John Boyer (IBM)Choosing sessions at IBM Connect 2017

Explore what you might do at IBM Connect 2017, the leading collaboration technology conference.  The Curriculum page on the IBM Connect web site allows you to browse the sessions by Tracks, by Topics, or by Roles. You're sure to find a session that fits your interests.

Learn more in Which #IBMConnect 2017 Curriculum Is Right for You?, then visit the curriculum page to begin exploring.

Early Bird registration ends January 20th.image


John Boyer (IBM)Why you should not miss Connect 2017


Each year the Connect conference brings together developers, IT professionals and business leaders to reveal the latest software advances, highlight innovations in workplace technologies and share best practices.

IBM Connect 2017 has a rich curriculum exploring the digital workplace, enterprise email, unified communication, enterprise content management, enterprise collaboration, team collaboration, compliance, and individual productivity. Discover innovations in augmented reality, Internet of Things, blockchain, augmented intelligence and Watson that are helping employees focus on work that matters.


IBM Connect 2017 provides an opportunity to:

  • Meet with developers and experts, attend technical labs and sessions.
  • Learn how other clients are succeeding with IBM software.
  • Hear from IBM executives about our strategies and roadmaps.
  • Network with peers and industry experts.
  • Discover how Watson is adding a powerful new dimension to collaboration solutions like IBM Connections and IBM Verse to change the way we work.
  • Learn how IBM's partnerships with Box and Cisco are transforming the digital workplace. 


A taste of what you can expect from the sessions:

  • See the latest updates and future plans for IBM Verse, including advances in offline, cognitive, calendaring and connecting to third-party applications.
  • Learn about the tools, configurations and options when migrating your mail platform to IBM Verse.
  • Learn how IBM Verse and IBM Connections Cloud use virtual servers and SAML support to extend your organizational security boundaries into the cloud.
  • See first-hand how IBM Connections can work for you and how you can embrace the new capabilities.
  • Hear how changes to the deployment model for IBM Connections will allow for simultaneous deployment of features to the cloud and on premises.
  • Learn how to plan, prepare and activate your SaaS entitlements and unlock the business value of SaaS purchases you have already made.
  • Hear the experts' view on moving IBM Connections from on premises to the cloud.
  • Learn how to deploy, secure, customize and extend the IBM Connections Mobile App.
  • Learn how to integrate cognitive into work experiences using IBM Collaboration SaaS solutions.
  • See how to leverage services from the Watson Developer Cloud into Connections Cloud, Verse, Domino and Watson Workspace Services.
  • Walk through the application migration process to move your Domino applications to the cloud.
  • Learn how you can leverage IBM Bluemix and Docker to bring cognitive services to your applications.
  • See how Box can be used to extend your content management platform into the cloud.


See here to search the full curriculum.
See here to register for Connect 2017.


John Boyer (IBM)Less Click and Less Stress

When we designed IBM Verse, we focused on supporting people rather than supporting mail. The difference is a focus on building an integrated experience that minimizes the delays between you and your important contacts. We cut out the all-too-common clicks!


Rather than force the user to repeatedly search or browse for important incoming mail, we delivered a smart dashboard of actions and important contacts. Never miss an email from a client or the boss. Never miss a deadline, thanks to the actions list. Be ready to join your meetings with the advance notice delivered. Easily schedule last-minute meetings with quick meeting creation - in half the time of full event creation.


Work in real time on your desktop or your phone, where a single click has gained power. You no longer have to be good at Space Invaders to be productive. Thanks to IBM Verse's smart design, you are less worried about missing a target. Just focus on your work.


John Boyer (IBM)Oil & Gas Industry Achieves Operational Excellence With IBM Maximo

IBM Maximo Asset Management (or 'Maximo' for short) is an integrated productivity tool and database designed to manage all the asset types of an organization on one single solution platform. Built on a SOA (Service-Oriented Architecture), Maximo delivers a complete view of all asset types, their locations and conditions, and the work processes that support them, and also provides control, optimal planning, compliance capability and audit.


The Maximo database provides crucial information about asset resources, their configuration, key attributes and their logical and physical relationships to other resources.


Using the Maximo user interface, you can build Key Performance Indicators (KPIs) to manage asset locations and their conditions, and generate automated actions based on changes. You can create, monitor, notify, assign and report on important process components from beginning to end, such as purchase orders, work orders and service desk tickets, including status, and so forth. For better productivity and communication, you can add attachments, URLs, pictures and maps to each task or record. All these aspects are detailed in IBM Maximo Training in a clear and precise manner.


Galvanic Presence Of Maximo In Oil & Gas Industry:


Maximo for Oil & Gas adds a layer of industry-specific capability to Maximo Asset Management, thereby delivering the following functionalities, which are designed to drive operational excellence.


  • Asset Management: Capable of handling detailed information about assets, including hierarchy modeling from enterprise down to sub-assemblies, metering, condition monitoring, costing, hazards and precaution management, location management and a rich work order history.

  • Action Tracking: Actions that come out of internal reviews and regulatory audits are tracked, with a procedure ensuring that the recommendations and findings of external and internal audits are tracked and managed to closure.

  • Competency Management: Functions for adding, updating, and modifying workforce competencies support competency management. Maximo for Oil and Gas can also attach competency and certificate requirements to work orders and job plans, ensuring that those requirements are identified and validated before work is performed.

  • Condition for Work: Maximo groups similar jobs that span assets, groups of assets, individual locations, and areas, supporting opportunity maintenance. Identifying work that can be combined into planned or unplanned work increases efficiency and equipment reliability.

  • Control of Work: Maximo improves safety, communication, efficiency, and collaboration between maintenance and operations through certificate requirements on work orders and job plans.

  • Contract Management: The Maximo solution provides many contract types for overhaul materials and services, repair, and maintenance: master contracts, lease and rental contracts, warranty contracts, payment schedules, purchase contracts, labour rate contracts, and terms and conditions.

  • Defect Elimination: Maximo's integrated approach enables management to eliminate mechanical defects. Maintenance and operations teams can record machine defects in real time, improving communication between domains and helping to sustain high service levels.

  • Calibration: Calibration processes are automated and traceable, improving work planning and compliance management. As the number of instrumented devices grows, viewing calibration work alongside other work improves efficiency, with a positive impact on equipment reliability.

  • GIS Spatial Integration: Many oil and gas companies use a GIS (Geographic Information System) application to store and record asset information, which is valuable to an asset management system. IBM Maximo integrates with the company's GIS systems to provide spatial visualization and analysis of assets and work. In addition, it supports bidirectional exchange of work and asset information between the GIS system and IBM Maximo.
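The bidirectional exchange described in the last bullet can be pictured as a round-trip mapping between an asset record and a GeoJSON Feature, the interchange format most GIS tools accept. The sketch below is illustrative only: the field names (`assetnum`, `siteid`) mirror common Maximo attributes, but the record layout is an assumption, not the product's actual integration schema.

```python
# Hedged sketch: round-tripping an asset record to/from a GeoJSON Feature.
# Field names assetnum/siteid mirror common Maximo attributes; the layout
# itself is illustrative, not the product's integration schema.
def asset_to_feature(asset):
    """Convert an asset dict with lon/lat into a GeoJSON Point Feature."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [asset["lon"], asset["lat"]]},
        "properties": {"assetnum": asset["assetnum"],
                       "siteid": asset["siteid"]},
    }

def feature_to_asset(feature):
    """Inverse mapping: recover the asset dict from a GeoJSON Feature."""
    lon, lat = feature["geometry"]["coordinates"]
    props = feature["properties"]
    return {"assetnum": props["assetnum"], "siteid": props["siteid"],
            "lon": lon, "lat": lat}
```

Because the two functions are exact inverses, updates made on either side (GIS or asset management) can be pushed to the other without losing information.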

Beyond the areas mentioned above, Maximo also covers Procurement, Risk Analysis, Regulatory Compliance, Investigations, Incident Management, Failure Reporting, IBM Integrated Information Framework integration, Materials Management, Operator's Log, Linear Asset Modeling, and Risk Matrices, all of which help the oil and gas industry achieve operational excellence.



The oil and gas industry is a critical sector for any nation, and Maximo supports it as it faces evolving, complex operational challenges. The Maximo solution provides the Internet of Things (IoT) applications needed to collect valuable information and knowledge, improve operational efficiency, and operate and manage mission-critical assets productively and safely. Even as oil and gas companies grow at a slower rate, they have smarter software solutions like Maximo for new projects in the energy space.


John Boyer (IBM)DR for heterogeneous storage islands with IBM Spectrum Virtualize

Software-defined storage (SDS) is a key component for clients adapting to the modern data center and enabling hybrid clouds. By decoupling the storage hardware from the software that manages it, SDS lets clients keep their existing heterogeneous storage hardware while simplifying management by virtualizing it. Clients can also take advantage of data replication and seamless migration between heterogeneous storage platforms.


Disaster Recovery as a Service using IBM Spectrum Virtualize

The solution is built on IBM Spectrum Virtualize software running on Intel x86 processor-based Lenovo servers at the recovery site and IBM Storwize V7000 at the protected (production) site. It leverages VMware Site Recovery Manager (SRM) to manage replication between IBM Storwize V7000 and IBM Spectrum Virtualize. The diagram below shows the architectural overview of the environment.
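Under SRM's orchestration, the array-side replication is a Spectrum Virtualize remote-copy relationship between a production volume and a recovery volume. As a rough sketch of that plumbing, the helper below only assembles the CLI command strings (`mkrcrelationship` and `startrcrelationship` are Spectrum Virtualize CLI commands); the volume, cluster, and relationship names are placeholders, and in practice you would run the commands over SSH against your own clusters.

```python
# Hedged sketch: assembling Spectrum Virtualize CLI commands that create
# and start a remote-copy relationship. Names are placeholders; run the
# resulting commands over ssh against your own clusters.
def remote_copy_commands(master_vol, aux_vol, aux_cluster, name,
                         global_mirror=True):
    """Return CLI commands pairing a production volume with a DR volume."""
    mk = (f"mkrcrelationship -master {master_vol} -aux {aux_vol} "
          f"-cluster {aux_cluster} -name {name}")
    if global_mirror:
        mk += " -global"   # asynchronous Global Mirror; omit for Metro Mirror
    return [mk, f"startrcrelationship {name}"]
```

Keeping command assembly separate from execution makes the intended configuration easy to review before any change touches the storage.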


Where can it be deployed

  • Cloud and managed service providers looking to offer DRaaS to users with heterogeneous or dissimilar storage infrastructures
  • Clients looking to reduce capital expenditure (CAPEX) and operational cost for DR by using a software-defined storage based approach
  • Clients looking to integrate with cloud orchestration and interoperate with existing on-premises storage systems
  • Organizations looking to optimize their existing heterogeneous storage infrastructure with a centralized storage management tool


Look for more resources here.

Technical paper

Disaster recovery as a service using IBM Spectrum Virtualize and VMware Site Recovery Manager integration

YouTube URL

