ProgrammableWeb: Restlet Announces New API Documentation Creation and Publishing Tool

Restlet, an API platform provider, has announced the availability of a new tool that automatically creates API documentation from API specification files such as OAS and RAML. Code-named Gutenberg, this new API documentation creation and publishing tool is available at no additional charge via Restlet Studio. This is the latest update to the Restlet API platform.

John Boyer (IBM): IBM Spectrum Control V5.2.12 Pearls in the documentation

V5.2.12 of IBM Spectrum Control is jam-packed with new features, including new reports, enhanced hybrid cloud monitoring, and more device support. What do these features do? How can they help you monitor and manage your storage? There is a lot to take in with this release, and the IBM Knowledge Center is here to help.

In the continuing blog series of "pearls in the Spectrum Control documentation", I'll highlight some of the new information that we've added for 5.2.12, and some existing topics that you might not be aware of but could find useful.


What's new in 5.2.12?

The what's new topic provides a quick overview of all the new features that were added in 5.2.12. Screenshots are included in the topic to illustrate the features, and links to more detailed topics are provided in case you want to learn more.

Pro tip: You can also learn about what's new directly in the GUI by clicking the help icon in the upper right of the main window and selecting the "What's New" option.




What was new in previous releases?

Do you want to see a summary of all the changes in previous releases of Spectrum Control, at a glance? Check out the new topic Summary of changes for IBM Spectrum Control.




Understanding the units of measurement in Spectrum Control

Spectrum Control uses decimal and binary units of measurement to express the size of storage data.  Decimal units such as kilobyte (KB), megabyte (MB), and gigabyte (GB) are commonly used to express the size of data. Binary units of measurement include kibibyte (KiB), mebibyte (MiB), and gibibyte (GiB).


For more information about how these units of measurement compare, check out Units of measurement in Spectrum Control.
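As a quick illustration of how the two unit families compare (this sketch is mine, not from the product), the difference comes down to powers of 10 versus powers of 2:

```python
# Decimal units are powers of 10; binary units are powers of 2.
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}

def size_in(unit: str, size_bytes: int) -> float:
    """Express a raw byte count in the requested unit."""
    return size_bytes / {**DECIMAL, **BINARY}[unit]

# A "500 GB" volume is 500 * 10**9 bytes, which is only ~465.66 GiB:
print(round(size_in("GiB", 500 * 10**9), 2))  # 465.66
```

This is why the same volume can show a smaller number when displayed in binary units.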



Getting storage information through the REST API

The REST API provides you with programmatic access to the extensive information that Spectrum Control collects about your resources. You can access that information easily through the API by using a command-line utility or plugging a URL into a web browser. What you do with that information is up to you.

A common practice is to extract the specific information that you care about and include it in custom configuration, capacity, and performance reports that are tailored for your organization.

Learn more about the IBM Spectrum Control REST API in the Knowledge Center.
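For illustration, once you've retrieved a JSON payload from the API (with a command-line tool such as curl, or a browser), pulling out the fields you care about takes only a few lines. The field names and response shape below are hypothetical; check the Knowledge Center for the actual schema:

```python
import json

# Hypothetical payload shaped like a REST API response; real field
# names and values will differ -- see the Knowledge Center.
payload = '''[
  {"Name": "ds8000-prod", "Capacity (GiB)": 10240, "Used Space (GiB)": 8192},
  {"Name": "v7000-dev",  "Capacity (GiB)": 2048,  "Used Space (GiB)": 512}
]'''

def used_percent(system: dict) -> float:
    """Derive a used-space percentage for one storage system entry."""
    return 100.0 * system["Used Space (GiB)"] / system["Capacity (GiB)"]

for system in json.loads(payload):
    print(f'{system["Name"]}: {used_percent(system):.1f}% used')
```

Derived values like this can then feed the custom configuration, capacity, and performance reports mentioned above.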




Viewing the capacity of external storage

Your storage is valuable. Tracking the storage that you have, and determining the storage that you'll need in the future, is key to keeping your applications online and your services uninterrupted. In many environments, that storage is distributed across multiple storage systems, data centers, and geographies, and now with IBM Spectrum Scale's Transparent Cloud Tiering (TCT), it can even be tiered (or mirrored) to external cloud object storage (public, private, or on-premises).

In Spectrum Control 5.2.12, you can view the used space in external storage to understand how much data is being migrated from Spectrum Scale file systems to that external storage.  For external storage that is provided by IBM® Cloud Object Storage, you can also view capacity information, including the percentage of space that is being used.

Learn how to view the capacity of external storage in Spectrum Control.




Upgrading in a nutshell

Upgrading to a new release of Spectrum Control shouldn't be a difficult experience or result in unforeseen problems. But people are often anxious about figuring out, and running through, the upgrade process. To help address this anxiety, we've streamlined and consolidated the upgrade instructions in the Knowledge Center. Check out the revised upgrade instructions.




Viewing and sharing easy-to-read reports about your storage

Do you need to see the capacity, and cost of that capacity, for all the applications, departments, hypervisors, and physical servers in your storage environment in one, easy-to-read report? Do you want to create reports that show the block capacity and cost of the block capacity for a single storage consumer? In 5.2.12, you can do just that.

Check out the topics for creating chargeback reports and consumer reports for more information.




Getting help

Getting help and contacting IBM support shouldn't be difficult when you run into problems. With the updates to Contacting IBM Software Support information in the Knowledge Center, learning how to get that help is easier than ever before. You now have a central location that explains how to get help when you need it, including DIY troubleshooting tips for solving problems on your own.




Joining the conversation

Be social! Join the conversation at #IBMStorage and #softwaredefinedstorage.

We are always striving to improve the information that we deliver, and how we deliver it. At the bottom of every page in IBM Knowledge Center is a "Feedback" link. Please use this link to reach out to us and help influence the information experience.

For continuing news about the documentation for IBM Spectrum Control and IBM Storage Insights, follow me (Chris King) on LinkedIn or Twitter @chris_tking.


John Boyer (IBM): New function delivered in OMEGAMON for IMS V5.3.0 IF2

We are pleased to announce a new round of updates for IBM Tivoli OMEGAMON XE for IMS on z/OS V5.3.0 with the arrival of Interim Feature 2 this week. This delivery provides updates to OMEGAMON Enhanced 3270UI workspaces for database analysis. It is delivered through PTF UA83408.

You can now use the new Database Summary workspaces to monitor all databases in your environment - no matter the type - in one easy-to-access location. The workspace is reached from the System Resources menu (option I) off any IMS Health workspace and displays useful information similar to that of the IMS Type-2 command QUERY DB.








Solve problems more quickly by using the "In Error" tab to filter for databases with errors:



More information about this delivery is available on the IBM Support website.

Shelley Powers (Burningbird): No We Can’t Sneak Judge Garland In Through The Back Door

Crooks & Liars put out an audacious plan, based on twitter posts from Daily Kos’ David Waldman.


The foundation of Waldman’s idea is that newly elected senators are not sworn in yet, so their positions are vacant. At noon on January 3, previous senatorial terms expire. At that time there are 66 senators whose terms are not expiring: 34 Democratic, 2 independent, and 30 Republican. Therefore, at noon on January 3, Democrats would be in the majority. Following Waldman’s line of thought, since there are only 66 senators at noon, it would only require 33 votes to confirm Judge Merrick Garland.

The Vice President presides over the Senate on the first day of a new Congressional session. Established procedure is that the Senate Chaplain opens the session with a prayer, followed by the Pledge of Allegiance. Then, the VP announces the receipt of the certificates of election establishing the eligibility of the newly elected and re-elected Senators. A reading of all the certificates is waived, and the Senators are invited to the front, in groups of four, to be sworn in.

Waldman states that until the Senators take the oath, they’re not really members of the Senate. In addition, he also states that since the previous position holder’s term ends at noon, the positions are temporarily vacant. Therefore, if Joe Biden were to defer acceptance of the certificates of election and the oath giving, and instead recognize Senator Dick Durbin, Durbin could move to nominate Garland. The majority would then suspend the rules, the Democrats and independents could vote Garland in, and voilà! Garland is a Supreme Court Justice.

It sounds good on paper. And it’s totally cracked.

For one, Senate rules require one day’s notice in order to suspend the rules. They also require a document meticulously outlining which rules to suspend, and why. The only exception would be if all Senators present unanimously agree to the suspension. This isn’t going to happen.

In addition, the Senate rules related to presentation of credentials give them precedence over most other Senate business. The few exceptions listed in the rules are related to the timing of the presentation of the credentials. In the case of the first day of Congress, these credentials have already been presented to the Senate before the session even begins. During the opening day procedure the VP lays the certificates out for the members, but they’ve already been received by the Secretary of the Senate. Since, at the very beginning of the session, no journal reading is occurring, no call for a quorum has been made, and no question or motion to adjourn or vote is pending, the VP doesn’t have an opening to pull this plan off.

Even if we tortuously twist the Senate rules on their head, wringing every last semantically favorable ounce of meaning out of them, we still run into the fact that, according to Section 1 of the 20th Amendment, a Senator’s term begins when the previous term expires.

The terms of the President and Vice President shall end at noon on the 20th day of January, and the terms of Senators and Representatives at noon on the 3d day of January, of the years in which such terms would have ended if this article had not been ratified; and the terms of their successors shall then begin.

One of Waldman’s followers argued that the Senate term begins, but not the Senator.


I do understand what he’s saying: a specific Senate term exists even if a Senator dies or has to resign. A replacement Senator is appointed or chosen via a special election to finish out the term. So the term isn’t directly tied to an individual.

But in this case, at the beginning of a senatorial term, you can’t separate the term from the person. The 20th Amendment reads, “…the terms of their successors shall then begin”. It doesn’t state that a new term begins; it states the term of the successor begins. It’s based on the assumption that if all is as it should be, a person is elected senator and they serve a term. The term starts at noon on January 3 and ends at noon on January 3, six years later.

The point is, the Constitution calls for an orderly transition of government. What this implies is that as one President leaves, another enters; as one Senator leaves, another enters. There is no gap, no void.

There is never a vacancy in the Presidency. If the President dies, the Vice President takes over. They’re given the oath as soon as possible, but they’re President at the moment there is no President.

The only time a true vacancy occurs in the Senate is when a Senator dies, becomes incapacitated, is forced out, or resigns. The vacancy lasts only as long as it takes the Governor of the Senator’s state to appoint a replacement or to hold a special election to find a replacement.

True, there is also the assumption that the vacancy isn’t literally filled until the person presents their credentials and is sworn into office. For the most part, this is true. But the credential submission and oath taking aren’t an absolute requirement when it comes to filling a vacancy. The definitive paper on Senate terms was written by Senate Parliamentarian Robert Dove, in 1984. He noted that if a Senator is elected to fill a Senate vacancy currently held by an appointee, and said election is held during a sine die adjournment (the end-of-year adjournment), the individual’s term begins the day after the election, regardless of when they submit their credentials and take the oath. The paper includes instances of people officially becoming Senators (according to their pay stubs) even though they had not presented their credentials nor taken the oath required of all senators.

The Constitution abhors a vacuum. It wants order.

For the nonce, though, let’s forget that whole orderly transition thing. Let’s assume for the sake of argument that at noon, January 3, there are only 66 senators. The real kicker: if the VP did suddenly decide to wield that gavel like Thor’s hammer, one of the 30 Republicans whose terms have not expired could then suggest the absence of a quorum. At this point, the VP has no choice but to direct the clerk to call the roll. Now, whose names do you think will be on this roll? If your answer is the existing Senate members and the newly elected or re-elected members, you’d be correct. Yes, wouldn’t that twist things into a knot? How exactly is the Senate supposed to proceed at this point?

Suspend your disbelief just one minute more, and let’s take this exercise out to the end of the gangplank before we drop kick it off.

Let’s assume the Senators in the room are so paralyzed by the events that they haven’t called the Sergeant at Arms to escort VP Biden to the nearest doctor for a mental evaluation. Technically, there is a quorum of senators present, even if you limit the pool to the 66 “legitimate” senators. But how does one deal with calling for a vote? All it takes is one-fifth of the quorum members present to demand a roll call vote. Even going by the standard quorum count of 51, this is only 11 senators. So the confirmation vote is by roll call, in which case the clerk would use the roll that lists the newly elected senators.

Then there’s the possibility of a filibuster, and the votes needed to end the filibuster. Normally 60 are required, but if we follow Waldman’s logic that only 66 senators are present, you’d still need 40 (three-fifths of 66) senators to invoke cloture. And all of this has built-in timing. For instance, there’s time to debate, there’s time to consider cloture, there’s time after cloture. This absolute breakage of Senate procedure would take weeks.

I’m a Democrat and I can understand the desperation that goes into a plan like this. Voters in a couple of states had a snit and now we’re stuck with another Scalia, at best. A good man like President Obama is denied his right to appoint a Supreme Court Justice because Republicans have abrogated their responsibility to serve all the people of this country.

The appointment of a Supreme Court Justice is the one act our government does that should be apolitical, and free of shenanigans and twisty games. That the Republicans show profound disrespect for the process is no reason for us to do so, too.

Consider the procedures normally followed with an appointment of Supreme Court Justice. During the vetting of a Supreme Court Justice, we in this country have a chance to get to know this person. They’re publicly questioned, not to mention investigated by the FBI. All senators have a chance to express their concerns or compliments about the individual because the person we’re appointing has to represent all of the country, not just one party. It is an incredibly serious and important process—the only Senatorial actions more important and more solemn are the forced removal of a President or a declaration of war.

Yes, the Republicans brought shame to themselves with their actions, but none of us wins, especially President Obama and Judge Garland, if we Democrats play an even worse game. I have no doubts that both President Obama and Judge Garland would vehemently and resoundingly reject this Senate takeover idea.

Lastly, too many people couched Waldman’s idea in terms such as, “If only Democrats had guts…if only they were courageous and aggressive…it takes backbone.”

Do we really need to place our elected leaders into this kind of position? That if they don’t follow through on a badly conceived idea, they’re somehow cowards, and not worthy of our regard? No wonder Democrats lose so many races…with friends and supporters like us, who needs enemies?

We have to, once and for all, stop with the “If only Democrats had guts…” kind of talk. Right now, there is only one enemy and it sure isn’t any of the Democrats we have elected to represent us.

The post No We Can’t Sneak Judge Garland In Through The Back Door appeared first on Burningbird.

John Boyer (IBM): Information in Spanish about microservices

This series of articles provides a clear and very well explained view of API adoption in an organization.


In both the Cloud and the Agile programming communities, it seems like everyone is talking about microservices. Other architectural principles, such as REST, were a resounding success in the development world, and now microservices are the new trend. But what are they? And why should a Java™ developer care about them?


Enjoy!



John Boyer (IBM): Getting certificates to work with the MQ Web console on z/OS

I've been playing with the MQ Web console on z/OS in MQ V9.0.1 and had a few problems getting digital certificates to work.
Below are the definitions I used, and some hints on getting it working.


The MQ Console is a Liberty application server.

I created a file ssl.xml to hold the configuration and put <include location="ssl.xml" optional="true"/> in the mqwebuser.xml file.
The ssl.xml file has

<?xml version="1.0" encoding="UTF-8"?>
<sslDefault sslRef="mqDefaultSSLConfig"/>
<ssl id="mqDefaultSSLConfig" keyStoreRef="defaultKeyStore" serverKeyAlias="DEFOU"/>
<!-- keyStore attributes reconstructed for a RACF keyring; point location at your own keyring -->
<keyStore fileBased="false" id="defaultKeyStore" location="safkeyring:///SCENSTC.RING" type="JCERACFKS"/>

<webAppSecurity allowFailOverToBasicAuth="false"/>


Where the two names mqDefaultSSLConfig and defaultKeyStore match up.

The keyring is SCENSTC.RING for userid  SCENSTC.

The certificate to use to send to the web clients is  serverKeyAlias="DEFOU"

I set up SSL port 9444:

<httpEndpoint id="defaultHttpEndpoint" host="WINMVSCA" httpsPort="9444">
    <httpOptions readTimeout="${mqRestRequestTimeout}" removeServerHeader="true"/>
</httpEndpoint>



I created a certificate, signed it, downloaded it to my redhat machine, and imported it.



When I connected Chrome to the console, it prompted me with my web browser's certificate (which originated from z/OS). This was expected.


I then got

Your connection is not private

Attackers might be trying to steal your information (for example, passwords, messages, or credit cards).



and on Firefox I got

Your connection is not secure

The owner of has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.

Learn more… uses an invalid security certificate.

The certificate is only valid for SCENSTDEFAULTOU




After a week of playing around and not getting anywhere, someone (Jon Rumsey) told me to check the CN (as the Google message says).

The magic incantation to get it to work was to change the CN in my certificate to match the URL.


I set up

              SUBJECTSDN(CN('')  -
                         O('CONSOLE') -
                        )     -
              RING(SCENSTC.RING) ) -

so the CN is the URL I need to connect to.

I changed serverKeyAlias="DEFOU" to be serverKeyAlias="WINMVSCA".

After a few seconds, Liberty detected that the parameters had changed, and I was able to log on successfully.
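The rule that tripped me up can be sketched in a few lines: the browser compares the certificate's CN (or a subject alternative name) against the host part of the URL, ignoring port and path. The host names below are made-up examples, not my real system:

```python
from urllib.parse import urlparse

def cn_matches_url(cn: str, url: str) -> bool:
    """Simplified check: a browser accepts the certificate only if its CN
    (or a SAN entry) matches the host part of the URL; port and path are ignored."""
    return cn.lower() == (urlparse(url).hostname or "").lower()

# Made-up host and path for illustration:
print(cn_matches_url("winmvsca.example.com",
                     "https://winmvsca.example.com:9444/console"))  # True
print(cn_matches_url("SCENSTDEFAULTOU",
                     "https://winmvsca.example.com:9444/console"))  # False
```

The second case is exactly what Firefox complained about: the certificate was only valid for a name that didn't match the URL.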



John Boyer (IBM): End of marketing announced for z/VSE 5.2 products (including IBM TCP/IP for VSE/ESA)

In between, I posted more than 800 blog entries. I hope that those had some value for you.


Yesterday there was an end of marketing announcement (eom) related to z/VSE. The end of marketing for a specific product means that the product can no longer be ordered via Shopz.

It was announced that the eom of IBM TCP/IP for VSE/ESA 1.5 and the GPS (General Print Server) feature of IBM TCP/IP for VSE/ESA 1.5 will be on March 13, 2017. The replacement product is IBM TCP/IP for z/VSE 2.1 and the corresponding GPS feature. Both are available on z/VSE 6.1.

The eom announcement letter is here.


That also means that all z/VSE 5.2 related products, such as z/VSE 5.2, CICS TS for VSE/ESA 1.1.1, IBM IPv6/VSE 1.1, and IBM TCP/IP for VSE/ESA 1.5 (including GPS for that release), are no longer orderable after March 13, 2017.
See my z/VSE 5.2 eom blog entry from June - here.


If you plan to migrate to z/VSE 5.2, please order those products before March 13, 2017. I recommend migrating to z/VSE 6.1. More information on z/VSE 6.1 is here and the latest announcement - here.

John Boyer (IBM): WCM import-wcm-data fails due to invalid characters

A WCM library exported on a WebSphere Portal v7 system using export-wcm-data might fail during the import with the following error:

--- snip ---

org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0x3) was found in the element content of the document

--- snip ---

The reason for this exception is that low hex characters are present in the exported ".node" file(s). These characters are usually shown as ^A, ^B, etc.


For this problem there is actually a fix included in WebSphere Portal v7.0.0.2 CF24 (back-ported from v8), but it needs to be activated. Perform the following steps to remove these invalid characters:


  1. Stop WebSphere Portal
  2. Open the file in <profile_home>/PortalServer/jcr/lib/com/ibm/icm
  3. Add the following property:
  4. Start WebSphere Portal again
  5. Rerun the export of the WCM library
  6. Verify the .node file that failed before
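To see what the fix is effectively doing, here is an illustrative sketch (mine, not the Portal implementation) of filtering the XML-invalid control characters out of exported content:

```python
import re

# XML 1.0 forbids control characters below 0x20 except tab (0x09),
# LF (0x0A), and CR (0x0D); the 0x3 in the SAXParseException is one of them.
INVALID_XML_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def strip_invalid_chars(text: str) -> str:
    """Remove characters that would trigger 'An invalid XML character'."""
    return INVALID_XML_CHARS.sub('', text)

print(strip_invalid_chars('title\x03text'))  # titletext
```

Tab, line feed, and carriage return are deliberately kept, since those are legal in XML element content.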

John Boyer (IBM): What Web Services Trace Specification Should I Use?



I often hear the question, "What web services trace specification should I use?" The answer depends on the type of problem you are trying to diagnose. In this blog entry, I'll list the different types of trace specifications and give you a bit of guidance on when to use each one. It's important to select the best trace string from the list below; the trace specifications are designed to gather information based on your problem type.


To determine the root cause of a particular web services problem, choose the trace based on the problem type. If you are not sure what kind of problem you have, use the full engine trace.

1.)  Full [Web Services] Engine trace.


Reference: MustGather: Web Services engine and tooling problems for WebSphere Application Server

What type of [web services] problems need a full engine trace?

  • Any web services problem happening while the application and/or the application server is starting up. Problems occurring at Web services application startup almost always need the full engine trace.
  • WSDL problems will need a full engine trace:
    • WSDL file could not be generated for the ... Web service implementation class because of the following error...
    • WSDL could not be generated, missing [jaxb] class: NoClassDefFound / ClassNotFound with WSWS7054E / WSWS7027E.
    • Any WSDL problems at Web Services application start time.
  • JAXB problem with JAX-WS web services, Marshalling, UnMarshalling, etc.
  • Web services tooling, development-time, build-time, and/or deploy-time issues: wsgen, wsimport, java2wsdl, wsdl2java
  • Rational products (RAD, WID, etc.) and WebSphere web services problems usually need the full engine trace.
  • Web services client unexpected results with, http.nonProxyHosts, http.proxyHost, http.proxyPort, https.proxyHost, https.proxyPort (Reference IBM Knowledge Center WebSphere Application Server 8.5.5 -  HTTP transport custom properties for web services applications)
  • Startup Performance: Web services application startup is slower than expected.
  • Startup Performance: Startup is ok but the first message is always slow.
  • Startup Performance: The entire server is slow to start, and, you suspect a JAX-WS application is the root cause.
  • NOTE: Startup Performance problem is very different problem type and requires a very specific trace. See WSPerf below.

2.)  IO and Messages Trace

Applications using IBM WebSphere JAX-WS Implementation:**=all

Applications using IBM WebSphere JAX-RPC Implementation:**=all

When your web services applications suffer only IO exceptions and/or IO connectivity problems, these problem types do not require the full [web services] engine trace. There is likely no need to trace all of the engine's work. All you need to troubleshoot is the IO and/or the XML-SOAP messages.

Advantage: This trace will generate less output compared with full engine trace. Less data to analyze speeds up analysis time.

Web Services IO Problems

  • The IBM WebSphere web services engines (JAX-RPC and JAX-WS) use the IBM WebSphere TCP channel for their communications needs.
  • Therefore, to diagnose IO problems with applications using IBM WebSphere web services, include the WAS HTTP and TCP channel trace specifications.
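As an aside, a WebSphere trace specification string is just a colon-separated list of component=level pairs, so combining the engine and channel traces means concatenating pairs. This tiny helper (mine, not part of any IBM tooling, and the example spec is illustrative) shows the structure:

```python
def parse_trace_spec(spec: str) -> dict:
    """Split a WebSphere-style trace specification
    ('component=level:component=level') into a dict."""
    return dict(part.split('=', 1) for part in spec.split(':'))

# Illustrative spec combining a default level with package and channel entries:
spec = "*=info:com.ibm.ws.websvcs.*=all:HTTPChannel=all:TCPChannel=all"
print(parse_trace_spec(spec)["com.ibm.ws.websvcs.*"])  # all
```

Seeing the string as key=value pairs makes it easier to spot a typo in a long trace specification.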

What type of [web services] problems need IO and Messages Trace?

  • IOException: Async IO operation failed Socket timed out, SocketTimeoutException
  • IOException RC: 64 The specified network name is no longer available.
  • IOException RC: 76 A socket must be already connected
  • WebServices Timeouts, read timeout, write timeout, connection-establishment timeout, etc.
  • IOException: Async IO operation failed, reason: RC: 73  A connection with a remote socket was reset by that socket...

3.)  JAX-RS (Wink) Trace
**=all:org.apache.wink.*=all

Include JAX-RS Trace specification to Full Engine trace for any applications using IBM WebSphere JAX-RS implementation.

4.)  Web Services Performance Trace


WSPerf trace is used for both JAX-RPC and JAX-WS applications.

Use WSPerf trace to troubleshoot performance problems or intermittent performance problems with web services message traffic.
WSPerf trace helps the analyst quickly narrow down likely suspects for root-cause determination.
If an extremely lightweight trace is required, this is about as light as it gets.

Reference: Troubleshooting Web Services Performance Problems In IBM WebSphere Application Server With Web Services Performance (WSPerf) Trace

5.)  Trace To See Just XML-Soap Messages

For JAX-WS applications, use:*=all
For JAX-RPC applications, use:*=all

6.)  Web Services Client Outbound Connection Pool Trace

For the connection pooling trace, reference the article here, Web Services Client Outbound Http Connection Pool Troubleshooting.

What type of [web services] problems need Web Services Client Outbound Connection Pool Trace?

7.)  JAX-WS Applications In WebSphere Liberty


Reference: IBM Knowledge Center 8.5.5, Liberty: Implementation of JAX-WS web services applications

8.)  Policy Set Trace.


Use this trace when you suspect the policy sets and/or the policy set bindings themselves.

Examples of some problem descriptions where the policy set trace specification may help:

  • "An http policy-set for a client read-timeout is not working but the jvm custom property is working"
  • "My SSL policy set is not getting picked up by the application"
  • "The administration console does not save any Policy Set Bindings for our SAML Token; wsadmin, however, does work ok to set these bindings configurations"


Choose the WebSphere Application Server web services trace specification based on your type of problem. Problems at application startup require the full engine trace, while IO symptoms can be diagnosed with a more targeted trace specification. There is a special trace for web services performance and a targeted trace specification for web services connection pooling problems. Finally, there is a different trace specification for Liberty.


John Boyer (IBM): New Continuous Engineering product releases

The Continuous Engineering teams have delivered a number of new product releases over the last few weeks. These releases include a number of fixes and enhancements. We've included highlights of some of the enhancements below.


Available November 29, 2016

  • Rational Lifecycle Integration Adapters Tasktop Edition V1.2
    The latest release of Rational Lifecycle Integration Adapter Tasktop Edition allows clients who use third-party tools to take advantage of the benefits that are offered by Configuration Management capabilities in the IBM Collaborative Lifecycle Management suite while still ensuring that desired artifacts are visible and synchronized across the tools. Configuration Management allows for multiple versions of Collaborative Lifecycle Management artifacts. Other tools, like JIRA and HPE ALM do not share the same concept nor have this same capability. Rational Lifecycle Integration Adapter Tasktop Edition V1.2 bridges the gap by showing one (or more) versions of a Collaborative Lifecycle Management artifact in tools that do not have version capabilities.

    Additionally, there are now new adapters for HPE Octane and Pivotal Tracker, along with general improvements in functionality for connectors with:
    • JIRA
    • HPE ALM
    • IBM ClearQuest
    • CA PPM and Agile Central, Version One
    • Microsoft™ TFS


Available December 5, 2016

  • Rational solution for Collaborative Lifecycle Management V6.0.3

Teams that create and evolve complex IT systems and mechanical-electrical-software products need modern tools and development practices to overcome the challenges of increasing solution complexity, compliance, and cross-organization coordination and collaboration. Version 6.0.3 furthers this with the following enhancements:

  • Extend agile development to agile program and portfolio planning with built-in Scaled Agile Framework (SAFe) V4.0 templates.
  • Gain efficiencies from easier parallel development and reuse of requirements and tests with Global Configuration Management of many component streams and baselines within project areas in IBM® Rational® DOORS® Next Generation and Rational Quality Manager.
  • Find insight into project dependencies and status using:
    • Jazz™ Reporting Service with more development data
    • More ready-to-copy reports
    • More graph formatting options
  • Collaborate more easily on requirements with expanded use case support for requirements interchange across supply chains and new user interface wireframe diagrams.
  • Improve productivity of manual testing with new customizations and automated testing across multiple servers.
  • Improve productivity in complex software development with improved change and merge use cases in Rational Team Concert™ source control management and enhancements when updating Rational Team Concert work items while using Git.

Refer to the Highlights of 6.0.3 blog post for more information.


  • Rational Publishing Engine V2.1.1

In this release, project teams and document controllers can look forward to further improved web functionality that is built on the success of the recent Rational Publishing Engine V2.1 release. The enhancements include the following:

  • Everyday users:  When grouping artifacts, users can expand or collapse all of the items.
  • Report designers:  They can hide complexity from everyday users:
    • New sample assets are available for report designers to use (for example, templates, snippets, scripts, and style sheets, which are designed to work with specific data sources, such as, Rational DOORS or Rational Quality Manager).
    • Report designers can upload and work with JavaScript files.
    • Report designers can select all the assets in a view by clicking the check box in the table heading.
    • Report designers can select multiple templates and click the Create Report from Template icon to load the selected templates into the Design reports page.
  • Administrator:  Administrators can now monitor long running jobs and pause, cancel, or resume the jobs.


Other Rational Publishing Engine enhancements include:

  • New Document Studio guided tours to help you quickly build and generate documents for data sources, such as Rational DOORS and Rational DOORS Next Generation.
  • Drag-and-drop support for creating templates from web resources is now added for communication diagram and object model diagram web addresses from Design Management, and for graphical report websites from Jazz™ Reporting Service Report Builder in Document Studio. The resulting template and document specification are automatically configured with the REST website, credentials, and authentication type.
  • Improvements to the Document Studio JavaScript editor include syntax coloring and content assist. You can also include the JavaScript as a reference link in the template.
  • Ability to add mathematical equations to your documents.
  • Ability to set links on your images.
  • Ability to add rich text inside links, for example tags.


Rational Publishing Engine V2.1.1 deprecates the following functions:

  • The 1.x Remote services are deprecated in this release. Administrators can migrate assets from the 1.x Remote services application to Document Builder.
  • The Legacy PDF driver is no longer supported. If your document specification points to the old legacy PDF driver, modify the specification to refer to the default PDF driver.


  • Rational Rhapsody V8.2 and Rational Rhapsody Design Manager V6.0.3
  • AUTOSAR: migration of projects to newer versions of AUTOSAR
  • Configuration management: direct integration with RTC (Rational Team Concert)
  • Diagrams: Adding shortcut list to graphic elements
  • Diagrams: connection points for rectilinear lines
  • Diagrams: connectors with rounded bends
  • Diagrams: enhanced highlighting during hover
  • Diagrams: guides for aligning elements
  • Diagrams: improved routing of connectors
  • Harmony SE Toolkit enhancements
  • Rhapsody API: additional callback methods
  • Rhapsody DM client: remote requirements in multiple views
  • Rhapsody DM web client: improvement of properties page
  • Rhapsody DM web client: remote requirements as independent model elements
  • Rhapsody Design Manager - copying, moving and referencing model elements
  • Rhapsody Design Manager: additional information in element preview
  • Rhapsody Design Manager: importing profiles
  • Rhapsody Design Manager: linking to sequence diagram elements
  • Rhapsody Design Manager: navigating from element pages to element in Rhapsody client
  • Rhapsody Design Manager: new ChangeSet toolbar in Rhapsody client
  • Rhapsody Design Manager: reference information for non-loaded elements
  • Rhapsody Design Manager: references to nested packages
  • Rhapsody Design Manager: use of work item approvals to restrict changeset delivery
  • Rhapsody command line: loading helpers
  • Rhapsody in Ada: SXF ports
  • Rhapsody in Ada: catching exceptions when entering first state
  • Rhapsody in Ada: flowchart support - code generation and reverse engineering
  • Rhapsody in Ada: including user-defined code in auto-generated code
  • Rhapsody in Ada: pragmas in generated code
  • Roundtripping of individual classes in 64-bit version of Rational Rhapsody
  • Search and replace dialog: enhancements
  • Sequence diagrams and timing diagrams: new metamodel elements
  • Sequence diagrams: controlling message spacing
  • Sequence diagrams: defining timeout duration with model element
  • Sequence diagrams: message formatting
  • Support for Red Hat Enterprise Linux 7 and 7.1
  • SysML modeling: ValueProperty elements
  • SysML modeling: automatic creation of allocations
  • Tag enhancements
  • TestConductor: inclusion of remote requirements in coverage computation
  • TestConductor: model coverage computation for flow charts in Rhapsody in Ada.
  • TestConductor: support for Rhapsody in Ada on Linux

For more detailed information about what is new in these releases and how to download these new versions, refer to the following:


John Boyer (IBM)How to map your network by geography

In the first version of what was then Riversoft, and is now IBM Tivoli Network Manager V4.2, you could choose, if I recall correctly, a purple map of the world as the background for the equivalent of the Network Views.


But the devices didn't go in the correct places - they were just overlaid on the image.


Technology has moved on in the intervening years, and now, with V4.2 Fixpack 2 of Network Manager, geographical mapping is supported in the GA product. A previous version of this feature was available as a technology preview.

  • You can configure your discovery to pick up device location information, or enrich the discovery with existing location data.
  • Then you can create and display a Network View which shows the devices in their proper location. You'll need to ensure you have the right licenses first.
  • And to keep the view manageable, there are several scoping and filtering options available.

Looking good!



To try out the feature, update Network Manager to Fixpack 2 (on Fix Central), and see the instructions:


John Boyer (IBM)IBM Control Desk v7.6 certification

Well, it has been more than a year since IBM Control Desk v7.6 was released back in July 2015.

As you prepare for 2017, have you planned to get certified on the product? There is one certification that you may be interested in :-


IBM Certified Associate - IBM Control Desk v7.6

To obtain this certification, you will need to pass the following test :-

Test C9560-680: IBM Control Desk V7.6 Fundamentals


It is suitable for anyone who installs and deploys the IBM Control Desk v7.6 solution. A certified individual is expected to be able to complete basic configuration and usage tasks with little to no assistance from peers, support, or product documentation. If you are interested, you can take the test from a Pearson VUE test center @

And finally, to prepare for the exam you can refer to the following study guide :-

Good luck and all the best!


John Boyer (IBM)IBM HTTP Server v9.0 is based on Apache 2.4

Last week I wanted to use the "no-cache" environment variable in the IBM HTTP Server configuration for more fine-grained caching, and I learned that it is only available in Apache 2.2.12 and later. Because IBM HTTP Server 8.5 is based on Apache 2.2.8, I failed to configure caching for specific MIME types only. I just checked the IBM HTTP Server v9.0 documentation and found that it is based on Apache HTTP Server 2.4.12. This should allow us to make use of the "no-cache" environment variable from now on when using IBM HTTP Server.
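For example, a minimal sketch of what this enables in httpd.conf (the MIME choice and the URL pattern here are my own illustration, not from the original post):

```apache
# Cache everything on disk...
CacheEnable disk /

# ...but set the "no-cache" environment variable for CSS responses so that
# mod_cache skips them (the env var is honored in Apache 2.2.12 and later)
SetEnvIf Request_URI "\.css$" no-cache
```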

Matt Webb (Schulze & Webb)Hardware-ish coffee morning, Thursday 15th

Okay okay okay, let's have one more hardware-ish coffee morning to wrap up 2016...

Thursday 15 December, 9.30am for a couple of hours, at the Book Club, 100 Leonard St.

You know the score: no intros, no presentations. Just a corner at a handy cafe. Seriously, talk to EVERYONE; it's worth it. Bring prototypes if you have 'em, and if you don't, your good self is enough... More info here.

Might be 5 people, might be 25, might be just me and my email. Feel especially welcome if you are NOT A DUDE because it's weird otherwise. All super relaxed and friendly. I'll bring Christmas crackers if I remember and we can all wear hats.

See you on the 15th!

ps. for email updates about hardware-ish coffee mornings, join the mailing list.

John Boyer (IBM)Optimizing and Multi-booting Raspberry Pi 3

With the latest Raspberry Pi 3 (from now on referred to as Pi 3), which is about 10x faster than the Raspberry Pi 2 and has built-in WiFi and Bluetooth connectivity, there is more reason for users to venture into the many operating systems (O/S) built for the Raspberry Pi or ARM processors, to find the O/S that best suits their tasks in the ever-blooming field of the Internet of Things (IoT).

Moreover, the improved CPU speed allows Pi 3 to run full-fledged versions of an O/S, such as the Ubuntu desktop, without the lag or slowness that makes for a poor user experience. The improvement in processing speed lets users treat Pi 3 like a desktop PC and continue to use their favorite applications on it for their daily work, or for designing, coding, and testing IoT applications.

Some of you may feel the statements above are exaggerated or oversold. You may find Pi 3 running only slightly faster than Pi 2, and certainly not 10x faster. You may find that applications still load as slowly as on Pi 2. You may also find managing many tiny microSD cards to hold different O/S a nightmare, on top of the confusing microSD card specifications (Class 4/6/10, SD/SDHC/SDXC, UHS-I/UHS-II, etc.). Let me tell you: the bottleneck here is the microSD card. It was designed to store sequential files, such as photos and videos, and the high speed ratings of microSD cards are measurements of sequential reads and writes. It does not perform well for random access to stored content, which is how most operating systems are designed to use storage. If you can do away with microSD cards, your experience will be a lot merrier.

In this article, I am going to share with you some of the optimization tricks I discovered, plus an open source boot manager that supports multi-boot on Pi 3 and, most importantly, allows users to use any USB storage device as the storage medium for an O/S. You can avoid the microSD card bottleneck this way, and easily store multiple O/S on one USB device. To make the best of this arrangement, I will connect a small-capacity SSD (120GB) over USB to Pi 3. As we all know, the SSD is the fastest storage medium commonly available; it is easily over 10x faster than a common HDD, which in turn is many times faster than a microSD card for random access operations.


Conventional Way of Getting Started with Pi 3

The manufacturer of Pi 3 suggests that users use New Out Of Box Software (NOOBS) to get started. NOOBS is an easy operating system installation manager for the Raspberry Pi. You can get the official Raspberry Pi NOOBS and installation instructions here:


My New Way of Getting Started with Pi 3

In this article, I am going to show you an alternative way to get started, which uses the Berryboot boot manager and its headless setup option. Headless setup means that once Berryboot is running on Pi 3, you can connect to it from your laptop running a VNC client over the wireless network to install a new O/S, without a physical display unit/monitor, keyboard, or mouse connected to Pi 3. This method promotes mobility and less hassle: all you need is to connect a USB storage device to one of Pi 3's USB ports and insert a small-capacity microSD card into Pi 3's built-in microSD slot.
Note that a microSD card is still required because there is no BIOS firmware on the Raspberry Pi; the card acts as the medium for boot-related code and data, so only a small capacity is needed. I am going to use an old 2GB microSD card from my digital camera, which I had put aside after I bought bigger replacement cards for the camera.

You can read more about Berryboot and its installation procedures here :-

1. Download Berryboot for Pi 2 or Pi 3 here :-

2. If the 2GB microSD card is not formatted to FAT file system, please format it to FAT before you proceed to the next step.

3. Extract the content of the .zip file downloaded in step 1 to the newly formatted microSD card, maintaining directory structures.

4. Before modifying the file cmdline.txt, do a quick check of its content using the following command :

cat /media/<user name>/<serial number>/cmdline.txt


Note that you need to replace <user name> with your current login user name, and <serial number> with the actual serial number of the formatted microSD card.

5. You should see a single line as shown in screenshot above.

6. Using a text editor such as gedit, append "vncinstall" to the line to indicate that you wish to connect to Berryboot via VNC to perform the installation.
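If you prefer the command line, the same edit can be sketched with sed (the mount point below is an assumption; substitute your own user name and the card's serial number as described in step 4):

```shell
# Append "vncinstall" to the single line in cmdline.txt in place.
# /media/$USER/1234-ABCD is a placeholder mount point.
CMDLINE="/media/$USER/1234-ABCD/cmdline.txt"
sed -i 's/$/ vncinstall/' "$CMDLINE"
```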

7. Next, using the same text editor, append one of the following "ipv4=..." settings to the line, depending on whether you will connect over an Ethernet cable or over WiFi.

To connect using Ethernet, use this format :
ipv4=<IP addr>/<netmask>/<gateway>

For example, ipv4=

To connect using wifi, use this format :
ipv4=<IP addr>/<netmask>/<gateway>/wlan0

For example, ipv4=

A WiFi connection requires you to create another new file named wpa_supplicant.conf on the FAT partition of the microSD card to store the WiFi SSID and password in the following format:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev


Note: Use key_mgmt=NONE instead of psk="wpa-password" if the SSID has no password.

You should have something like this for WiFi :-
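A minimal sketch of the complete file (the SSID and password are placeholders; the first line is the one given in the format above):

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev

network={
    ssid="my-home-ssid"
    psk="wpa-password"
}
```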


8. Unmount the microSD card from your laptop and insert it into your Pi 3. Boot up your Pi 3.

9. On your laptop, start a VNC client application and set it to connect to the IP address you specified in cmdline.txt. You should be able to see this :


10. Please specify your time zone, then press OK button to continue.

11. You will be shown the next screen :

You now have the option to plug in a USB storage device and select it as the storage for all your O/S images. Go ahead and plug in your USB device now.

12. Once you plug in a USB storage device, it will be detected and shown in the list.


As you can see, it detected my SSD connected via a USB port as sda. I will choose to format it with the btrfs file system, which supports on-the-fly compression.


13. Next, you will see a progress bar indicating that formatting is running. Wait until the disk format is done.


14. You will then be prompted to select an O/S to download from a list of popular O/S. However, we do not want to install any of those for now. Just click Cancel to close the window, and click OK to reboot.


15. After reboot, you will see the following screen with Add OS option.


16. Berryboot has its own server hosting many more O/S images, which can be downloaded for free. We are interested in downloading the latest Raspbian Jessie Desktop PIXEL. Go ahead and click the Download button for that image and copy the downloaded file to another USB storage device.



17. The downloaded file is raspbian_jessie_desktop_pixel_2016.09.23_berryboot.img.tar.xz. You can use the tar command to uncompress it if your tar version is 1.22 or greater; otherwise, you will need to install xz-utils on Ubuntu/Debian to uncompress it. If you are using MS Windows, you can use 7-Zip. Once you have downloaded the file, on an Ubuntu/Debian system, run the following command to uncompress it :

tar -xf raspbian_jessie_desktop_pixel_2016.09.23_berryboot.img.tar.xz
The tar command extracts the content and creates a new .img file named raspbian_jessie_desktop_pixel_2016.09.23_berryboot.img

18. Copy the .img file to another USB storage device, then plug that device into one of the USB ports on Pi 3.

19. Switch back to the VNC client and long-press the left mouse button on the Add OS button; you should see a drop-down with two options. Select Copy OS from USB stick.



20. Navigate to the USB storage device you just plugged in, select the .img file, then click Open.



21. Wait for Berryboot to copy the image to your USB hard disk. When it is done, you will see a new entry in the list of O/S.



22. Highlight the new OS, click Set default, then click Exit to reboot.

23. By default, the new Raspbian OS boots to a graphical user interface. You need to connect an HDMI monitor and a network cable to your Pi 3 to see some important information, such as the IP address, so that you can access your Pi 3 remotely.



SSH access is enabled by default, so you can remotely access your Pi 3 without connecting a monitor, keyboard, or mouse to it; all you need is a network connection to your Pi 3. To check the current IP address obtained via DHCP, mouse over the network icon at the top right corner; a pop-up message shows the current IP address.



24. Once you know the IP address, regardless of whether you booted your Pi 3 in GUI mode or text mode, you can always use ssh to connect to it. Assuming the ssh-related packages are properly installed in your environment, issue the following command to connect to your Pi 3 :

ssh pi@



The first time you connect via ssh, you will be warned about the authenticity of Pi 3 as a host; just answer yes to continue. When prompted for a password, key in raspberry, which is the default password.

25. Because Pi 3 was designed to run an OS from a microSD card, the swap partition that is common in an ordinary Linux OS is turned off by default. Enabling a swap partition on a microSD card would only slow the OS down, because microSD cards were not designed for random read and write access.

To make Pi 3 behave more like an ordinary Linux system, we can enable the kernel's built-in zram module, whose concept is similar to a RAM disk. Instead of storing data on a RAM disk, it creates swap partitions on the RAM disk and uses them as swap. Studies have concluded that zram helps improve GUI responsiveness. Moreover, zram compresses the swapped-out content on the fly before storing it in the allocated memory, which means more content can be swapped out and stored than with an ordinary swap partition.

26. To begin, we will first make sure there is no swap partition allocated. Issue this command :

cat /proc/swaps



As you can see, there is only the header line; there are no results.


27. Now we need to create the script that starts zram. You can use any GUI text editor, or a command-line text editor such as nano. For simplicity, I will show the steps using the nano text editor. Issue this command :

sudo nano /etc/init.d/

In the nano text editor, key in (or copy and paste) these lines :

#! /bin/sh
### BEGIN INIT INFO
# Provides:          zram
# Required-Start:
# Required-Stop:
# Should-Start:
# Default-Start:     S
# Default-Stop:
# Short-Description: Enable zram and tuned for RaspPi 3
# Description:
### END INIT INFO

. /lib/init/vars.sh
. /lib/lsb/init-functions

do_start () {
  # load dependency modules; create one zram device per CPU core
  NRDEVICES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
  if modinfo zram | grep -q ' zram_num_devices:' 2>/dev/null; then
    MODPROBE_ARGS="zram_num_devices=${NRDEVICES}"
  elif modinfo zram | grep -q ' num_devices:' 2>/dev/null; then
    MODPROBE_ARGS="num_devices=${NRDEVICES}"
  else
    exit 1
  fi
  modprobe zram $MODPROBE_ARGS

  # Calculate memory to use for zram (1/2 of ram)
  totalmem=`free | grep -e "^Mem:" | sed -e 's/^Mem: *//' -e 's/  *.*//'`
  mem=$(((totalmem / 2 / ${NRDEVICES}) * 1024))

  # initialize the devices
  for i in $(seq ${NRDEVICES}); do
    DEVNUMBER=$((i - 1))

    # select lz4 compression algorithm
    echo lz4 > /sys/block/zram${DEVNUMBER}/comp_algorithm
    echo $mem > /sys/block/zram${DEVNUMBER}/disksize

    mkswap /dev/zram${DEVNUMBER}
    swapon -p 5 /dev/zram${DEVNUMBER}
  done

  # Ask kernel to swap more often
  sysctl vm.swappiness=80
}

case "$1" in
  start)
    do_start
    ;;
  stop|restart|reload|force-reload)
    echo "Error: argument '$1' not supported" >&2
    exit 3
    ;;
  *)
    echo "Usage: $0 start" >&2
    exit 3
    ;;
esac

For your convenience, I have attached the script file, and you can upload it to your Pi 3 via the scp command. The syntax of scp is as follows :

scp localpath/file remoteuser@remotehost.domain:/remotepath

As such, for our case, the command would be :

scp /<your folder which stored downloaded file>/ pi@


28. Once uploaded, move the file to /etc/init.d, turn on the permission that allows execution of the file, and run the update-rc.d command :

sudo mv /etc/init.d

sudo chmod +x /etc/init.d/

sudo update-rc.d defaults



29. Reboot your Pi 3. Once it has rebooted, open a terminal or an ssh session and run the command again :

cat /proc/swaps


As you can see from the results, the swap partitions were created.

30. To improve performance, you should also install a package called preload. You can install it with the command :

sudo apt-get install preload


31. To further improve performance, we will mount as tmpfs some frequently used folders whose data is only for logging or only valid for the current session; tmpfs exists only in RAM and is volatile. Examples of these folders are /tmp and /var/log. You may argue that the content of /var/log is required for diagnostic purposes if things go wrong, so why store it in volatile form? I would say you only check the log files in /var/log when you need to; most of the time you won't bother. So why waste your precious storage space? The same goes for /var/cache/apt/archives, which is used to store downloaded update files before the update process starts.

Edit /etc/fstab with this command :

sudo nano /etc/fstab

Then, append these lines to it :

tmpfs    /var/log                  tmpfs    defaults,noatime            0    0
tmpfs    /tmp                      tmpfs    defaults,noatime,mode=1777        0    0
tmpfs  /var/cache/apt/archives  tmpfs  defaults,noatime                       0    0

Note that the number of spaces between the fields above is flexible, with a minimum of one space required; you can also use tabs instead of spaces. Save /etc/fstab and exit nano. Restart Pi 3 to see the changes.

Note that when you really need the log files or cached apt packages to be in non-volatile form, you can always comment out the lines in /etc/fstab that start with /var by inserting # at the beginning of the line. Save the edited file and reboot.
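If you prefer the command line, commenting out the /var/log line can be sketched with sed (the pattern assumes the line layout shown above):

```shell
# Prefix the /var/log tmpfs line in /etc/fstab with "#" so logs persist again
sudo sed -i 's|^tmpfs\([[:space:]]\{1,\}/var/log[[:space:]]\)|#tmpfs\1|' /etc/fstab
```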


32. Finally, perform the standard update steps to make sure you have the latest versions of the installed modules. You can do so easily with these two commands :

sudo apt-get update

sudo apt-get dist-upgrade





During the update process, you might be asked to confirm the use of newly installed configuration files, etc. I would just agree to use the latest configurations, files, and other relevant settings.



33. Reboot Pi 3 to get a fresh and fast system. Congratulations! You have successfully optimized a standard Raspberry Pi OS image to make it more enjoyable to use in your next IoT project.





John Boyer (IBM)z/OS UNIX is providing more granular authorization support for specific SMF TYPE/SUBTYPE records

The USS kernel is providing support to the SMF service (BPX1SMF/BPX4SMF) that allows a more granular check for new profiles in the FACILITY class. In addition to BPX.SMF, the SMF service also checks for BPX.SMF.xxx.yyy, where xxx is a specific TYPE and yyy is a specific SUBTYPE. If this profile is defined and the user is permitted to it, the SMF service allows the caller to write an SMF record of this TYPE/SUBTYPE.


For the BPX.SMF.type.subtype check, no generics are allowed, and if the user is permitted, the environment must be clean and program-controlled. The smf_record syscall verifies that the address space has not loaded any uncontrolled executables, and any future loads or execs of files that reside in uncontrolled libraries will be prevented. Program control is not required when using the BPX.SMF profile.


To use the smf_record callable service, the caller must be APF authorized or permitted to either the BPX.SMF or the BPX.SMF.type.subtype resource profile in the FACILITY class, where type is the SMF record type to be written and subtype is the SMF record subtype.

For this test, we used OpenSSH, which uses SMF Type 119 subtypes 94 through 98 for both server and client records.

Notes: For executable programs, APF authorization is checked first; if the program is APF authorized, any user who calls it can write SMF records. If the program is not APF authorized, BPX.SMF is checked. If the user is permitted to BPX.SMF, all SMF recording for this user is allowed. If the user is not permitted to BPX.SMF, BPX.SMF.type.subtype is checked: if the user is permitted to it, SMF recording of that specific type/subtype is allowed; otherwise, no SMF recording is allowed. For BPX.SMF.type.subtype, program control is required; for BPX.SMF, it is not.
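A sketch of defining and permitting one of these profiles with RACF (the PERMIT and SETROPTS commands mirror those used in the cases later in this post; the RDEFINE is my addition for completeness):

```
RDEFINE FACILITY BPX.SMF.119.94 UACC(NONE)
PERMIT BPX.SMF.119.94 CLASS(FACILITY) ID(DEMI) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
```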


Client connection started (subtype 94): Client connection started (subtype 94) is collected after an ssh client connection is started and the user is authenticated

Server connection started (subtype 95): Server connection started (subtype 95) is collected after an sshd server connection is started and the user is authenticated

Login failure record (subtype 98): Login failure records are collected after each unsuccessful attempt to log into the sshd daemon


Setting up OpenSSH to collect SMF records

You can set up the system and OpenSSH to collect SMF Type 119 records for both the client and the server.

1. Setting up the system to collect OpenSSH SMF records

(1) Update the SMFPRMxx parmlib member to activate SMF data collection for Type 119 subtype 94, 95, 96, 97, and 98 records.
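As a sketch, the SMFPRMxx entry covering steps (1) and (2) might look like this (an actual member contains many other statements, and subtype filtering is not shown here):

```
SYS(TYPE(119),EXITS(IEFU83,IEFU84))
```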


(2) Update the SMFPRMxx parmlib member to indicate which SMF exits (IEFU83 or IEFU84) are desired.


(3) Issue SET SMF=xx to have this update take effect.

2. Setting up OpenSSH to collect SMF records

(1) To enable SMF recording for the client side, in the /etc/ssh/zos_ssh_config file (if the file doesn’t exist, you can copy it from /samples to /etc/ssh), set the keyword:

      ClientSMF                  TYPE119_U83

or

      ClientSMF                  TYPE119_U84

(2) To enable SMF recording for the server side, in the /etc/ssh/zos_sshd_config file (if the file doesn’t exist, you can copy it from /samples to /etc/ssh), set the keyword:

      ServerSMF     TYPE119_U83

or

      ServerSMF     TYPE119_U84

When you’re done, you have set up OpenSSH to collect SMF records.

The following cases are what we performed in our environment.

Case 1: APF authorized

SSHD is APF authorized, so any user who calls SSHD can write SMF records; no BPX.SMF authorization is needed.

However, ssh is not APF authorized, so the user who calls ssh must be permitted to BPX.SMF in order to write SMF records.

136:/PETPC0/usr/sbin $ ls -El /usr/sbin/sshd

-rwxr--r--  ap--  2 ALEASE1  root     8269824 Oct 12 11:20 /usr/sbin/sshd

139:/PETPC0/usr/sbin $ whence ssh


140:/PETPC0/usr/sbin $ ls -El /bin/ssh

-rwxr-xr-x  -p--  2 ALEASE1  root     7987200 Oct 12 11:20 /bin/ssh


(1) Users demi and usswork are not permitted to BPX.SMF. usswork started sshd on Z1; demi issued ssh on Z2, failed to log in once, and succeeded afterward. The expected results are subtype 95 (successful login) and subtype 98 (failed login) records.



rlist facility bpx.smf au

Z1 - SSHD (usswork):

_BPX_USERID='OMVSKERN' _BPX_JOBNAME='SSHD' /usr/sbin/sshd 2>/tmp/sshd.stderr &















Z2 - SSH (demi):

















Using ERBSCAN/ERBSHOW to view the SMF records.

Record Number 2: SMF Record Type 119(98)

Record Number 3: SMF Record Type 119(98)

Record Number 4: SMF Record Type 119(95)


The reason for one login failure but two Type 119 subtype 98 records is that PubkeyAuthentication in /etc/ssh/sshd_config is set to yes, while we used only PasswordAuthentication to authenticate.

SMF Record Type 119(95) & 119(98). Works as expected.

(2) User demi is permitted to neither BPX.SMF nor BPX.SMF.119.94. demi issues ssh on Z1; as ssh is not APF authorized, the expected result is no subtype 94 records.



search class(facility):









rlist facility bpx.smf.119.94 au


----      ------   ------ -----









Z1 - SSH
















128:/u/demi $ bpxmtext 09210405

BPXMRSMF 03/18/16

JRSMFNotAuthorized: The __smf_record function can not be performed because the

caller is not permitted to the BPX.SMF facility class and is not APF

authorized. The caller must either be permitted to the facility class or APF


Action: If the user is to be permitted to use the __smf_record function, the

user must be permitted to the BPX.SMF facility class or be APF authorized.


Using ERBSCAN/ERBSHOW to view the SMF records.

  • No subtype 94 records

No subtype 94 records. Works as expected.


Case 2: BPX.SMF authorized

User demi is permitted to BPX.SMF and issues ssh on Z1; the expected results are subtype 94 records.

permit bpx.smf class(facility) id(demi) access(read)


rlist facility bpx.smf au


----      ------   ------ ----- 

DEMI      READ        000000    









Z1 - SSH













Using ERBSCAN/ERBSHOW to view the SMF records.

  • Record Number 2: SMF Record Type 119(94)

SMF Record Type 119(94). Works as expected.


Case 3: BPX.SMF.type.subtype authorized

User demi is not permitted to BPX.SMF but is permitted to BPX.SMF.119.94. demi issues ssh on Z1; the expected results are subtype 94 records.




search class(facility):










rlist facility bpx.smf.119.94 au


----      ------   ------ ----- 

DEMI      READ        000000    









Z1 - SSH











Using ERBSCAN/ERBSHOW to view the SMF records.

  • No SMF Record Type 119(94)

logout and login:                      

  • Record Number 2: SMF Record Type 119(94) 

SMF Record Type 119(94). Works as expected.


Case 4: BPX.SMF.type.subtype authorized calls to BPX1SMF/BPX4SMF from a dirty environment

User demi is not permitted to BPX.SMF but is permitted to BPX.SMF.119.94. ssh is not program-controlled, so the expected results are no 119(94) records, with BPXP014I and BPXP015I messages occurring.



-----  --------   ----------------  -----------  -------

 00    USSWORK         NONE               NONE    NO   



-----  --------   ----------------  -----------  -------

 00    OLDID           NONE               NONE    YES   

RALT FACILITY BPX.SMF NOWARNING                                                                   



SETR RACLIST(FACILITY)  REFRESH                                                                   

 SETROPTS command complete.    

PERMIT BPX.SMF.119.94 CLASS(FACILITY) ID(DEMI) ACCESS(READ)                                       



SETR RACLIST(FACILITY)  REFRESH                                                                   

 SETROPTS command complete.                                                                       



----      ------   ------ -----

DEMI      READ        000000   



----      ------   ------ -----

No DEMI   



128:/u/usswork $ ls -El /bin/ssh

-rwxr-xr-x  -p--  2 ALEASE1  root     7987200 Oct 12 11:20 /bin/ssh

129:/u/usswork $ chmount -w /bin/ssh

130:/u/usswork $ df -kvP /bin/ssh

Filesystem         1024-blocks        Used  Available  Capacity Mounted on

OMVSSPT.PETPD0.ROOT.FS 3240000     3179256      60744       99% /PETPD0

ZFS, Read/Write, Device:45438, ACLS=Y

File System Owner : Z2          Automove=Y      Client=N

Filetag : T=off   codeset=0


131:/u/usswork $ extattr -p /bin/ssh

132:/u/usswork $ ls -El /bin/ssh

-rwxr-xr-x  ----  2 ALEASE1  root     7987200 Oct 12 11:20 /bin/ssh

Z2 - SSHD:

128:/u/usswork $ ps -elf|grep sshd

 ALEASE1     196851          1  - 02:48:18 ?         0:00 /usr/sbin/sshd

 ALEASE1     197875     197515  - 22:54:03 ttyp0010  0:00 grep sshd

Z1 - SSH:


FOTS2815 zsshSmfWriteRecord: Caller not permitted to use __smf_record2(): EDC5139I Operation not permitted. (errno2=0x092102AF).

FOTS2814 zsshSmfWriteRecord: ClientSMF keyword value TYPE119_U83 requires additional system setup.

32:/u/demi $ bpxmtext 092102AF

BPXMRSMF 03/18/16

JREnvDirty: The specified function is not supported in an address space where

a load was done that is not program controlled.

Action: Make sure that programs being loaded into this address space are

defined as program controlled.

N 00A0000 Z1       2016308 01:18:15.16 S0073997 00000090  BPXP015I HFS PROGRAM /bin/ssh IS NOT MARKED PROGRAM CONTROLLED. 

N 00A0000 Z1       2016308 01:18:15.16 S0073997 00000090  BPXP014I ENVIRONMENT MUST BE CONTROLLED FOR SMF (

S                                                         PROCESSING.   

Using ERBSCAN/ERBSHOW to view the SMF records.                                                  

  • No SMF Record Type 119(94), BPXP014I / BPXP015I

No SMF record type 119(94) and no BPXP014I / BPXP015I messages occur. This works as expected.

John Boyer (IBM)To make the most of open source, Jiangxi Bank deploys LinuxONE



Recently, IBM announced the successful deployment of a teller/pricing system for Jiangxi Bank based on the IBM LinuxONE solution. LinuxONE's strengths in performance, stability, and scalability give Jiangxi Bank a solid IT infrastructure foundation for business expansion and innovation.



As the first local commercial bank to build its core business systems on IBM LinuxONE, Jiangxi Bank chose IBM LinuxONE as the underlying infrastructure for its two core systems, teller and pricing, after long-term consideration of system performance, stability, openness, scalability, maintainability, and energy efficiency. The upgraded transaction-processing and resource-allocation capabilities will allow Jiangxi Bank to deliver an improved level of service to its customers.


















During nearly a year of evaluation and testing, Jiangxi Bank built on IBM LinuxONE a complete test environment covering the front end, cluster load balancing, middle tier, and back-end core systems. Through extended, repeated testing, Jiangxi Bank validated the feasibility of deploying the IBM LinuxONE solution.




In terms of scalability and openness, Jiangxi Bank placed particular weight on the infrastructure's virtualization and cloud capabilities. IBM LinuxONE can scale out to many virtualized nodes on a single machine, providing banking institutions with an efficient approach to resource allocation and dynamic workload handling. LinuxONE's support for OpenStack and open source systems also gives Jiangxi Bank a future-ready platform for its potential cloud, big data analytics, and system development needs.


In addition, to consolidate resources and simplify management, Jiangxi Bank plans to run the pricing and teller systems together on a single machine, achieving green, consolidated management on top of a highly available, highly reliable hardware architecture. IBM LinuxONE's small footprint not only gives Jiangxi Bank a one-stop system management solution but will also significantly reduce its staffing and management costs. IBM is also providing Jiangxi Bank with comprehensive system operations and staff training services.


IBM LinuxONE and the mainframe business have seen rapid market growth and numerous success stories both globally and locally. LinuxONE has helped large international banks achieve 100% availability after going live and has been successfully adopted across several local industries, which gave Jiangxi Bank full confidence in adopting this infrastructure.


Li Huiming, General Manager of the Technology Department at Jiangxi Bank, said: "The future of traditional finance will inevitably involve new technologies being continuously adopted and proven. Facing trends such as open source and big data analytics, Jiangxi Bank wants to stay a step ahead and get the architecture right. As heavyweight banks transform into agile ones, the business leaps driven by new Internet applications are hard to deploy quickly on the existing x86 architecture. IBM's mainframe technology enjoys an excellent reputation in the financial industry, and IBM LinuxONE combines IBM's industry-recognized mainframe technology with open source systems, marrying deep technical heritage with cutting-edge innovation. We hope to accelerate our own transformation with such trustworthy, innovative technology."





Built on two IBM LinuxONE Rockhopper machines, Jiangxi Bank's new teller/pricing system delivers a significant performance improvement. According to the latest test results, the system can fully handle the bank's high-concurrency workloads during periods of rapid business growth, enabling it to deliver faster, better service to its customers in the future.




IBM LinuxONE supports mainstream open source and ISV tools including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef, and Docker. Going forward, Jiangxi Bank plans to deploy its centralized batch platform and big data platform on LinuxONE, using LinuxONE's open source ecosystem, open architecture, and big data processing capabilities to develop more business applications, gain insight from financial big data, and support its business decisions. Jiangxi Bank will also build on the initial disaster recovery setup currently deployed on LinuxONE to further develop its disaster recovery centers and more fully implement its "three sites, two centers" disaster recovery architecture.




-- Reprinted from IBM China (WeChat ID: IBMGCG)

John Boyer (IBM)A customized mail template affects Notes, POP3 client or both?

(Q). A customized mail template affects Notes, POP3 client or both?

(A). In general, a customized mail template affects the Notes client only.

John Boyer (IBM)WebSphere Portal support newsletter December 6th, 2016

Welcome to this edition of the WebSphere Portal Tuesday NewsDay letter summarizing the latest content for several products supported in the Digital Experience Support Center, including: WebSphere Portal and Portal Express, IBM Web Content Manager and IBM Web Experience Factory.

Product Documentation Links
Portal 8.5 Product Documentation
Portal 8.0 Product Documentation


Latest available updates for IBM WebSphere Portal and IBM Web Content Manager

IBM recommends updating Portal with the latest cumulative fix (CF) on a regular basis to benefit from product fixes as soon as they become available. Staying on the latest CF level also helps ensure that interim fixes for potential security vulnerabilities become available to you as quickly as possible. For more information, follow the link to the PSIRT Blog to see the latest IBM Security Bulletins by product.

Version 8.5  - Latest CF12: delivers new services and updates, including support for JDK 1.8 and DB2 11.1
Version 8    - Latest CF21


New Features Introduced in Portal 8.5 Cumulative Fixes:     

Embark on creating exceptional user experiences quickly and easily with IBM Digital Experience on Cloud  


News Flashes
Security Vulnerability Bulletins for Portal / WCM                                                                                              
Security Vulnerability Bulletins for WebSphere Application Server                                                                    


Key Technical Documents

Article Title:   "Step-By-Step Guide to performing staging to production using Portal Application Archive in WebSphere Portal 8.5"
Author:  David Batres - L2 Support Engineer.
Summary:        This guide provides a comprehensive approach to performing staging to production using Portal Application Archive (PAA) in IBM WebSphere Portal 8.5    

Article Title:   "Step by Step guide to setup an IBM WebSphere Portal and IBM Web Content Manager V8.5 Cluster"
Author:  Emily Johnson - L2 Support Engineer.
Summary:        This guide provides a comprehensive step-by-step approach to building an IBM® WebSphere® Portal version 8.5 cluster.                 

Article Title:   "Step by Step guide to setup an IBM WebSphere Portal and IBM Web Content Manager V8.5 Cluster"                
Author:  Andrea Fontana - Non-IBM Senior Consultant
Summary:        This guide explains how to install, configure, and build an IBM WebSphere Portal v8.5 cluster from zero to hero.

Article Title:   "Administering IBM WebSphere Portal 8.5: A comprehensive workshop"                
Authors:  Thomas Hurek and Chet Tuttle - Portal + WCM Development
Summary:        The goal of this white paper is to explain the various administration and configuration tools offered by IBM WebSphere Portal 8.5. Learn about which tool to use for which task and about the new capabilities of WebSphere Portal 8.5, and understand differences from previous versions of WebSphere Portal. [The whitepaper] will take you through exercises for each tool so you can learn hands-on how to use them.

Article Title:   "Guide to Integrating WebSphere Portal v8.5 with LDAP"                
Author:  Jason Wicker - Portal Support
Summary:        Applications commonly require IBM ® WebSphere Portal ® to integrate with existing user repositories to enable authentication, authorization, and user management. LDAP (Lightweight Directory Access Protocol) is the most common user repository type. In most organizations, different groups administer WebSphere Portal and the LDAP server. Integrating the two requires in-depth knowledge of both systems. This document guides WebSphere Portal administrators on obtaining the necessary information to integrate WebSphere Portal with existing LDAP servers.

Article Title:   "IBM WebSphere Portal V 8.5 Performance Tuning Guide"                
Authors:  Multiple Portal Testers and Developers
Summary:        This white paper provides a basis for parameter and application tuning for IBM WebSphere Portal for Multi platform and for Linux on System Z V8.5.0. Remember that both tuning and capacity are affected by many factors, including the workload scenario and the performance measurement environment. For tuning, the objective of this paper is not to recommend that you use the values we used when measuring our scenarios, but to make you aware of those parameters used in our configuration. When tuning your individual systems, it is important to begin with a baseline, monitor the performance metrics to determine if any parameters should be changed and, when a change is made, monitor the performance metrics to determine the effectiveness of the change.

Article Title:   "Security Hardening Guide for IBM WebSphere Portal"                
Author:  Jason Wicker - Portal Support
Summary:        Security is a fundamental requirement for most web applications. Organizations commonly demand detailed accounting of web application security, especially after mainstream media coverage of high-profile vulnerabilities or exploits. This guide instructs architects and administrators on evaluating and improving the security of applications based on IBM WebSphere Portal.

Article Title:   "Developing Themes for WebSphere Portal 8.5"                
Author:  Cindy Wang - Portal Theme Development
Summary:     This is a series of articles highlighting how to create a modular theme in Portal 8.5.

Article Title:   "Redbook - Building and Implementing Social Digital Experiences"                
Author:  Multiple IBMers and non-IBMers
Summary:        IBM WebSphere Portal and IBM Web Content Manager (WCM) offer facilities to support the creation of a social digital experience (social portal), especially when combined with the market-leading social capabilities of IBM Connections.  This guide begins with defining the social portal and its capabilities, the architecture, the integration patterns between WebSphere Portal and IBM Connections and social analytics. It then looks at a number of products and how their capabilities can help you create the social digital experience.

Articles, Blogs and White Papers

Article Title:   "Staging to Production FAQ"                            Posted 08-Nov-2016
Authors:   Travis Cornwell - L2 Support
Summary: Staging to production is the process by which data from one WebSphere Portal Environment is copied to a second WebSphere Portal environment. It is also known as a deployment process. This document will discuss many questions that arise when devising a deployment process for your portal environments.

Article Title:   "New Tool Available for Generating Custom DX Themes From WordPress Themes"                 Posted 15-Nov-2016
Authors:   Laura Pomerleau - Portal Development
Summary:  Now a new tool is available for automatically generating an installable DX theme PAA from an existing WordPress theme. The tool will create the DX theme in seconds, complete with static templates, modules and dynamic content spots. Instead of copying, editing and moving files around manually, just run the tool and install the new theme in a few minutes. Then clean up the existing WordPress PHP code from the new DX theme, and it’s good to go. 

Article Title:   "Administration Tips and Tricks"                    Posted 23-Jun-2015
Authors:   Thomas Hurek - Portal Development
Summary:  In today's blog the author discusses a few Administration Tips and Tricks - 1. Easy Redirect from / to Portal URL with Apache/IBM HTTP Server, 2. Installation Manager has now a Web Interface, 3. How to speed up Cumulative Fix application?, 4. What to consider when using WCM Asynch rendering from a caching aspect, 5. Staging to production with installed PAAs, 6. Script to delete projects, 7. VMM Schema directories, 8. Resolver cachability, 9. Cross version syndication.

Article Title:   "New Administration Tips and Tricks"                Posted 08-Nov-2016
Authors:   Thomas Hurek - Portal Development
Summary:  In today's blog the author discusses a new set of Administration Tips and Tricks - 1. Optimize Performance if not using Intelligent Management Features in 8.5, 2. Solving Reference Errors during Syndication, 3. Writing a simple custom workflow action, 4. WCM Friendly URLs, 5. Themes stop rendering after CF12.

Article Title:   "How to Find Leaky Sessions in Portal"                       Posted 26-May-2015
Author:   Travis Cornwell - Portal L2 Support
Summary:  This blog entry will discuss utilizing a tool which will help in determining where session leaks may exist in WebSphere Portal.

Article Title:   "Lab- Building your Site with IBM Digital Experience"                        Posted 15-Nov-2016
Authors:   Herbert Hilhorst - IT Specialist
Summary: If you are new to IBM Digital Experience (DX) or have not worked with the latest release (8.5 CF07), you may want to get up to speed as a business user and start following this online lab that walks you through the main capabilities of this engaging solution.

Article Title:   "Preventing Backend Slow Downs From Causing Portal Delays"             Posted 29-Nov-2016
Authors:   Thomas Hurek - Portal Development
Summary:  A classic use case for Portal is to integrate backend data into Portal - be it via classic portlets, Scripting portlets, Ajax proxy calls or other means on the Portal and calling backend systems via EJBs, SOAP web services, REST or other technologies. A common issue is for those backends to slow down or hang at times and when those slow down also slowing down or even hanging the Portal. Even only rarely called backends can eventually cause hang situations since the waiting threads will accumulate and no web container threads will be left to handle work on the Portal side. This blog covers the prevention of external call hang and slow down situations.

Article Title:   "Developing IBM WebSphere Portal 8.5 EAR Themes: A Step-by-Step Guide"        Posted 16-Jun-2015
Authors:   Peter Grauvogel - IBM Lab Services
Summary:  In quite a few projects I have come across a requirement to deploy the dynamic and the static resources for a portal theme as an EAR file instead of using WebDAV for the static resources. I shared my experience some time ago in this article: Deploying and developing IBM WebSphere Portal 8 themes: A step-by-step guide. Since the theme architecture has stayed the same the steps are still valid for Portal 8.5 and only a couple of paths have changed. To make things easier I created a little ANT script that will copy the resources for you.

If you are interested in deploying your theme as an EAR file I recommend taking a look at the article first and then get started with the ANT script.

Article Title:   "Warming up your Portal Site"                Posted 26-May-2015
Author:   Travis Cornwell - Portal L2 Support
Summary:  WebSphere Portal leverages a large number of caches throughout the product.  Caches offer many advantages to the WebSphere Portal product - most notably they allow for significant performance improvements when the Portal server is under load.  When a Portal server is restarted, the caches are emptied out.  In the event of a planned (or unplanned) outage of the Portal server, it can take a bit of time for the caches to become repopulated following the restart.  Accessing the Portal server in a "cold" state, before the caches have been filled in, can leave the end user experience degraded.  In this article, we will discuss warming up the Portal server using a series of scripted operating system commands.

Article Title:   "Determining the Number of Users Logged Into Portal"                 Posted 26-May-2015
Author:   Travis Cornwell - Portal L2 Support
Summary:  For purposes of this blog entry - we will define the number of logged in users to Portal to be roughly equivalent to the sessions that exist in the Portal.  The blog entry proceeds to describe how to enable PMI in WAS to monitor the session count in Portal and get an approximation of the number of logged in users.

Product Tools and Utilities for WebSphere Portal

WebSphere Portal Log Analyzer:
A tool designed to help IBM WebSphere Portal users to troubleshoot issues with ConfigEngine and JVM startup failures.

IBM Heap Analyzer:
HeapAnalyzer allows the finding of a possible Java™ heap leak area through its heuristic search engine and analysis of the Java heap dump in Java applications.

IBM Thread and Monitor Dump Analyzer:
IBM Thread and Monitor Dump Analyzer for Java analyzes javacore and diagnoses monitor locks and thread activities in order to identify the root cause of hangs, deadlocks, and resource contention or monitor bottlenecks.


Useful URLs

Open a feature request

Automated Data Collection Tools to aid in resolution of Portal/WCM related issues

dwAnswers - quick Q&A, how to, etc. - similar to StackOverflow

Forums - detailed stacktraces, custom code samples, etc.:

Collecting Data / Mustgather Documents
Portal / WCM -


John Boyer (IBM)How to locate the local database directory?

I came across a situation where one of my clients recreated an instance but was not able to catalog the database successfully.


> db2 catalog db sample
SQL6028N  Catalog database failed because database "SAMPLE" was not found in the local database directory.


They were running this command from the instance home directory /home/db2inst1 .


So it is clear that the local database directory does not exist under the instance home directory. We need to specify the correct database directory in the 'CATALOG DATABASE' command. The client was not aware of the exact location where the database was created.


Now the challenge is to find the correct database directory location. 


I asked him to search the complete system for the file SQLSPCS.1:


find / -name SQLSPCS.1


It returned the following location:



This path is interpreted as follows:



Here, local_db_directory is /data1/db2data .


This shows that the local database directory for database 'SAMPLE' resides under /data1/db2data .


Finally, we are able to catalog the database successfully:


> db2 catalog db sample on /data1/db2data

DB20000I  The CATALOG DATABASE command completed successfully.
DB21056W  Directory changes may not be effective until the directory cache is refreshed.



Please note: If there is more than one SQLSPCS.1 file, you need to take care to identify the correct local database directory. In that case, change to <local_db_directory>/<instance_name>/NODE<number> and look for your database name.

For example:

> cd /data1/db2data/db2inst1/NODE0000

> ls

SQL00001  SAMPLE      sqldbdir



Here, database SAMPLE is found. Hence '/data1/db2data' is the correct database directory for database 'SAMPLE'.
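The path arithmetic above can be scripted. The following is an illustrative sketch (not an IBM-supplied tool) that assumes the layout shown in this post, <db_dir>/<instance>/NODE<number>/SQL<number>/SQLSPCS.1, and checks the NODE directory for the database name, as described in the note above. The demo builds a scratch directory tree standing in for /data1/db2data:

```python
import os
import tempfile

def local_db_directory(sqlspcs_path, database):
    # Layout assumed: <db_dir>/<instance>/NODE<number>/SQL<number>/SQLSPCS.1
    node_dir = os.path.dirname(os.path.dirname(sqlspcs_path))   # .../NODE0000
    db_dir = os.path.dirname(os.path.dirname(node_dir))         # local db directory
    # Confirm the database name appears under the NODE directory
    if database.upper() in os.listdir(node_dir):
        return db_dir
    return None

# Demo against a mocked-up tree (stands in for /data1/db2data)
root = tempfile.mkdtemp()
node = os.path.join(root, "db2inst1", "NODE0000")
os.makedirs(os.path.join(node, "SQL00001"))
os.makedirs(os.path.join(node, "SAMPLE"))
spcs = os.path.join(node, "SQL00001", "SQLSPCS.1")
open(spcs, "w").close()
print(local_db_directory(spcs, "sample") == root)   # True
```

With several candidate SQLSPCS.1 paths from find, running each through this check picks out the one whose NODE directory actually contains your database.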


Hope this helps!




John Boyer (IBM)Creating a Pool & VIP - Learn OpenStack in 5 Minutes a Day (122)


In the previous installment we finished configuring LBaaS. Today we start building the following LBaaS environment:

1. Create a pool named "web servers".
2. Add two pool members, "WEB1" and "WEB2", both instances running the Ubuntu cloud image.
3. Associate the load balancer VIP with a floating IP.
4. A client on the external network accesses the web servers through the floating IP.


Creating the Pool

Click Project -> Network -> Load Balancers, then click the "Add Pool" button on the Pools tab.

The pool creation page is displayed.

Name the pool "web servers".
Keep the default Provider, "haproxy".
For Subnet, select "".
For Protocol, select "HTTP".
For Load Balancing Method, select "ROUND_ROBIN".

Click "Add", and "web servers" is created successfully.

A few notes on the pool's attributes.

LBaaS supports the following protocols:

Because we are experimenting with web servers, select "HTTP" here.

LBaaS supports several load balancing methods:

With the round robin algorithm, the load balancer picks members from the pool in a fixed order to serve client connection requests. The drawback of this method is that there is no mechanism to check whether a member is overloaded, so a member with weaker processing power may have to keep taking on new connections. If all pool members have the same processing power and memory capacity, and each connection lasts roughly the same amount of time, round robin is a very good fit and each member's load stays well balanced.

With the least connections algorithm, the load balancer picks the pool member with the fewest current connections. This is a dynamic algorithm that requires real-time monitoring of each member's connection count and state. Members with more processing power handle connections faster and are therefore assigned more new connections.

With the source IP algorithm, connections with the same source IP are dispatched to the same pool member. The source IP algorithm is particularly useful for stateful applications such as shopping carts, because we want the same server to handle a given client's consecutive online shopping operations.

In our lab we chose the ROUND_ROBIN algorithm.
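The three selection strategies described above can be sketched in a few lines. This is a toy illustration only (the real scheduling happens inside the haproxy provider, not in user code), and the hashing used for source IP is just one way to make the mapping deterministic:

```python
import itertools
import hashlib

class Pool:
    """Toy sketch of the three LBaaS load balancing methods."""

    def __init__(self, members):
        self.members = members
        self._rr = itertools.cycle(members)          # fixed round-robin order
        self.connections = {m: 0 for m in members}   # current connection counts

    def round_robin(self):
        # Pick members in a fixed order, with no check on member load.
        return next(self._rr)

    def least_connections(self):
        # Pick the member that currently has the fewest connections.
        return min(self.members, key=lambda m: self.connections[m])

    def source_ip(self, client_ip):
        # The same source IP always maps to the same member.
        digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return self.members[digest % len(self.members)]

pool = Pool(["WEB1", "WEB2"])
print([pool.round_robin() for _ in range(4)])   # ['WEB1', 'WEB2', 'WEB1', 'WEB2']

pool.connections.update({"WEB1": 5, "WEB2": 2})
print(pool.least_connections())                 # WEB2
```

Note how round robin ignores the connection counts entirely, which is exactly the weakness described above.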

Adding a VIP to the Pool

Now that the pool is ready, we need to give it a VIP. In the actions column for "web servers", click "Add VIP".

Name the VIP "VIP for web servers".
For VIP Subnet, select "", the same subnet as the pool.
Specify the VIP address; if not specified, the system allocates one from the subnet automatically.
Specify HTTP port 80. For Session Persistence, select "SOURCE_IP".
Connection Limit can cap the number of connections; if not specified, connections are unlimited.

Click "Add", and the VIP is created successfully.

Usually we want the same server to handle a given client's consecutive requests; otherwise the client may lose its session and have to log in again.

This capability is Session Persistence. The VIP supports the following session persistence types:

SOURCE_IP has the same effect as the SOURCE_IP load balancing method described earlier. After the initial connection is established, subsequent requests from the same source IP are sent to the same member. When many clients access the VIP through the same proxy server (for example, from a company or school network), SOURCE_IP can leave the members unevenly loaded.


HTTP_COOKIE works as follows: when a client first connects to the VIP, HAProxy picks a member from the pool. When that member responds, HAProxy injects a cookie named "SRV" into the response; this cookie contains the member's unique identifier. The client's subsequent requests all carry this "SRV" cookie; HAProxy parses its contents and forwards the requests to the same member.

HTTP_COOKIE is preferable to SOURCE_IP because it does not depend on the client's IP.

APP_COOKIE relies on a cookie defined by the server-side application; for example, the application can create a cookie in the session to distinguish clients.
HAProxy inspects the app cookie in the request and makes sure requests carrying it are sent to the same member.
If there is no cookie (a new connection, or the application does not create one), HAProxy falls back to the ROUND_ROBIN algorithm to pick a member.

Comparing Load Balance Method and Session Persistence

We have now covered three load balancing methods:

and three session persistence types:

Because both involve choosing a pool member, they are easy to confuse. The key difference between them is the stage at which the member is chosen:

  1. The load balancing method chooses a member for a new connection.

  2. Session persistence chooses a member for subsequent connections from the same client.

Suppose, for example, we configure:

Load Balance Method -- ROUND_ROBIN
Session Persistence -- SOURCE_IP

When client A sends its first request to the VIP, HAProxy selects member1 using ROUND_ROBIN. For client A's subsequent requests, HAProxy applies the SOURCE_IP mechanism and keeps selecting member1 to handle them.
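The two-stage decision above can be sketched as follows. This is an illustrative model, not OpenStack or HAProxy code: new connections go through the load balancing method, and repeat clients are pinned by SOURCE_IP persistence:

```python
class LoadBalancer:
    """Sketch: new connections use the load balancing method (ROUND_ROBIN);
    repeat clients are pinned to their member by SOURCE_IP persistence."""

    def __init__(self, members):
        self.members = members
        self.next_idx = 0    # ROUND_ROBIN position
        self.sticky = {}     # source IP -> pinned member

    def dispatch(self, client_ip):
        if client_ip in self.sticky:             # session persistence path
            return self.sticky[client_ip]
        member = self.members[self.next_idx]     # load balancing method path
        self.next_idx = (self.next_idx + 1) % len(self.members)
        self.sticky[client_ip] = member
        return member

lb = LoadBalancer(["member1", "member2"])
print(lb.dispatch("ClientA"))   # member1 (new connection: ROUND_ROBIN)
print(lb.dispatch("ClientB"))   # member2 (new connection: ROUND_ROBIN)
print(lb.dispatch("ClientA"))   # member1 (repeat client: SOURCE_IP pinning)
```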

The pool is now complete. In the next installment we will add members to the pool.


John Boyer (IBM)What's new for December 2016

Struggle analytics automatically detects struggle

As IBM continues to be the industry leader in cognitive, IBM Tealeaf Customer Experience on Cloud
harnesses the power of cognitive computing with the new Struggle Analytics feature. Struggle Analytics
automatically detects struggle with little to no human intervention.

How does struggle analytics work?

As data flows into IBM Tealeaf Customer Experience on Cloud, that data is processed through an
additional series of server components. These components are complex, machine-learning algorithms
developed by our award-winning data science team. The algorithms will assess each session against a
number of attributes, many of which are out-of-the-box. Our IBM Tealeaf Customer Experience on Cloud
clients will also have the ability to add their own client-defined events to be included in this assessment.

Using struggle analytics data to make business decisions

Here are three key ways you can use struggle analytics to make business decisions:

  • Struggle score: Each session is assigned a struggle score that ranges from 0 to 100. The higher the score, the higher the confidence that the customer struggled during that session. IBM Tealeaf Customer Experience on Cloud clients can replay these sessions to help prioritize fixes for site issues that are having a negative impact on their bottom line.
  • Struggle analytics Page view: This default view shows the user the top pages where struggle occurred. It displays the proportion of struggling visits and the average score of struggling visits. Critical pages are highlighted in red, indicating to the client that those pages should be prioritized for site issue resolution.


A user can also drill into each page to see trends for struggle visits and scores.

  • Struggle analytics Session view: This default view is sorted by the struggle score. It surfaces the top sessions where struggle occurred so the user can focus their time on the sessions that matter most to their bottom line.
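The actual struggle score comes from IBM's proprietary machine-learning pipeline, but the general idea of mapping weighted struggle events in a session to a 0-100 confidence score can be sketched as follows (the event names and weights here are invented purely for illustration):

```python
# Illustrative only: the real Struggle Analytics scoring model is a
# proprietary machine-learning pipeline; these events/weights are made up.
WEIGHTS = {
    "form_field_retry": 15,
    "error_page": 30,
    "rage_click": 20,
    "cart_abandon": 25,
}

def struggle_score(session_events):
    """Sum the weights of struggle events, clamped to the 0-100 range."""
    raw = sum(WEIGHTS.get(event, 0) for event in session_events)
    return min(raw, 100)

session = ["rage_click", "error_page", "form_field_retry"]
print(struggle_score(session))   # 65
```

A higher score means higher confidence that the session involved struggle, which is what the Session view sorts on.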

Struggle Analytics demonstration

Do you want to see a demonstration of Struggle Analytics? Click here to watch this informative 8-minute video.

John Boyer (IBM)IBM Data Replication V 11.4 Product Announcement

IBM Data Replication V 11.4 Product Announcement on :


Stay tuned for details and documentation on the new and exciting product capabilities!

John Boyer (IBM)SCC Cognos issues getting CAM-CRP-1057 error

SCC Cognos issues getting CAM-CRP-1057 error.

Below are a few possible solutions:

Unable to start (after hostname change) Red Hat C10: CAM-CRP-1057

1. Before you delete the following files, make sure you have an unencrypted cogstartup.xml file (you can export it from Cognos Configuration). If you cannot start the service, just make a copy of the cogstartup.xml file.
2. Make sure to back up the following files before you delete or rename them. Delete the following files under <Cognos installation folder>/configuration: csk, encryptkeypair, signkeypair, caSerial, cogstartup.xml, and cogconfig.prefs. Also delete freshness under <Cognos installation folder>/temp/cam.
3. Check whether you can start Cognos. If you have the unencrypted cogstartup.xml file, put it under <Cognos installation folder>/configuration. If you don't have that file, don't worry; you can generate a new one.
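The backup-then-delete steps above can be scripted so nothing is lost before Cognos regenerates the keys. This is an illustrative helper, not an IBM-supplied tool; point config_dir at your <Cognos installation folder>/configuration directory. The demo runs against a scratch directory standing in for the real one:

```python
import os
import shutil
import tempfile

# Files/folders from step 2 above; not every item exists on every install.
ITEMS = ["csk", "encryptkeypair", "signkeypair",
         "caSerial", "cogstartup.xml", "cogconfig.prefs"]

def backup_and_remove(config_dir, backup_dir):
    """Copy each cryptographic file/folder to backup_dir, then remove it."""
    os.makedirs(backup_dir, exist_ok=True)
    removed = []
    for name in ITEMS:
        path = os.path.join(config_dir, name)
        if not os.path.exists(path):
            continue                      # skip items that are not present
        dest = os.path.join(backup_dir, name)
        if os.path.isdir(path):           # encryptkeypair/signkeypair are folders
            shutil.copytree(path, dest)
            shutil.rmtree(path)
        else:
            shutil.copy2(path, dest)
            os.remove(path)
        removed.append(name)
    return removed

# Demo against a scratch directory standing in for the configuration dir
cfg = tempfile.mkdtemp()
open(os.path.join(cfg, "cogstartup.xml"), "w").close()
os.makedirs(os.path.join(cfg, "signkeypair"))
print(backup_and_remove(cfg, os.path.join(cfg, "backup")))
# ['signkeypair', 'cogstartup.xml']
```

After removing the files, save in Cognos Configuration so it regenerates the keys, as described in step 3.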

CAM-CRP-1057 Launching Cognos Configuration



CAM-CRP-1057 Unable to generate the machine specific symmetric key. Reason: NoSuchProviderException: No such provider: IBMJCE

Cause: Bad Java

Resolving the problem

Ensure version of Java is supported

Reinstall Java
Related information

Cognos Business Intelligence 10.2.1 Supported Software


CAM-CRP-1057 Unable to Generate the Machine Specific Symmetric Key

I had a corrupted C84)Ms directory. I copied it over from a working duplicated configuration. After making the changes in ., I am getting CAM-CRP-1057 Unable to generate the machine specific symmetric key at startup.

There are two key folders that you have to delete so that they get recreated by Cognos. Please search in this forum; you should find the answer.

Back up the following two folders, which are located in c8 -> configuration:

signkeypair, encryptkeypair.

Then delete the above folders, save, and run IBM Cognos Configuration.

CAM-CRP-1315 Current configuration points to a different Trust Domain than originally configured.
The cryptography information was not generated.

The remedy? Close the configuration and completely remove these directories beneath the /opt/cognos10/configuration directory:

– encryptkeypair
– signkeypair
– csk (actually I didn’t have this one. But I guess it should be removed if present)

I re-ran cogconfig and saved. This time it worked!


John Boyer (IBM)New IBM License Metric Tool Open Mic Webcast: How ILMT and BFI interact with IBM i - Thursday, 8 December 2016

Hello All,

I would like to announce that on Thursday, 8 December 2016 at 11:00 AM EST (16:00 UTC/GMT, UTC-5 hours), for up to 60 minutes, we will have another Open Mic webcast: "How ILMT and BFI interact with IBM i".


For this open mic session, Hope Maxwell-Daley will explain:
1. How IBM i differs from other platforms
2. What you must know in order to maintain LMT/BFI on this platform

IBM Level 3 Experts might even provide a demonstration!


Join the WebEx Event Center to view the presentation, hear audio and participate in web Q&A:

Event number:  668 212 630

This event is open to the public and does not require a password.

More details are in the announcement at:

(Technote 7048875, linked below, gives information on how to use the new WebEx Event Center.)

Looking forward to your attendance.
Thank you in advance.


John Boyer (IBM)Identify connections originating from IBM MQ MFT Agents

MFT Agents can connect to a queue manager either in bindings mode or in clients mode using a server connection channel depending on where the queue manager is running.


MQ v9.0.1 introduces a simple but very useful improvement that helps identify, at the queue manager end, the client connections originating from MFT agents. For all such client connections, the APPLTAG attribute now contains the name of the agent that made the connection. The following picture shows example output of the DIS CONN command for a connection coming from an MFT agent.


The command output indicates that the connection originates from MFT agent HA2. This helps customers clearly identify the MFT agent connections coming into the queue manager and take appropriate administrative actions. Similarly, MQ Explorer shows the connections as below in the Application Connections panel.
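As a rough illustration of how this attribute can be used in tooling, a script could filter DIS CONN output for the APPLTAG values. The sample output below is abbreviated and hypothetical, not a verbatim runmqsc transcript (real output contains many more attributes per connection):

```python
import re

# Abbreviated, illustrative runmqsc output for DIS CONN(*) APPLTAG.
sample_output = """
CONN(414D51434841514D2020202020202020)
  APPLTAG(HA2)
CONN(414D51434841514D2020202020202021)
  APPLTAG(amqsput)
"""

def appltags(mqsc_output):
    """Return the APPLTAG value for each connection in the MQSC output."""
    return re.findall(r"APPLTAG\(([^)]*)\)", mqsc_output)

print(appltags(sample_output))   # ['HA2', 'amqsput']
```

Matching the extracted tags against your configured agent names would flag which connections belong to MFT agents.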




Another small improvement has also been made for identifying connections originating from MFT agents via the DIS CHSTATUS command. The output of DIS CHSTATUS now displays the RPRODUCT attribute as MQJF for connections originating from MFT agents. The following picture shows example output of the DIS CHSTATUS command for a channel.


John Boyer (IBM)MQ APARs of the month : October 2016

Slightly later than usual, here is a post containing information about the APARs that were closed in October that related to:

In each section, you will find a table showing a list of all of the APARs that have been closed in October, as well as additional information about some key APARs that Level 3 would like to highlight.


MQ Server and Client APARs for distributed platforms

During October 2016, 24 APARs related to the WebSphere MQ and IBM MQ queue managers, non-Java clients, and the MQ appliance were closed.

APARs of the month


This APAR addresses a memory leak that can occur when using connections secured via SSL or TLS.


If your queue manager is being accessed via IBM Integration Bus (IIB) V10 MQInput nodes that connect using the CLIENT transport, then you should be aware of this APAR. MQInput nodes seem particularly susceptible to the message loss issue reported in the APAR.


If you use clusters, then you should be aware of this APAR. It fixes an issue where workload is not correctly balanced between cluster members.


List of APARs closed in October 2016

APAR Number


Affected Component, Versions and Platforms



WebSphere MQ V7.5 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform


WebSphere MQ V7.5 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform


IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform


IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform


IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

WebSphere MQ V7.1 - Multiplatform

WebSphere MQ V7.5 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

WebSphere MQ V7.5 - Windows

IBM MQ V8 - Windows

IBM MQ V9 - Windows
IT17583 - SIGSEGV in MQ application occurring within the MQ API library code in function xlsPostEvent

IBM MQ V8 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

WebSphere MQ V7.5 - Multiplatform

IBM MQ V8 - Multiplatform


WebSphere MQ V7.5 - Multiplatform


IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

IBM MQ V8 - Multiplatform

IBM MQ V9 - Multiplatform

WebSphere MQ V7.1 - Windows

WebSphere MQ V7.5 - Windows

IBM MQ V8 - Windows

IBM MQ V9 - Windows



MQ classes for Java, classes for JMS and MQ Explorer APARs

During October 2016, 8 APARs related to the classes for Java, classes for JMS, and MQ Explorer components were closed.


APARs of the month


If you are using the MQ resource adapter inside of Liberty, then this is an important APAR for you as it addresses an issue where it was not possible to create secure TLS connections to a queue manager.

List of APARs closed in October 2016


APAR Number


Affected Component & Versions


IBM MQ V8 Resource Adapter

IBM MQ V9 Resource Adapter


IBM MQ V8 Resource Adapter

IBM MQ V9 Resource Adapter

IT17427 (MQ-JMS) - JMS message properties corrupted after consuming while using migration mode with a non-ASCII based JVM file.encoding

IBM MQ V8 classes for JMS

IBM MQ V9 classes for JMS


WebSphere MQ V7.5 PCF classes

IBM MQ V8 PCF classes

IBM MQ V9 PCF classes


IBM MQ V8 Resource Adapter

IBM MQ V9 Resource Adapter


WebSphere MQ V7.5 Explorer

IBM MQ V8 Explorer

IBM MQ V9 Explorer


IBM MQ V8 classes for JMS

IBM MQ V9 classes for JMS



WebSphere MQ File Transfer Edition and MQ Managed File Transfer APARs

During October 2016, 6 APARs related to the WebSphere MQ File Transfer Edition product and the MQ Managed File Transfer component were closed.


APARs of the month

PI66388 and PI54331

If you are using agents on z/OS that participate in transfers involving GDGs, then these two APARs will be of interest to you, as they fix issues that were introduced as part of WebSphere MQ File Transfer Edition V7.0.4.5 and the MQ V8.0.0.2 Managed File Transfer component.

List of APARs closed in October 2016

APAR Number

Abstract

Affected Versions


WebSphere MQ File Transfer Edition V7.0.4

Managed File Transfer V8 and V9


WebSphere MQ File Transfer Edition V7.0.4

Managed File Transfer V8 and V9


WebSphere MQ File Transfer Edition V7.0.4

Managed File Transfer V7.5, V8 and V9


WebSphere MQ File Transfer Edition V7.0.4

Managed File Transfer V7.5

WebSphere MQ File Transfer Edition V7.0.4

Managed File Transfer V7.5, V8 and V9

John Boyer (IBM)ITM Agent Insights: OS Agents Version 6.3.0 FixPack 5 Scripting Feature

In this blog I will discuss the scripting feature, which allows users to define scripts that run on Tivoli Monitoring OS agent systems at a defined frequency.


1. Introduction

The feature is enabled by default. The administrator can enable or disable it by setting a new environment variable,
KXX_FCP_SCRIPT=true/false (default true), in the agent configuration file, where XX can be:

- LZ for Linux OS agent
- UX for Unix OS agent
- NT for Windows OS agent


This is an overview of the scripting feature; details will be provided in the following sections.


The OS agent loops, at a configurable interval, looking for script definition property files (*.properties) in a configurable directory path.
The property files are parsed and, if at least one valid script definition is found, the OS agent spawns a new process named "fcp_daemon". This daemon is responsible for scheduling the script executions and gathering all information about running scripts.


At another configurable interval, the OS agent loops, getting the script execution results from the fcp_daemon.
The OS agent parses the script standard output, splitting each row into up to 16 attributes.
An event is sent for each standard output row of the script, and these events can be caught by pure event situations.


1.1 Quick Start
The feature is enabled with default values as soon as the OS agent is started. The only action needed to start using the feature is the following:
     - create a property file under the default directory (on Linux/UNIX it is $CANDLEHOME/localconfig/<product code>/scripts_definitions; on Windows it is %CANDLE_HOME%\localconfig\nt\scripts_definitions), using the provided template script_property.txt as an example.
Only two properties are required:
    ATTRIBUTE_NAME=Any name used to uniquely identify the script definition inside the property file.
    SCRIPT_PATH_WITH_PARMS=The fully qualified path of the script with arguments.
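Putting those two required properties together, a minimal property file might look like the following sketch (the file name, attribute name, and script path are assumptions for illustration):

```properties
# disk_check.properties - hypothetical minimal script definition
ATTRIBUTE_NAME=disk_usage
SCRIPT_PATH_WITH_PARMS=/opt/scripts/check_disk.sh /var
```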


Not only shell scripts but also Perl and other types of scripts can be used; just specify the full command to execute in the SCRIPT_PATH_WITH_PARMS property.
For example, "perl C:\IBM\scripts\ITM_Custom_Scripts\". In this example you need to make sure that the location of "perl" can be resolved by the agent through the PATH variable in its environment; otherwise, specify the full path where "perl" is installed.
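As a sketch of the kind of script the agent can run (the script's purpose and path are assumptions, not part of the product), a minimal shell script that emits a single integer row, suitable for OUTPUT_TYPE=INTEGER, might look like:

```shell
#!/bin/sh
# Hypothetical monitoring script: prints the size, in kilobytes, of the
# directory passed as the first argument, as a single bare-integer row that
# the OS agent can parse with OUTPUT_TYPE=INTEGER.
dir_size_kb() {
    # du -sk prints "<kilobytes><TAB><path>"; keep only the number.
    du -sk "$1" | awk '{print $1}'
}

dir_size_kb "${1:-/tmp}"
```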


Further properties and details can be found in this document and in the template script_property.txt under $CANDLEHOME/localconfig/lz/scripts_definitions.

2. New Attribute groups

Two new attribute groups have been added for this feature, as detailed below. Statistics and execution data for all the scripts are stored in these two tables. The name of the property file (Property_File) and the attribute name defined in the property file (Attribute_Name) are the two key fields of both tables. These two keys must be used in situation conditions to filter rows related to a specific script.

Attribute group: KXX_Custom_Scripts (multiple rows), table: KXXSCRPTS (sampled).
Description: Configuration and statistic data gathered using custom scripts. It contains a row for each defined script, information on the fcp_daemon and on property files:

Attribute                    Size   Description
===========================  =====  =========================================
System_Name                  32     The managed system name of the agent.
Timestamp                    16     The local time when the data was collected.
Property_File                256    The name of the property file.
Attribute_Name               96     The attribute name that is defined in the properties file. The attribute is used for metric identification.
Script_Name                  512    The name of the script.
Script_Path                  512    The fully qualified path name of the script with arguments.
Custom_Name                  96     The custom name that is defined in the properties file.
Standard_Output_Type         4      Standard output type of the script.
Status_Code                  4      The status of the script. It includes general errors, configuration errors, and the status or the execution code returned by the Script Provider.
Execution_Start              16     The time when the last execution of this script started.
Execution_Duration           4      The duration of the last execution of this script, in seconds. When timing out, the value of the configured timeout is returned.
Average_Execution_Duration   4      The average duration, in seconds, of all the executions of the script.
Refresh_Interval             4      The interval, in seconds, at which the agent attempts to start this script.
Number_of_Collections        4      The count of execution attempts of this script since the agent started.
Intervals_Skipped            4      The count of occurrences where an execution of this script was skipped because the previous execution was still running.
Property_Group               64     The name of the property group.
Return_Code                  4      Integer value returned by the script.
Cust_Label_Str1              16     Label for custom string attribute #1.
Cust_Label_Str2              16     Label for custom string attribute #2.
Cust_Label_Str3              16     Label for custom string attribute #3.
Cust_Label_Str4              16     Label for custom string attribute #4.
Cust_Label_Str5              16     Label for custom string attribute #5.
Cust_Label_Int1              16     Label for custom integer attribute #1.
Cust_Label_Int2              16     Label for custom integer attribute #2.
Cust_Label_Int3              16     Label for custom integer attribute #3.
Cust_Label_Int4              16     Label for custom integer attribute #4.
Cust_Label_Int5              16     Label for custom integer attribute #5.
Cust_Label_Float1            16     Label for custom floating point attribute #1.
Cust_Label_Float2            16     Label for custom floating point attribute #2.
Cust_Label_Float3            16     Label for custom floating point attribute #3.
Cust_Label_Float4            16     Label for custom floating point attribute #4.
Cust_Label_Float5            16     Label for custom floating point attribute #5.
Standard_Error               2048   Script standard error in a unique row.


In addition to script definitions, the table may contain rows that report errors found in property files; these rows are identified by @ANY@ as the Attribute_Name. Moreover, one row is always used to report the status of the fcp_daemon; it is identified by @ANY@ as both Property_File and Attribute_Name.


Attribute group: KXX_Custom_Scripts_Runtime (multiple rows), table: KXXSCRRTM (pure).
Description: Data gathered using custom scripts. It contains the output rows of the scripts currently running:


Attribute                    Size   Description
===========================  =====  =========================================
System_Name                  32     The managed system name of the agent.
Timestamp                    16     The local time when the data was collected.
Property_File                256    The name of the property file.
Attribute_Name               96     The attribute name that is defined in the properties file. The attribute is used for metric identification.
Script_Path                  512    The fully qualified path of the script.
Custom_Name                  96     The custom name that is defined in the properties file.
Return_Code                  4      Integer value returned by the script.
Row_Number                   4      Output row number.
Standard_Output_Type         4      Standard output type of the script.
Standard_Output_String       2048   Script standard output in string format.
Standard_Output_Integer      8      Script output in integer format.
Standard_Output_Float        8      Script output in floating point format (2 decimals).
Cust_Attr_Str1               64     Custom string attribute #1.
Cust_Attr_Str2               64     Custom string attribute #2.
Cust_Attr_Str3               64     Custom string attribute #3.
Cust_Attr_Str4               64     Custom string attribute #4.
Cust_Attr_Str5               64     Custom string attribute #5.
Cust_Attr_Int1               8      Custom integer attribute #1.
Cust_Attr_Int2               8      Custom integer attribute #2.
Cust_Attr_Int3               8      Custom integer attribute #3.
Cust_Attr_Int4               8      Custom integer attribute #4.
Cust_Attr_Int5               8      Custom integer attribute #5.
Cust_Attr_Float1             8      Custom floating point (2 decimals) attribute #1.
Cust_Attr_Float2             8      Custom floating point (2 decimals) attribute #2.
Cust_Attr_Float3             8      Custom floating point (2 decimals) attribute #3.
Cust_Attr_Float4             8      Custom floating point (2 decimals) attribute #4.
Cust_Attr_Float5             8      Custom floating point (2 decimals) attribute #5.


Note: different status conditions can be monitored using the Status_Code field in the statistics table KXX_Custom_Scripts.
The following list provides the possible values of the Status_Code field:


 Initial general statuses
    UNKNOWN_ERROR (status code=0) --> Error
    NO_ERROR (status code=1) --> Informational
 General daemon statuses
    FEATURE_NOT_ENABLED (status code=40) --> Informational
    DAEMON_STARTING (status code=2) --> Informational
    DAEMON_STARTED (status code=3) --> Informational
    DAEMON_STOPPING (status code=4) --> Informational
    DAEMON_STOPPED (status code=5) --> Informational
    DAEMON_STOPPING_AT_AGENT_STOP (status code=6) --> Informational
    DAEMON_STOPPED_AT_AGENT_STOP (status code=7) --> Informational
    DAEMON_ERROR (status code=8) --> Error
    DAEMON_ERROR_NO_RESTART (status code=9) --> Fatal Error
 General directory statuses
    ERROR_OPENING_PROP_DIRECTORY (status code=10) --> Error
    PROP_DIRECTORY_NOT_FOUND (status code=11) --> Error
    NO_SCRIPT_DEFINED (status code=12) --> Warning
 Property file statuses
    PROP_FILE_NOT_FOUND (status code=13) --> Error
    ERROR_OPENING_PROP_FILE (status code=14) --> Error
 Script definition statuses
    SCRIPT_ADDED (status code=15) --> Informational
    SCRIPT_CHANGED (status code=16) --> Informational
    SCRIPT_REMOVED (status code=17) --> Informational
    SCRIPT_INACTIVE (status code=18) --> Informational
    NO_SCRIPT_PATH (status code=21) --> Error
    SCRIPT_PATH_INVALID (status code=22) --> Error
 Execution statuses from the fcp_daemon
    FACTORY_UNKNOWN_ERROR (status code=23) --> Error
    FACTORY_NO_ERROR (status code=24) --> Informational
    GENERAL_ERROR (status code=25) --> Error
    OBJECT_NOT_FOUND (status code=26) --> Error
    OBJECT_CURRENTLY_UNAVAILABLE (status code=27) --> Error
    NO_INSTANCES_RETURNED (status code=28) --> Error
    NO_RESPONSE_RECEIVED (status code=29) --> Error
    AUTHENTICATION_FAILED (status code=30) --> Error
    ACCESS_DENIED (status code=31) --> Error
    TIMEOUT (status code=32) --> Error
    NOT_IMPLEMENTED (status code=33) --> Error
    RESPONSE_TOO_BIG (status code=34) --> Error
    GENERAL_RESPONSE_ERROR (status code=35) --> Error
    SCRIPT_NONZERO_RETURN (status code=36) --> Error
    SCRIPT_NOT_FOUND (status code=37) --> Error
    SCRIPT_LAUNCH_ERROR (status code=38) --> Error
    INVALID_TOKEN_TYPES (status code=39) --> Error


3. Parameters in OS agent environment files

It is possible to customize the feature by setting parameters in the OS agent environment files:
- $CANDLEHOME/config/lz.ini for the Linux OS agent
- $CANDLEHOME/config/ux.ini for the UNIX OS agent
- %CANDLE_HOME%\TMAITM6_x64\KNTENV for the Windows 64-bit OS agent
- %CANDLE_HOME%\TMAITM6\KNTENV for the Windows 32-bit OS agent


The scripting feature is enabled by default. To disable it, set KXX_FCP_SCRIPT=false in the agent environment file.


Other parameters can be defined inside the agent environment files based on specific needs:

- KXX_FCP_SCRIPT_DEFINITIONS (default location on Linux/UNIX is $CANDLEHOME/localconfig/<product
  code>/scripts_definitions, on Windows it is %CANDLE_HOME%\localconfig\nt\scripts_definitions)
  The location where property files are stored.

- KXX_FCP_SCRIPT_INTERVAL (default 60 sec)
  The OS agent uses the value of this variable as the loop interval, in seconds, at which it checks the execution of running
  scripts, and it sends events if the filter condition is satisfied. The minimum value is 30 seconds, the maximum value is
  300 seconds. Invalid values are reset to the default.
  Note: this parameter is ignored if KXX_FCP_SCRIPT_SYNC_INTERVALS is set to USE_SCRIPT (see definition below).
  If the agent looping interval defined by KXX_FCP_SCRIPT_INTERVAL is larger than the script execution frequency, data
  produced by some of the script execution loops can be lost.
  To avoid this behaviour, the script execution frequency can be synchronized with the agent looping interval by setting
  KXX_FCP_SCRIPT_SYNC_INTERVALS to one of the following values:
      - USE_AGENT; the value of each script execution frequency is forced to be the maximum between
         KXX_FCP_SCRIPT_INTERVAL and the EXECUTION_FREQUENCY defined in its property file.
      - USE_SCRIPT; the agent looping interval is dynamically set to the minimum frequency value
        (EXECUTION_FREQUENCY in the property file) across all of the defined scripts. The value set by
        KXX_FCP_SCRIPT_INTERVAL is ignored. The frequencies of the scripts remain the ones defined in the property files.
        When using USE_SCRIPT, the agent looping interval may change every time a script definition is added, changed, or
        removed. In any case, it cannot be lower than the value set by KXX_FCP_OVERRIDE_MIN_FREQUENCY_LIMIT or
        larger than 300 seconds.
      - NO; no synchronization is performed and some execution results could be lost.


- KXX_FCP_SCRIPT_DEFINITIONS_CHECK_INTERVAL (default 300 sec)
  At startup, and at every interval defined by this variable, the OS agent checks for any changes in scripts or property files.
  Note that if KXX_FCP_SCRIPT_DEFINITIONS_CHECK_INTERVAL is less than the agent looping interval, it is reset to
  the agent looping interval. The maximum allowed value is the default, 300 seconds.

- KXX_FCP_USER (default OS agent user)
  This parameter is valid only on Linux and UNIX platforms. It defines the user that spawns the fcp_daemon process, if
  different from the OS agent process user; all the scripts are executed by this user. Note that the owner of the OS agent
  must have the correct permissions to spawn the fcp_daemon process. On Windows, a different user must be defined as
  the login of the service "Monitoring Agent for Windows OS - FCProvider". The user must have "Full Control" permission
  to CANDLE_HOME and the script repository directories. For more information, refer to the official IBM Monitoring
  documentation:

  It defines the maximum number of scripts that can be executed concurrently. The maximum value is 32.

  The OS agent watches the fcp_daemon: if the process exits abnormally, the OS agent restarts it. This is done up to
  KXX_FCP_MAX_DAEMON_RESTARTS times per day.
  The value 0 must be used to avoid the restart; if -1 is set, the OS agent retries to restart the fcp_daemon forever. The
  restart counter is reset at OS agent restart.

  If set to false, the OS agent stops sending events for each row of script standard output. In this case, script outputs are
  visible in TEP console workspaces, but no situations are displayed and no historical collection data is collected.

- KXX_FCP_OVERRIDE_MIN_FREQUENCY_LIMIT
  It is used when KXX_FCP_SCRIPT_SYNC_INTERVALS is set to USE_SCRIPT. In this condition, it sets the minimum value
  of the OS agent looping interval.
  Using low values for the OS agent looping interval (less than 5 seconds) is highly invasive and can impact OS agent
  performance. If frequent data collection is needed (e.g. every second), it is strongly suggested to use a script that
  caches data at the needed frequency and returns the collected data to the OS agent at a higher interval (e.g. every 60
  seconds).

The following Agent Builder (CDP) variables can also be used to control the behavior of the fcp_daemon:

- CDP_DP_REFRESH_INTERVAL (default 60 sec) Global script scheduled start time. Used if the frequency is not passed in
  the script property file.

- CDP_DP_SCRIPT_TIMEOUT (default 30 sec) Global script execution maximum time. When the execution time of a script
  exceeds this limit, its Status_Code is set to TIMEOUT.
- CDP_DP_KILL_ORPHAN_SCRIPTS (Y|N - default N) Global behaviour used by the fcp_daemon process for timing-out
  scripts. When set to 'Y', the scripts are killed; otherwise they are abandoned. This value is ignored for a specific script if
  the KILL_AFTER_TIMEOUT key is set in the script property file.

- CDP_MAXIMUM_ROW_COUNT_FOR_CPCI_DATA_RESPONSES (default 1000) Global value added for performance
  reasons to limit the maximum number of output rows returned by the scripts. Additional rows after this limit are ignored.
  Allowed values are positive integers; invalid values mean no limit.


The fcp_daemon also supports the other environment variables used to control Agent Builder agents. For a complete list see the official Agent Builder documentation here:


4. Parameters in property files

The KXX_FCP_SCRIPT_DEFINITIONS directory contains a list of *.properties files. Each property file contains a list of
scripts to run, with their respective properties in the form key=value. The properties that can be defined (case-insensitive) are:

- ATTRIBUTE_NAME (Required - string, max 256 characters).
  A name of your choice that identifies a specific script and its attributes. The name can contain alphabetical and numeric
  characters; the underscore is the only special character allowed. Any other special character (including blanks) is
  converted into an underscore (_).
  When multiple scripts are listed inside the same property file, a different ATTRIBUTE_NAME must be defined for each
  script. It must be the first value specified for each defined script, and it delimits the start of the property set for that
  script, up to the next ATTRIBUTE_NAME.

- SCRIPT_PATH_WITH_PARMS (Required - string, max 512 characters).
  The full path to the script, with parameters, separated by blanks. No special characters can be used in the script path
  name.
  Values containing blanks must be enclosed in single (') or double (") quotes.
  Environment variables can be passed, but only enclosed in ${...}, on all platforms. Environment variables must be
  available in the OS agent process context.

- EXECUTION_FREQUENCY (Optional - default 60 sec).
  It indicates the script execution frequency.
- CUSTOM_NAME (Optional - string max 256 characters)
  The user can fill it with a description of the script.

- IS_ACTIVE (true|false - Optional - default true).
  It activates the script. If false, the script is not executed.

- DISABLE_USE_AGENT_SYNC (true|false - Optional - default false). If true, the EXECUTION_FREQUENCY of the script is
  respected even if the global variable KXX_FCP_SCRIPT_SYNC_INTERVALS is set to USE_AGENT.

- KILL_AFTER_TIMEOUT (true|false - Optional - default value defined by the CDP_DP_KILL_ORPHAN_SCRIPTS variable).
  When true, the script is killed after a timeout (a timeout occurs when the script execution time is greater than the value
  specified by the CDP_DP_SCRIPT_TIMEOUT parameter in the OS agent configuration file); otherwise it is abandoned. In
  both cases no data is collected. Note that when KILL_AFTER_TIMEOUT is set, only the script defined in the property file
  is killed, not any child processes spawned by the script. This feature is not supported by the Solaris and Windows 32-bit
  OS agents, where timing-out scripts are always abandoned.

Output rows returned by a script are parsed. The first value of each row is the script's standard output (referred to hereafter as the first token). When the script returns more values in an output row, they are added as additional tokens, up to a maximum of 5 strings, 5 integers, and 5 floats, following the predefined syntax described below:

- OUTPUT_TYPE (STRING|INTEGER|FLOAT - Optional - default STRING). It defines the type of the first token returned by
  each row of the script.
     OUTPUT_TYPE can be:
     1. STRING (default): strings up to 2048 characters. When used, the "Standard_Output_String" attribute of
         KXX_Custom_Scripts_Runtime is filled in by the first token.
     2. INTEGER: allows getting numeric values between -9223372036854775806 and 9223372036854775806. When
         used, the "Standard_Output_Integer" attribute of KXX_Custom_Scripts_Runtime is filled in by the first token.
     3. FLOAT: allows getting numeric values between -92233720368547758.06 and 92233720368547758.06 (with 2-decimal
         precision). When used, the "Standard_Output_Float" attribute of KXX_Custom_Scripts_Runtime is filled in by the first
         token.

- TOKEN_TYPES (Optional).
  It defines the output types of the additional tokens after the first one. The user can define a maximum of 5 strings, 5
  integers, and 5 floats. It is a list of types separated by commas;
  each token_type can be empty or one of (case-insensitive):
    - STRING or S
    - INTEGER or I
    - FLOAT or F
  If <token_type> is empty, the corresponding token is skipped.

An example of a valid layout:
  - TOKEN_TYPES=String,integer,S,,,Float,,f,FLOAT
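To illustrate how such a layout maps onto an output row, the following sketch (with a hypothetical sample row) splits on the default ';' separator the way the daemon would index tokens: the first field is the standard output, and the remaining nine fields correspond to the nine entries of the TOKEN_TYPES list, with the empty positions skipped.

```shell
#!/bin/sh
# Hypothetical output row: 1 standard-output token followed by 9 extra tokens
# matching TOKEN_TYPES=String,integer,S,,,Float,,f,FLOAT (positions 4, 5 and 7
# in the list are empty, so those tokens would be skipped by the agent).
row='ok;cpu0;42;idle;skip1;skip2;3.14;skip3;0.50;9.99'

# Print each token with its position, as the daemon would index them.
printf '%s\n' "$row" | awk -F';' '{ for (i = 1; i <= NF; i++) printf "token %d: %s\n", i, $i }'
```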


- TOKEN_LABELS (STRING - maximum 16 characters per label - Optional).
  It defines the labels of the tokens defined in TOKEN_TYPES. This value is a list of token labels separated by commas,
  and it must correspond one-to-one to the tokens defined by TOKEN_TYPES. For example:
  - TOKEN_LABELS=Cpu Name,Cpu number,Description,,,value 1,,value 2,value 3
  It is ignored if TOKEN_TYPES is not set.


- TOKEN_SEPARATOR (Optional - default semicolon ";").
  It sets the string used as the separator to split the output row into tokens. It is ignored if TOKEN_TYPES is not set. An
  empty value (blank) is accepted as the separator, and multiple consecutive blanks in output rows are treated as a single
  one.

The following two parameters allow you to filter the output rows of a script; they are applied by the OS agent only to the first token, and they must be used together:

- FILTER_VALUE (Optional).
  The value used for the comparison. It is required if FILTER_OPERATOR is defined.
  If the OUTPUT_TYPE is a string, the filter value must match exactly the string value returned by the script that is
  intended to be filtered, without any additional quotes (no wildcards allowed).

- FILTER_OPERATOR (Optional).
  The operator used for the comparison. It is required if FILTER_VALUE is defined. Accepted FILTER_OPERATOR values are:
  =     (equal to)
  !=    (different from)
  >     (bigger than)        only for numeric types
  >=    (not lower than)     only for numeric types
  <     (lower than)         only for numeric types
  <=    (not bigger than)    only for numeric types
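For instance, a sketch of a definition that keeps only non-zero integer rows might look like the following (the attribute name and script path are hypothetical):

```properties
ATTRIBUTE_NAME=failed_logins
SCRIPT_PATH_WITH_PARMS=/opt/scripts/count_failed_logins.sh
OUTPUT_TYPE=INTEGER
FILTER_OPERATOR=!=
FILTER_VALUE=0
```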


5. Examples of property file

#First script definition: script is launched every 150 seconds, it returns float values and only the output rows equal to 0.5 will be considered by the agent.



#Second script definition: script ex_script2 is launched every 60 seconds, it returns integer values and only the rows different from 0 will be considered by the agent.



#Third script definition: script is launched every 120 seconds with 3 input parameters (the first input parameter is an integer, the second and third are string). It's killed if it hangs or if the execution time is greater than the timeout value.

SCRIPT_PATH_WITH_PARMS=/opt/scripts/ 1 "second input parameter" "third input parameter"


#Fourth script definition: script is launched every 50 seconds and returns the cpuid as standard output string and 2 float for Idle and Used CPU percentage and 2 integers for Memory and Virtual Memory usage. The pipe is used as separator to parse the output. An example of row that must be returned by the script is:

ATTRIBUTE_NAME=cpu and mem Usage
TOKEN_LABELS= Idle CPU %, Used CPU %, Virt MEM used MB, MEM used MB
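A sketch of a script matching this fourth definition (static placeholder values; a real script would read /proc or a similar source) could be:

```shell
#!/bin/sh
# Hypothetical script: prints one row per CPU in the form
#   <cpuid>|<idle%>|<used%>|<virt MEM MB>|<MEM MB>
# using "|" as the TOKEN_SEPARATOR, as described in the fourth definition.
emit_cpu_row() {
    printf '%s|%s|%s|%s|%s\n' "$1" "$2" "$3" "$4" "$5"
}

# Placeholder values for illustration only.
emit_cpu_row cpu0 97.50 2.50 512 1024
```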


6. Examples of private situations

Private situation definitions can be inserted into the xx_situations.xml file under $CANDLEHOME/localconfig/lz for Linux, $CANDLEHOME/localconfig/ux for UNIX, or %CANDLE_HOME%\localconfig\nt for Windows, to monitor script execution.
The examples below are for the Linux OS agent:

<!-- Sends an alert if the script defined by attribute name "demo" in property file "" returns a row equal to "demo.log" -->
<![CDATA[*VALUE KLZ_Custom_Scripts_Runtime.Property_File *EQ '' *AND *VALUE KLZ_Custom_Scripts_Runtime.Attribute_Name *EQ 'demo' *AND *VALUE KLZ_Custom_Scripts_Runtime.Standard_Output_String *EQ 'demo.log']]>

<!-- Sends an alert if a script exits with a return code different from zero -->
<![CDATA[*IF *VALUE KLZ_Custom_Scripts.Return_Code *NE 0]]>

<!-- Sends an alert when script path is not defined in a property file -->
<![CDATA[*VALUE KLZ_Custom_Scripts.Status_Code *EQ NO_SCRIPT_PATH]]>

<!-- Sends an alert when a script cannot be launched -->
<![CDATA[*VALUE KLZ_Custom_Scripts.Status_Code *EQ SCRIPT_LAUNCH_ERROR]]>

7. Custom Scripts and Custom Scripts Runtime Workspaces

A new OS agent navigation item has been added in the TEP console. It contains two workspaces: "Custom Scripts" and "Custom Scripts Runtime".
The "Custom Scripts" workspace contains the following views:
- "Factory Daemon Status": table view showing informational/warning/error events related to the fcp_daemon process.
- "Properties Files error": table view showing problems related to property files.
- "Number of execution per script": bar chart view summarizing the executions of the defined scripts.
- "Defined Scripts": table view showing detailed information about the defined scripts.


The "Custom Scripts Runtime" workspace is opened by clicking the anchor of a specific script and provides information on script execution. The views provided by default on the TEP console are intended as example workspaces with all available custom attributes. Users can duplicate and customize the workspaces to filter out fields that are not needed, according to the output returned by their scripts.

8. Known problems and limitations

- Kill after timeout does not work on the Solaris and Windows 32-bit OS agents.
- The fcp_daemon may stop executing scripts on Windows 32-bit if some scripts do not complete within the timeout period
  and the user has turned on intensive tracing. If this happens, the data reported on the TEP reflects the last time the script
  actually ran. It is also possible that the OS agent stops returning data. Terminating the fcp_daemon process allows the
  agent to resume proper operation.
- The scripting feature does not provide full National Language Support; some issues may be found when using national
  characters in property files or script outputs.
- The Windows OS agent cannot execute scripts residing on a mapped network drive.


9. Troubleshooting

The standard KBB_RAS1 variable applies to both the OS agent and the fcp_daemon processes.
To apply a specific trace setting to fcp_daemon only, use the KXX_FCP_KBB_RAS1 variable; when KXX_FCP_KBB_RAS1 is set, the value specified by KBB_RAS1 is ignored by the fcp_daemon.

To trace the operations logged by the OS agent core threads of the feature:
    KBB_RAS1=ERROR (UNIT:factory ALL)

To trace scripting queries from the ITM server and events sent to the server, add the entries:
    (UNIT:klz34 ALL) (UNIT:klz35 ALL) on Linux OS agent
    (UNIT:kux48 ALL) (UNIT:kux49 ALL) on Unix OS agent
    (UNIT:knt84 ALL) (UNIT:knt85 ALL) on Windows OS agent

To view TEMA traces to verify private situation execution, add the entries:
    (UNIT:kraavp all) (UNIT:kraapv all)

To see how the scripts are executed and how the data from the scripts is parsed, set:
    KXX_FCP_KBB_RAS1=ERROR (UNIT:command ALL)

To troubleshoot problems in the communication between the OS agent and the fcp_daemon, add this trace level to both KBB_RAS1 and KXX_FCP_KBB_RAS1:
    (UNIT:cps_socket FLOW) (UNIT:cpci FLOW)

To see the interaction between the OS agent process and the fcp_daemon in detail add to both KBB_RAS1 and KXX_FCP_KBB_RAS1:
    (UNIT:cps_socket ALL) (UNIT:cpci ALL)
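Putting the settings above together, the trace variables go into the agent environment file; a sketch, assuming the Linux OS agent's environment file location ($CANDLEHOME/config/lz.ini - adjust for your platform and install):

```shell
# Assumed file: $CANDLEHOME/config/lz.ini (Linux OS agent environment file).
# OS agent side: feature core threads plus the Linux query/event units.
KBB_RAS1='ERROR (UNIT:factory ALL) (UNIT:klz34 ALL) (UNIT:klz35 ALL)'
# fcp_daemon side: script execution/parsing plus socket flow tracing.
# Remember: when KXX_FCP_KBB_RAS1 is set, fcp_daemon ignores KBB_RAS1.
KXX_FCP_KBB_RAS1='ERROR (UNIT:command ALL) (UNIT:cps_socket FLOW) (UNIT:cpci FLOW)'
```

Restart the agent after changing trace settings so the new values take effect.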

10. Quick Start Scenario

The following section describes the minimum steps needed to configure a Linux OS agent to run two custom scripts.


Custom scripts description
Suppose the user has two scripts under a directory /scripts_repo:

- a script that checks the size of a specified directory, passed as an input parameter. Its output is a single integer.


- a script that checks the used CPU percentage and the used swap memory in megabytes. Its output is returned in
  the following form:


  where the first token is the CPU id, the second token is the used CPU percentage, and the third token is the used swap memory
  in megabytes.


Customization needed for the Linux OS agent to run the above scripts
The feature is enabled with default values as soon as the OS agent is started:

- Create one property file per script (<Any Name>.properties) under the default directory $CANDLEHOME/localconfig
  /lz/scripts_definitions. In this example, let's create two property files, one for each script:
  SCRIPT_PATH_WITH_PARMS=/scripts_repo/ /opt
  TOKEN_LABELS= Used CPU %, Swap MEM used MB


- There is no need to restart the OS agent after adding (or changing) the two property files above: the OS agent checks the script
  definitions directory at a specified time interval (default 300 seconds). Open the TEP console; under the "Custom
  Scripts" workspace, the script details and results are shown.
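For concreteness, a complete property file for the CPU/swap script might look like the following sketch; the script name cpu_mem.sh and the first label are illustrative, and only the two keys shown in this section are assumed:

```properties
# Hypothetical file: $CANDLEHOME/localconfig/lz/scripts_definitions/cpu_mem.properties
SCRIPT_PATH_WITH_PARMS=/scripts_repo/cpu_mem.sh
TOKEN_LABELS=CPU id, Used CPU %, Swap MEM used MB
```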


Additional ITM Agent Insights series of IBM Tivoli Monitoring Agent blogs are indexed under ITM Agent Insights: Introduction.





John Boyer (IBM)Linux on z Systems Live Virtual Class on single sign-on

The Linux team scheduled a new Live Virtual Class (LVC) for tomorrow.


Topic: An integrated Single Sign-On Solution with Linux on z Systems, z/OS, and Microsoft Active Directory

Date: Wednesday, December 7, 2016

Abstract: In spring 2016, the Client Center Boeblingen performed a complex Proof-of-Concept for a large European bank. One of the primary goals of this project was to realize a fully integrated Single Sign-On (SSO) solution with WebSphere Application Server running on Linux on z Systems, CICS Transaction Server on z/OS, and Microsoft Active Directory. Besides the SSO requirement, another major goal of this PoC was to demonstrate to the client that this setup is able to provide a full audit trail for their transactions - with user IDs flowing all the way from the WebSphere-based front end to CICS Transaction Server and z/OS RACF in the back end. This session will highlight the architecture of this solution, the different technologies included, and how they were integrated in order to address the client's specific needs.


More information and the registration link are on the z/VM LVC web page here.


John Boyer (IBM)How to test a cognitive system

What does quality mean for a cognitive system, and how do you test for that? A seven-part Cognitive System Testing series in the IBM Watson Dev blog gives you insight into answering those questions. Even if you're not developing your own cognitive applications, this series might help you define your expectations for cognitive applications in your business.

John Boyer (IBM)XCLI Client Python library

Hi guys, we have a surprise for you!


We have just released a Python library for our XCLI client. It's an open-source project, free for everyone to use under the Apache 2 license. It lets you connect to the storage and manage all of its operations. It supports all XCLI-managed storage types:

  • IBM XIV Gen2 and Gen3 
  • IBM Spectrum Accelerate 
  • IBM FlashSystem A9000 and A9000R


This is the first open-source version of the Python XCLI client library. This may not sound like a big deal, but it is: it enables users to tailor the way they use Spectrum Accelerate Family storage.

The XIV GUI is great, and so is the new IBM Hyper-Scale Manager, and they let users do almost anything with the storage. But there are cases in which you would rather have specific functionality that suits just you. The flexibility and power of Python and XCLI enable you to tailor it to your exact needs.


Let's take two simple examples:

1. You want to take snapshots of the production volumes in pool FOO every day. Just create a scheduled task and run the following script (the snapshot_create call and the volume.name attribute are assumed from the XCLI command set; the storage address was left blank here):

from pyxcli.client import XCLIClient

xcli = XCLIClient.connect_ssl('admin', 'adminadmin', '')

volumes = xcli.cmd.vol_list(pool='FOO').as_list

for volume in volumes:
    if volume.name.startswith('production'):
        xcli.cmd.snapshot_create(vol=volume.name)


2. You'd like to clean up your system and remove all empty volumes. Here is how you create a list of unused volumes and delete them (the vol_delete call is assumed from the XCLI command set):

from pyxcli.client import XCLIClient

xcli = XCLIClient.connect_ssl('admin', 'adminadmin', '')

volumes = [volume for volume in xcli.cmd.vol_list().as_list if volume.used_capacity == '0']

for volume in volumes:
    xcli.cmd.vol_delete(vol=volume.name)



The combined power of Python and XCLI management has been unleashed! 

You are welcome to use it! 


Download the package from:

- GitHub:

- PyPI:

or simply run 'pip install pyxcli' from your command prompt.


Have fun,

Tzur and Alon

John Boyer (IBM)How to Create, Delete and Modify Notes at Order and OrderLine Level

In IBM Order Management version 9.3, we introduced a feature to create, delete, and modify notes at the Order and OrderLine level.

More details on this can be found here:


While this feature is also available in version 9.4, note that deletion and modification of order-line-level notes was introduced only in fix pack 13 of version 9.4.
Defect ID: 463555
Description: The changeOrder API is enhanced to support updating and deleting existing notes at the order line level. To do this, pass the Operation attribute as "Modify" or "Delete" in the <Note> element during the API invocation.
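As a sketch of the input, a changeOrder request that modifies one order-line note might look like the following; the element nesting and key values are illustrative, and only the <Note> element and its Operation attribute come from the description above:

```xml
<Order OrderHeaderKey="201612070001">
  <OrderLines>
    <OrderLine OrderLineKey="201612070002">
      <Notes>
        <!-- Operation="Modify" updates this existing note; use "Delete" to remove it -->
        <Note NoteKey="201612070003" Operation="Modify"
              NoteText="Leave package at the side entrance"/>
      </Notes>
    </OrderLine>
  </OrderLines>
</Order>
```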

For tried and tested API input XMLs, refer to the following technote:



John Boyer (IBM)Did you come across this news?

If you have not come across these recent postings, we invite you to discover them:


A new white paper titled "IBM Connections Brings Focused Collaboration to Teams of Any Size", filled with success stories, discusses how effective teamwork happens when supported by Connections communities, chat services, meetings, Verse, and Connections Docs. It highlights financial, productivity, and capacity benefits.


While UK-based Mears Group was in the habit of bringing its acquisitions' cloud-based systems on premises, it decided to entrust the collaboration of its diversified workforce to IBM Connections Cloud. As their IT Director says, IBM Connections Cloud delivers in an unmatched way "at the price of a first class stamp a month." Read the article and its links for the full story.


Acting on insights is no longer a differentiator: being able to do it faster is! This article explains why combining business intelligence and collaboration is essential.


A look back at World of Watson 2016 - key WOW sessions and announcements related to collaboration will help you understand the new names, such as Watson Work, Watson Workplace, Watson Work Services, and Watson Talent...


In our Services blog, some great posts explain the advantages of software as a service, the advantages of cloud computing, a look at IBM and Cisco integrations, imagining what's possible with IBM Watson Work Services, and Connections Cloud features that help with post-meeting tasks.


In our Support blog, some technical discussions, notably about integrating on-premises Connections Blogs & Communities with Notes and 5 tips for troubleshooting screen sharing with Linux & Connections Cloud Meetings.


Good reading!



John Boyer (IBM)Not all KDC_FAMILIES configuration values added in a remote deploy.















This issue was reported by a customer running 6.3 FP03.

Each Windows machine had an NT agent installed, and a KLO agent was being installed by remote deploy.

However, once the KLO agent was installed (which it did successfully), it would not show up on the TEPS.
It was found to be using localhost for CT_CMSLIST, so it was not connecting to the two RTEMS set in the NT agent configuration.


The settings in the MTEMS under Actions -> Set Defaults For All Agents
were checked and changed to a TEMS name rather than localhost. The deploy then had the correct settings for one
RTEMS, but did not have the other RTEMS, and had the connection as ip.pipe rather than the ip.spipe that was on the NT agent.

A further review of the NT agent file showed:
KDC_FAMILIES=sna use:n ip6 use:n ip6.pipe use:n ip6.spipe use:n ip
use:n ip.pipe use:n ip.spipe port:3660 ephemeral:Y HTTP_CONSOLE:N;

The issue was found to be the settings of HTTP and ephemeral.

6.3.0 FP3 does not support HTTP as a configurable parameter.
It supports it when coded manually, but is a problem in remote deploy.
6.3.0 FP4 is the first fixpack that supports HTTP_SERVER and that is the only parameter it supports for deployed agents.
None of the other HTTP information that is coded, is supported by any 6.3.0 release.

This behaviour is a permanent restriction at this time.

The 6.3.0 FP04 readme has the update:
"Add support to disable HTTP and HTTPS ports during remote deploy, silent install/configuration, GUI install/configuration, and CLI."
So the restriction applies to any type of install.

Once KDC_FAMILIES no longer had ephemeral:Y HTTP_CONSOLE:N set on the NT agent,
the deploy of the KLO agent correctly configured CT_CMSLIST and all other elements.

Both agents then had to be configured with the extra KDC_FAMILIES settings.








John Boyer (IBM)IBM BigFix Patch: Reminder: Patches for CentOS 5 and Patches for CentOS 6 sites will be deprecated on December 31, 2016

As previously announced, the ‘Patches for CentOS 5’ and ‘Patches for CentOS 6’ sites are to be deprecated by December 31, 2016. These sites are being replaced by the 'Patches for CentOS 5 Native Tools' and the 'Patches for CentOS 6 Native Tools' sites, which are more efficient at handling third-party packages.


Baselines and custom sites that are associated with the non-native tools sites are affected by this change. The source Fixlets will still be visible, but no new content will be provided. Support will also no longer be provided for the non-native tools sites beyond the stated date.


Actions to take:

  • Subscribe your endpoints to the following CentOS Linux sites to avoid any disruption during patching:
    • Patches for CentOS 5 Native Tools site
    • Patches for CentOS 6 Native Tools site
  • If you are using baselines or custom sites, you must create new ones with the native tools sites.


More information:
To learn more, see the IBM BigFix Knowledge Center at


Application Engineering Team
BigFix Patch

John Boyer (IBM)The S822LC for HPC initial setup guide has been published.

We have published the initial setup guide for the S822LC for HPC, which can be equipped with up to four NVIDIA Pascal GPUs.

NVLink is drawing attention as one of the newest Pascal GPU technologies. On the S822LC for HPC covered by this guide, data can be exchanged over NVLink not only between GPUs but also between the CPU and the GPUs.

This technology not only speeds up image and video processing that works with large data sizes, but also delivers major benefits in areas such as Deep Learning.



S822LC for HPC Initial Setup Guide (PDF)


- About the S822LC for HPC


- About NVLink (NVIDIA website)




John Boyer (IBM)SC16: When Julia met PowerAI

Supercomputing 16 had so many exciting announcements and events around AI and Deep Learning on Power that it was hard to catch all the highlights. As I previously shared, SC16 started with the launch of PowerAI on Monday morning. PowerAI and the new GPU enterprise server were favorite attractions at the large, centrally located IBM booth during the conference, drawing large crowds interested in learning more about PowerAI and our custom-designed GPU enterprise server, the S822LC for HPC.

But Deep Learning innovation on Power did not stop with IBM’s PowerAI offering. True to the collaborative spirit of OpenPOWER, we had been working with many partners to create a broad ecosystem around PowerAI and Deep Learning, several of whom were present as IBM guests at the booth. We launched the Julia language on Power at Supercomputing 16 with our partners at Julia Computing. The combination of Julia and Power is a perfect fit for Deep Learning, as Julia Computing CEO Viral Shah explained to HPCwire: “Using IBM’s Power platform with NVIDIA GPU accelerators increased processing speed by 57x – a dramatic improvement.  IBM Power provides 2-3x more memory bandwidth combined with tight GPU accelerator integration to create a high performance environment for deep learning with Julia.”  At the IBM booth, Julia Computing demonstrated how to take advantage of the unprecedented power of Julia and IBM’s GPU-accelerated Power servers with “Deep Eyes”, an exciting public health application built on Power, Julia, and the MXNet framework that can diagnose eye disease with a cheap camera and automated diagnosis that refers affected patients to specialists for treatment.

NIMBIX unveiled their Jarvice high-performance cloud platform on Minsky, with a “push to compute” interface to create a Minsky-powered high performance PowerAI instance at the push of a button. 

Other exciting AI and Deep Learning demos included realtime video analysis for image segmentation using a CAPI-attached FPGA accelerator and a realtime recommendation engine with AI using the S822LC for HPC system. 

To get started with your own Deep Learning applications, learn more about the S822LC for HPC.  Share your ideas on how to use Deep Learning and the Power of the S822LC for your application use cases in the comments section below.



John Boyer (IBM)Updates to IBM Support starting the week of November 18, 2016

IBM Support (beta) updates for the week of November 18, 2016

Improvements and features on the new IBM Support site now include:

Automated Problems and Heartbeats tables


Improvements have been made to the automated problems and heartbeats tables. All dates and times are now presented in the user’s local time. Also, the format was changed to show day/month/year.


Click on the image to see it in full size.

Recent searches



Improvements have been made to provide users with a list of their recent searches. By simply clicking in the search box, or tabbing to it, a list of the user’s recent search terms appears.


Click on the image to see it in full size.

Site availability



Site availability has been added to show users the current status of the application. Should a system be offline, details will be provided. This option can be found at the very bottom of the home page.



Click on the image to see it in full size.


Support technical exchanges

Updates have been made to the results provided in Support technical exchanges. When accessing a product page, users can select “Support technical exchanges” under the Training card. The date format is now day, month, year. Also, users now see all the Support technical exchanges without pagination.



Click on the image to see it in full size.

Chat, contact and feedback


When users access a product page, they are prompted with the new look of support assistance. The prompt can be closed by selecting the “x” at the top right of the pop-up. Users who need assistance at a later time can reopen the pop-up by selecting the chat, contact and feedback module.



Click on the image to see it in full size.


Stay tuned for more updates as the beta site is updated every other week based on your feedback and use.

John Boyer (IBM)Breadbox zSystems hybrid cloud management made easy!

A new video series has been posted to YouTube that you'll want to see. 


This playlist shows a real-world example of how a modern zSystems hybrid cloud application environment can be managed easily from your browser with Service Management Unite (SMU), part of IBM Service Management Suite for z/OS (SMSz), and also with Application Performance Management (APM). Short vignettes show how a subject matter expert creates customized management dashboards for easy, worry-free use by operations personnel.


Included in the Playlist is a three part SMU demo video series with the following parts:


1.  An overview of Breadbox Groceries, a fictitious company; their real-world hybrid cloud application, Virtual Shopping List; and SMU

2.  A demo of how Breadbox Operations uses SMU to monitor and manage the Virtual Shopping List CICS with easy-to-use custom SMU screens

3.  A look behind the scenes at how Breadbox SMEs customize SMU for the specific CICS management tasks needed for Virtual Shopping List


Also included in the Playlist is a demo on how Breadbox Groceries uses APM to manage the Virtual Shopping List hybrid cloud application end-to-end across the cloud front end and the zSystems backend.


These videos go beyond showing technology and product features.  These videos create a real world example and show how new hybrid cloud architectures can be managed.  You'll want to take a look for yourself.  More real world example demo videos are planned.  Comment on the videos or this blog, to let us know what examples you'd like to see in the future.  Thanks for watching!

John Boyer (IBM)Overview of tools for the z Systems Development and Test Environment (zD&T) beta



The tools for the IBM z Systems Development and Test Environment (zD&T) beta provide a way for authorized users to quickly clone z/OS system volumes for use in zD&T. This is our first step toward automating the process of creating and distributing z/OS volume images for your application programmers to use in their zD&T setups, either to customize an existing zD&T installation that uses the Application Developers Controlled Distributions (ADCD), or to build an entire zD&T installation from scratch using your own system volumes.



  • Extract z/OS volumes to sequential files
    • APF authorized command line tool
    • Single volume per execution

Want to see additional functionality in this program?  Let us know in the beta forum!



None.  The FEUVIMG program can run on any supported version of z/OS and will produce a sequential file that can be used on any supported version of zD&T.


You can download the tools for the zD&T beta by following the instructions on the IBM z Systems Development and Test Environment download page.


Additional information

See IBM z Systems Development and Test Environment

Learn about IBM z Systems Development and Test Environment V10.0

John Boyer (IBM)Minimum memory requirement for AIX 7.2 Live Update

AIX 7.2 requires 2GB of memory to boot, but this minimum is not enforced in the LPAR profile except by Live Update (to ensure we'll be able to boot the surrogate LPAR). You can check your Minimum Memory setting in your LPARs profile by running the lparstat command (as shown below).


# lparstat -i | grep Memory
Online Memory                              : 4096 MB
Maximum Memory                             : 8192 MB
Minimum Memory                             : 2048 MB
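The profile check above can be scripted; a minimal sketch (not part of Live Update itself, and the helper names are mine) that parses `lparstat -i` output and applies the 2048 MB threshold:

```python
import re

REQUIRED_MB = 2048  # minimum memory Live Update requires in the LPAR profile

def minimum_memory_mb(lparstat_output):
    """Extract the 'Minimum Memory' value in MB from `lparstat -i` output."""
    match = re.search(r"Minimum Memory\s*:\s*(\d+)\s*MB", lparstat_output)
    if match is None:
        raise ValueError("no 'Minimum Memory' line found")
    return int(match.group(1))

def meets_live_update_minimum(lparstat_output, required_mb=REQUIRED_MB):
    """True if the partition profile satisfies the Live Update memory check."""
    return minimum_memory_mb(lparstat_output) >= required_mb

# On AIX you would feed it the real command output, for example:
#   out = subprocess.check_output(["lparstat", "-i"], text=True)
```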


If your partition does not meet the minimum memory profile requirement, you’ll receive the following error message when you perform a Live Update preview (with geninstall -p -k).


Checking lpar minimal memory size:


Required memory size: 2048 MB

Current memory size: 1024 MB

1430-119 FAILED: the lpar minimal memory size is not sufficient


You’ll need to change the partition profile so that the minimum memory setting is at least 2048 MB (2 GB), and then stop and restart the partition for the profile update to take effect.


The /var/adm/ras/liveupdate/logs/lvupdlog log file will also contain error messages indicating the problem:


LVUPD 11/28/2016-18:52:00.716 DEBUG lvupdate_utils32.c - 6713 - lvup_check_minmem_size: Partition minimum memory size (1024 MB) on p8tlc-lvup is lower than the minimum memory size required (2048 MB).


LVUPD 11/28/2016-18:52:00.716 ERROR lvupdate_utils32.c - 8647 - lvup_preview: ERROR(s) while checking the current mimimal memory size against the computed required size.





ProgrammableWebGoogle Introduces App Maker, App Development Tool for Enterprises

Google has introduced App Maker, an enterprise application development tool. Google App Maker allows enterprises to build process automation, project management, and other applications without the need for extensive code writing. This new cloud-based enterprise application building tool includes built-in connections to G Suite and Google apps such as Maps, Contacts, and Groups.

John Boyer (IBM)What's New with GRC API in OpenPages v7.3

IBM has recently released OpenPages 7.3, and by tradition I will be writing about what changes have made it into the OpenPages GRC API. The IBM Knowledge Center has the official product documentation for OpenPages GRC Platform. From there you will be able to find the latest versions of the GRC API javadocs and the GRC_REST_API.pdf guide for more details on the topics I will cover here. The easiest way to find the API-related content directly is the IBM Knowledge Center Content in PDF support technote. One documentation item to know about: in addition to providing the API javadocs in a .zip file, the IBM documentation team now hosts the 7.3 version of the API javadocs in HTML format, to make them easier to find and to share links to.


New Features

Copy Triggers

In GRC triggers for the copy operation, the Resource ID of a copied resource is now directly available in the POST operation, without having to do additional "lookups" (this is the "root" resource that was copied, not its children in the hierarchical copy case). I will be writing about this topic in more depth in a future blog, so stay tuned...


Indirect Queries

The Query Service in the API has been enhanced to support indirect joins using an extended syntax, without listing all intermediate paths. Previously, the only way you could query across objects of different types was to use the JOIN syntax to create a join for every object type in the hierarchy of your query. For instance, getting all the KRI Values under a Risk would be something like:

SELECT [KeyRiskIndicatorValue].[Name], [KeyRiskIndicatorValue].[OPSS-KRIVal:Value]
JOIN [KeyRiskIndicator] ON PARENT([SOXRisk])
JOIN [KeyRiskIndicatorValue] ON PARENT([KeyRiskIndicator])
WHERE [SOXRisk].[Name] = 'some name'

This approach required you to list every intermediate type, even those not relevant to your results, such as [KeyRiskIndicator] in the above example. It gets even more complex when you consider that there might be multiple paths from a given parent type to its descendants, in which case the base query syntax would need to use other constructs, such as UNIONs, to accomplish this in a single query statement.

The new syntax to support an indirect query will simplify this somewhat, although it has limitations that only allow this technique to be used in special cases.

SELECT [KeyRiskIndicatorValue].[Name], [KeyRiskIndicatorValue].[OPSS-KRIVal:Value]  
JOIN [KeyRiskIndicatorValue] ON ANCESTOR([SOXRisk])
WHERE [SOXRisk].[Resource ID] = 9999

One restriction of this syntax is that it must include a Resource ID filter on the ancestor type in the WHERE clause when using the ON ANCESTOR([...]) predicate. This means that you must always scope the indirect query to a single Resource ID on the parent type. You may place additional filters in the WHERE clause for the indirect child type as needed. Note: while this approach simplifies the syntax for the API developer, it generates more expensive SQL queries than the regular or 'direct' joins, even when scoped to a single Resource ID. The query can still be expensive on very large datasets, or if you make a very 'deep' indirect query down many levels of the hierarchy (for example, Entity - Issue, or Entity - Action Item). In cases where you know the specific paths you wish to use, you should still use the regular JOIN predicates and explicitly list the paths of object types.

Other limitations
The Outer Join syntax is not supported with ON ANCESTOR( ).


Enhancements for new Product Features

Users can have Multiple Profiles

The IConfigurationService interface has been expanded to support users with multiple profiles.

The REST API for users now returns the multiple associated profiles as availableProfileNames, as well as the currently selected profile as preferredProfileName. For example, a GET to /grc/api/security/users/OpenPagesAdministrator returns the following JSON response from my 7.3 system:

{
  "userName": "OpenPagesAdministrator",
  "id": "6",
  "description": "System Administrator",
  "firstName": "System",
  "lastName": "Administrator",
  "passwordCreationDate": "2016-11-08T11:35:46.282-05:00",
  "passwordExpiresInDays": 0,
  "canChangePassword": true,
  "isTemporaryPassword": false,
  "isPasswordChangeFromAdmin": false,
  "isLocked": false,
  "isSecurityAdministrator": true,
  "isHidden": false,
  "isDeleted": false,
  "isEnabled": true,
  "isEditable": true,
  "emailAddress": "OpenPagesAdministrator@openpages.local",
  "emailAdress": "OpenPagesAdministrator@openpages.local",
  "adminLevel": 1,
  "availableProfileNames": [
    "OpenPages Modules 7.3.0 Master",
    "OpenPages Platform 2"
  ],
  "preferredProfileName": "OpenPages Modules 7.3.0 Master",
  "displayName": "System Administrator - OpenPagesAdministrator"
}

One other defect we addressed in this same user resource: the JSON attribute 'emailAddress' was misspelled as 'emailAdress'. emailAdress is deprecated and the correct spelling has been added going forward, but for backwards compatibility the old spelling is still produced.

The /grc/api/security/users endpoint also supports PUT and POST methods for updating and creating users, respectively. You can set the availableProfileNames array in a PUT or POST, and it will change the profiles that the user has assigned. Note that removing an existing profile from the list will indeed remove that profile association for the user, so the listing is explicit. Also note that when sending a PUT or POST request with the user JSON in the request body, you only need to specify emailAddress, and that value will be updated; you don't need both spellings.
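Because the profile listing is explicit, it is easy to drop a profile by accident when updating a user. A small sketch (the HTTP call itself is omitted, and the helper name is mine) that builds a PUT body from a previously fetched user JSON:

```python
import json

def with_profiles(user_json, profiles, preferred=None):
    """Copy a /grc/api/security/users JSON body, replacing the profile list.

    The list is explicit: any currently assigned profile you omit here
    will be removed from the user by the PUT."""
    body = dict(user_json)
    body["availableProfileNames"] = list(profiles)
    if preferred is not None:
        body["preferredProfileName"] = preferred
    body.pop("emailAdress", None)  # deprecated misspelling; emailAddress suffices
    return body

fetched = {
    "userName": "OpenPagesAdministrator",
    "emailAddress": "OpenPagesAdministrator@openpages.local",
    "emailAdress": "OpenPagesAdministrator@openpages.local",
    "availableProfileNames": ["OpenPages Modules 7.3.0 Master"],
    "preferredProfileName": "OpenPages Modules 7.3.0 Master",
}
payload = with_profiles(fetched, ["OpenPages Modules 7.3.0 Master",
                                  "OpenPages Platform 2"])
request_body = json.dumps(payload)  # send as the PUT request body
```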

Global Search Enhancements: New Facets

The DateFacetParam, SearchFacetOptions, and UserFacetParam classes have been added to support searching object facets via ISearchService API.

For the REST API /grc/api/search, corresponding new optional query parameters (fot, fur, fdt, ffp) were added to support searching with the new facets. The previous 'types' facet parameter has been deprecated; use 'fot' instead.
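As a sketch of how a client might assemble such a request (the search-term parameter name 'q' and the facet value format are assumptions; only the parameter names fot, fur, fdt, and ffp come from the release notes):

```python
from urllib.parse import urlencode

def build_search_url(base_url, term, object_types=None):
    """Compose a /grc/api/search URL, using 'fot' for the object-type
    facet instead of the deprecated 'types' parameter."""
    params = {"q": term}  # search-term parameter name is an assumption
    if object_types:
        params["fot"] = ",".join(object_types)
    return base_url + "/grc/api/search?" + urlencode(params)

url = build_search_url("https://op.example.com", "credit risk", ["SOXRisk"])
```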


Performance Improvements

One of the changes in 7.3 was that the development team looked at the most commonly used low-level platform functionality and how its performance could be improved. As a result, there were enhancements to creating, reading, updating, and deleting GRC objects, which the API, as well as other features like FastMap, can now benefit from. Depending on the exact use case and your system resources, you may see a noticeable improvement in these operations from 7.2 to 7.3. For example, we ran some simple tests on identical hardware between 7.2 and 7.3, and the performance of a single create operation was 40% faster! These numbers can vary, of course, depending on things like how many triggers or security rules you have on the relevant object types, or other concurrent workload on the system.

Workflow APIs Deprecated

With a heavy heart, I have to announce that OpenPages GRC 7.3 is the last release to support Fujitsu IBPM. Moving forward, OpenPages will support a loose integration with another workflow engine, IBM Business Process Manager (IBM BPM). What this means for the APIs is some deprecation notices:

  • Workflow (e.g. Fujitsu IBPM interfacing APIs) java interfaces and packages are deprecated in 7.3
  • /grc/api/workflow REST APIs deprecated
  • The current Workflow APIs mentioned above do not support interaction with new IBM BPM 8.5.7 "loose" integration

Note that the IBM BPM product's public APIs are available for performing the tasks that the Workflow APIs did, such as starting a process instance (job), finishing a task, or querying workflow data.

IBM BPM and the OpenPages integration with it are a much deeper topic than this post allows; there will continue to be ongoing presentations and postings that explore the implications and strategies for the new integration. Stay tuned to the new GRC Community as another way to learn more about this and other 7.3 features.

John Boyer (IBM)Your Website as your Top Salesperson: Here’s How to Do It

Thinking of how to come up with a great website? There is one thing that you should remember – consider it as the most important component of your sales team. It should be seen as the epitome of your organization’s top salesperson.


What are the characteristics of great salespeople? They are easy to like, know their customers, communicate efficiently, keep people interested and engaged, and know how to follow up. They are not aggressive or manipulative. All of these qualities can be present not only in your sales team but in your website as well.


Based on my personal experience, the tips below are the most effective ways to build a website that can function as a competent salesperson.


  1. Knowledge about the Prospect


One of the essential characteristics of a great salesperson is extensive knowledge about the people he is targeting, including their demographics, purchase behaviors, and personal preferences, among others. He knows how to establish trust and foster a long-lasting relationship. He knows what the problem is without asking the client and, more importantly, can quickly come up with the right solution.


The same holds true for an excellent website. Its content is based on the characteristics of the prospect, including their demographics and preferences. It is easy to navigate and contains relevant information that answers the most common questions of prospects. There are testimonials indicative of the experiences of other people. It is built to be easy to understand, since no visitor has the patience to deal with information overload.


  2. Familiar with the Market and Competition


A great salesperson is knowledgeable about how his business performs compared to the competition. He is the first to admit when competitors are doing a better job, and he sees this as motivation to find ways to be better. He knows the strengths and weaknesses of his products and services, as well as those of the other players in the industry.


In the same way, a good website draws inspiration from the competition, although this is not tantamount to being a copycat. A good website recognizes that if it does not provide the information a visitor seeks, the visitor will most likely close it and go to a competitor's website. A good website offers relevant content that attracts visitors without leading to information overload.


  3. Helps to Educate and Empower the Prospect


People hate it when salespeople are manipulative. They prefer doing things on their own. A great salesperson therefore knows how to help customers discover for themselves what they want or need. He gives prospects the freedom to make a choice. He is not pushy.


Similarly, a good website achieves the same goal. It does not force the prospect to perform an action that favors the business. More often than not, it uses attractive text and captivating imagery in the form of pictures, videos, and animation, among others. This not only engages prospects but also guides them toward discovering their needs, helping them do research and educating them.


  4. Easy to Like


Another common characteristic of great salespeople is that they are likable. Logically, people buy from a person they like, especially in high-value transactions, because the relationship is expected to last beyond the sale. Great salespeople are therefore friendly and have a likable personality.


The same should be true of your website. It should speak in a tone that is perceived as friendly and relatable. Prospects visiting the website should feel welcome. Additionally, according to domain brokerage firm Media Options’ founder Andrew Rosener, “you should also have a domain name that is easy to remember; and be a .com. The easier your domain is to say and remember, the more people will share it.”


  5. Makes the Entire Process a Lot Easier


Another thing great salespeople have in common is that they know how to make the process as hassle-free as possible. They also make sure that once the transaction is sealed, there is little room for the buyer to second-guess. More importantly, they know how to thank the client.


In the same way, an excellent website must be easy to navigate. It is also critical to have a Thank You page to show appreciation to your customers. A follow-up email is also good, not only to thank the client but also to trigger more favorable actions in the future.


Why It Is Necessary to Have a Digital Salesperson


Before anything else, let us get one thing clear: there is no replacement for a great sales team, which is indispensable for the success of an organization. It can, however, be complemented by a great website. Having a digital salesperson in the form of a website can be beneficial in the following ways, among others:


  • Be available round the clock, unlike employees who can file for sick leaves
  • Deliver messages in a manner that is consistent
  • Defy geographical boundaries, which can help extend reach
  • Use mediums like videos and animations to lessen boredom in sales presentations
  • Interact with thousands of people simultaneously


Indeed, the workforce is the best asset of any organization, which is why a great salesperson is a necessity. Success, however, can be multiplied when this is complemented with a great website.


John Boyer (IBM)New GRC Community and forum

The Offering Management team for OpenPages GRC has created a new community for communicating about a broad range of GRC topics from the OpenPages team. That community will contain expert blogs as well as a support forum; however, I will continue to maintain and update the GRC Power Plant blog with more technically focused material. I highly encourage everyone who follows this blog to also follow the new GRC Community and to post questions (or answers) in the new forum.


The following is an excerpt from a communication that summarizes what the new community is all about.



We are pleased to announce a new GRC forum for IBM OpenPages GRC and IBM Algo FIRST users to gain the latest details on new releases, interact with peers and solution experts, and learn about upcoming events.  

The new IBM Governance, Risk & Compliance Community page has just been launched to help keep the dialogue open year round.  

As you'll see at the link below, there are a number of valuable resources to keep you informed about IBM OpenPages and FIRST and help you excel in your GRC environment.

These include:
- Videos to help you get started on your journey
- Resources such as tutorials, events, and expert blog posts on the latest enhancements
- Video overviews and demonstrations on OpenPages and FIRST solutions
- The latest news and events to keep you informed

There's also a Support forum where you can ask questions about IBM OpenPages GRC and IBM Algo FIRST, or simply contact support for any reason.
In addition, you can enter a Request for Enhancement for IBM OpenPages GRC or for IBM Algo FIRST.

We hope you'll take advantage of this valuable resource to learn and share your experiences with your peers and solution experts.
You only need to log in with an IBM ID if you want to comment or post in the forums; you do not need to log in to get to the videos, events, blogs, etc.

Please let us know if you feel there are ways we can make this new Community even better.

Thank you, we look forward to interacting with you on the new Community.

The IBM GRC Team

John Boyer (IBM)2017 VM Workshop with z/VSE sessions

This time, registration for the 2017 VM Workshop opened very early, so you have plenty of time to plan your participation.


Besides z/VM and Linux on z Systems, we plan to provide several z/VSE sessions, as we did this year. For z/VSE, we see the VM Workshop as a replacement for the former WAVV conference.

The VM Workshop is planned for Thursday to Saturday, June 22-24, 2017 at Ohio State University in Columbus, OH.


More information on the VM Workshop and registration is available here.


John Boyer (IBM)Websphere Portal v7 vs v8: change on format="path"

Apparently a change was introduced in v8 regarding the format parameter that you can add to link elements. In v7, when you added, for example, format="path" to your WCM design components, the path of an item was returned.


If you do the same in v8, you get additional markup in addition to the path.


This, of course, can have side effects in your WCM components.

The format="path" parameter is no longer even mentioned in the Information Center.

Instead, there are two other parameters you can use: format="namepath" and format="titlepath"

So, if you want to achieve what format="path" achieved in v7, you need to use format="namepath" from now on.


John Boyer (IBM)How to prevent the DOM inventory model from allowing shoppers to add out-of-stock items to the cart

I recently had a client who was implementing the direct integration between WebSphere Commerce and Sterling Order Management System (OMS). 


As part of this integration, their inventory allocation was being changed from non-ATP to DOM inventory. With this change, the application no longer prevented shoppers from adding out-of-stock items to the shopping cart.

This is a change from the non-ATP inventory model, where the add-to-cart button is greyed out when an item is out of stock.


The DOM inventory flow behaves differently in that it allows items to be added to the cart but prevents the cart from being submitted until an inventory reservation (allocation) is granted by Sterling OMS. This behavior can be changed so that out-of-stock items cannot be added. The DOM inventory add-to-cart process currently calls an empty command, DOMValidateInventoryStatusCmd. This command can be implemented (customized) to check the inventory status (ORDERITEMS.INVENTORYSTATUS) of the item. If the inventory status of the item being added to the cart is not allocated (NALC), throw an exception. This prevents the item from being added to the cart.
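The check a customized command would perform can be sketched as follows. This is only an illustrative, language-neutral sketch: the real DOMValidateInventoryStatusCmd is a Java command in WebSphere Commerce, and the "ALLC" status value used in the demo is an assumed stand-in for an allocated item (only NALC, the not-allocated status, comes from the text above).

```python
class OutOfStockError(Exception):
    """Stops an out-of-stock item from being added to the cart."""

def validate_inventory_status(inventory_status):
    """Reject order items whose ORDERITEMS.INVENTORYSTATUS is
    NALC (not allocated); any other status passes through."""
    if inventory_status == "NALC":
        raise OutOfStockError("inventory not allocated - item is out of stock")

validate_inventory_status("ALLC")  # allocated item: no exception raised
try:
    validate_inventory_status("NALC")
except OutOfStockError as err:
    print("add to cart blocked:", err)
```

In the real command, throwing the exception at this point is what makes the add-to-cart operation fail instead of silently queuing an unfulfillable item.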

John Boyer (IBM)IBM MQ Console on z/OS

Richard introduced the MQ Console in MQ V9.0.1 in a recent blog post. The MQ Console is a browser-based MQ administration tool, which is available on all platforms that are supported by MQ V9.0.1, including z/OS. In this blog post I'll give an introduction to getting the MQ Console set up on z/OS, and some of the differences between the MQ Console on z/OS and on other platforms.


The MQ Console on z/OS is provided in a new optional "Unix System Services Web Components" feature (FMID JMS9016), which you'll need to select when performing the SMP/E install of MQ V9.0.1. When this feature is selected, it will install all the necessary files to use the MQ Console in a directory called "web" under the MQ installation directory in USS.

Creating and configuring the server

The MQ Console runs as an application in a WebSphere Liberty profile (WLP) server, which runs as a started task on z/OS. Before you can start the server for the first time, there are a few things you'll need to do.

The first thing you'll need to do is to create the WLP server. Pick a location for the WLP user directory. The configuration files for the WLP server, and any log and trace files produced by the server, are all stored under this directory, so make sure that there's plenty of space available in the filesystem where this directory is located.

Then create the WLP server by changing into the web/bin directory under the MQ installation path in USS, and running the command

./ wlp_user-directory


The script copies the server definition to the WLP user directory specified as a parameter. You may need to change the permissions on the WLP user directory and its contents to give read and write access to the user ID that the WLP server will run under.

To configure the server, edit the mqwebuser.xml file in the servers/mqweb directory of the WLP user directory. Bear in mind, when editing this file, that it's encoded in UTF-8.

You'll most likely need to configure a few parameters before starting the server: for example, setting the value of the httpHost variable to "*" so that the server listens on all network interfaces, not just for requests on localhost (the default).
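As a sketch, the corresponding entry in mqwebuser.xml might look like the following (the httpHost variable is the one described above; the surrounding server element follows the usual Liberty configuration format, and the rest of the file is omitted):

```xml
<server>
    <!-- Listen on all network interfaces, not just localhost -->
    <variable name="httpHost" value="*"/>
</server>
```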


The easiest way to configure the WLP server is to start with one of the three sample XML files that are provided in the web/mq/samp/configuration directory under the MQ installation path in USS.


The samples are:

  • basic_registry.xml - contains a sample basic user registry with a simple list of users and groups
  • ldap_registry.xml - contains an example of how an LDAP registry can be configured
  • no_security.xml - provides no user authentication.

Jon Rumsey recently wrote two blog posts on authentication and Role Based Access Control in the MQ Console. These are all relevant to z/OS, as well as other platforms, and describe in more detail the two methods of authenticating users shown in the sample XML files.

There are other options for authenticating users in WLP, and on z/OS you may want to consider using a SAF registry, which would allow users to log in to the MQ Console with their z/OS user IDs. I won't go into the details of how to configure WLP to use a SAF registry here - that's probably worthy of a blog post in its own right - but the procedure is documented in the IBM Knowledge Center.

The other aspect of security to consider is securing the communication between your browser and the MQ Console. By default, the only way of connecting to the MQ Console is over HTTPS. However, the TLS certificate that is automatically generated by the WLP server when it's first started is not intended for production use. On z/OS, you'll probably want to use a certificate in a RACF keyring instead, which you can configure by following the example in the IBM Knowledge Center.

Almost there!

Once you've created and configured the WLP server, you'll need to create a JCL procedure to start the server. You can copy and edit the sample procedure supplied with MQ V9.0.1 called CSQ4WEBS. When you're choosing a name for your WLP procedure, bear in mind that you're likely to need a separate procedure for each version of MQ you will install from now on. So it's a good idea to call your WLP procedure for IBM MQ V9.0.1 something like MQW0901.

You can now start the WLP server by issuing the MVS command

START wlp-procname

Which queue managers can I work with?

The first thing you'll notice after logging into the MQ Console on z/OS will probably be the list of queue managers that are displayed in the queue manager widget. This list contains all the IBM MQ V9.0.1 queue managers that are defined on the system where the WLP server is running, regardless of whether they're currently running or stopped.


There is however one exception to this. Queue managers that have not been started since they were defined, or since the system was last IPLed, cannot be identified by the MQ Console. Therefore, they won't be displayed in the queue manager widget. So be aware that after your system is IPLed, the list of queue managers shown by the MQ Console may be shorter than usual, until the queue managers are started again.

MQ Console features on z/OS

Many of the features present on the MQ Console on distributed platforms are also available on z/OS. You can create and view objects such as queues, channels and channel authentication records, to name just a few.

However, there are some differences. For example, you can view and alter queue managers on z/OS from the MQ Console, but you cannot create, delete, start or stop queue managers. There are other functions that are specific to z/OS, such as creating shared queues, which are not yet available in the MQ Console. The full list of restrictions on z/OS is documented in the IBM Knowledge Center.

Existing objects, even if they're shared objects in a queue sharing group (QSG), can be displayed and managed from the MQ Console. For example, displaying a shared queue's properties will display its QSG disposition and the CF structure it's defined in.


Hopefully this blog post has helped you to get started with the MQ Console on z/OS. If there are any enhancements to the MQ Console you'd like us to consider, please do let us know.

John Boyer (IBM)How Verse cut my post-vacation email triage time in half

I returned from a two-week vacation today and faced the dreaded task of triaging hundreds of emails I had ignored while away. I've done this many times over the years, but today was the first time since I switched to using Verse exclusively about six weeks ago. Like the other times, I budgeted two hours just to wade through the mess. To my pleasant surprise, I was done in half the usual time. How was I able to cut the triage time in half?

To understand the answer to that question, let me first explain how I look at every email during the triaging process:

  1. Is this junk that can just be deleted without investing any time in understanding the contents? If the answer is yes, I delete it and move to the next message.
  2. Is this an important email I should reply to immediately? If the answer is yes, then I reply to it and move to the next message.
  3. If the message is not either 1 or 2, I place it in a temporary bucket without investing additional time and move to the next message. When I'm done triaging all messages, I'll come back to the temporary bucket and begin the second pass of processing messages that require additional time to read, understand, and potentially reply.
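The first pass above is essentially a three-way classifier. Here is a minimal sketch of it; the predicates is_junk and needs_urgent_reply, and the reply callback, are hypothetical stand-ins for the human judgment calls and actions described above:

```python
def triage(messages, is_junk, needs_urgent_reply, reply):
    """First-pass inbox triage: Case 1 deletes junk, Case 2
    replies immediately, Case 3 stashes the rest for a second pass."""
    needs_action = []
    for msg in messages:
        if is_junk(msg):
            continue                       # Case 1: delete, move on
        elif needs_urgent_reply(msg):
            reply(msg)                     # Case 2: rare, handle now
        else:
            needs_action.append(msg)       # Case 3: Needs Action bucket
    return needs_action

# Example: most mail falls into Cases 1 and 3.
inbox = ["newsletter", "boss: urgent", "meeting notes", "spam"]
bucket = triage(
    inbox,
    is_junk=lambda m: m in ("newsletter", "spam"),
    needs_urgent_reply=lambda m: "urgent" in m,
    reply=lambda m: print("replied to:", m),
)
print(bucket)  # → ['meeting notes']
```

The point of the sketch is the single linear pass: every message gets exactly one cheap decision, and the expensive reading is deferred to the second pass over the much smaller bucket.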

Verse shines in handling these use cases. My Case 2 scenario is rare, so for about 98 percent of emails I am either deleting or stashing away for a second pass. Verse groups the Move to Trash, Remove from Inbox, and Mark as Needs Action buttons together, inline with the message in the Inbox. When I detect a Case 3 message, I just mark it as Needs Action (Today) and remove it from the inbox.

Using these techniques, I can handle the 98 percent in a series of rapid-fire mouse gestures that help me burn through my Inbox in a way I just can't with Notes and other email systems I've used. When scaled out to the hundreds of messages I had waiting for me, I reduced the time needed to process those messages by 50 percent, which was a huge win for my first Monday morning back in the office.

John Boyer (IBM)Continuous Engineering: Work Smarter With Configuration Management



Continuous Engineering: Work Smarter With Configuration Management - Part 1

Video 1: Configuration Management in and across the Engineering Lifecycle. Get the big picture in this gentle introduction to the basic concepts that will be demonstrated in the other videos in the series. 


Continuous Engineering: Work Smarter With Configuration Management - Part 2

Video 2: Configuration Management in and across the Engineering Lifecycle. See how teams can use configuration management to make their work easier. 


Continuous Engineering: Work Smarter With Configuration Management - Part 3

Video 3: Configuration Management in and across the Engineering Lifecycle. See how teams can use linking to create traceability in configurations. 


Continuous Engineering: Work Smarter With Configuration Management - Part 4

Video 4: Configuration Management in and across the Engineering Lifecycle. See how teams can work in parallel and deliver changes to other development streams representing product variants or releases. 



View more content in this series



Daniel Glazman (Disruptive Innovations)MacBookPro 2016 with Touchbar

I spent a short while playing with a new MacBookPro 15" with Touchbar at the Apple Store. Conclusions:

  • I hate the keyboard: it is too thin and has too little space between the keys. I made so many typing mistakes with it... This is a keyboard for people who don't use their keyboard a lot, not a keyboard for code developers
  • the Touchbar is a PITA and a strong disruption because it adds another layer of input/output between the keyboard and the screen. "Typing" on the Touchbar too often requires moving your eyes from the screen to the Touchbar and losing focus when you go back to the screen. For each app, you have to learn a new Touchbar design, and some apps have more than one. A mess. And it does not counter-balance the loss of the Escape and function keys.
  • the screen seems better and brighter than on my mid-2014 retina MacBook Pro.
  • Apple Store people themselves are extremely cautious with the USB-C-based power supply when they move a MBP. MagSafe, we all miss you.
  • Touch ID is cool, but I noticed that the Touch ID key does not seem to be oleophobic, so the fingerprint of the person who unlocked the MBP can be very visible. It would probably not be too difficult to copy it.
  • I have doubts about the readability/usability of the Touchbar in bright-light conditions
  • the price, the price, the price !!! The 15" model with Touchbar starts at $2,889.00 here in France (price converted from € to $ at today's exchange rate), urgh !!!

For the time being, I am more than happy with my mid-2014 model and I don't plan to change. This would probably feel like a downgrade to me.

John Boyer (IBM)How IBM and Cisco Are Teaming for a More Cognitive Work Experience



Click the image above for the 15-minute interview recorded at World of Watson with Inhi Cho Suh, IBM GM of IBM Collaboration, and Jens Megger, GM of Cisco Cloud Collaboration. They discuss their vision and provide examples of how the partnership will lead to the Best of Social.


John Boyer (IBM)Single sign-on with an external Enterprise Content Management system – Part 2: SAML based

Last week (see Single sign-on with an external Enterprise Content Management system - Part 1: LTPA based) we discussed the options to connect IBM Business Process Manager (BPM) to an Enterprise Content Management (ECM) server using a technical user or to propagate the end user credential with single sign-on. We looked at the simplest option to establish single sign-on through Lightweight Third-Party Authentication (LTPA) and checked the Web Service client and provider specific requirements.

The LTPA based approach though has limitations:

  • LTPA is IBM proprietary. Non-IBM products will usually not understand it. Nevertheless, there are ECM products that can run on WebSphere Application Server (WAS) – for example Alfresco Enterprise – where LTPA might work.
  • Even between IBM products LTPA requires to use the same user registry configuration which might not always be the case.

Today we have a look at a different authentication token that can be used to overcome such limitations: Security Assertion Markup Language (SAML) tokens. SAML was standardized by the OASIS standards consortium.

As IBM BPM uses the Web Services binding of the Content Management Interoperability Services (CMIS) to connect to an ECM system, it is important to note that the Web Services Security (WSS) specification defines how to use so-called SAML assertions within Web Service interactions. Both CMIS and WSS are also OASIS standards – so, if still necessary, this is another motivation to bring all of this together.

Based on my area of expertise when it comes to application servers, I will explain the steps for a target ECM system that is also hosted on WAS. I will describe the steps for the Web Service client (in IBM BPM) and the provider (in the ECM system) separately. If your ECM system does not run on WAS, you will need to adapt those steps.

The SAML tokens that we need to get into the Web Service request can be generated in different ways:

  • By IBM BPM at the time of a Web Service request.
  • By an external Secure Token Server (STS) that is called at the time of a Web Service request to generate a token.
  • By propagation. With that I mean that the end user login happens through an Identity Provider (IdP) using the SAML HTTP POST binding. On a successful login, the identity provider creates the SAML token and sends this to the IBM BPM system. This SAML token can be propagated to the ECM system if it is valid for that system as well.

We will now look at the steps for the first approach.
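Before the configuration steps, it helps to see what the client will actually put on the wire. A minimal self-issued SAML 2.0 bearer assertion looks roughly like the following sketch; the ID, issuer, subject, and timestamps are illustrative, and the real token also carries an XML signature over the assertion:

```xml
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
                 ID="_a75adf55" Version="2.0"
                 IssueInstant="2017-01-01T12:00:00Z">
  <saml2:Issuer></saml2:Issuer>
  <!-- An XML signature over the assertion would appear here -->
  <saml2:Subject>
    <saml2:NameID>bpmuser</saml2:NameID>
    <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
  </saml2:Subject>
  <saml2:Conditions NotBefore="2017-01-01T12:00:00Z"
                    NotOnOrAfter="2017-01-01T13:00:00Z"/>
</saml2:Assertion>
```

The bearer confirmation method shown here is the one we will configure below; the signature is what the provider side verifies against the trust store.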

To configure this, these things must be done:

  1. We need a key store with a certificate that is used to sign the SAML token. This allows the receiver of the SAML token to verify that it was signed by a trusted party.
  2. We need to configure the CMIS Web Service client in IBM BPM to create a SAML token instead of the default LTPA token.
  3. We need to configure the CMIS Web Service provider in the ECM system to consume the SAML token.

Let’s do it in that order.

Creating a key and trust store for the SAML token signing

A certificate is required for the service client to sign the SAML token that is sent in a Web Service request. The receiver will need to check the signature to verify that the token comes from a trusted party. Such a certificate can be created using the keytool command that can be found in WAS_HOME/java/jre/bin.

In a command prompt, we create a key store first:

keytool -genkeypair -keystore IBM_BPM_SAML_KeyStore.jks -storepass ksPassword -keypass ksPassword -keyalg RSA -alias IBM_BPM_SAML_Issuer -dname "cn=IBM BPM SAML Issuer, o=ACME" -validity 365 -storetype JKS

In the second and third steps, we export the public key and import it into a second store that we call the trust store:

keytool -export -alias IBM_BPM_SAML_Issuer -file pub.cer -keystore IBM_BPM_SAML_KeyStore.jks -storepass ksPassword -storetype JKS

keytool -import -alias IBM_BPM_SAML_Issuer -file pub.cer -keystore IBM_BPM_SAML_TrustStore.jks -storepass tsPassword -storetype JKS -noprompt

The pub.cer file is temporary and can now be deleted. The result is two files:

  1. A key store file IBM_BPM_SAML_KeyStore.jks with the private and public key. This key store will be used by the IBM BPM system to sign the SAML token.
  2. A trust store file IBM_BPM_SAML_TrustStore.jks with only the public key. This trust store will be used by the ECM system to verify the signature of the SAML token.

The stores are protected with different passwords and the key pair is valid for one year.

Configuring the CMIS web service client in IBM BPM to send SAML tokens

Next, we configure IBM BPM to send a SAML token in the web service requests to the external ECM system.

First, we need to make the key store available to the IBM BPM system. For this, copy the IBM_BPM_SAML_KeyStore.jks file into the config directory of the deployment manager. A good location is PROFILE_HOME/config/cells/CELL_NAME. After you place the file there, perform a Full Resynchronize in the WAS admin console under System Administration > Nodes for all your nodes. This distributes the file to the nodes.

Now we are ready to configure the web service client. IBM BPM uses a managed web service client with a policy set and binding that are by default configured to send an LTPA token. You can find them in the WAS admin console in Services > Service Clients. The relevant service clients for the single sign-on access to ECM system are prefixed with SSO.

Service clients in the WAS admin console.

There are six clients in total for the different CMIS-defined web service ports.

You can click on each of them to see the attached policy set and binding. The policy set defines which service features are required. The binding then specifies how these features are configured. By default, the IBM BPM CMIS service clients use the BPM SSO Policy Set and the BPM SSO Client.

Policy set and binding for the CMIS discovery service client.

We now need to replace these with a policy set and binding that send a SAML token instead of an LTPA token. For that, I prepared something: the CMIS client policy set and the matching CMIS client policy set binding. Download these two files. They are configured in the same way as the BPM default ones – except that they send a SAML instead of an LTPA token.

In the WAS admin console, go to Services > Policy sets > Application policy sets. Click Import > From Selected Location … and select the CMIS client policy set file. Then go to Services > Policy sets > General client policy set bindings. Click Import … and select the CMIS client policy set binding file.

Let’s look at what we just imported. Go back to the policy sets and open the details of the CMIS client policy set. There, go to WS-Security > Main policy > Request token policies, where you can see the required SAML 2.0 token. The interesting configuration is in the binding. So, go to the client policy set bindings and open the details of the CMIS client policy set binding. Go to WS-Security > Authentication and protection, where a single outbound authentication token is defined.

This specifies that a SAML token is to be generated. Click Callback handler to see the configuration. There you can see the callback handler class that is used; it is designated to generate a SAML token. Customization may be required at the bottom, where the custom properties are defined.

Here is an explanation of these properties (the long names are shortened):

  • confirmationMethod – Use Bearer as the simplest confirmation method, where the SAML token is only signed and could theoretically be used by anybody. For more security, where the SAML token can only be used by a dedicated client, consider looking at the Holder-of-key assertion.
  • KeyStorePath – The path to the key store. If you chose a different file name or location, you need to adjust it here.
  • KeyStoreType – The type of the key store, as specified in the storetype argument when creating the store with the keytool command.
  • KeyStorePassword – The password of the key store, as specified in the storepass argument when you created the store with the keytool command. You can specify the password in clear text; WAS automatically encodes it.
  • KeyAlias – The alias of the key, as specified in the alias argument when running the keytool command.
  • KeyName – The distinguished name of the key, as specified in the dname argument when running the keytool command.
  • KeyPassword – The password of the key, as specified in the keypass argument when running the keytool command. You can specify the password in clear text; WAS automatically encodes it.
  • IssuerURI – A URI that is used as the issuer of the SAML token. You can choose a value here.

The reference information of these properties can be found at SAML token generator properties for self-issued tokens.

After you have made the required changes, click OK to confirm them.

Note: if you are using an older version of IBM Business Process Manager, and therefore possibly also an older WAS version, then the settings other than confirmationMethod may need to be specified differently. Please check the SAML Issuer Config Properties documentation.

Now we can configure the service clients to use the policy set and binding. Go to Services > Service clients. For each of SSODiscoveryServiceRef, SSOMultiFilingServiceRef, SSONavigationServiceRef, SSOObjectServiceRef, SSORepositoryServiceRef and SSOVersioningServiceRef, perform the following: click the client to view its details, select the first row, click Attach Client Policy Set > CMIS client policy set, and then click Assign Binding > CMIS client policy set binding.

Click Save and synchronize the nodes. Certain changes to policy sets or bindings require the server to be restarted. Now is a good time to do this, as we have just finished the required changes on the web service client (the IBM BPM side).

Configuring the CMIS web service provider to consume SAML tokens

Now we switch to the CMIS provider side. Depending on the system you are using, these steps will differ. I will continue my explanations for an ECM system based on WebSphere Application Server, using FileNet as an example.

The first thing you need to verify is whether the CMIS implementation is deployed correctly. The deployment is typically done using the IBM Content Navigator Configuration and Deployment Tool. This graphical user interface allows you to deploy new, and update existing, deployments of the IBM Content Navigator application and of the CMIS endpoint for IBM Content Manager and FileNet Content Manager.

As part of a so-called deployment profile, there is the Configure the IBM CMIS client authentication method step. There, specify WS-Security Authentication instead of the default HTTP Basic Authentication. The authentication policy name is not particularly relevant here because we are going to replace it later.

Settings for the Configure the IBM CMIS client authentication method step

Note: WS-Security Authentication is available only for the CMIS 1.0 implementation; it is not supported for the CMIS 1.1 implementation. This is not a problem because IBM BPM does not require any CMIS 1.1 capabilities.

To allow the provider to verify the signature of the SAML token, we need to make the trust store available to the ECM system. For this, copy the IBM_BPM_SAML_TrustStore.jks file into the profile directory of the deployment manager; a good location is PROFILE_HOME/config/cells/CELL_NAME. After you have placed the file there, perform a Full Resynchronize for all your nodes in the WAS admin console under System Administration > Nodes. This distributes the file to the nodes.

We now need to import a policy set and binding that consume a SAML token, and for that I have again prepared something: the CMIS provider policy and the CMIS client policy set attachments. Download these two files. They are a policy set and a matching binding that are configured in the same way as the default ones for CMIS for FileNet Content Manager, except that they consume a SAML token instead of an LTPA token.

In the WAS admin console, go to Services > Policy sets > Application policy sets. Click Import > From Selected Location… and select the CMIS provider policy file. Then go to Services > Policy sets > General provider policy set bindings. Click Import… and select the CMIS provider policy set binding file.

Let’s look at what we just imported. Go back to the policy sets and open the details of the CMIS provider policy set. There, go to WS-Security > Main policy > Request token policies. You can see that it requires a SAML 2.0 or Username token. The Username token is required for interactions in which IBM BPM sends the technical user. The interesting configuration is in the binding, so go to the provider policy set bindings and open the details of the CMIS provider policy set binding. Go to WS-Security > Authentication and protection. There, two inbound authentication tokens are defined.

The configuration of the Username Token Consumer is straightforward. More interesting are the details of the SAML Token Consumer, which verifies that the SAML token in the message is from a trusted issuer by checking its signature.

SAML Token Consumer configuration

It specifies that the system-provided wss.consume.saml JAAS login configuration is used. The more relevant information is specified in the Callback handler. In its details, you can see that the system class is used. The custom properties there may require customization.

SAML Token Consumer callback handler custom properties configuration

Here is an explanation of these properties (the long names are shortened):

Name Description
trustedIssuer_1 Matches the IssuerURI of the token provider. You can specify multiple issuers by using the trustedIssuer_n naming pattern.
trustedSubjectDN_1 The distinguished name of the key that is used by the issuer. Again, the trustedSubjectDN_n naming pattern is used, so that each issuer is paired with the distinguished name of its key.
trustStoreType The type of the trust store, as used when you created the store with the keytool command.
trustStorePassword The password of the trust store, as specified when you created the store with the keytool command. You can specify the password in clear text; WAS will automatically encode it.
trustStorePath The path of the trust store file. If you chose a different file name or location, adjust it here.
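As an illustration, the consumer-side properties might be filled in like this. All values are hypothetical examples, and the actual binding shows these names in their longer, prefixed form:

```
trustedIssuer_1=https://bpm.example.com/saml/issuer
trustedSubjectDN_1=CN=BPMSAMLIssuer,O=Example
trustStoreType=JKS
trustStorePassword=changeit
trustStorePath=${USER_INSTALL_ROOT}/config/cells/MyCell/IBM_BPM_SAML_TrustStore.jks
```

Note the pairing: trustedIssuer_1 belongs with trustedSubjectDN_1; a second trusted issuer would use the _2 suffix for both properties.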

After you have made the required changes, click OK to confirm them.

With the authentication and protection settings, we have now verified that the SAML token is from a trusted issuer. We still need to read the information about the BPM user and make that user the caller of the web service request in the ECM system. These settings can be found in the binding as well, under WS-Security > Callers. There you again find two entries: the Username Token Caller reads the credentials when a WSS Username token is in the request, while the SAML 2.0 Caller is the one relevant for the SAML interaction. Looking at its details, you can see the callback handler and the wss.caller JAAS login being used:

Caller configuration

The purpose of the callback handler here is to extract the user identity from the SAML token. More information about it can be found in Establishing security context for web services clients using SAML security tokens.
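For orientation, a simplified, hypothetical SAML 2.0 assertion as it travels in the SOAP security header might look like the sketch below; the issuer URI, user name, and attribute values are invented examples. The consumer verifies the Issuer and signature, and the caller extracts the Subject NameID:

```xml
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
                 ID="_example" Version="2.0" IssueInstant="2016-12-01T10:00:00Z">
  <saml2:Issuer>https://bpm.example.com/saml/issuer</saml2:Issuer>
  <!-- Signature created with the issuer key; checked by the SAML Token Consumer -->
  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
  <saml2:Subject>
    <!-- The caller extracts this identity and maps it to a user in the ECM realm -->
    <saml2:NameID>bpmuser</saml2:NameID>
    <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
  </saml2:Subject>
</saml2:Assertion>
```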

Now we can configure the CMIS web service provider to use this policy set and binding. Go to Applications > Application Types > WebSphere enterprise applications and open the details of the application that contains the CMIS web module. Then go to Service provider policy sets and bindings. Select the first row, click Attach > CMIS provider policy set, and then Assign Binding > CMIS provider policy set binding. The result should look like this (IBM_FileNet_CMIS_1.0 is the application name in my installation):

Policy set and binding of the CMIS provider application

Click Save and synchronize the nodes. Certain changes to policy sets or bindings require the server to be restarted; do this now, as we have also finished configuring the provider.


After your system has restarted, you can try the Content Integration from IBM BPM to your ECM system with single sign-on enabled on the ECM server definition. If you want to debug, a good starting point is to look at the SOAP messages that IBM BPM sends and receives; to see them, enable SOAP message tracing on the application cluster members.

Conclusion and outlook

We changed the web service client in IBM BPM to use self-issued SAML tokens instead of LTPA tokens. By doing this, we now have a better chance of integrating with ECM systems that:

  • Are WebSphere based but use a different user registry configuration that prevents using LTPA
  • Are WebSphere based but do not share LTPA keys (for example, between cloud and on-premises systems)
  • Are not WebSphere based and therefore cannot be integrated by using LTPA

Still, there are limitations. Even among WebSphere systems, we cannot yet integrate across cells that use different realms or different login attributes. To work around this, next week we will look at further advanced configuration options to customize the content of the SAML token that we send.

Other articles in this series:

John Boyer (IBM)Good news! Free online course "IBM API Economy Solutions" on December 6. We look forward to your registration!



On December 6, IBM China Channel University will hold an online virtual course titled "IBM API Economy Solutions". Everyone is welcome to register!


IBM API Economy Solutions


December 6, 2016, 10:00-11:30



In the API economy, APIs are the digital links between services, applications, and systems. By using APIs, enterprises can make user acquisition, onboarding, and product discovery faster, more efficient, and more scalable; promote collaboration and integration with partners; and become open platforms that make full use of their data to create first-class customer experiences. APIs help enterprises transform, build new ecosystems around existing products, and monetize core assets, services, and products, adding transformative value across vertical industries. By 2018, the API economy is expected to become a 2.2 trillion US dollar global market.





To register, leave a message telling us your name, company, position, mobile number, and email address.



1. After registering successfully, you can join the course before it starts through the link in the registration confirmation email, or log in to the platform directly, find the course under "My Courses", and click the corresponding button to join.

2. If you cannot log in from a computer at the course time, you can download the WebEx app on your phone and listen in there. We will email the download instructions before the course starts.

Recommended resources:

API Management

Implement your API strategy with IBM API Connect


-- Reprinted from developerWorks China (WeChat ID: IBMdWChina)

John Boyer (IBM)Interim Fix for Maximo for Nuclear Power Build 057 now available

The Interim Fix for Maximo for Nuclear Power Build 057 is now available.
IF057 is cumulative of all prior Interim Fixes for Maximo for Nuclear Power.
Here is the location to download this interim fix:

John Boyer (IBM)Interim Fix for Maximo Asset Management Build 006 now available

The Interim Fix for Maximo Asset Management Build 006 is now available.
IF006 is cumulative of all prior Interim Fixes for Maximo Asset Management.
Here is the location to download this interim fix:

