John Boyer (IBM): Informix JDBC Driver on Maven!

I'm happy to announce that the Informix JDBC driver 4.10.8.1 is out on Maven Central!

Using a distribution platform such as Maven, it's now easier than ever to download, upgrade, and utilize the Informix JDBC driver in your applications.  You can bypass the traditional installer and download site and get the driver you need quickly and efficiently.

 

Here is a link to the Maven page for the new driver. The page has examples for many build systems (Gradle, Maven, SBT, and more) showing how to include our driver. You can also download the jar file directly from the site.

http://mvnrepository.com/artifact/com.ibm.informix/jdbc/4.10.8.1

 

The group we use for Maven is 'com.ibm.informix'. This is where we will push out relevant technologies (like the JDBC driver) that make sense to have a home on Maven. Wondering where the ifxjdbc.jar and ifxjdbcx.jar files went? We combined their functionality into a single jar file, so now you can download and manage just one file.
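As a quick illustration, here is a minimal sketch of using the driver from a Java application once Maven has pulled it in (group 'com.ibm.informix', artifact 'jdbc', version '4.10.8.1', per the page above). The host, port, database, server name, and credentials are placeholders you would replace with your own:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InformixJdbcExample {
    public static void main(String[] args) throws Exception {
        // Standard Informix JDBC URL form; myhost, 9088, mydb, and myserver are placeholders.
        String url = "jdbc:informix-sqli://myhost:9088/mydb:INFORMIXSERVER=myserver";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             // DBINFO('version','full') returns the server's version string.
             ResultSet rs = stmt.executeQuery("SELECT FIRST 1 DBINFO('version','full') FROM systables")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

The driver registers itself through the JDBC 4 service-loader mechanism, so no explicit Class.forName call should be needed.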

John Boyer (IBM): Changes to Description fields on ticket or workorder related objects

Maximo allows us to configure the database and manage objects/tables the way our business processes require.

 

Most users need to apply changes to workorder-related objects. However, workorder and ticket records are not used by a single application in Maximo, and other tables may be affected by such a change.

 

You may need to increase the length of the Description field of a workorder, but you want that change to be reflected in all workorder and ticket tables/objects.

 

So, how do we find out which tables/objects are related to the ones our workorders and tickets belong to?

 

There is no fixed list of tables; however, you can get the list of tables related to both the WORKORDER and TICKET services by running the following SQL in your database.

 

SELECT MAXOBJECT.OBJECTNAME FROM MAXOBJECT WHERE MAXOBJECT.SERVICENAME IN ('WORKORDER','TICKET') 
AND MAXOBJECT.OBJECTNAME IN (SELECT MAXATTRIBUTE.OBJECTNAME FROM MAXATTRIBUTE WHERE ATTRIBUTENAME = 'DESCRIPTION')
ORDER BY OBJECTNAME;

 

That returns all Maximo objects that belong to the WORKORDER or TICKET services and have a field called DESCRIPTION in their structure.

 

To prevent issues, I would make sure that the length definitions for all of them match before running any configuration or data generation process.
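As a sketch of that sanity check, the query below lists the current lengths side by side so outliers stand out; it assumes the standard MAXATTRIBUTE.LENGTH column, so verify the column name in your Maximo version:

SELECT OBJECTNAME, LENGTH FROM MAXATTRIBUTE
WHERE ATTRIBUTENAME = 'DESCRIPTION'
AND OBJECTNAME IN (SELECT OBJECTNAME FROM MAXOBJECT WHERE SERVICENAME IN ('WORKORDER','TICKET'))
ORDER BY LENGTH, OBJECTNAME;

Any object whose LENGTH differs from the others is worth reviewing before you apply the change.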

 

 

See ya

John Boyer (IBM): ITCAM4Tx - ISM - LDAP monitor changes Description field value

Using ITCAM for Transactions 7.4 with the ISM agent and the LDAP monitor, you can set up a profile with one or several target servers, as below, and configure the description field with a value of your choice.

 

[Image: ISM LDAP monitor profile configuration]

Now, if you are not seeing the same description values reported in TEP under the ISM "Elements" workspace, but something different, for example:

Default container for orphaned objects

or

Default location for storage of Microsoft application data

see the example below:

[Image: TEP ISM Elements workspace showing overridden description values]

If so, it is very likely that you are encountering a known issue, described in the 7.4.0.0-TIV-CAMIS-IF0023 readme:
http://www-01.ibm.com/support/docview.wss?uid=isg400001970
APAR IV65580 - Element values in the ISM LDAP monitor can be overridden by the retrieved LDAP Objects attributes
The ISM LDAP monitor will overwrite preexisting element values with the value of an LDAP Object attribute if the names of both are the same. For example, if a queried LDAP object has an attribute called "description", the ISM element of the same name will end up with the value of the LDAP object attribute rather than the value specified in the ISM configuration.

The solution is to install the latest fix pack or iFix that includes this APAR. Currently, it is recommended to upgrade your ISM agent to version 07.40.01.10 (7.4.0.1 iFix 0010), as it includes APAR IV65580; follow the ISM version 07.40.01.10 readme instructions.

See: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002826

 

Furthermore, you then need to manually edit the following file on your ISM agent:

<ISM_HOME>/etc/props/ldap.props

and add a new line to it, for example:

LDAPObjectPrefix  : "LDAP_"

Save the edited file and restart the ISM agent.

 

Without such a line, the APAR fix is ineffective, and by default you will see in ldap.log:

Debug: LDAP Object Prefix set to ""

 

To resolve this issue, the new code honors an LDAPObjectPrefix property whose value is prepended to the names of LDAP Object attributes; by default it is empty. With the prefix set to "LDAP_", for example, a retrieved "description" attribute would be reported with a prefixed name and would no longer collide with the description element configured in ISM.

 

Remark: If you have custom rules files that directly use the names of LDAP Object attributes, the use of the LDAPObjectPrefix property will require you to re-validate and possibly update those custom rules files.

 

 


 

John Boyer (IBM): Service news: HiperSockets performance in z/VM guests

There is a z/VM APAR available that resolves performance issues when using HiperSockets connections for Linux environments.
I believe that the corresponding PTF may be beneficial for z/VSE guests too, especially if they connect to Linux guests via HiperSockets.

 

z/VM APAR VM65992: HIPERSOCKETS PERFORMANCE ISSUES ON SHORT BUSY

Error description: A virtual machine running Linux that uses a dedicated HiperSockets device and exploits QDIO Enhanced Buffer State Management (QEBSM) may experience slow performance in a very highly contested LPAR-to-LPAR communication environment. This is an environment where multiple HiperSockets data transmissions occur simultaneously to the same set of QDIO queues. It is this contested environment which increases the likelihood of a HiperSockets device presenting a short busy condition on a Signal Adapter (SIGA) instruction issued by a program (the Linux guest in this case). It is the occurrence of a HiperSockets short busy which opens an error window that may cause the problem to occur. Running with QIOASSIST OFF increases the likelihood of seeing this problem.

 

John Boyer (IBM): Replay webcast "Leveraging IBM z Systems Scalability for SAP for Insurance" available

The replay of our webcast "Leveraging IBM z Systems Scalability for SAP for Insurance" is now available: https://ibm.box.com/s/9y7q0rx74pco41ipq5pde541el3n499l
 
You will find three media types there:

Audio only (mp3): 21.6 MB
Video replay (mp4): 64.4 MB
Presentation slide deck (PDF): 2.7 MB

 

For further questions, you are welcome to either comment here or address the speakers directly.

Jeremy Keith (Adactio): Variable fonts

We have a tradition here at Clearleft of having the occasional lunchtime braindump. They’re somewhat sporadic, but it’s always a good day when there’s a “brown bag” gathering.

When Google’s AMP format came out and I had done some investigating, I led a brown bag playback on that. Recently Mark did one on Fractal so that everyone knew how work on that was progressing.

Today Richard gave us a quick brown bag talk on variable web fonts. He talked us through how these will work on the web and in operating systems. We got a good explanation of how these fonts would get designed—the type designer designs the “extreme” edges of size, weight, or whatever, and then the file format itself can extrapolate all the in-between stages. So, in theory, one single font file can hold hundreds, thousands, or hundreds of thousands of potential variations. It feels like switching from bitmap images to SVG—there’s suddenly much greater flexibility.

A variable font is a single font file that behaves like multiple fonts.

There were a couple of interesting tidbits that Rich pointed out…

While this is a new file format, there isn’t going to be a new file extension. These will be .ttf files, and so by extension, they can be .woff and .woff2 files too.

This isn’t some proposed theoretical standard: an unprecedented amount of co-operation has gone into the creation of this format. Adobe, Apple, Google, and Microsoft have all contributed. Agreement is the hardest part of any standards process. Once that’s taken care of, the technical solution follows quickly. So you can expect this to land very quickly and widely.

This technology is landing in web browsers before it lands in operating systems. It’s already available in the Safari Technology Preview. That means that for a while, the very best on-screen typography will be delivered not in eBook readers, but in web browsers. So if you want to deliver the absolute best reading experience, look to the web.

And here’s the part that I found fascinating…

We can currently use numbers for the font-weight property in CSS. Those number values increment in hundreds: 100, 200, 300, etc. Now with variable fonts, we can start using integers: 321, 417, 183, etc. How fortuitous that we have 99 free slots between our current set of values!

Well, that’s no accident. The reason why the numbers were originally specced in increments of 100 back in 1996 was precisely so that some future sci-fi technology could make use of the ranges in between. That’s some future-friendly thinking! And as Håkon wrote:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future. And the future is now :)

Needless to say, variable fonts will be covered in Richard’s forthcoming book.

John Boyer (IBM): After LDAP is configured, users still get the Maximo login page

Some users have reported that, after following the steps to configure SSO, they still get the Maximo login page.

The document used for this configuration is this one:

https://www.ibm.com/developerworks/community/blogs/a9ba1efe-b731-4317-9724-a181d6155e3a/entry/How_to_configure_SSO_Single_Sign_On_with_Maximo_7_6_Part_1?lang=en

If this happens to you, in some cases this kind of scenario can be resolved by following the steps below in the WebSphere Application Server console of your Maximo application server.

1. In the WebSphere console, on the left navigation menu, click Security.
2. Click Security Domains.
3. Click ctgDomain and switch SPNEGO Web Authentication from Customize (which was set by default during the 7.6 install) to Use global security settings.

Good Luck and Thank you! :)

John Boyer (IBM): ITCAM4Tx - Linux ISM agent does not start with ITM 6.3 fp7

ITCAM for Transactions, with the ISM agent, was installed and running on a Linux system with ITM 6.3 FP6. Then ITM 6.3 FP7 was installed, and the ITM framework for the 32-bit architecture was upgraded to 6.3 FP7 as well.

 

Now the ISM agent doesn't start, or the start is incomplete and the agent node never appears online in TEP.

 

Symptoms:

1) The agent doesn't start. The command:

'itmcmd agent start is'

fails with:

Starting ITCAM for Transactions: Internet Service Monitoring ...
*** Error in `/opt/IBM/ITM/lx8263/is/platform/linux2x86/bin/kisagent':
free(): invalid pointer: 0x095e0608 ***
======= Backtrace: =========
/usr/lib/libc.so.6(+0x7571d)[0x55c3071d]
/usr/lib/libstdc++.so.6(_ZdlPv+0x1f)[0x56e0197f]
/usr/lib/libstdc++.so.6(_ZNSs4_Rep10_M_destroyERKSaIcE+0x1b)[0x56e68ecb]
/opt/IBM/ITM/lx8263/is/../../li6243/gs/lib/libgsk8cms.so
(_ZN9GSKStringD1Ev+0x6b)[0x57947b8b]
/opt/IBM/ITM/li6243/gs/lib/libgsk8ssl.so(+0xcfab9)[0x57575ab9]
/opt/IBM/ITM/li6243/gs/lib/libgsk8ssl.so
(gsk_attribute_set_buffer+0x3da4)[0x5753ee94]

 

 

2) The agent does start with the command:

'itmcmd agent start is'

but while 'ps -ef | grep kis' shows the "kisagent" process running, the ISM node never appears online in TEP.

 

3) <ISM_HOME>/log/kisagent.log simply ends with these lines:

(58A310EB.0016-1:kdebenc.c,364,"ssl_provider_constructor") TLS 1.0 protocol enabled
(58A310EB.0017-1:kdebenc.c,393,"ssl_provider_constructor") TLS 1.1 protocol enabled
(58A310EB.0018-1:kdebenc.c,416,"ssl_provider_constructor") TLS 1.2 protocol enabled
(58A310EB.0019-1:kbbssge.c,72,"BSS1_GetEnv") KDEBE_KEY_LABEL=GSK_KEY_LABEL="IBM_Tivoli_Monitoring_Certificate"
(58A310EB.001A-1:kbbssge.c,72,"BSS1_GetEnv") KDEBE_KEYRING_FILE=GSK_KEYRING_FILE="/opt/IBM/ITM/keyfiles/keyfile.kdb"
(58A310EB.001B-1:kbbssge.c,72,"BSS1_GetEnv") KDEBE_KEYRING_STASH=GSK_KEYRING_STASH="/opt/IBM/ITM/keyfiles/keyfile.sth"

 

4) For reference, here is what cinfo reported just after I upgraded my agent's Linux system from 6.3.0.6 to 6.3.0.7:
PC   PRODUCT DESC                                              PLAT    VER           BUILD           INSTALL DATE

57   Monitoring Agent for IT_SSLCheck                      lx8266  06.31.00.01   201612201254    20161220 1457
84   Monitoring Agent for IT_IPRoute                           lx8266  06.22.00.00   -               20050823 2253
89   Monitoring Agent for IT_CommonOutScript          lx8266  06.30.00.03   201612201256    20161220 1502
ax   IBM Tivoli Monitoring Shared Libraries          lx8263  06.23.05.00   d4099a          20161124 1341
ax   IBM Tivoli Monitoring Shared Libraries                   lx8266  06.30.07.00   d6350a          20170214 1206
gs   IBM GSKit Security Interface                            li6243  07.40.47.00   d4029a          -
gs   IBM GSKit Security Interface                                 lx8266  08.00.50.69   d6276a          -
is   ITCAM for Transactions: Internet Service Monitoring       lx8263  07.40.01.10   0327            20161124 1343
jr   Tivoli Enterprise-supplied JRE                                lx8266  07.09.50.00   201609291010    -
lz   Monitoring Agent for Linux OS                                lx8266  06.30.07.00   62801           20170214 1206
r4   Agentless Monitoring for Linux Operating Systems lx8266  06.30.07.00   201610181737    20170214 1206
ue   Tivoli Enterprise Services User Interface Extensions lx8266  06.30.07.00   d6350a          20170214 1206
ui   Tivoli Enterprise Services User Interface                lx8266  06.30.07.00   d6350a          20170214 1206


Then I used this command to upgrade the ax and gs 32-bit components to 6.3 FP7:
./install.sh -h /opt/ISM74 -q -p /opt/eric/ITM6307/6.3.0-TIV-ITM_TMV-Agents-FP0007/unix/tflx8263.txt

Now cinfo reports this:
PC   PRODUCT DESC                                              PLAT    VER           BUILD           INSTALL DATE

57   Monitoring Agent for IT_SSLCheck                      lx8266  06.31.00.01   201612201254    20161220 1457
84   Monitoring Agent for IT_IPRoute                           lx8266  06.22.00.00   -               20050823 2253
89   Monitoring Agent for IT_CommonOutScript         lx8266  06.30.00.03   201612201256    20161220 1502
ax   IBM Tivoli Monitoring Shared Libraries         lx8263  06.30.07.00   d6350a          20170214 1215
ax   IBM Tivoli Monitoring Shared Libraries                  lx8266  06.30.07.00   d6350a          20170214 1206
gs   IBM GSKit Security Interface                           li6243  08.00.50.69   d6276a          -      
gs   IBM GSKit Security Interface                                lx8266  08.00.50.69   d6276a          -      
is   ITCAM for Transactions: Internet Service Monitoring  lx8263  07.40.01.10   0327            20161124 1343
jr   Tivoli Enterprise-supplied JRE                               lx8266  07.09.50.00   201609291010    -      
lz   Monitoring Agent for Linux OS                               lx8266  06.30.07.00   62801           20170214 1206
r4   Agentless Monitoring for Linux Operating Systems  lx8266  06.30.07.00   201610181737    20170214 1206
ue   Tivoli Enterprise Services User Interface Extensions lx8266  06.30.07.00   d6350a          20170214 1206
ui   Tivoli Enterprise Services User Interface               lx8266  06.30.07.00   d6350a          20170214 1206

 

Origin of the problem:

The ISM agent is a 32-bit agent and is currently built to use only GSKit v7.

After the 32-bit 6.3 FP7 framework was installed, the GSKit v7 binaries and libraries were replaced by GSKit v8, breaking the ISM agent.

 

Possible solutions:

1) From a valid, not-yet-upgraded ISM agent Linux system, back up the whole content of GSKit v7, located in the <ITM_HOME>/li6243/gs directory (or similar), using:

$> cd li6243

$> tar -cvpf gs-v7.tar ./gs

 

Then restore it on the broken ISM agent system:

$> cd li6243

$> rm -rf gs (to remove GSKit v8)

$> tar -xvpf gs-v7.tar (to restore GSKit v7)

 

2) Uninstall the ISM agent from the current ITM home directory, where the other ITM components were laid down, and install the ISM agent into a separate directory.

 

3) The IBM development team is currently working on a longer-term solution.

 

Remark: So far, this issue has been seen only on Linux, not on AIX or Windows.

 

 

 


 

John Boyer (IBM): EOTC 2017 - Conference Announcement - Register Now!

Dear Friends of System Automation,

 

This is your official invitation to the European Operations Technical Conference (EOTC) 2017.

 

The conference is planned from Monday, May 15, through Thursday, May 18, 2017, at the IBM Lab in Boeblingen, Germany. The conference language will be English.

 

Remember that the conference will last half a day longer than last year: it begins on Monday at 1 PM and ends on Thursday at noon.

This conference addresses the needs of system, network, and automation administrators. Sessions are presented by representatives from IBM software development teams and YOU, the product exploiters. Interaction and feedback are both expected and encouraged.

Topics cover a range of IBM Systems Management products, mainly IBM System Automation z/OS, the IBM Service Management Suite for z/OS and IBM NetView for z/OS. Other topics might be GDPS, OMEGAMON, z/OS Automation Infrastructure, IBM Workload Scheduler and the IBM Systems Management strategy.

 

Additionally, we plan to provide workshops running in parallel with the presentations.

Discussions can include product futures and requirements.

 

If you are interested in special topics in the area of IBM System Automation z/OS, the IBM Service Management Suite for z/OS, and IBM NetView for z/OS, please send me a mail (gfrey@de.ibm.com) before the end of February with the topics you would like to see addressed. I will try to get them covered at the conference.

 

As last year, we will again require all attendees to sign a non-disclosure agreement. This allows all speakers to talk freely about product futures and plans.

 

We expect a conference fee of €350.00 for the entire conference. The fee will be charged to all attendees (except speakers!).

Non-GSE members pay an additional €100.00, but if they decide to become a member in 2017, this amount will be credited.

Travel, lodging and other expenses must be covered by the attendees (including speakers).

 

Please forward this announcement to others within IBM, and to customers and colleagues whom you deem appropriate.

 

Register now on https://gse.paxido.com/EOTC-2017

 

For more information, contact:

Gabriele Frey-Ganzel

IBM Laboratory Boeblingen, Germany

System Automation Development z/OS

e-Mail: gfrey@de.ibm.com

John Boyer (IBM): Using "Total Pending" attribute to monitor mail router health

Last week I was asked to investigate an issue with the Domino agent returning unexpected data.
A customer was complaining about wrong values returned for the "Total Pending" attribute in the "Domino Mail" attribute group.

For most of the server rows, this value did not match the value returned for "Pending Mail" by the "show server" command, while the "Dead" attribute was consistent in both TEP and the "show server" output.

After setting specific traces, I noticed that the "Total Pending" attribute, like the other attributes in the same attribute group, is retrieved directly from the Domino server statistics; the agent does not perform any operation on it.

The data shown in TEP for "Total Pending" is consistent with the information retrieved directly from the Domino server statistics.
"Total Pending" is obtained from the Mail.TotalPending field in the Domino server statistics.
It includes both Dead and Held mail.

So if you have a high value for Dead mail, you will usually also have a high value for Total Pending.
Mail.TotalPending is updated by the server task to reflect the current number of messages that are pending delivery.
Can we rely on it to monitor pending mail?
The drawback to using Mail.TotalPending is that it is updated by the server task only every five minutes, so its value can become "stale" and at times may not reflect the current sum of Dead plus Held mail.

So generally speaking, you cannot compare the "Pending Mail" value returned by show server with "Total Pending".

Looking at the "Pending Mail" field, it is defined as: "Number of mail documents waiting to be routed to other servers and users."

In the Domino agent, the Waiting attribute is the one that most closely matches Pending Mail, as it is defined as "The number of outgoing mail messages currently in MAIL.BOX waiting."

If you want to evaluate the reliability of the values shown in TEP, instead of using "show server" you can run a "show stat mail".

In this way you can check whether there is really something wrong with the data shown by the Domino agent.
If you want to keep pending mail under control, you must decide whether to also consider dead mail.
In that case you can use the Total Pending attribute, but remember that this value is updated by the Domino server every 5 minutes.
If instead you don't want to take dead mail into account, use the "Waiting" attribute in your ITM situations.

Some additional considerations from the Domino side:
"Mail.TotalPending" may be more reliable than "Mail.Waiting".
Mail.Waiting is dependent on the router task; if the router is not running, Mail.Waiting is not updated.
As previously noted, Mail.TotalPending is updated by the server task to reflect the current number of messages pending delivery; the drawback is that it is updated only every five minutes.

Mail.Dead represents mail that cannot be delivered and could not be returned to the sender.
Mail.Hold is mail that is being held pending delivery to an external site.
Mail.TotalPending count includes both Dead and Held mail, while the Mail.Waiting figure does not include these items.
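Putting those definitions together gives a rough rule of thumb (approximate, and not taken from the product documentation; it also drifts because the statistics are refreshed on different schedules):

Mail.TotalPending ≈ Mail.Waiting + Mail.Dead + Mail.Hold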
 
The absence of Pending/Waiting mail (i.e. Mail.TotalPending=0, Mail.Waiting=0) is an indication of a healthy mail router.

Hope this helps.

 

 


John Boyer (IBM): DB2 license expired after 90 days for IBM IoT CLM, RQM, RTC, RDM, JTS, RELM, IBM DOORS Next Generation – what can you do?

Each valid client license of

CLM, RQM, RTC, RDM, JTS, RELM, or IBM DOORS Next Generation

includes a license to run one instance of DB2 Workgroup Server Edition (excludes z/OS).

Should you get a message that your DB2 license has expired, you might have installed a trial license (for example, a trial license for DB2 Enterprise edition).

You can remedy this by extracting the license file for the bundled DB2 Workgroup Server edition.

 

Please follow the steps provided in CLM, RQM, RTC, RDM, JTS, RELM, IBM DOORS Next Generation – DB2 license expired after 90 days, what can I do?

John Boyer (IBM): IBM Machine Learning

We announced IBM Machine Learning last week; see here and here for event replays. I was interviewed as part of the launch. A good write-up of what I said can be found on Silicon Angle. You can find the video of the interview at the end of this post.

This video has been shared on social media under various titles, but the one that got the most impact is: The evolution of : fusing human thought with algorithmic insights. That is probably because the interview contains a discussion of AI's potential dangers. Our take at IBM, and my take, is that we do not care much about artificial intelligence if the goal is to reproduce human cognition in order to replace it. Our take is to work on tools that help humans perform their tasks better. We speak of augmented intelligence, or assisted intelligence. Machine learning, as one of the prominent artificial intelligence capabilities, is no exception.

I also discuss other, more near-term topics around machine learning and the forthcoming IBM offering for it. Here it is if you want to watch the full interview (about 18 minutes):

 

Video: https://www.youtube.com/embed/vsxigYYHrTo?ecver=1

John Boyer (IBM): What Connections Cloud Goody are you waiting for?

I use a "draft" and a "published" folder in Connections to progress assets through their creation, approval, and publishing. I was SO pleasantly surprised the day I discovered, due to an irresistible urge to try just in case, that I can drag & drop files between folders! Pinch me, I am not dreaming! This was delivered in 2016. It had missed my radar, unlike the ability to assign an activity task to many people or the ability to change the title of a wiki page.

 

Just in case you are like me and may have missed some of the goodies delivered, here is a compilation of the Connections Cloud enhancements delivered in 2016.

Become a member and you will be able to download the higher-quality PDF version with active links to the relevant feature announcements.

[Image: compilation of Connections Cloud enhancements delivered in 2016]

John Boyer (IBM): Automatic Binary Optimizer for z/OS V1.2 messages available in IBM Doc Buddy

You can now download the Automatic Binary Optimizer for z/OS, V1.2 messages from IBM Doc Buddy, and then use them locally. Simply take the following steps:

  1. In IBM Doc Buddy, click the menu icon in the top left corner and go to the navigation pane, and then select "Components".
  2. From the "Components" list, choose "Automatic Binary Optimizer for z/OS". You can see that the V1.2 component is available.

  [Image: the Components list showing Automatic Binary Optimizer for z/OS V1.2]

  3. Click the download icon.
  4. After the download is complete, you can search the messages locally.

 

For an overview of IBM Doc Buddy, see https://www.ibm.com/developerworks/community/blogs/31c890c6-ace1-4eeb-af6b-5950f3a1a5d1/entry/IBM_Doc_Buddy_Instant_help_with_ABO_error_messages_right_from_your_smart_phone?lang=en.

John Boyer (IBM): New Event! Open Mic Webcast: Hardware discovery information with ILMT - Tuesday, 28 February 2017

Hi All,

 

I'm happy to announce our second Open Mic, which will be held on 28 February, about "Hardware discovery information with ILMT".

Further details and the meeting invitation can be downloaded from this link:
http://www-01.ibm.com/support/docview.wss?uid=swg2704938
 

I look forward to meeting you all at this event.
Thank you in advance for your participation.

John Boyer (IBM): DB2 Sort CBPDO installation and common IVP job issues

Intended audience: readers with a year or more of mainframe experience

Background knowledge: basics of SMP/E, DB2 for z/OS, JCL, and USS

IBM DB2 Sort for z/OS accelerates the sort processing that DB2 for z/OS utilities perform on data stored in the database. By improving sort technology, tuning system resources, and optimizing sorts, DB2 Sort improves DB2 utility performance and can greatly reduce sort CPU time. When DB2 Sort is not used, DB2 defaults to the DFSORT software provided with z/OS. DB2 Sort can improve sort performance for the following utilities:

CHECK DATA

CHECK INDEX

CHECK LOB

LOAD

REBUILD INDEX

REORG TABLESPACE

RUNSTATS

 

CBPDO (Custom-Built Product Delivery Offering) is a prepackaged software installation package, also called an electronic installation package. Unlike earlier delivery methods, it requires no physical media such as tape or optical disc. A CBPDO product, including its fixes, is installed individually through SMP/E.

Part 1. Installing the DB2 Sort CBPDO

To install DB2 Sort, first unpack the CBPDO archive to obtain the accompanying instructions, which explain how to upload the package to the host and unpack it there; once the archives are unpacked on the host, the product is installed through SMP/E with the related sample jobs. In the DB2 Sort package, the relevant document is ePDO_checklist_v6.pdf; page 4 explains the FTP upload process in detail (the upload must be done from a Windows system):

C:\> ftp <hostaddress>

User (hostaddress:(none)): <tsouid>

331 Send password please.

Password:

230 tsouid is logged on. Working directory is "tsouid.".

Prompt off

Interactive mode Off

mkdir /smpnts/<ordernumber>

cd /smpnts/<ordernumber>

bin

mput packagelocation\<ordernumber>\*.*

mkdir /smpnts/<ordernumber>/SMPHOLD

cd /smpnts/<ordernumber>/SMPHOLD

bin

mput packagelocation\<ordernumber>\SMPHOLD\*.*

mkdir /smpnts/<ordernumber>/SMPPTFIN

cd /smpnts/<ordernumber>/SMPPTFIN

bin

mput packagelocation\<ordernumber>\SMPPTFIN\*.*

mkdir /smpnts/<ordernumber>/SMPRELF

cd /smpnts/<ordernumber>/SMPRELF

bin

mput packagelocation\<ordernumber>\SMPRELF\*.*

quit

 

After all the files are uploaded, submit a JCL job that runs the GIMUNZIP program to unpack the pax files.

Reference example:

//GIMUNZIP JOB ,CLASS=B,MSGCLASS=H,REGION=0M,MSGLEVEL=(1,1),

//    NOTIFY=&SYSUID                                        

//STEP1    EXEC PGM=GIMUNZIP,PARM='HASH=YES'               

//SMPDIR   DD PATH='/ist/smpnts/db2sort/',PATHDISP=KEEP    

//SMPCPATH DD PATH='/usr/lpp/smp/classes/',PATHDISP=KEEP   

//SMPJHOME DD PATH='/usr/lpp/java/java800/',PATHDISP=KEEP  

//SMPOUT   DD SYSOUT=*                                     

//SYSPRINT DD SYSOUT=*                                     

//SYSUT3   DD UNIT=SYSDA,SPACE=(CYL,(50,10))               

//SYSUT4   DD UNIT=SYSDA,SPACE=(CYL,(25,5))                 

//SYSIN    DD *                                            

<GIMUNZIP>                                                 

  <ARCHDEF name="S0002.CSP.OSP67093.DOCLIB.pax.Z"          

           replace="YES"                                    

           newname="IST.DB2SORT.V2R1.OSP67093.DOCLIB">     

  </ARCHDEF>                                               

  <ARCHDEF name="S0003.CSP.OSP67093.RIMLIB.pax.Z"          

           replace="YES"                                   

           newname="IST.DB2SORT.V2R1.OSP67093.RIMLIB">     

  </ARCHDEF>                                         

  <ARCHDEF name="S0005.CSP.OSP67093.PGMDIR.pax.Z"    

           replace="YES"                             

           newname="IST.DB2SORT.V2R1.OSP67093.PGMDIR">

  </ARCHDEF>                                         

</GIMUNZIP>                                          

/*                                                   

 

After all the archives are unpacked, you need to:

  1. Submit the SMP/E RECEIVE job to receive the product and its fixes into the global zone. Before receiving, if a new CSI is needed, the corresponding sample jobs CNKALA/B can be found in the unpacked PDS **.F7; the RECEIVE sample job is CNKRECEV.
  2. Allocate the target and distribution libraries; the sample job is **.F7(CNKALLOC).
  3. Create DDDEF entries in the target and distribution zones; the sample job is **.F7(CNKDDDEF).
  4. Submit the SMP/E APPLY job; before the APPLY, you can run APPLY CHECK first; the sample job is **.F7(CNKAPPLY).
  5. Submit the SMP/E ACCEPT job; before the ACCEPT, you can run ACCEPT CHECK first; the sample job is **.F7(CNKACCEP).
  6. Submit the SMP/E REPORT CROSSZONE job. This job covers product content installed in separate zones and creates APPLY and ACCEPT jobs in the SMPPUNCH library, through which the cross-zone content can be installed; the sample job is **.F7(CNKACCEP).

 

Part 2. Common DB2 Sort IVP job issues

I ran into the following two issues when submitting the IVP jobs:

  1. Submitting the IVP job produced abend S04E, with the following errors in the job log:

CNK998E CNKIVPAB,#CNKLD2B,IVP     -  UNSUCCESSFUL SORT 13E S

CNK526E  DB2 SORT FOR Z/OS INTERNAL ABEND -   A4

 

On inspection, the JCL specified DYNALLOC=OFF. I could not find an explanation of this parameter in the related books and other materials; according to the JCL comments, it appears to be a switch controlling whether some diagnostic information is generated dynamically. Changing it to DYNALLOC=ON resolved the problem.

//*----------------------------------------------------------           01050000

//$DB2PRM$ EXEC PGM=IEBGENER,COND=(4,LT)  DB2 SORT DIAGNOSTIC MESSAGES  01060000

//SYSUT1   DD  *                                                        01070000

GMSG,DYNALLOC=OFF

 

Changing this to DYNALLOC=ON resolved the issue.

 

2. During customization, if SCNKLINK and SCNKLPA are added to the LNKLST and the APF list, utility jobs need no JCL changes: with the DB2 ZPARM DB2SORT set to ENABLE, submitted jobs use DB2 Sort directly. If they are not in the LNKLST and APF list, these two data sets must be added to the STEPLIB. When DB2 Sort is used, the job log contains messages like:

DSNU3352I   040 11:14:20.67 DSNUGSRP - SORT TASK SW01: USED DB2 SORT OPTMODE=ELAP

......

1 DB2 SORT FOR Z/OS  V2.1.0.0    PRODUCT ID: 5655-AA9    z/OS   2.1.0

......
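If you take the STEPLIB route instead, the DD statements would look something like this sketch, where HLQ is a placeholder for your DB2 Sort high-level qualifier and the lines are added to your utility job's STEPLIB concatenation:

//STEPLIB  DD DISP=SHR,DSN=HLQ.SCNKLINK
//         DD DISP=SHR,DSN=HLQ.SCNKLPA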

 

When I submitted the IVP job, the data volume was large and no maximum had been configured for dynamically allocated sort data sets, so the job failed with CNK046E (sort capacity exceeded). I manually removed some of the earlier steps of the IVP job and resubmitted it, and found that the second run did not use DB2 Sort at all; it used DFSORT instead.

On investigation, the IVP job compares DB2 Sort against DFSORT. It therefore contains DIAGNOSE TYPE(134)/DIAGNOSE TYPE(133) statements to force the utility to use either DB2 Sort or DFSORT: 134 forces DB2 Sort, and 133 forces DFSORT. If you find that an IVP job is not invoking DB2 Sort, check whether DIAGNOSE TYPE(133) is in effect.

Reference example:

//* DB2 LOAD REPLACE WITHOUT DB2 SORT                                   03660000

//SYSIN    DD  *                                                        03670000

DIAGNOSE TYPE(133)                                                      03680000

LOAD DATA INDDN(INPUT) LOG NO RESUME(NO) REPLACE REUSE SORTDEVT SYSDA   03690000

     EBCDIC                                                             03700000

     INTO TABLE DB2SORT.CNKTBLN2

 

 

References:

IBM DB2 Sort for z/OS User's Guide(Version 2 Release 1)

Program Directory for IBM DB2 Sort for z/OS(V02.01.00)

 

Author: Wang Dian

Email: wangdianATcn.ibm.com (replace AT with @)

Disclaimer: This article represents only the author's personal views and is unrelated to IBM's positions, strategies, or opinions. Owing to translation, the technical terms used may vary in expression; in case of doubt, the English terms prevail. The data come from a lab environment and are for reference only. If you are interested in our topics, please contact us by email.

John Boyer (IBM): Have fun with Hyperledger and enjoy app development - blockchain hackathon, registration now open!

 

 

[Image: blockchain hackathon announcement poster]

John Boyer (IBM): 9 sessions by IBM Collaboration Services & Support to catch at Connect 2017

Moving Your Mail to the Cloud 101

If you're interested in moving your email platform to IBM Verse and don't know how to get help, join us as we take you through the key topics in cloud migration. This session will introduce you to migration tools such as MOM, cloud configurations such as service-only and hybrid, and migration and services offerings such as IBM's Migration Part Numbers. We'll arm you with the knowledge you need to start planning your migration. Whether you're migrating from Domino, Exchange, Google or other mail platforms, we'll describe the migration options so you can make sense of it. It's a session not to be missed. Speaker: Stuart McKay, IBM

Tuesday, 8:00 AM - 8:45 AM | Room 2001
Tuesday, 2:30 PM - 3:15 PM | Room 2011

 

Your Mail Is in the Cloud. What About Your Apps?

You've moved hundreds (or thousands!) of users to the cloud for IBM Verse mail, but what about the hundreds (or thousands!) of IBM Domino applications you're running in-house? How many servers could you retire with those apps running in the cloud? In this session, you'll walk through the entire application migration process with two experts, from the provider-selection process to the final data moves and application maintenance cycle. You'll learn how to identify potential difficulties with specific applications, estimate time requirements, and use specific tools to make your migration easier; you'll also see real-world examples from among your peers who have already made the move. Be ready when the boss asks, "What about our apps?"  Speakers: Brad Boston, IBM and Matt Holthe, IBM

Tuesday, 9:00 AM - 9:45 AM | Room 2001 | Session ID: 1083A

 

Expertly Moving IBM Connections to the Cloud

Drawing on the recent experiences of three clients' migrations from on-premises Connections to Connections Cloud, an experienced onboarding coordinator and two customers will share the process of successfully moving IBM Connections from on premise to the cloud. Hear about the expert's logic behind the decisions, and learn about the outside-in view on this experience. We'll cover planning, best practices, gaps, challenges, training and communications, as well as some technical stories. There will be a brief technical overview of how the process works. With clients on the panel, there will be time for a discussion to get the low down on the most pressing questions. Speakers: Gideon Sheps, IBM Canada Ltd; Lynda Burrows, Commonwealth Bank; Abby Butts, Hendricks Regional Health

Wednesday, 8:00 AM - 8:45 AM | Room 2009 | Session ID: 1125A

 

So, You're Going to the Cloud - Start Preparing NOW!

There's a lot of nuts-and-bolts work involved in preparing for a cloud migration, and some of it may not be obvious. There's work to be done in identity management, network design, security and many other areas of your enterprise; some tasks can be accomplished in a few days, but others may take weeks. Join two of IBM's senior troubleshooters to explore the most critical areas in which you must prepare for a successful migration. You'll see where your peers encountered unexpected problems AND the means by which those problems were resolved. Along the way, you'll learn how IBM Support is ready to assist you BEFORE you "go live." In short, you'll leave this session equipped to avoid the most common pitfalls in cloud migration.  Speakers: Wes Morgan, IBM and Casey Toole, IBM

Wednesday, 4:00 PM - 4:45 PM | Room 2006 | Session ID: 1620A

 

IBM Verse - Everything You Need to Know for a Successful Migration

Are you a customer considering a move to IBM Verse? Whether a long time Domino customer, or moving from another platform, don't miss this opportunity to learning about the techniques, tools and best practices to make your migration fast, painless and successful. We'll uncover and share the best practices you can only get from countless global migrations to IBM's Verse cloud. We'll take you through planning, migrating, and deployment best practices. We'll help you choose the right tools for your migration, and uncover some of the low hanging fruit to make your transition successful. Whether you're a veteran administrator or manager, or just want to understand what your company can expect, this session will help you move with success. Speakers: Stuart McKay, IBM and Luis Guirigay, IBM

Wednesday, 2:00 PM - 2:45 PM | Room 2008 | Session ID: 1064A

OpenNTF Domino API (ODA): Super-Charging Domino Development

After Lotusphere 2013 a community initiative began to make Java and SSJS development easier. Four years later, it's much bigger, some features have been incorporated into XPages Extension Library and ODA is now a plugin within Verse On Premise. Hear from one of the original ODA developers, Paul Withers, and one of the VOP developers, Stephan Wissel. Find out how it helps Domino development in XPages or plugins, how it adds out-of-the-box multi-threading for background tasks, how it turns Domino from just NoSQL to a multi-model database by adding Graph access, and how the latest enhancement, EventSubscribers, listens for certain actions at server level and acts upon them. Domino development will never be the same.  Speakers: Paul Withers, Intec Systems Ltd and Stephan Wissel, IBM

Wednesday, 2:00 PM - 2:45 PM | Room 2020 | Session ID: 1487A

 

Born in the Cloud Collaboration Platform for Telcos, System Integrators & CSPs: Digital China Story

IBM Social Lab Services SaaS Cloud solution - Webmail for internet service providers (WISPR) is landing in IBM China's biggest Social partner Digital China's Public Cloud. WISPR will provide email services together with OA and collaboration services to 100,000+ Small and Medium Enterprises all over China. DigitalChina Cloud Company Yunke Eco Center's GM - Ms. Li will discuss her view of Chinese SaaS cloud market, Yunke's continuous strategic plan of cloud business in China, and how WISPR helps with them to achieve the goal. The session will be especially interesting for CEO/CIO/CTO of cloud operation companies and consuming companies of SaaS services, and LOB who focusing on Chinese cloud market and potential opportunities.  Speakers: Stuart McKay, IBM and Jing Li, Digital China (China) Limited

Wednesday, 3:00 PM - 3:45 PM | Room 2008 | Session ID: 1190A

 

Collaboration for Employees on the Go

Social collaboration brings all company employees to a common platform for sharing data, files, content etc.  Some companies have a predominant field force so a mobile or tablet is the choice of device through which they access the platform.  To address the challenges of using Connections from mobiles, we have come up with innovative solutions for clients in Banking and Telecom.  The solution uses push to send SMS and provide alerts, offline app and augments the Connections Mobile App to provide access from devices.  We will provide business scenarios of how this can help the day job for a field employee, increase adoption by getting more users to access Connections, and also provide offline access to information when data bandwidth is low. Speakers: Amol Dhondse, IBM and Krishnakumar Bala, IBM

Wednesday, 8:00 AM - 8:45 AM | Room 2008 | Session ID: 1379A

 

Context Driven Cognitive Collaboration

Collaboration networking platforms today are evolving, tapping into a collaborative mindset that continues to drive innovation; they are becoming unique touch points to engage communities, start conversations, recruit skillful employees, and develop innovative new ideas. Enterprises are explicitly engaging their employees in community conversations to tap into their brainpower and energy. Enterprises are also engaging customers, business partners, and ecosystem members to learn and to offer better products and services by embracing collaboration and community-based networks. Speakers: Amol Dhondse, IBM and Krishnakumar Bala, IBM

Wednesday, 11:00 AM - 11:45 AM | Room 2020 | Session ID: 1495A


 

John Boyer (IBM): Maximo 7.6 Multitenancy Login

Are you new to Maximo Multitenancy?

Have you tried to log in with the usual administrator user maxadmin and gotten the following error?

[Image: Maximo login error message]

With Maximo Multitenancy installed, there is a different way to log in for administrators.

On the login page, you will need to click the "Administrator Login" link above the language selection.
The "Tenant" field will then appear.
By default, the tenant code is MTM.

[Images: Administrator Login link and Tenant field]

You should now be able to log in as maxadmin successfully.

 

John Boyer (IBM): 4 Ways Big Data is Enhancing the Streaming Experience


In every field that involves millions of users or customers, massive quantities of data are generated every second. Big Data analytics help manage all that data, make sense of it, and use the insights gained from it to deliver better products and services to users. TV and movie streaming services with millions of users are a prime example of where Big Data is heavily utilized. Right this very moment, Big Data is being used to enhance your streaming experience in ways that will amaze you.
 

Improved TV Content

Streaming services are investing heavily into producing original content. They understandably want to ensure that their investment pays off handsomely. To make this happen, they monitor the behavior of their users. For the present generation of TV viewers and streaming users, the gap between the virtual world and the physical world is blurred.

For instance, many of us have a habit of enjoying our entertainment on TV, while simultaneously using our smartphones to socialize virtually. So, whenever we like some aspect of a show, or dislike it, we immediately share our opinions with the others via Facebook, Twitter, or discussion threads on blogs.

Content producers that are monitoring our online behavior immediately know our opinions. By knowing the opinions of millions of TV viewers almost instantly after a show has been aired, they are able to take necessary steps to better the content on the next episode of the show.

This conversation between the audience and the content creators is happening in real time, thanks to Big Data. Of course, most of us have no idea that we are contributing to Hollywood this way. That feels funny when you think about it, but it is true.

 

Transforming TV Advertisements

As brands and companies compete against each other to acquire and retain customers, they can no longer rely on just traditional TV ads to reach their target audience. The number of cord-cutters in the US is increasing. Even those who have cable subscriptions simply mute their TVs during ad breaks.

As a consequence, TV ads have lost their sheen. The same brands and companies are now turning to streaming services to reach their target audience. In streaming services, they might have found a savior.

The streaming services deliver content online, and have crucial data available to them – the likes, the dislikes, and the typical tastes of their users in entertainment. Using Big Data analytics, the streaming services can use their wealth of customer data to deploy targeted ad campaigns.

This way, the consumers watch relevant ads that interest them, and advertisers successfully send their messages to their target consumers. It is a win-win situation.
 

Introduction of New Products

Ever noticed how some show makers keep milking their successful shows until they eventually turn them into absolute garbage? We have seen many of these, haven’t we? Two and a Half Men, Dexter, and Grey’s Anatomy are just a few examples that come to mind. The reason is that new shows cost a fortune to make, and there is no guarantee that they will succeed.

As with every business, the producers want to minimize their risks. So, if the audience likes something, then that formula is exploited until the audience, well, dislikes it. Big Data is quietly changing this once and for all.

Online streaming services and TV networks now have access to large quantities of information regarding the entertainment preferences of their users. Using Big Data, they are able to better predict what type of show or movie has a better chance of being liked by their audience.

Instead of producing content and then hoping for the audience to like it, Big Data allows the producers to create content that is tailored to specific segments of viewers, and then introduce the content to them. This way, the show makers can finally let go of their long running, burned out show and focus their energies on producing new content for the audience.
 

Improving the TV Experience

The traditional TV experience for cable users is browsing through hundreds of channels until they find something they like. Streaming services improve on that by categorizing their content into easily-navigable categories. This ensures that their users find what they were looking for easily. Now, the next step is to bring suitable content to the users even before they try looking for it.

Big Data allows the service providers to make use of their users' history and content consumption, and recommend suitable content to them. Previous-generation services already have their own recommendation systems, but Big Data's recommendations are a lot more accurate.

John Boyer (IBM): February 20, 2017 at 7:32:05 PM

What’s Next for Online Streaming Services?


Whenever we think of TV and movie streaming, most of us only imagine one thing – Netflix and Chill. For most of us, Netflix is synonymous with streaming, and it's not going to change anytime soon. That's not without reason, either. At just $9.99 a month, Netflix offers high-quality content, both third-party and originals, in pure HD on multiple devices. That's an irresistible deal. Everyone knows that Netflix's digital library is massive, and that you will never run out of quality content to watch.

As invincible as Netflix might seem, it is not without its flaws or limitations. It is these flaws and limitations that its rivals are trying to take advantage of, to beat it at its own game. A few years ago, this would have been an overambitious dream for other streaming services. But today, Netflix cannot and does not take things for granted. A growing crop of competitors is offering it tough competition, and Netflix knows it. So, what will the future of streaming look like? What can you expect to change in a few years' time? We may have some answers.

The Crème de la Crème

Going by the sheer numbers, Netflix and Amazon Prime Instant Video occupy the top branch of the streaming tree. The rest of the players like Hulu Plus, iTunes, and others do not figure in the same league as them. Despite the fact that the market is inundated with more than a dozen other streaming services like Sling TV, PlayStation Vue, CinemaNow, VUDU, and many more, many of us have never given them a serious thought. For us, Netflix and Amazon are all that the streaming world has to offer.

The Live TV Connection

For cable subscribers, the cable TV is still the entertainment of choice. Netflix caters to their On-demand fix. Together, the two bring them all the entertainment they ever wanted. The cord-cutters, on the other hand, usually have two streaming services. One of them is typically Netflix.

The other service is a live channel service provider like Sling TV or PlayStation Vue. Both of them pioneered the live TV streaming on the internet. Subscribers of these services can enjoy cable TV on the internet without actually signing up for a cable channel. Their packages are also smartly designed, so that the customers pay only for those channels that they want.
However, they do not enjoy the same kind of popularity as Netflix and Amazon Instant Video. Of course, they are trying hard to change that, and have tasted some success in this direction. But, there is another game-changing event that is set to unfold soon. One that has the potential to create waves across the industry. And, it is coming right out of the blue.

VIDGO

VIDGO is a new player that has the potential to outgrow other streaming services in the market. Chances are that you are hearing about this service for the first time. It is a startup and made its debut at CES 2016. It has since suffered multiple delays in its initial launch in select cities. However, the service has generated a lot of buzz about its features. VIDGO’s management has revealed that VIDGO will offer its users absolute control over the customization of their package. It offers live TV, like Sling TV and PlayStation Vue, with a greater degree of customizability.

Users will be able to choose which channels they sign up for, and pay for only those channels. VIDGO’s team has also indicated that it will be offering On-demand content, in addition to live TV, to its users. If this happens, then VIDGO will become a massive hit in no time at all. All those cord-cutters who use multiple streaming services to enjoy both live TV and On-demand content can simply replace them with VIDGO and experience very little change in their daily entertainment ritual. Whether VIDGO will be able to live up to the hype is something that is too early to discuss.

Small Players

There are many players in the streaming industry, who have not managed to make a serious dent in the market share of the industry leaders. These include Google Play, YouTube, FandangoNOW, and others. They have a good-sized library, and feature good quality content.

They have been in the market for some time now. But, they are not expected to register any sudden growth in the immediate future, unless they change their strategy in a big way. Right now, they are enjoying a good fan following, and can take their service to new heights if they play their cards right.

Another unlikely direction from which the likes of Netflix and Amazon might face competition is the TV networks. In their bid to stay relevant in a world with ever more cord-cutters, TV networks have started releasing their own in-house streaming services.

These networks upload their cable content on their streaming services and make them available for online consumption. Thus, TV viewers can either have a cable subscription to these channels or buy the online subscription to enjoy the content on them. They have not made the same impact as the premier streaming services, but they do have their own niche markets.

Final Thoughts

The streaming world is bursting with numerous streaming services, and their numbers are only growing. New players are entering the market every year, and existing players are expanding their offerings. There is no telling who will win the streaming war. Services like Netflix cannot be taken for granted anymore. They are facing heat from every direction. The next few years will witness a fierce battle in this industry, and the final winner will take it all.

 

 

John Boyer (IBM): [IBM i News No. 83] The latest in IBM i app development / free trial environment

[IBM i News No. 83] The latest in IBM i app development / free trial environment

Published: February 15, 2017

 

1. Top News

 

☆ The latest in IBM i application development

This article introduces ways of using tools suited to your development situation: improving development efficiency for novice RPG programmers, developing web applications, and using configuration management to carry existing assets forward.

Details here:

https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=bd84ca90-6b53-48c6-946d-e030336961fc#fullpageWidgetId=We87a0ccfab3c_4f11_ab9e_9fd079fb31e0&file=0e35e33a-cee0-47be-96e8-8aa7642a947b

 

2. Announcements

 

☆ Free trial: free-form RPG + Node.js + Rational Developer for i

For customers considering the latest IBM i development environment, we have prepared a cloud environment where you can try out free-form RPG + Node.js + Rational Developer for i. Please apply below.

Apply here:

http://i5php.jp/jishuu.html

Helpful learning videos:

http://i5php.jp/technicalinformation/technicalinformation-2/1759.html

 

3. Seminars

 

☆ [Live online seminar] A new era of data analysis with IBM i + Watson Analytics: your own dedicated "data scientist"!?

For IT staff using IBM i, this seminar introduces, with demos, the data utilization solution made possible by combining IBM i with Watson Analytics. With this new solution you can combine IBM i core business data with various peripheral data and analyze it, opening up new business opportunities through newly gained insights. It is a live online seminar you can join briefly via web browser from a mobile device or your office. If you want to establish new data-driven business models, please join us.

■ Live online seminar overview

◎ Date and time: Thursday, February 23, 17:00-18:00

◎ Format: live online seminar (advance registration required)

◎ Host: IBM Japan

Apply here:

https://enq.itmedia.co.jp/on24/form/1354799

 

☆【セミナー案内】3/2-7開催 iCafe 主催 IBM i World 2017 IBM i TECHセミナー 2017 春

The "IBM i TECH Seminar", an event for engineers that delivers the latest IBM i and Power Systems information in an easy-to-understand way, will be held in Tokyo, Nagoya, and Osaka. Under the theme "Reassessing the value of IBM i as the core system platform underpinning business activities", sessions will introduce, with demos, the integration of the much-discussed Watson Analytics with IBM i, and will clearly explain the latest development environments needed when maintaining and extending the platform. We will also introduce the latest solutions for protecting core business data, one of a company's most important assets.

◎ Dates:

Nagoya: Thursday, March 2, 14:00-17:00 (reception opens 13:30)

Osaka: Friday, March 3, 14:00-17:00 (reception opens 13:30)

Tokyo: Tuesday, March 7, 14:00-17:00 (reception opens 13:30)

◎ Venues:

Nagoya: IBM Japan Nagoya office

Osaka: IBM Japan Osaka office

Tokyo: IBM Japan Hakozaki office

◎ Host: iCafe www.i-cafe.info

◎ Sponsor: IBM Japan, Ltd.

◎ Capacity: 50 attendees per venue

http://www.sbbit.jp/eventinfo/38905/

 

4. Technical Information

 

☆ [Technical Information] IBM i Performance FAQ

We have compiled, in FAQ format, the questions we most often receive from customers about the performance of IBM i running on Power Systems. The aim is to help you understand the basic performance concepts of IBM i. Over IBM i's long history, the fundamental performance concepts have not changed; however, the SMT capability implemented in Power Systems introduced new concepts. Also, after many years of using IBM i, there may be points you have unknowingly come to misunderstand, since they rarely need attention in day-to-day operations. We hope you take this opportunity to deepen your understanding of IBM i.

Details here:

https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=bd84ca90-6b53-48c6-946d-e030336961fc#fullpageWidgetId=We87a0ccfab3c_4f11_ab9e_9fd079fb31e0&file=f45932d2-96fb-4555-a44d-9fa816a7b4f6

 

5. Other

 

☆ A support roadmap disclosed two generations ahead

The latest version of the white paper on IBM i strategy and roadmap is now available. It covers free-form RPG, which can be used for development in the same way as languages such as Java, PHP, and Ruby; release support through 2026; and the features of the latest version, IBM i 7.3. Please give it a read.

Details here:

https://www-01.ibm.com/marketing/iwm/dre/signup?source=mrs-form-6956&S_PKG=ov52980&lang=ja_jp&cm_mmc=Earned-_-IBM+Systems_Systems+-+Power+SOR-_-JP_JP-_--Power-SOR-newsletter-0217_ov52980&cm_mmca1=000001XM&cm_mmca2=10001934&

 

☆ YouTube videos published

A 35-second video that succinctly shows what makes IBM i machines so good has been published on YouTube! Please take a look!

https://www.youtube.com/watch?v=t0DO9cayF9M

https://www.youtube.com/watch?v=zr5rpiv_GaQ

 

 

★ Subscribe to the "IBM i News" newsletter here

 

 

John Boyer (IBM)Random strings

Sometimes blog entries will go off on random tangents for no apparent reason, and this blog is no exception. A colleague today asked about generating a random string using relevance for a test he wanted to run. The initial thought was to use the computer ID, but we noticed that it does not really change, so it was a poor choice for introducing entropy into our random string.

I did not know, until today, that we now have a BigFix inspector for random integer, first available in BigFix 9.5.3 (for pre-9.5.3, you might consider this approach by JGStew).

q: random integer

A: -144416853603867511

But we really wanted a string.  First thought was to generate several, convert to hexadecimal and trim to the desired length.

q: first 16 of concatenation of (random integer as hexadecimal;random integer as hexadecimal;random integer as hexadecimal)

A: 79c457e80078cb57

Not too shabby, but it also does not draw from as many characters as a string could. We can do better.

First we can generate a fixed quantity of integers (one per character we want) with a relatively new inspector that I have been dying to blog about:

q: integers in (1,16)

A: 1

A: 2

.

.

.

A: 15

A: 16

 

Now we can use an "it" clause without using the "it", in a sneaky way:

q: (random integer) of integers in (1,16)

A: 7702497158501585548

A: 7280986203060192850

.

.

.

A: 3920865462525020315

A: -7129107120883537456

 

Now we can trim this down to an integer between 36 and 126. Why 36 and 126? Because we can turn those into printable characters with this relevance.

q: characters (integers in (36,126))

 

Now we just trim our big random integers down into our 36-126 range.

q: random integer mod 90 + 36

A: 77

 

And slap it all together with a concatenation on top for an "Instant Random String of x characters long".

x=16 in this example.

 

q: concatenation of characters ((random integer mod 90 + 36) of integers in (1,16))

A: j]H>5?+e4-I'%w.8
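
For comparison, here is the same technique outside of relevance; a minimal Python sketch (not from the original post) that draws one random integer per character and maps it into the printable 36-126 range:

import random

# One random integer per character, trimmed into the printable
# ASCII range 36..126 and concatenated, as in the relevance above.
print("".join(chr(random.randint(36, 126)) for _ in range(16)))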

 

==============  Update  =================
JGStew and JGo just inspired me with an even cooler combo to get a 28 character random string.

Caveat: most of these will start with L, M, N, or O, and those letters will be over-represented in the random string, because we are only feeding integers into the encoding (the first base64 character encodes the top six bits of the first input byte, and the ASCII codes for the digits and the minus sign all map to the letters L through O).

q: base64 encode (random integer as string)

A: MTY0MDA0NDE2NzgyNzkxNjcx

 

Need a really long random string? Just use the integers in () trick.

q: concatenation of (base64 encode (random integer as string)) of (integers in (1,20))

 

Want to trim out the possible = padding at the end of a base64 encoding?

q: concatenation of (first 24 of base64 encode (random integer as string)) of (integers in (1,20))

ProgrammableWeb6 Errors That API Providers Should Look Out For

It’s never been easier to create an API. With Web frameworks like Flask and Express and app engines like Heroku, you can get an API up and running in hours if not minutes. The downside to this is that the easier it is to build an API, the more badly designed APIs there are going to be. And of course clients are not always much better. Developers don’t read docs properly and make their own silly errors.
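
To make that concrete, here is roughly how little it takes; a minimal Flask sketch (the /status route and payload are illustrative, not from the article):

from flask import Flask, jsonify

app = Flask(__name__)

# A single route returning JSON is technically a working API --
# which is exactly why so many hastily designed ones get shipped.
@app.route("/status")
def status():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run()  # serves on http://127.0.0.1:5000 by default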

John Boyer (IBM)New installation automation included in A Deployment Guide for IBM Spectrum Scale Unified File and Object Storage Redpaper

This is the fourth edition of our A Deployment Guide for IBM Spectrum Scale Unified File and Object Storage, REDP-5113. It seems like just yesterday that we published the first edition. The title was also updated to show the breadth of IBM Spectrum Scale support for unified file and object storage. The latest Redpaper focuses on object storage with OpenStack Swift, automation of the installation process, and the preferred practices that ensure optimal performance and reliability. We have come a long way from the original paper, which included lots of scripts to help with the installation process. Now is a good time to take a look at the Redpaper to see the progress in IBM Spectrum Scale V4.2.2.

A key highlight: This version describes the use of the spectrumscale installation toolkit to automate the steps that are required to install IBM Spectrum Scale, deploy protocols, and install updates and patches.

The Redpaper has the following layout:

Chapter 1. IBM Spectrum Scale Unified File and Object Storage
Chapter 2. Planning for an IBM Spectrum Scale Unified File and Object Storage deployment
Chapter 3. IBM Spectrum Scale Unified File and Object Storage configuration overview
Chapter 4. IBM Spectrum Scale Unified File and Object Storage installation
Chapter 5. System administration considerations
Chapter 6. Swift feature overview
Chapter 7. Data protection: Backup, restore, and disaster recovery
Chapter 8. Summary

Section 2.1.8, Unified file and object access, describes some of the use cases for IBM Spectrum Scale Object Storage when unified file and object access is enabled. The figure below compares traditional analytics with an object store versus analytics with Spectrum Scale unified file and object access.

image

This is just one example of new information in the fourth edition. 

 

 

 


John Boyer (IBM)Planning Poker Is Not a Contact Sport

In one of my blog postings just over a year ago I suggested that mature teams may not need to use Story Points.  However, at the end of that post, I wrote:

 

[If] your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.

(You can read the full post here:  link)

 

What I’d like to do with this post is cover a few issues on how to make sure you’re getting the most out of using Planning Poker.

image

First, keep in mind that Planning Poker is meant to be a very simple and quick process to be used by a team for assigning Story Points to a given story.  It’s not meant to provide absolute precision regarding the assignment of Story Points, and the assignment of Story Points itself is meant to be relative to the sizings given to other User Stories on your team’s backlog.

 

Next, because Planning Poker is an estimation technique, please don’t assume that everyone on the team must be 100% in agreement with the number of Story Points being assigned to a given Story.  Let me explain:  typically, when a team uses Planning Poker to assign Points to a User Story, there will be a smattering of “votes.”  As an example, let’s say that, after the very first vote for a given Story, there are a couple of 3's, a 5, a couple of 8’s, and a 13.  The scrum master should ask why folks who voted 3 thought the size was relatively small, and then ask the person who voted 13 why he thought the size was relatively big (i.e., over 4 times the estimated size of those who voted 3).  After a brief discussion, the scrum master should ask the team to vote again.  Let’s say this time there’s one 3, several 5’s, and a couple of 8’s.  A good scrum master should say something like, “OK team, let’s go with a 5 for this story and move on to the next story on the backlog.”  What the scrum master is looking for is “harmonic convergence.”  Notice how, in the second vote, there were fewer 3’s and the 13 went away.  The bulk of the votes were gravitating towards a 5, and so you can see the team is getting closer (“converging”) on a given number.  Good enough!  Don’t waste time by voting again, and again, and again (ad infinitum and ad nauseam) until the team members all vote the same – it’s not worth it.  Estimation is not meant to be precise, and spending more and more time trying to be precise with estimates is a waste of time.

 

And here’s the main point that I’d like to make – let’s say you’re one of the team members who voted an 8 for this story even though the number finally assigned was a 5.  Should you be upset?  Should you start an argument?  Should you want to keep voting until everyone agrees with you and assigns the Story an 8?  No, no, and no.  Even though you may really think the story should have been sized at an 8, you can see the team was converging on a 5, so just let it go and press on.  Planning poker is not a contact sport.

 

Finally, once teams get the hang of Planning Poker, most of them vote no more than twice on any given story, and they usually don’t spend more than just a couple of minutes per story.  Here’s the high-level process:

Step 1.  If you've not assigned Story Points to any User Stories on your backlog then, as a team, quickly pick out what you all think is the smallest story on the backlog and automatically assign it a 1.  All other Stories on the backlog will be sized relative to this Story.

Step 2.  Grab the next User Story from the backlog.

Step 3.  Have a brief discussion about the Story regarding the anticipated effort relative to the first story (the story itself should be familiar to the team since, if the writing of the Stories was done correctly, the team participated together in the writing of the given Story).

Step 4.  Vote.

Step 5.  If there appears to be a fairly strong consensus on a given number, go with that number, and then go back to Step 2….

Step 6.  Otherwise, if there’s a fair amount of disparity in the votes cast, then have another brief discussion focusing on asking those who voted with the lowest number, and those who voted with the highest number, to give some insights into their respective thinking.

Step 7.  Vote again – you’ll likely see some harmonic convergence on a specific number after the second vote, and then just go with that number.  Now go back to Step 2 (lather, rinse, repeat)….

 

In sum, Planning Poker is a great technique to use to quickly assign Story Points to your User Stories.  Just be sure that the team doesn’t over-engineer its use.

 

As always, please feel free to ask questions and share your experiences.  Leslie and I look forward to hearing from you!

 

John Boyer (IBM)Accelerate with IBM Storage: TS7700 Management Classes and Copy Policies - What You Need to Know

Date:  March 16, 2016

Time:  12 noon New York, 5 p.m. London, 6 p.m. Paris, 16:00:00 GMT

Duration:  1.5 hours

Abstract: During this session we will discuss how the Advanced Policy Management capabilities of the TS7700 family use the SMS construct names from your ACS routines to control and manage your virtual volumes in a Grid configuration.  We will review the management class constructs and policies for a TS7700 Grid and how they can be used to influence which clusters will be selected for scratch mounts, what tape volume cache will be used for those mounts and which clusters will receive copies of the virtual volumes.

Speaker:  Randy Hensley

Register:  http://bit.ly/2jVgoE3

After registering you will receive an email confirming your registration with information you need to join the Webinar.

 

Find out about upcoming Accelerate with IBM Storage webinars by subscribing to our blog:

http://ibm.co/1HctX2V or

https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?lang=en

 

Join our Accelerate mailing list to hear about upcoming webinars by sending an email to

Accelerate-join@hursley.ibm.com

Jeremy Keith (Adactio)Amber

I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.

Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”

CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.

I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.

I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).

I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing myself or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.

We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:

  • I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
  • I want to adhere to best practices.
  • I want to learn more about what direction I want to go in, find a niche.
  • I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.

My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.

Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”

I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.

I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?

Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.

John Boyer (IBM)Stop by the IBM Connect EXPO floor to meet the Services team

image

Representatives from IBM Software Services for Collaboration will be available on the EXPO floor, and they would like to meet you and discuss ways that they can help your business.  Meet the Services team on the EXPO floor:

  • Tuesday from 8 to 10 am, as well as noon to 5 pm
  • all day on Wednesday (8 am to 5 pm)
  • Thursday until 1 pm

Bring your questions! If you want to see a demo, they are happy to oblige. 

We hope you take advantage of this terrific opportunity to meet the Services team face-to-face to discuss how they can help you.

John Boyer (IBM)Rational Team Concert 6.0.3 - Web interface for importing work items

Rational Team Concert 6.0.3 - Web interface for importing work items: This video demonstrates the web user interface for the work item importer.

 

https://www.youtube.com/watch?v=kBqATeiIydI

 

You may also want to watch:  

 

John Boyer (IBM)The 2017 IBM Open Source Technology Micro-Classroom Is Here! [Session 1: Blockchain and HyperLedger Series]

Blockchain is a technology for collectively maintaining a reliable database in a decentralized, trustless way, and it is one of the hottest technologies of the moment. It initially drew attention along with the digital-currency boom; since then, blockchain applications have expanded into finance, manufacturing, the Internet of Things, insurance, and many other fields, and everyone is talking about what innovation this revolutionary technology can bring to our lives.

 

The HyperLedger blockchain consortium is an open source project alliance launched by the Linux Foundation in 2015 to advance blockchain digital technology and transaction verification. Its members include technology vendors such as IBM, along with more than 100 companies spanning major banks, airlines, and other industries, over a quarter of which are from China. The project opens up the imagination for combining blockchain technology with the software industry, finance, insurance, logistics, and other fields.

 

The "IBM Open Source Technology Micro-Classroom", hosted by IBM OpenTech with the mission of spreading open source technology, will launch a free course series on "Blockchain and HyperLedger" on March 2. The series consists of 8 online classes, streamed live via WebEx every Thursday at 8 p.m., with recordings available afterwards. The course also includes 8 sets of exercises and 1 offline hands-on boot camp.

Audience

Enterprise decision makers and technology evaluators interested in HyperLedger, blockchain application developers, and developers working on blockchain technology itself.

Course objectives

After completing this course series, attendees will be able to grow into HyperLedger users and entry-level developers. By the end of the course, attendees will understand blockchain concepts and popular blockchain application scenarios; grasp the HyperLedger community, architecture, and internals; be able to set up a HyperLedger environment, or use the HyperLedger environment on IBM Bluemix; and develop their own blockchain applications. Experienced developers can start reading the HyperLedger code and fixing bugs.

Course schedule

 

Online class, March 2: Blockchain overview

Online class, March 9: HyperLedger overview

Online class, March 16: Blockchain services on Bluemix

Online class, March 23: Developing and deploying a blockchain application

Online class, March 30: The shared ledger in HyperLedger

Online class, April 6: Consensus management in HyperLedger

Online class, April 13: Privacy and security in HyperLedger

Online class, April 20: HyperLedger application case studies

Offline event, May: hands-on boot camp

 

Register via the Huodongxing event page with a valid email address. Those who want to attend the course will receive an email invitation by February 27 to join our HyperLedger classroom WeChat group; joining the group confirms your registration. After that, watch the WeChat group for announcements, and you can attend the IBM open source technology micro-classroom every Thursday at 8 p.m.

 

--------------------------------------------------

Related learning resources:

 


 

>> More blockchain technology articles on developerWorks


John Boyer (IBM)Configuring access to the MQ Console and CLI on the IBM MQ Appliance

I have received a few queries asking how to configure access to the MQ Console and the command line interface (CLI) in version 9 of the IBM MQ Appliance. In response to these queries this article provides a worked example that demonstrates how to configure a number of local users with different levels of authority. The same principles apply if appliance users are defined in an XML file or in an LDAP repository. Hopefully this article provides a useful reference for administrators who need to implement similar policies.

Scenario

For the purpose of this article let’s assume we need to create the following user accounts:

  • Alice requires full administrative access to both system settings and MQ
  • Bob requires administrative access to system settings but he does not require access to MQ
  • Carlos requires full administrative access to the MQ Console but no access to system settings
  • Dave requires full administrative access to the MQ Console and access to the MQ CLI
  • Erin requires read-only administrative access to the MQ Console
  • Frank requires limited access to one queue manager using the MQ Console

Full administrative access to system settings and MQ

To configure a user account for Alice that has full administrative access to both system settings and MQ there are two available options.

Option 1: Create a privileged local user account for Alice

To create a privileged user account using the UI:

  1. Navigate to Administration > Access > User Account
  2. Select New...
  3. Name the user account, for example alice
  4. Specify an initial password
  5. Select the access level Privileged
  6. Click Apply to create the user account

Option 2: Add Alice to a user group that grants full administrative access

To define a user group that grants full administrative access using the UI:

  1. Navigate to Administration > Access > User Group
  2. Select New...
  3. Name the user group, for example Administrator
  4. Specify a single access policy in the access profile of */*/*?Access=r+w+a+d+x (shown below)
    Screenshot of the administrator access profile
  5. This generic policy grants read, write, add, delete and execute privilege to all types of resource. Alternatively generate the access policy using the UI. To generate the policy click Build next to the access profile to open a new dialog box, select local address *, application domain (all domains), resource type (all resources), then select all privileges and click Apply and close the dialog box.
  6. Click Apply to create the user group

To create a user account that is a member of the group follow the steps for creating a privileged user but select the access level Group defined then the name of the user group.

Full administrative access to system settings but no access to MQ

To configure a user account for Bob that has full administrative access to system settings, but no access to MQ, a user group must be created as per option 2 for Alice. The access profile needs to have the following access policies:

  • A generic access policy that grants full access to all resources, as per Alice.
  • A more specific access policy that revokes authority to the MQ CLI. To do this click the Add button (shown above) and enter the access policy */*/mq/cli?Access=NONE. Alternatively, build the access policy using the UI by selecting the MQ CLI Administration resource type and deselecting all privileges. When building the access policy using the UI the ?Access=NONE suffix might be omitted. You do not need to manually append it because if no privileges are specified the policy is the same as explicitly specifying NONE.
  • A third access policy of */*/mq/webadmin?Access=NONE, which revokes administrative authority to the MQ Console. To build the access policy using the UI select the MQ Web Administration resource type.
  • A fourth access policy of */*/mq/webuser?Access=NONE, which revokes user authority to the MQ Console. To build the access policy using the UI select the MQ Web User resource type.

The resulting access profile should look as below (note the boxes are not wide enough to show the entire access policy for the last two entries):

Screenshot of the administrative access profile that grants no access to MQ

Finally, create a group-defined user account for Bob and assign him to the group.

Full administrative access to the MQ Console but no access to system settings

To configure a user account for Carlos that has administrative access to the MQ Console we similarly need to create a user group with the appropriate authority level. Carlos does not require access to view or modify system settings on the appliance so this time we don’t start with a generic profile that grants full access. Instead, we grant only the minimum authority to allow Carlos to use the MQ Console. The access profile needs to have the following access policies:

  • An access policy of */*/login/web-mgmt?Access=r, which grants access to login to the appliance using the web UI. This is a prerequisite for accessing the MQ Console. To build the access policy using the UI select the Web-Mgmt resource type and select the Read privilege.
  • An access policy of */*/mq/webadmin?Access=r+w, which grants read and write administrative access to all resources in the MQ Console. To build the access policy using the UI select the MQ Web Administration resource type and select the Read and Write privileges. Having said this there is an unfortunate bug in 9.0.1 that requires Add authority to be granted instead of Write authority - it should be fixed in 9.0.2.
  • Optionally, add an access policy of */*/access/change-password?Access=x, which grants users in the group authority to change their own password (it can be changed in the System Control panel). To build the access policy using the UI select the Change User Password resource type and select the Execute privilege.

Finally create a group-defined user account for Carlos and assign him to the group.

Full administrative access to the MQ Console and the MQ CLI

In our scenario Dave has similar access requirements to Carlos, but Dave also needs access to the MQ CLI so he can perform additional actions. To grant Dave this access we need to create a user group as for Carlos, but with the following additional access policies:

  • An access policy of */*/login/ssh?Access=r, which grants access to login to the appliance using SSH, from where permitted commands can be executed. This is a prerequisite for accessing the MQ command line interface (CLI). To build the access policy using the UI select the Ssh resource type and select the Read privilege.
  • An access policy of */*/mq/cli?Access=x, which grants access to MQ Administration Mode using the mqcli command. To build the access policy using the UI select the MQ CLI Administration resource type and select the Execute privilege.

Read-only administrative access to the MQ Console

Let's assume Erin is an auditor who requires access to view the MQ configuration but does not require access to perform modifications. Once again we need to configure her user account to be associated with a user group that has the appropriate authority. The user group for Erin needs to be configured as for Carlos, who requires full administrative access, except that only the Read privilege should be granted to the MQ Web Administration resource type.

It is worth noting that when creating a new user group the access profile is pre-populated with the default access policy */*/*?Access=r.  This policy grants read-only authority to every resource type, which includes MQ Web Administration. If Erin also requires access to view system settings as well as the MQ configuration then this default profile is likely to be a good starting point. Additional policies can be appended to the profile to add or revoke specific authority, such as granting the authority for users to change their password.

Limited access to one queue manager using the MQ Console

The last example I’ll cover in this article is for Frank, who requires some limited access to one queue manager on the appliance. He does not require access to the remainder of the MQ configuration. Configuring access for Frank is a little more involved than the other examples I’ve covered, but hopefully it will be easy to understand.

Step 1: Appliance user setup

Firstly, we need to configure an appliance user account for Frank. As with the other examples, a user group must be created that grants access to login to the appliance UI, and which this time grants user access to the MQ Console instead of administrative access. Create a user group with an access profile that contains the following access policies:

  • An access policy of */*/login/web-mgmt?Access=r, which grants access to login to the appliance using the web UI. This is a prerequisite for accessing the MQ Console. To build the access policy using the UI select the Web-Mgmt resource type and select the Read privilege.
  • An access policy of */*/mq/webuser?Access=x, which grants user access to the MQ Console. To build the access policy using the UI select the MQ Web User resource type and select the Execute privilege.
  • Optionally, add an access policy of */*/access/change-password?Access=x, which grants users in the group authority to change their own password. To build the access policy using the UI select the Change User Password resource type and select the Execute privilege.

Create a group-defined user account for Frank, for example called frank, and assign it to the group.

Step 2: Messaging user setup

User access to the MQ Console requires a messaging user of the same name to be defined so that MQ authorities can be granted to it using the MQ object authority manager (OAM). To create the messaging user, log in to the appliance using SSH, enter the MQ CLI using the mqcli command, and then use the usercreate command:

usercreate -u frank

Note that a password is not required for the messaging user because the appliance user password is used to access the MQ Console. For more information about defining messaging users see the Administering messaging users topic in Knowledge Center at https://www.ibm.com/support/knowledgecenter/en/SS5K6E_9.0.0/com.ibm.mqa.doc/administering/ad00080_.htm.

Once a messaging user has been defined for Frank, MQ authority commands must be run to grant him the access he requires. This access can be defined to the OAM using MQSC (or equivalent), and it can be granted directly to Frank’s user ID or to a messaging group his ID belongs to.

Let’s assume that Frank only requires authority to display information about the queue manager QM1 and the queues defined on it. The following MQSC commands, when run on QM1, allow Frank to access the queue manager using the MQ Console:

  • SET AUTHREC PROFILE(SYSTEM.ADMIN.COMMAND.QUEUE) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(PUT)
  • SET AUTHREC PROFILE(SYSTEM.REST.REPLY.QUEUE) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(PUT,GET,INQ,BROWSE)

The following MQSC commands grant Frank the authority he requires to display information about the queue manager and its queues:

  • SET AUTHREC OBJTYPE(QMGR) PRINCIPAL('frank') AUTHADD(DSP)
  • SET AUTHREC PROFILE(**) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(DSP)
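
As a minimal sketch of how these records might be applied in one session (assuming QM1 exists and is running; runmqsc is available from within the MQ CLI on the appliance):

mqcli
runmqsc QM1
SET AUTHREC PROFILE(SYSTEM.ADMIN.COMMAND.QUEUE) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(PUT)
SET AUTHREC PROFILE(SYSTEM.REST.REPLY.QUEUE) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(PUT,GET,INQ,BROWSE)
SET AUTHREC OBJTYPE(QMGR) PRINCIPAL('frank') AUTHADD(DSP)
SET AUTHREC PROFILE(**) OBJTYPE(QUEUE) PRINCIPAL('frank') AUTHADD(DSP)
END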

Putting it together

If you need to create other users with limited access to the MQ Console you need to repeat both steps, but you can reuse the same user group defined for Frank in step 1. If you need a number of users with the same authority consider using a messaging group in step 2 instead of defining authorities for each principal (user) individually.

Below is an example screenshot of the MQ Console to demonstrate the restricted access that Frank has been granted. It shows that his user does not have access to view topic objects on queue manager QM1 because he was only granted access to view the queue manager and queue objects. It similarly shows he does not have access to view queues on queue manager QM2.

Screenshot that illustrates limited access to the MQ Console on the IBM MQ Appliance

Summary

This article has shown how to configure users on the MQ Appliance with different levels of access to the MQ Console and the MQ CLI. Thank you for reading and I hope you found it useful.

Related Links

Introducing the MQ Appliance Version 9.0.1

https://www.ibm.com/developerworks/community/blogs/messaging/entry/Introducing_the_MQ_Appliance_Version_9_0_1?lang=en

MQ Appliance v9.0.1 Console overview

https://www.ibm.com/developerworks/community/blogs/messaging/entry/MQ_Appliance_v9_0_1_Console_overview?lang=en

https://www.youtube.com/watch?v=kdC1Q1Kr2R8

IBM Knowledge Center: IBM MQ Appliance 9.0.x
http://www.ibm.com/support/knowledgecenter/en/SS5K6E_9.0.0/WelcomePage/homepage.html

IBM Knowledge Center: IBM MQ Console security

https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.sec.doc/q127930_.htm

IBM Knowledge Center: Administration using the IBM MQ Console

https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.adm.doc/q127570_.htm

Introducing Role-Based Management (RBM) for the IBM MQ Appliance

https://www.ibm.com/developerworks/community/blogs/messaging/entry/Introducing_Role_Based_Management_RBM_for_the_IBM_MQ_Appliance?lang=en

Bitesize Blogging: MQ 9.0.1 - IBM MQ Console Role Based Access Control

https://www.ibm.com/developerworks/community/blogs/messaging/entry/Bitesize_Blogging_MQ_9_0_1_IBM_MQ_Console_Role_Based_Access_Control?lang=en

What’s new for the IBM MQ Console in 9.0.1

https://www.ibm.com/developerworks/community/blogs/messaging/entry/What_s_new_for_the_IBM_MQ_Console_in_9_0_1?lang=en

John Boyer (IBM)WLE/SEE 2016 Recap

As we announced earlier on this community, starting from 3rd January 2017 the Workload Estimator (WLE) and System Energy Estimator (SEE) servers have been moved to the IBM Bluemix cloud. Please use the new (Bluemix) links below from here onwards to access the WLE and SEE applications. Note: the old WLE and SEE links are no longer accessible, and all existing sizing-guide plug-ins should be updated with the new WLE server link.

WLE new link : http://wle.mybluemix.net/wle/EstimatorServlet

SEE new link: http://see.au-syd.mybluemix.net/see/EnergyEstimator

 

A lot of new features and fixes went into WLE and SEE, with 4 WLE and 2 SEE releases in 2016. Our statistics show WLE produced almost 1400 estimations per week in 2016. Here is a quick recap of the WLE and SEE features released in 2016.

The year 2016 started with major changes in hardware sizing support. Starting from 1st January 2016, WLE dropped support for System X sizing as per the IBM and Lenovo divestiture agreement, added sizing support for the 160-core E880 Power System (mid-January 2016), and marked POWER7/7+ systems as retired, making them available under the retired-system sizing option. The removal of Lenovo System X sizing had a significant impact on hybrid (POWER + System X) sizing guides. WLE handled this by marking the System X tier as an incompatible tier and allowing existing hybrid sizing guides to function with Power System-only sizing. Note that as WLE dropped support for System X, the System X performance metric (xPerf) was also removed from the WLE database.

 

Going forward, in April (the WLE 2016.1 release) WLE added support for the new operating systems AIX 7.2 and IBM i 7.3, and sizing support for the PurePower system (based on the GA2 specification). (Note: in order to size a workload on a PurePower system, select the form factor "PurePower Systems".) In mid-2016 (WLE 2016.2), WLE added sizing support for z13s (z Systems) and updated the Power E850 and E870 specifications (memory updates) to reflect the latest announcements.

 

On 8th September 2016, IBM announced the OpenPower System S822LC-8335-GBA for HPC and deep-learning workloads, along with two additional LC servers, the S822LC-8001-22C and S821LC-8001-12C. On the day of the announcement, WLE provided sizing support for the newly announced OpenPower systems. In the same month, WLE also added support for the new IBM Power Systems E870C and E880C. In the same release, WLE 2016.3, the WLE team introduced a new feature: sizing as a service (or WLE as a Service, WaaS) through a REST-based sizing API. More information on this can be found here: http://wle.mybluemix.net/wle/EstimatorServlet?view=help&topic=restAPIDoc

We have also introduced a new landing page (see the screenshot below). We feel the new landing page will give users a good starting point and enhance the sizing experience. We have introduced different categories (tiles) based on usage type: new users who want to quickly size Power or z Systems based on performance data, users who want to consolidate or migrate from legacy IBM systems or non-IBM systems to current IBM systems, and users who know their workloads and know which sizing guide to use.

image

 

The links on the "Size New IBM System" tile give quick access to sizing Power and z Systems, while the "Migrate from Existing Systems" tile takes you to the Server Consolidation section. There is easy access to existing sizing guides hosted at PartnerWorld through the "Customize Solution Sizing" tile, to the System Energy Estimator (SEE) through the "Power System Energy Estimation" tile, and to Power system performance proof points through the "Power System Performance Claims" tile. On the "WLE as a Service (WaaS)" tile, one can find links to the documentation on how to use the WLE REST API and wle.zip, as well as a link to download the Workload Developer.

 

The WLE 2016.3 release also enabled KVM hypervisor support for z System sizing, and fifth-generation SSD support for storage sizing (when sized along with Power System sizing).

 

The last release of the year, WLE 2016.4, came out on 2nd December 2016. In this release we enabled GPU information in the sizing report for Power Systems, and introduced a new Power system user option to select CPU-only or CPU + GPU systems for sizing. Selecting this option ensures that either CPU-only or CPU + GPU systems, respectively, are considered during sizing.


A new option, Systems Data Downloads (see the screenshot below), has been added on the Solution Overview tab to download Power Systems, z Systems, and external storage data in CSV format (source: the WLE systems and storage database). By clicking Systems Data Downloads, users can download various tables, such as latest or legacy Power Systems, POWER8, POWER7, OpenPower, z13, and z13s systems, with information like min/max cores, cores per socket, frequency, and feature codes. On the storage side, users can download information for IBM external storage systems, e.g. the DS, Storwize, FlashSystems, and XIV families.


image

 

At the beginning of 2016, SEE enabled rack-level energy estimation for non-Power MTMs (from eConfig). (Note: rack energy estimation for Power MTMs was enabled in the SEE 2015.4 release.) It also enabled the functionality to generate a rack-level energy-estimation report in PDF format, and integrated energy estimation for the newly announced E880 Power System model.

 

Later, in the second quarter of 2016, SEE was enhanced with support for MES upgrade eConfig files; that is, from SEE 2016.1 onwards it understands the base (current) and proposed (target) configurations marked in an eConfig file, and can generate rack-level energy estimations for the base and target configurations separately. The second release of 2016 (SEE 2016.2) added energy-estimation support for the E870C and E880C Power Systems.

 

John Boyer (IBM)What should I keep in mind before buying a domain?

Today, if you are a company or entrepreneur and do not have a place on the Internet to offer and promote your products or services, you may be losing customers or potential sales. Increasing numbers of people recognize the importance of the network and seek to make their way onto it; however, before taking the first big step of buying a domain, it is necessary to learn some basics on the subject.

A domain name identifies a web host. The goal is to differentiate your space on the network from the competition, so each domain must be unique. A key factor to keep in mind when selecting your domain name is to choose words related to the products or services offered by your organization, in order to get more visits.

For example, "www.milesweb.com" is the domain name that the web hosting company uses for its website. Therefore, when you use words related to the activity of your business, it automatically adds a positive to SEO or web positioning, generating more visitors’ traffic for your portal.

Parts that make up a domain and its meaning

Domains are mostly composed of three parts, as in "www.milesweb.com": the "www" prefix, the name of the company (milesweb), and the type of organization (com).

Currently, determining the type of company or organization from the third part of the domain may be inaccurate, because any individual can register a website under a .org (non-profit) domain.

The most common top-level domains on the Internet are .org, .net, and .com, which refer to organizations, networks, and commercial entities respectively (the military uses .mil). It is necessary to remember that the Internet bases its operation on IP addresses, not on domains; therefore each server or web hosting service relies on the Domain Name System (DNS) to translate domain names into IP addresses.
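
As a small illustration of that translation step, here is a Python sketch (not part of the original article; the hostname is simply the example used above):

import socket

# Ask the system resolver (which consults DNS) for the IPv4 address
# behind a domain name - the name-to-address translation described above.
print(socket.gethostbyname("www.milesweb.com"))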

Carrying out a preliminary study before selecting the domain that represents your brand on the Internet, and including the simple, everyday words that people type into search engines like Google and Yahoo! when looking for a product or service, can make the difference between appearing among the top results, with higher visitor traffic, and staying in the last positions.

Readers Note: If you want to migrate your website to a reliable hosting provider, check out our unlimited linux reseller hosting Plans

John Boyer (IBM)What happens if you do not maintain an unmanaged VPS?

Customers who rent an unmanaged Virtual Private Server (VPS) have to take over its maintenance, and must do so periodically and properly so that the website housed on it can function normally.

At the very least, it is recommended that you devote one hour a week to performing security checks on your VPS.

If you do not maintain your virtual server, you run a series of risks that can become a problem for your business.

Risks you run when you do not maintain your VPS

Failure to maintain a VPS involves a number of risks that can threaten the security of your server and your website. You should keep these points in mind:

The software installed on the computer may have vulnerabilities that can be exploited by intruders.

If you do not update the operating system of your server, you put at risk the performance, security, and stability of the server.

If you do not routinely check error logs, there may be areas that are not working without your knowledge.

Another check that is important to do from time to time is to see if your IP has been included in a spam list.
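
One common way to script that last check is a DNSBL lookup; below is a minimal Python sketch (it assumes the Spamhaus ZEN list as the example blocklist; other DNSBLs are queried the same way):

import socket

def listed_on_dnsbl(ip, zone="zen.spamhaus.org"):
    # DNSBLs are queried by reversing the IP's octets and appending the
    # blocklist zone; a record found means listed, NXDOMAIN means not listed.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# 203.0.113.45 is a documentation address, used here purely as an example.
print(listed_on_dnsbl("203.0.113.45"))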

Performing these operations is not complicated once you begin to enter the world of servers. With the help of technical support, user guides, and YouTube channels, you can find a lot of information on how to perform them. Generally, your VPS will run Linux, so it is important to become familiar with the console.

What to do if you do not want or can not manage your VPS?

If you cannot manage your VPS directly, you should in no case leave it without maintenance; problems will come sooner or later. You can choose one of these options:

1. Hire a computer maintenance service. You can hire a computer company to take charge of the administration of the VPS that you have rented.

2. Hire a Managed VPS. In case you do not want an external company to take care of these functions, the VPS hosting provider you hire will give you the option of working with a managed VPS. In this case, you do not have to perform server maintenance.

We hope this article has helped you with the type of tasks you have to do to work with an unmanaged VPS. We invite you to share it on your social networks.

Readers Note: If you want to migrate your website to a reliable hosting provider, check out our cheap windows vps india Plans

John Boyer (IBM)Creating plug-in for WLE using Workload Developer

Creating plug-in for WLE using Workload Developer :

  Please follow the steps below to create a plug-in for WLE using the Workload Developer.
 
  a) Download the latest Workload Developer from the location below.
 
        https://www-01.ibm.com/marketing/iwm/iwm/web/reg/download.do?source=iswe&S_PKG=dl&lang=en_US&cp=UTF-8
        
        Please use your IBM intranet user ID and password to log in.
        
  b) Install the downloaded developer.
 
  c) Go through the Workload Developer's User Guide commands to create an OEM file. The User Guide can be found under Help Menu > User Guide - PDF.
 
  d) Once the OEM file is created without any errors, click the File menu and enable "Launch HTML after save".
 
  e) Again click the File menu, click "Save HTML as", provide the WLE production URL (http://wle.mybluemix.net/wle/EstimatorServlet), and click OK.

 
  f) It will now run the created OEM file against the given production URL and provide a recommendation based on the inputs.

John Boyer (IBM)Doc Buddy for z/VSE messages

z/VSE messages are now available in IBM Doc Buddy.

 

Doc Buddy is a no-charge mobile application for retrieving z Systems message documentation. You can download it from the Apple App Store for iOS devices or the Google Play Store for Android devices.
Many z Systems "components" are already available in Doc Buddy, such as message explanations for z/OS, z/VM, Linux on z Systems, and TPF - and now for z/VSE too.
You can download such components to your device and look up messages without an internet connection. Doc Buddy also includes links to the relevant product support portals and supports calling a contact from the app.
You may also receive alert messages concerning a specific component.

 

I recommend downloading the app from the corresponding store and then downloading the messages for the z/VSE component. If you look up a z/VSE message before the z/VSE messages have been downloaded, you will see a "more" link, which also gives you the option to download them.

John Boyer (IBM)BVQ double release week (version 5.0.3 & 5.1.2)

Last week, on 15th February 2017, two BVQ versions (5.0.3 and 5.1.2) were released.

All of the new features explained in earlier posts (multiple view composition, self-explanatory favorites, fully recallable table views) are available in the BVQ GUI.
You can find a summary of all new features, improvements, and bug fixes here: https://bvqwiki.sva.de/display/BVQ/BVQ+5.0

 

In version 5.1 we introduced our new MongoDB database backend for BVQ. In version 5.1.x, MongoDB is only available for new installations. A migration process from existing DB2 BVQ databases will be available soon, in version 5.2.

 

Both versions support SVC code level 7.8.x, including the new EasyTier levels (tier1_flash).

 

You can find the downloads here:

 

John Boyer (IBM)Patches for RHEL: RHSM Download Plug-in and RHSM Download Cacher updated

The RHSM Download Plug-in tool and RHSM Download Cacher have been updated with the following enhancements:

 

• New commands to verify if the entitlement certificates, which are in the 'certs' folder, have access to the supported Red Hat repositories.
- RHSM Download Plug-in: --check-baserepos and --check-allrepos
- RHSM Download Cacher: check-baserepos and check-allrepos

• A new download cacher command that specifies a flag to enforce a checksum check for existing RPM files when trying to download packages using the "buildRepo", "downloadPkg", or "downloadbypatchid" subcommands.

• Improvements for more robust handling of Red Hat certificates.

 

Updated Tools Versions:
• RHSM Download Plug-in, version 1.0.0.2
• RHSM Download Cacher, version 1.0.0.2

 

Actions to take:
Update to RHSM Download Plug-in version 1.0.0.2 by using the Manage Download Plug-ins dashboard.
Download the updated RHSM Download Cacher here: http://ibm.co/1WtNC6W

 

Reference:
For more information about the RHSM enhancements included in this release, see http://ibm.co/2gSEGJb.

 

Published site version:
Patching Support, version 710.

 

Application Engineering Team
IBM BigFix

John Boyer (IBM)DevOps workshops @IBM InterConnect - The year that was and the year that is!

 

With IBM InterConnect 2017 around the corner, it’s time to start building agendas and signing up for onsite activities. In my recent blog, I shared my experience of the DevOps workshops from the last two InterConnect conferences, and this year’s workshops look even more interesting.

 

The conference this year is packed with the best IBM DevOps and industry experts, new topics, and great networking. The workshops are different from the regular breakout sessions because they are interactive, small-group sessions led by IBM DevOps experts who facilitate the discussion and promote learning through structured exercises and the sharing of ideas and experiences. PowerPoint decks will be in scarce supply!

Last year, we got an overwhelming response for our workshops. Here’s what we learned from you:

  1. Interactive – You loved the format of the workshops because they allowed you to interact with your peers. One participant mentioned, “Having perspectives from other organizations in the same place from a technology perspective, and learning how hybrid cloud can solve some of these issues or alleviate pressures, was among the best things in these workshops.”
  2. Collaboration – You valued the group discussions, collaborative thinking, and hands-on experience; the practical, exercise-based approach was a big attraction. The feedback we received from participants ranged from “The DevOps workshops with their discussion and exercise based approach helped in better understanding of concepts” to “Liked the group discussions and hands-on experience” and “The workshops stimulated collaborative thinking in the participants, which led to great learning.”
  3. Effective Speakers – You liked the facilitators, their approach, their knowledge, and the effectiveness of their responses to the questions asked. You enjoyed the Q&A with our experts. One participant mentioned, “Very knowledgeable facilitators; they had an answer to every question and their answers were short, simple but clear.”
  4. All things DevOps – You liked our themes: discussions on current DevOps processes and models, spreadsheets of ROI tools, reinforcement of concepts that are common among many companies, automated process scenarios, and ideas around the ops/dev/DevOps relationship.

 

We have also incorporated your feedback to ensure a better experience this year. In addition, some of you requested that the workshops be longer and have an industry perspective. So, this year we have a dedicated workshop to do just that: Hold Your Horses or Let Them Run Wild? A Financial Institution’s DevOps Journey.

Also, this year the DevOps workshops will follow the “Lean Coffee” format which is a structured, but agenda-less meeting. Participants gather, build an agenda, and begin talking. Conversations are directed and productive because the agenda for the meeting was democratically generated.

Take a few minutes this week to review the workshops, and remember that workshop registration is on a first-come, first-served basis.  After you register for the conference, make sure you add the workshop to your InterConnect 2017 agenda to reserve your seat. You can read more about the workshops in detail here - http://bit.ly/ibmdevopsworkshops

 

image

 

Register for the workshops today at http://bit.ly/ibmworkshops

Note: You must be registered for InterConnect to attend the workshops: http://bit.ly/interconnect_2017

You can follow my Twitter handle @dishagmittal or @ibmdevops to get the latest updates on what’s new for DevOps at InterConnect 2017.

 

Author:

image

 

Disha Garg Mittal

Content Strategist, IBM Cloud Marketing - DevOps

Follow Disha:

 

 

John Boyer (IBM)Evolution of the PC

At first glance, the history and evolution of the PC is long and broad. Let us first try to pin down our definition. In its simplest form, “Personal Computer” means a computer within reach of every regular person, rather than confined to the realm of prestigious universities, corporations, and government or army departments. The definition is blurry, both now and in the past, but let’s work with it for now. If computers were invented to solve complex mathematical problems, that purpose has certainly taken a backseat in modern PCs. So whilst the “computer” has been around in one form or another since 1937, the history of the PC stretches back (arguably) to 1975.

 

If we are defining it simply as a personal computer, then the earliest example may be the Librascope LGP-30. Invented by the physicist Stan Frankel at Caltech, the Librascope LGP-30 was sold by defense contractor General Precision in 1956 for $50,000. Not exactly within reach of the average person. The reason it is dubbed a PC is that it could be used by just one person.

 

image

But using our definition, it could be argued that the first real PC was the MITS Altair 8800 kit. It sold for $297 in 1975 and was the first to use the name “Personal Computer,” a term coined by MITS co-founder Ed Roberts, who invented the machine. It soon became the core of hobbyist computing and continued being produced until 1978.

 

The next step in PC evolution came soon after, in 1976, with the introduction of the Video Display Module (VDM) at the Altair Convention. This visual display made it possible to display interactive games.

 

It seems like a blockbuster couple of years, because it was in 1977 that the Apple II was introduced. Arguably, the Apple I came before, but the success and reality of the PC did not exist until the Apple II. It came with a power supply, keyboard, and case. It could be connected to a color television set and was capable of producing incredible color graphics. Millions of Apple IIs were sold, and it continued to be sold up until 1993. It found a place in homes, universities, schools, and workplaces and thus earned its place as the first real PC.

image

 

The next step came when IBM introduced its own PC. Before that, IBM was largely a player in the industrial, government, and military space. Undoubtedly, after seeing the success and widespread adoption of the Apple II, it quickly realised the need to release its own PC. The IBM Model 5150 was released in August 1981 and looked remarkably similar to what PC owners today would recognise as a modern computer. It also introduced MS-DOS as an operating system and revolutionised business computing. Many future successors within the ecosystem of PC software and peripherals can trace their lineage back to the IBM 5150.

 

But it was the Apple Lisa in 1983 that introduced the graphical user interface (GUI), which removed the last big hurdle for most home users. The Lisa itself was not a huge success, but it led directly to the Apple Macintosh in 1984 and to all the Windows successors that followed.

 

image

And it was in 1992, when Apple released its PowerBook series of laptops, that the PC finally went mobile. Apple had attempted a portable PC before, with its Macintosh Portable in 1989, but it was heavy and expensive. It was with the release of the PowerBooks that the true form of the modern laptop PC was first seen; what we recognise today as a laptop can be traced back to the PowerBook. These early laptops came with a trackball, floppy drive and palm rests, and the line continued until 2006. The Sony Vaio in 2004 and the IBM ThinkPad T43 in 2005 are natural successors.

 

What next for PCs? There was of course the invention of the PDA, then the mobile phone, the smartphone and the tablet, all of which could arguably be called a “PC.” The main continuing trend in modern, traditional PCs, though, seems to be more power, more processing capability, better graphics. In many ways, however, the PC has stalled: in terms of design, what you view as a PC today is likely to look quite similar to what you had a decade ago - maybe smoother, the performance faster, the graphics more intense. What happens next is difficult to predict and open to conjecture. Or perhaps the PC is simply growing irrelevant in our increasingly mobile world.

 

Photo Credit: PC Byte, Amber Case, Luc Legay

John Boyer (IBM)Technology Trends in Manufacturing - Then and Now

It would be difficult to find an industry more open to change than manufacturing, a traditional early adopter of new technology and innovative methods. In fact, staying up-to-date with changing trends and movements is a requirement for a manufacturer that wants to remain competitive: if one manufacturer will not adopt the latest in efficient technology, a competitor will, and the laggard will soon find itself in a desperate struggle to catch up - at which point it is too late. You would be hard-pressed to find much within the industry that has not changed, and the rate of change only seems to be increasing. We are now moving into the fourth industrial revolution.

 

Where once a worker was required for every single task, and a driver was required for every truck, car and forklift, there now exists a robot capable of doing the job as well as or better than any human counterpart. And if there is not yet an automated system in place, there soon will be. Even something as simple as a foreman with a clipboard and a checklist may soon give way to augmented reality that does the same job faster and more accurately.

image


Even the type of worker required, and the skills and expertise needed, are changing. Conjure up an image of a warehouse labourer and you’ll likely picture someone very different from the kind of employee who is in demand now. Where once a worker was sought after for strength and the ability to work long hours, now stands a button-down-shirt kind of guy.

 

The largest employment growth opportunities in manufacturing are for STEM-educated people: people built for programming, engineering and critical thinking, who will be able to build, maintain and improve the robotic systems being put into place.

One trend that has reversed, and is in fact largely beneficial, is the reshoring of manufacturing. Where once it made economic sense to move mass-production jobs to lower-cost countries like China, the movement has begun to run the other way. For one thing, these countries are growing up, finding their own economic and political footing, and their middle class is expanding; with that comes demand for higher wages for the same work, as well as for new types of jobs. Combine this with the increasing efficiency and cost-effectiveness of new technologies, and it is suddenly viable to produce goods in the original country, such as the USA, Australia and much of Europe.

 

image

Last-minute ordering was one of the biggest revolutions of the last century. The core idea was efficiency: produce exactly what is needed, exactly when it is needed, with the (ideal) result of reduced costs and faster production times. A number of new methods now exist, or are in development, that will greatly enhance this - chief among them predictive analytics, in which advanced computer programs, with at least partial AI, predict demand trends so that manufacturers better understand when and where new products will be required.
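As a deliberately simple sketch of the idea (plain Python with invented demand figures; real predictive analytics layers seasonality, trends and external signals on top of this kind of baseline), even a moving average over past demand gives a first cut at how much to produce next:

def forecast_next(demand_history, window=3):
    """Naive demand forecast: the average of the last `window` periods."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

monthly_units = [120, 135, 150, 160, 180, 210]  # invented demand history
print("Forecast for next month: %.0f units" % forecast_next(monthly_units))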

 

Once, the maintenance and efficiency of machinery was largely governed by checklists and schedules. A recent innovation, the Digital Twin, has greatly expanded the possibilities. By linking a machine to a “digital twin” and installing sensors that relay real-time data back to a program, you can know exactly what state the machine is in: it will essentially tell you when a part is close to being worn out, and raise an immediate alarm when it fails. You can likewise gather data and analytics to improve the machine’s efficiency and cost-effectiveness. This is a powerful application of Internet of Things (IoT) thinking, and it will continue to improve.
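As a toy illustration of the idea (plain Python with made-up machine and sensor names and thresholds; a real digital twin would sit on an IoT platform with a proper data pipeline), a twin object can mirror live sensor readings and raise an alert as a part approaches its wear limit:

class DigitalTwin:
    """Toy digital twin: mirrors live sensor readings for one machine and
    flags parts approaching their wear limits."""

    def __init__(self, machine_id, wear_limits):
        self.machine_id = machine_id
        self.wear_limits = wear_limits  # e.g. {"bearing": 0.9}, as fraction worn
        self.state = {}

    def update(self, sensor, value):
        self.state[sensor] = value
        limit = self.wear_limits.get(sensor)
        if limit is not None and value >= limit:
            print("[%s] ALERT: %s wear at %.0f%% (limit %.0f%%)"
                  % (self.machine_id, sensor, value * 100, limit * 100))

twin = DigitalTwin("press-07", {"bearing": 0.9})  # hypothetical machine
twin.update("bearing", 0.75)  # below the limit: no alert
twin.update("bearing", 0.93)  # at 93% wear: triggers the alert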

 

The world has changed and is continuing to change, and manufacturers will be required to adapt or be left behind. A large part of keeping your manufacturing capabilities up-to-date will be hiring and training the right kind of workforce: employees capable of implementing and leading these new technologies will be essential to your future success. Looking back at the way things were, it is difficult to imagine how you can keep up when so much has changed in such a short time. But it is essential that you do.

Photo credit: Flickr

 

John Boyer (IBM)Challenges with microservices

image


 

Microservices provide a nice, easy way to separate out individual concerns and capabilities when building a solution. Each one can be developed independently, using whatever technology the squad feels is right for delivering that function. But when you start building a complex solution in this model, there are a few disadvantages and issues to deal with.

I have tried to list a few from my experience below.

 

 

Cultural and Organizational change

 

The foremost challenge is getting the teams organized around microservices. Though each squad meets and communicates well within the team, it is important to ensure effective communication across squads and teams.

 

Duplication of Efforts

Often, when there is no good communication across teams, I’ve seen the wheel reinvented, with multiple teams trying to solve the same problem. Soon you may have two or more implementations of the same functionality, each used by a different squad.

 

Distributed Systems are inherently Complex

 

 

As with traditional distributed systems, the difficulty lies in managing the distributed pieces while making sure the end-to-end value of the solution is delivered. For that, each individual service must be scalable, available and performing at its best, and you have to deal with network latency, fault tolerance and serialization overhead - and, of course, the fan-out of requests leads to increased network traffic.
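To make the fault-tolerance point concrete, here is a minimal sketch, assuming Python with the widely used requests library and a hypothetical inventory service URL; a real system would likely reach for a circuit-breaker library, but the idea is the same: every remote call gets a timeout and a bounded retry, so one slow dependency cannot stall the whole request fan-out.

import time
import requests

def call_with_retry(url, retries=3, timeout=2.0, backoff=0.5):
    """Call a downstream service with an explicit timeout and bounded retries."""
    last_error = None
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as err:
            last_error = err
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts
    raise RuntimeError("service unavailable after %d attempts" % retries) from last_error

# Hypothetical usage from an order service:
# items = call_with_retry("http://inventory:8080/api/items")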

 

Service Discovery, Visualization

The most important issue with this model is that when you are building a complex solution, you are soon sitting on a pile of microservices and the complexity of handling their integration. Service discovery and visualization become important elements to include in such scenarios.
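As a toy illustration of service discovery (plain Python with invented service names and addresses; in practice you would use something like Consul, Eureka or Kubernetes DNS), a registry lets callers look up a live instance at call time instead of hard-coding addresses:

import random

class ServiceRegistry:
    """Toy registry: services register their instances at startup and
    callers look up an address at call time."""

    def __init__(self):
        self._services = {}  # service name -> list of "host:port" strings

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def lookup(self, name):
        instances = self._services.get(name)
        if not instances:
            raise LookupError("no live instance of '%s' registered" % name)
        return random.choice(instances)  # naive client-side load balancing

registry = ServiceRegistry()
registry.register("inventory", "10.0.0.5:8080")  # hypothetical instances
registry.register("inventory", "10.0.0.6:8080")
print(registry.lookup("inventory"))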

 

Operational Efficiency

Any cloud or SaaS service is economical for both the consumer and the provider only if you can operate it optimally. Microservices involve a lot of infrastructure and tooling and significant operational overhead, and each team then again builds its own DevOps automation, tools and so on. So there has to be some governance across teams to bring efficiency to tools and operations. Essentially, ensure repeatability and reliable automation: everything must be defined in code, testable and repeatable.

 

Security

With many microservices, the attack surface increases, so it is important to consider security holistically rather than only at the level of an individual microservice. One key consideration is ensuring that only authenticated users are granted access to protected resources and that the identity context is cleanly propagated across microservices. Secure engineering should take care of closing loopholes for man-in-the-middle attacks. Collecting the logs from all microservices in one place is a prerequisite for security intelligence.
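One common pattern for keeping the identity context intact is to forward the caller's token with every downstream call. A minimal sketch, again in Python with the requests library; the billing URL and bearer-token scheme are illustrative assumptions, not any specific product's API:

import requests

def call_downstream(url, incoming_headers):
    """Forward the caller's bearer token so the downstream microservice can
    authorize the request against the same identity context."""
    token = incoming_headers.get("Authorization")
    if not token:
        raise PermissionError("no identity context to propagate")
    return requests.get(url, headers={"Authorization": token}, timeout=2.0)

# Hypothetical usage inside a request handler:
# reply = call_downstream("http://billing:8080/api/invoices", request.headers)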

 

Debugging & Testing

 

Ensuring the entire set of microservices works together to fulfil an end-to-end scenario requires that all the services are picked up at the right versions, and that they support backward and forward compatibility.  Service versioning is another important consideration here.
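As a sketch of one common versioning convention (semantic versioning; not prescribed by any particular microservices stack), a caller can at least verify that an advertised service version is compatible with the one it was built against:

def is_compatible(required, advertised):
    """Semantic-versioning check: the same major version signals backward
    compatibility, and the advertised minor version must be at least the
    one the caller was built against."""
    req_major, req_minor = (int(x) for x in required.split(".")[:2])
    adv_major, adv_minor = (int(x) for x in advertised.split(".")[:2])
    return req_major == adv_major and adv_minor >= req_minor

assert is_compatible("2.3", "2.5")      # newer minor version: compatible
assert not is_compatible("2.3", "3.0")  # major version bump: breaking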

 

Communication

Overall, I think the technical challenges can be overcome fairly easily.  The important and difficult thing is to keep the communication across these teams intact. The recommendation is to hold a recurring scrum of scrums to take all the squads forward together.

 

More reading

 

·      Check out the Crafting the Cloud videos on challenges with microservices, in which Kyle and Roland share their insights on the topic: https://www.ibm.com/blogs/bluemix/2016/10/challenges-with-microservices-part1/

·      https://blog.appdynamics.com/product/4-challenges-you-need-to-address-with-microservices-adoption/

·      http://blog.takipi.com/5-ways-to-not-f-up-your-microservices-in-production/

·      https://www.infoq.com/news/2016/04/msa-deployment-challenges

 

John Boyer (IBM)How Do I Install, Configure, and Use IBM Network Performance Insight, V1.2.0

This document covers all the tasks required for installing, configuring and using NPI 1.2.0.

Please download the checklist from here and print it out to track the various tasks required.

 

NPI_1 2 0_installation_configuration_cheat_sheet.docx

 

For any issues, contact our integration experts:

  • Amanda Yap (1yapas@my.ibm.com)
  • Amilia Robini Aruldas (1amilia@my.ibm.com)

For more detailed information, see the IBM Knowledge Center here.

For information on Device dashboard installation, configuration, and usage, see the Netcool Operations Insight documentation on the IBM Knowledge Center here.

ProgrammableWebCoding Dojo Highlights the 9 Most In-Demand Programming Languages of 2017

There are more programming languages these days in production environments than you could learn in a lifetime. So which should you choose to make sure you’ll always be an in-demand developer? Jay Patel at Coding Dojo gives you the low-down on the top 9 programming languages to learn in 2017.

John Boyer (IBM)Interim Fix for Maximo Asset Management 7.5.0.11 Build 003 now available

The Interim Fix for Maximo Asset Management 7.5.0.11 Build 003 is now available.
IF003 (TPAE_75011_IFIX.20170201-1300.psi.zip) is cumulative of all prior Interim Fixes for Maximo Asset Management 7.5.0.11.
Here is the location to download this interim fix:
http://www.ibm.com/support/fixcentral/swg/quickorder?parent=ibm%2FTivoli&product=ibm/Tivoli/IBM+Maximo+Asset+Management&release=All&platform=All&function=fixId&fixids=7.5.0.11-TIV-MBS-IFIX003&includeSupersedes=0&source=fc

John Boyer (IBM)Interim Fix for Maximo Asset Configuration Manager 7.6.4.0 Build 001 now available

The Interim Fix for Maximo Asset Configuration Manager 7.6.4.0 Build 001 is now available.
IF001 (ACM7640_ifixes.20170215-1241.zip) is the first Interim Fix for Maximo Asset Configuration Manager 7.6.4.0.
Here is the location to download this interim fix:
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Tivoli&product=ibm/Tivoli/Maximo+Asset+Configuration+Manager&release=7.6.4.0&platform=All&function=all

 

John Boyer (IBM)Interim Fix for Maximo Asset Configuration Manager 7.6.3.0 Build 003 now available

The Interim Fix for Maximo Asset Configuration Manager 7.6.3.0 Build 003 is now available.
IF003 (ACM7630_ifixes.20170213-1052.zip) is cumulative of all prior Interim Fixes for Maximo Asset Configuration Manager 7.6.3.0.
Here is the location to download this interim fix:
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Tivoli&product=ibm/Tivoli/Maximo+Asset+Configuration+Manager&release=7.6.3.0&platform=All&function=all

John Boyer (IBM)Interim Fix for Maximo Asset Configuration Manager 7.5.1.1 Build 029 now available

The Interim Fix for Maximo Asset Configuration Manager 7.5.1.1 Build 029 is now available.
IF029 (ACM7511_ifixes.20170213-1414.zip) is cumulative of all prior Interim Fixes for Maximo Asset Configuration Manager 7.5.1.1.
Here is the location to download this interim fix:
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Tivoli&product=ibm/Tivoli/Maximo+Asset+Configuration+Manager&release=7.5.1.1&platform=All&function=all

John Boyer (IBM)Fixing the Windows instance time synchronization problem - 5 Minutes a Day with OpenStack (153)

 

This is the third post in the OpenStack implementation experience series.

 

Problem description

 

Windows instances deployed as described in the previous post sometimes show an operating system clock that is always 8 hours slow; even after the time and time zone are corrected manually, the clock is off by 8 hours again the next time the instance reboots.

 

Cause

 

KVM handles the system clock differently for Linux and Windows guests, and Windows needs some additional settings.

 

Solution 1

 

Add the os_type property to the Windows image.

glance image-update --property os_type="windows" <IMAGE-ID>
 

This explicitly marks the image as a Windows image. When an instance is deployed from it, KVM sets the corresponding parameters in the instance’s XML definition file to keep the clock synchronized.
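For reference, with os_type set to "windows" the clock section of the instance’s libvirt XML ends up looking something like the following (an illustrative shape - the exact timer settings vary by Nova version - not an exact dump):

<clock offset='localtime'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
</clock>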

 

Solution 2

 

For Windows instances that were deployed earlier, the first method no longer helps; the only option is a slightly unconventional one: hack the database!

 

Suppose the instance to be hacked is named win-test. Use the following MySQL commands:

 

mysql> use nova;

mysql> update instances set os_type='windows' where hostname='win-test';

mysql> select hostname,os_type from instances where hostname='win-test';

+----------+---------+
| hostname | os_type |
+----------+---------+
| win-test | windows |
+----------+---------+


Then restart win-test; KVM will pick up the modified database record and update the XML configuration, keeping the clock synchronized.

 

The next post continues with tips and tricks for working with images.

 

John Boyer (IBM)MDM Post Install Configuration Targets - modify_default_queues

Target modify_default_queues
Modifies queue names in the Java Message Service (JMS) configuration of the application server in which MDM is installed.

 

When to use:
When MDM is configured to use WebSphere MQ (WMQ), certain default queue names are used.  To modify this configuration to use custom queue names, the post-configuration target modify_default_queues can be used.

 

Inputs obtained:
The target reads the properties below from the file <MDM_INSTALL_DIR>/properties/post_config.properties
# The MQ queue name for AsynchronousWorkQueue, the default name is MDM.ASYNCHRONOUS.WORK.
mqAsynchronousWorkQueue =
# The MQ queue name for Customer Completed Work, the default name is CUSTOMER.COMPLETED.WORK
mqCustomerCompletedWork =
# The MQ queue name for Customer Integration, the default name is CUSTOMER.INTEGRATION
mqCustomerIntegration =
# The MQ queue name for Customer Scheduled Work, the default name is CUSTOMER.SCHEDULED.WORK
mqCustomerScheduledWork =
# The MQ queue name for Customer Tail, the default name is CUSTOMER.TAIL
mqCustomerTail =
# The MQ queue name for EMQUEUE, the default name is EMQUEUE
mqEMQUEUE =
# The MQ queue name for MDM Change Broadcast Queue, the default name is MDM.BROADCAST
mqMDMChangeBroadcastQueue =
# The MQ queue name for MDM Messaging Backout Queue, the default name is MDM.MESSAGING.BACKOUT
mqMDMMessagingBackoutQueue =
# The MQ queue name for MDM Messaging Failed Response Queue, the default name is MDM.MESSAGING.FAILED_RESPONSE
mqMDMMessagingFailedResponseQueue =
# The MQ queue name for MDM Messaging Request Queue, the default name is MDM.MESSAGING.REQUEST
mqMDMMessagingRequestQueue =
# The MQ queue name for MDM Messaging Successful Response Queue, the default name is MDM.MESSAGING.SUCCESSFUL_RESPONSE
mqMDMMessagingSuccessfulResponseQueue =
# The MQ queue name for MDS Queue, the default name is MDS.QUEUE
mqMDSQueue =
# The MQ queue name for Flex Queue, the default name is FLEX.QUEUE
mqFlexQueue =

Please fill in the above file before invoking the target.
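For example, to use custom names for the messaging request and successful-response queues, the corresponding entries might look like this (the queue names below are purely hypothetical):

mqMDMMessagingRequestQueue = ACME.MDM.MESSAGING.REQUEST
mqMDMMessagingSuccessfulResponseQueue = ACME.MDM.MESSAGING.RESPONSE.OK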

Properties pertaining to MDM application server are read from <MDM_INSTALL_DIR>/properties/mdm_install.properties file.

 

Invocation:

  • When the operating system is Windows:
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke madconfig modify_default_queues
  • In other operating systems
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke ./madconfig.sh modify_default_queues

 

Tasks performed:

  1. Modifies the queue names in the Java Message Service (JMS) configuration in the application server based on inputs provided in <MDM_INSTALL_DIR>/properties/post_config.properties

 

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig/java

 

Available in:

  • MDM v11.5
  • MDM v11.6

John Boyer (IBM)MDM Post Install Configuration Targets - switch_to_mq

Target switch_to_mq
Uninstalls the configuration related to WebSphere Embedded Messaging (WEM) and configures the application server to use WebSphere MQ queues.

 

When to use:
When MDM has been installed using WebSphere Embedded Messaging (WEM) and has to be configured to use WebSphere MQ (WMQ) instead, the target switch_to_mq can be used.

 

Inputs obtained:
The target reads the properties below from the file <MDM_INSTALL_DIR>/properties/post_config.properties
# MQ server host name
messagingHost=<MQ-host>
# MQ server listener port
messagingPort=1414
# MDM user name to access to MQ server
messagingUser=<MQ-user>
# MDM user password to access to MQ server
messagingPassword=<MQ-user-password>
# The MQ Queue Manager name for MDM server
messagingQueueManager=<MDM-QMGR>
# The MQ server connection channel name for MDM server
messagingChannel=<MDM-SVR-CHANNEL>
# The MQ queue transport type for MDM server, CLIENT or BINDING
messagingTransport=CLIENT
# The MQ server home path
messagingHomePath=/usr/mqm

Please fill in the above file before invoking the target.

Properties pertaining to MDM application server are read from <MDM_INSTALL_DIR>/properties/mdm_install.properties file.
Please note that the default queue names are used during configuration.  The default queue names can be found in file <MDM_INSTALL_DIR>/properties/post_config.properties

 

Invocation:

  • When the operating system is Windows:
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke madconfig switch_to_mq
  • In other operating systems
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke ./madconfig.sh switch_to_mq

 

Tasks performed:

  1. Removes WEM related configuration from the application server on which MDM is installed.
  2. Configures the server to use WMQ by obtaining input from <MDM_INSTALL_DIR>/properties/post_config.properties

 

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig/java

 

Available in:

  • MDM v11.5
  • MDM v11.6

 

John Boyer (IBM)A search engine that finds LoP distributions containing the packages you want

 

With the Open Source POWER Availability Tool (OSPAT), you can search for the open source packages available on Linux on Power.

 

 

John Boyer (IBM)Did you know PowerHA has a new GUI?

 

The new UI is apparently called SMUI (system mirror user interface).

※ Makes you wonder how to pronounce it. (smile)

 

 

 

 

ProgrammableWeb5 Potential Use Cases for Serverless Frameworks

Serverless frameworks, despite their name, are not really serverless. Of course, there are servers somewhere handling requests but you don’t have to worry about them. You just post a snippet of code and your hosting service takes care of the rest. But who could benefit from these new frameworks? Serdar Yegulalp over at InfoWorld takes you through five of the best uses of serverless frameworks right now.

John Boyer (IBM)How Cheap Internet Plans Make Your Business Lose Money

Cheap Internet plans might seem like a fantastic idea. Paying less can keep your costs down, but you have to take into account the quality aspect. Your Internet service needs to do what you want it to, otherwise you could find yourself in a situation where you’re unable to carry out your usual activities.

 

This guide is going to help you understand why the cheapest Internet plans can make your business lose money.

 

They’re Not That Fast

 

Cheap, fast Internet doesn’t exist. There’s a reason these plans are cheap, and it’s not because the provider invested in their network. The cheapest Internet plans are only good for surfing and not much else. Your business needs to be able to do more than browse the Internet if you’re going to handle all your day-to-day activities.

 

Fast Internet is essential in the business world of today. Don’t trust an Internet provider if they’re offering you dirt cheap deals. The chances are their connections are slow.

 

They’re Not Reliable

 

Reliability is everything. The vast majority of businesses conduct most of their operations online in this day and age. The last thing you need is for your Internet connection to collapse: once your Internet connection goes down, your company is essentially paralyzed.

 

Imagine if that happened in the middle of a big project or in the middle of a conference call. It could lead to you losing a big client. That could set your business back by many months, and all because you decided to be cheap.

 

Reliability is not something you’re going to get from a cheap Internet plan. They don’t care about you. After all, what did you expect for such a small amount?

 

They’re Not Going to Support You

 

Cheap Internet plans are so cheap because they’re poor quality packages. The company has cut corners almost everywhere, and that includes support. They’ll have a minimal amount of customer support. You certainly won’t be a priority customer because you’re not paying them enough to warrant that.

 

VIP support only comes when you’re dealing with a great Internet provider. The model of a company offering cheap Internet plans is quantity, not quality. They’ll pull as many people through the doors as possible. They’re not interested in your loyalty or complaints.

 

So how does this translate to your business?

 

Imagine if your Internet connection goes down and you can’t solve the problem yourself. You can forget about the company helping you with any great urgency. That means you’re going to be down for longer and you’re going to be steadily losing money as you go off the grid.

 

They Will Limit Your Bandwidth

 

Cheap Internet plans are so cheap because they put extremely strict limits on how much you can download and how much you can upload. Most businesses will be using large amounts of bandwidth just to get through the month. Employees will be constantly uploading and downloading.

 

This will work for a while, but what if that cable Internet service suddenly throttles you? Your upload and download speeds will slow to a crawl. It can take minutes just to download a single Word document - that’s how frustrating it gets. And when time is money, you can forget about getting a project done on time.

 

The best Internet plans will have unlimited bandwidth and there will be no usage limits. You’ll never have to worry about your projects not being completed because of a slow Internet connection.

 

How Do You Choose the Right Internet Connection for You?

 

These are the main reasons why a cheap Internet plan can cost your business a lot of money. So how do you go about getting the right one for your business?

 

Look for a reputable provider first. Do your research online and see what people recommend for businesses of your size. Then you can begin narrowing down the companies you’re looking at. You should consider their usage limits, their costs, and whether you can easily scale.

 

The reality is you don’t have to pay a lot of money for great Internet these days. There really is no excuse for not having a great Internet connection for your business.

 

Have you ever heard any business Internet horror stories?

 

John Boyer (IBM)Why We Know TV Streaming Services Are Here to Stay with Big Data

Cable used to be popular and on everyone’s TVs. Walking into someone’s house, you would see them watching TV shows or movies on cable. However, with technology rapidly advancing, these days everyone is cutting the cord on cable and streaming their TV shows and movies over the Internet.

 

We know from big data that TV streaming services are here to stay, because everyone is enjoying the freedom to watch what they want, whenever they want. Streaming services like Netflix and Hulu give you the freedom to watch anything you want on your own time.

 

Cut the Cord on Cable

 

With streaming services, you won’t need cable anymore. Cutting the cord on cable is something you can do when having these streaming services because you can watch Netflix, Hulu, Amazon Instant Video, or your choice of any streaming service. You can stream TV shows and movies on your TV through a streaming device including Roku Player, your PC, or gaming consoles.

 

Comparing Streaming Services

 

When it comes to Internet TV options, there are many to choose from, and picking the right one for your needs can be a little difficult. Hulu and Netflix charge basically the same monthly fee to start. You can get more streaming for your money, though, if you pay a little more. Depending on your needs, you can also stream on more than one TV, so everyone in the house can watch whatever they want, whenever they want, and everyone will be happy.

 

Will Streaming Services stay with us?

 

With their low prices, these streaming services will surely stay with us as the future unfolds. Everyone wants to save money, and given what these services provide, we will surely keep using them. With the ability to watch what we want, when we want, and never miss another episode of our favorite TV shows, why wouldn’t Netflix, Hulu, Amazon Instant Video, and the rest stay with us for the long haul?

 

Netflix versus Hulu

 

When deciding which streaming service to try, looking at the differences between Netflix and Hulu is a good start. Although more streaming services are available, these two are the most popular of the many on the market.

 

Netflix starts at $7.99 per month, which gives you the ability to watch on one TV, your PC, or your gaming consoles. If you go up to $8.99 per month, you get access to stream on two TVs. Going up a little more, to $11.99 per month, gets you unlimited TV shows and movies on up to four TVs, plus your PC and gaming consoles. New shows and movies are released as soon as the new season becomes available. Netflix also still sends out DVDs to those who want to watch newly released movies; this option is also priced at $11.99. However, the future of Netflix will mostly be streaming your entertainment.

 

Hulu starts at the same price as Netflix, $7.99 per month. You can also watch it for free on your PC until you’re ready to start watching on your TV for $7.99 per month. Hulu has one advantage over Netflix: it doesn’t make you wait a whole year for new movies or your favorite TV shows, because it releases them right away. Once a TV episode becomes available, Hulu releases it on the service.

 

Conclusion

 

Having so many streaming services to choose from is both a good thing and a bad thing. It is good because everyone who wants to try something like this has many solid options; it is bad because choosing among them can be hard. That’s not too bad, though, because once you see the prices and what each service provides, you will be able to choose the one that fits your family’s needs.
