John Boyer (IBM): Chatbot, the natural language challenge

Ricardo Kubo

A few days ago, my eight-year-old daughter asked me what I do at work; I had already tried, without success, to explain that I create technology solutions. So I decided to say that I teach little computer robots to talk to people, and she loved it. I then opened a chatbot prototype and showed it to her, and she immediately asked to try the question "what is Kubo?" When she asked, the chatbot answered who I was, and she was frustrated, because she wanted to know the origin of our surname.

Last year, I decided to dig deeper into cognitive computing, and a natural step was to build my own chatbot. I ended up using the Messenger interface and integrated it with cognitive service APIs for conversation, voice and image recognition, tone analysis, and more. What is interesting is noticing how our minds are geared toward the happy path: a restricted set of happy-path dialogs seems good enough to get started. Once you go to production, reality hits: even a simple interaction like a greeting can vary a lot from person to person. Roughly one in seven users got through a sequence of up to four perfect interactions; the rest disconnected at the second unanswered response. The great thing here is learning quickly from the paths you had not thought of and feeding them back into the bot.

I started applying some basic techniques, for example offering options and being clear about what the chatbot does. The rate of users who completed the dialogs rose to one in four. I believed more content was missing, so I invested more time going deep on a few topics. I then noticed that, because it was an experiment, few people stayed beyond four interactions, since they had nothing concrete to look up or solve. I did get some motivating feedback, and that is when I invested in giving the chatbot more capabilities: from sending me an SMS if someone turned aggressive in the chat, to pulling a company's news from the last 24 hours. It was interesting to discover how much we can explore and extend this capability to make our day-to-day different.

I remember that back in my startup days, building websites, if I had had this technology available I could have automated some support interactions on the site, since we had few resources. I could have created new interaction journeys with my customers that could generate mindshare and win them over. General Motors (GM) implemented cognitive intelligence for voice interaction in its online customer service last year. It is no accident that the chatbot is one of the trends in new interfaces; who doesn't remember the science fiction movies? The foundation is natural language understanding, which is inherently complex when it comes to relating expression to intention.

The challenge will always be the same: balancing the user's flexibility of interaction against the scenarios of mapped intents. The broader the knowledge covered, the greater the complexity of curating the content to keep it up to date. The more comprehensive the coverage within a single subject, the greater the challenge of ambiguity.

Natural language as an interface is fascinating and brings a new paradigm for developing system logic. You leave the exact world of "IF" and "ELSE" for scenarios of hypotheses, which brings the possibility of being wrong. Moreover, as the knowledge base evolves, the confidence scores of the answers can vary over time as well.
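The shift from exact IF/ELSE logic to hypothesis-based scenarios can be sketched in a few lines. This is a hypothetical illustration, not the API of any specific conversational service: the intent names, scores and threshold below are invented.

```python
def pick_intent(hypotheses, threshold=0.7):
    """Return the top-scoring intent if we trust it, else ask for clarification.

    `hypotheses` is a list of (intent, confidence) pairs, the kind of ranked
    output a conversational NLU service returns for one user utterance.
    """
    intent, confidence = max(hypotheses, key=lambda h: h[1])
    if confidence >= threshold:
        return intent
    # Below the threshold we no longer "know" the answer: asking the user is
    # safer than guessing -- the possibility of error is part of the paradigm.
    return "ask_for_clarification"

# Hypothetical classifier output for a single message
hypotheses = [("greeting", 0.55), ("company_news", 0.30), ("small_talk", 0.15)]
print(pick_intent(hypotheses))  # prints ask_for_clarification
```

The threshold is the knob that trades precision for coverage: raise it and the bot asks more clarifying questions; lower it and it answers more often but is wrong more often.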

We have to plan the content structure and its maintenance, and above all avoid ambiguity as much as possible. The engineering behind a natural language interface demands knowledge that is not common among developers by training; I even joke that anthropology and library science could bring complementary perspectives worth considering in the curriculum. What should the curriculum of a Cognitive Engineer look like?

Still, none of this is enough without the perspective of the innocent eight-year-old girl who had a simple question and got an answer that did not serve her. Indeed, the design must be centered on the individual who will interact with the chatbot, since distinct profiles may call for different, more effective ways of serving them. By putting ourselves in that individual's place, the technology becomes more humanized, and with every step we evolve we get closer to the science fiction movies. Isn't it fun to be part of this future? #ficaAdica

To learn more


Ricardo Kubo is a Cloud & Cognitive Technical Leader, holds a degree in Computer Engineering from UNICAMP and has been a member of the TLC-BR since 2016. The Mini Paper Series is a biweekly TLC-BR publication; to subscribe and receive future editions electronically, send an e-mail to tlcbr@br.ibm.com.


Download the PDF version of this mini paper by clicking here.

How to install and set up an SAP SMDA instance on z/OS and connect it to your SAP Solution Manager

The documentation "SAP Solution Manager Diagnostics Agent on z/OS" describes the steps needed to install and set up an SAP Solution Manager Diagnostics Agent (SMDA) instance on z/OS and connect it to your SAP Solution Manager.  It covers the installation and configuration of the SAP Host Agent and the SAP Solution Manager Diagnostics Agent on z/OS systems.  

A sample installation with screenshots also provides further information about z/OS-specific prerequisites, as well as a setup check to test the monitoring interfaces and components.

Find the document:  link

z/VM 6.2 end of service this month

Many z/VSE users run their systems in z/VM guests.

 

Therefore I want to remind you that z/VM 6.2 reaches end of service on June 30, 2017.
Please migrate to z/VM 6.4 to stay in a supported environment.

 

You can find the end of service dates for z/VM releases here.


There you will also see that end of service for z/VM 5.4 and z/VM 6.3 is planned for December 31, 2017.
So it is time to plan a z/VM 6.4 migration for those systems too.

Connect 2017 sessions you may have missed: Verse

 

The IBM Connect 2017 conference brought together developers, IT professionals and business leaders to discuss the latest software advances, highlight innovations in workplace technologies and share best practices.

 

Many of the Connect 2017 presentations are available online and cover topics relevant to this community.

 

Over the next weeks I will be taking a look at a selection of these presentations, starting this week with IBM Verse.

 

  IBM Verse Deep Dive and Roadmap
IBM has revolutionized the enterprise inbox from the ground up for today's social and mobile workplace. This session covers the latest updates and future plans for IBM Verse, including advances in offline, cognitive, calendaring and connecting to third-party applications. Learn how your organization can position itself for driving massive improvements in end-user satisfaction and engagement with IBM's amazingly powerful and easy-to-use inbox!
     
  IBM Verse - Everything You Need to Know for a Successful Migration

If you are considering a move to IBM Verse, learn about the techniques, tools and best practices to make your migration fast, painless and successful. This presentation shares the best practices you can only get from countless global migrations to IBM's Verse cloud, and covers  planning, migrating, and deployment best practices. Learn about the right tools for your migration, and uncover some of the low hanging fruit to make your transition successful.

 

 

DB2 Tips n Tricks Part 118 - DB2INIDB command options w.r.t. [Non-]Recoverable Databases

Hi All,
DB2INIDB command options w.r.t [Non] Recoverable Databases.

DB2INIDB
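For context, db2inidb is the DB2 command used to initialize a split-mirror copy of a database. Its general syntax is sketched below; the database alias is a placeholder, and the notes summarize the intent of each option rather than the video's exact wording:

```
db2inidb <database_alias> as snapshot
    -- clone usable for reporting or testing; starts its own log chain
db2inidb <database_alias> as standby
    -- leaves the copy in rollforward-pending state so logs can still be applied
db2inidb <database_alias> as mirror
    -- uses the copy to replace the original database, to be rolled forward
```

Because the standby and mirror options depend on rolling forward through logs, they are meaningful for recoverable (archive-logging) databases, which is presumably the recoverable-versus-nonrecoverable distinction the video explores.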

 

Video: https://www.youtube.com/watch?v=xU2j9Sfi83g

Thanks,
Happy Learning & Sharing
http://youtube.com/DB2LUWACADEMY
http://db2luwacademy.blogspot.in

Curved monitors: pros and cons

Curved monitors were launched in 2013. Although they offer a number of benefits, their high price tags have dissuaded lots of would-be buyers.

Two years later, prices have begun to fall and many big manufacturers have thrown their substantial weight behind the new(ish) technology.

So now I just want to have a quick look at the pros and cons of curved monitors.

Pros

Immersion - Curved screens can make you feel more immersed in what you’re watching. This is especially effective for gamers: anyone sitting in the curved screen’s sweet spot gets a real sense of immersion in the game.

Depth – I would say this type of monitor produces something like a 3D picture without the headaches of actual 3D viewing. At the same time, it isn’t actually 3D; the curvature of the screen enhances your perception of depth. If you compare it with a flat screen, you will definitely see the difference.

Aesthetics – No doubt, curved monitors look much cooler than the regular types we are used to. In the same way that LCDs were an advancement over CRTs, curved screens just look straight-up slick.

Cons

Distortion – Some people do perceive geometric distortions on a curved screen. At the same time, many people simply don’t notice such an effect from standard viewing angles. It’s only when you’re viewing from off-centre that the issue comes into play.

Cost – This one is pretty obvious: as a newer technology, curved monitors cost a lot.

Wall Mounting – The problem is that curved screens sometimes look a little weird hanging on a wall. After the striking look of a curved monitor on a desk or stand, a wall-mounted one just looks a bit out of place.

 

Connecting ITCAM SQL Server agent when SSLv3 and TLS1.0 are disabled

You have surely noticed that the number of infrastructures where SSLv3 and older versions of TLS are disabled grows constantly, and you are often asked to deal with software that stops working or needs to be reconfigured somehow to cope with this change.

When SSLv3 and TLSv1.0 are disabled for Microsoft SQL Server, the ITCAM SQL Server agent can have problems connecting to it.
In the collector logs you can find error messages like:


CNTOSVRE (2017-05-12 09:18:32) (15632)Failed to connect to SQL Server:  SQLSRVA03\SQLDBU00 User:ITMONIT.  
DBCONCTT (2017-05-12 09:18:32) KOQSQLD(2812) (15632)could not connect to SQL serve::1  
MSS0510W (2017-05-12 09:18:32) KOQSQLD(12261) (15632)Could not connect to MS SQL Server
CLNDBX0T (2017-05-12 09:18:32) KOQUTIL(2583) (15632)Stopped cleaning of dbx structure, dbx pointer is NULL  
MSU0500I (2017-05-12 09:18:32) (14688)SQLSTATE: 08001, Native error: 18, Message: [Microsoft][ODBC SQL Server Driver][DBMSLPCN]SSL Security error  
MSU0500I (2017-05-12 09:18:32) (14688)SQLSTATE: 01000, Native error:  772, Message: [Microsoft][ODBC SQL Server Driver][DBMSLPCN]  ConnectionOpen (SECDoClientHandshake()).

 

The problem occurs because of the drivers the agent uses to connect to SQL Server.
With the default drivers, the same failure occurs if you try to connect to the server manually, while with more recent drivers, such as "ODBC Driver 11 for SQL Server", you can connect successfully.

So, in order to get the ITCAM agent to connect correctly to the server without using SSLv3 and TLSv1.0, we need to force it to use the newer drivers instead of the default ones.

The driver used by the agent can be changed through an environment variable, KOQ_ODBC_DRIVER.
You can perform the steps below to fix the problem:
   
1. Open the MTEMS window.
2. Right-click 'Monitoring Agent for Microsoft SQL Server' and click Stop.
3. Right-click 'Monitoring Agent for Microsoft SQL Server' -> Advanced -> Edit Variables.
4. Click the 'Add' button in the newly opened window.
5. Enter 'KOQ_ODBC_DRIVER' as the variable name.
6. Enter 'SQL Server Native Client 11.0' as the value.
7. Click OK.
8. Start the agent.
9. Check whether the agent can connect to SQL Server by looking at TEP workspaces and/or collector logs.
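To illustrate what the variable changes, here is roughly how the ODBC DRIVER keyword differs between the two cases. These connection strings are hypothetical examples (the server name is taken from the log above, and authentication details are omitted), not the agent's literal internal strings:

```
Default driver (handshake limited to SSLv3/TLS1.0, so the connection fails):
    DRIVER={SQL Server};SERVER=SQLSRVA03\SQLDBU00

Newer driver selected via KOQ_ODBC_DRIVER (supports TLS 1.1/1.2):
    DRIVER={SQL Server Native Client 11.0};SERVER=SQLSRVA03\SQLDBU00
```

The newer client library negotiates the more recent TLS versions, which is why the SECDoClientHandshake() error disappears.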

Hope it helps.

Best Regards

 

 

 


Cannot send email to some recipients, the error message is 5.1.3 invalid character ('%') in username

(Q). Domino automatically adds %domain@xyz.com to the end of the email address. How can I fix this problem? Please advise.

(A). Workaround:
Recommend the AP use a valid Internet address, such as "公司" <virtual@xyz.com>, in the "mail from" field when talking to the Domino SMTP server.

Does the Future Lie with Embedded BI?

Business Intelligence software has become an almost universal part of any data-driven organization. Today, companies large and small are realizing, more than ever, that "data is power", and that harnessing this power requires the right tools for the job.

 

But BI may soon take another course: rather than companies simply purchasing dashboard reporting software for internal use, we will see a surge in companies looking to incorporate advanced analytics and reporting into their own products. Welcome to the world of embedded analytics.

 

What Embedded BI Is All About

 

Essentially, embedded analytics (or embedded BI) means adding features typically associated with BI software, such as dashboard reporting, data visualization and analytics tools, to existing applications. This can generally be accomplished in two ways:

 

In-house development - i.e., the application vendor builds its own analytics platform and incorporates it into its existing product

 

Buying and embedding out-of-the-box software - i.e., turning to an external developer and incorporating its analytics solution into the application.

 

While both of these approaches are viable ways of adding a BI platform to existing software, it is widely acknowledged these days that most organizations simply do not have the specialized skills and resources required to develop a truly robust analytics tool.

 

Business Intelligence is a complicated field that requires specialized knowledge, and if the software vendor does not already have the required manpower and infrastructure, acquiring them can be extremely time- and resource-consuming.

 

Consequently, any vendor that wants to provide an analytics feature that delivers real value to its customers, and that does not have endless time and infinitely deep pockets, should probably be looking at embedded analytics solutions.

 

So what advantages does embedded BI offer to software vendors? Why is it poised to be the next big thing in business intelligence? Because organizations are beginning to understand the power of data and the added value that can be derived from it, not only for themselves but for their customers as well.

 

Gathering data has become easier than ever. Any software application that processes large amounts of data (marketing automation applications are a typical example, but others can be found across a wide range of industries) can also record this data and store it in a fairly structured form, in an automated way and without the need for any kind of human intervention.

 

And if you are already gathering this data, why let it go to waste? Giving your customers access to it would make your own product considerably more valuable. However, raw data is usually not fully useful without the means to "crunch" (i.e., clean, analyze and visualize) it. This is typically done with BI software.

 

Still, since plenty of reporting tools exist, including many that can integrate with existing platforms or CSV exports, the question remains: what is the unique benefit of embedding the analytics platform inside the very application that gathers the data?

 

The answer is straightforward: to keep things simple. Users do not want to switch between platforms and become accustomed to an entirely new UI and structure. If they are dealing with data that is generated or lives inside a particular application, it is much easier for them to continue working with that data inside the same application, rather than being forced to buy, install and get familiar with an additional tool. This also shortens the time between the data being generated and its analysis, which makes for more effective analytics.

 

Consequently, embedded BI solutions such as Izenda provide a much cleaner and friendlier user experience, and therein lies their major advantage over arrangements that require two separate platforms.

 

We will venture to guess that the increased adoption of embedded analytics in recent years is not a passing trend; in fact, it seems likely we will be seeing far more integrated BI solutions in the near future.

 

This prediction rests on the fact that data, and data analytics, are becoming a commodity, no longer seen as a luxury item for larger and wealthier organizations, but as a must for any kind of data-driven business, and even for consumers and individuals who would like to make better-informed decisions in their everyday lives. Moreover, with the Internet of Things and the growing prevalence of mobile and wearable devices, the amount of data anyone can be exposed to at any moment grows substantially.

 

For instance, we could easily imagine a world in which consumers have a mobile app that analyzes recent changes in product prices, enabling the shopper to decide whether to buy a particular item on the spot (e.g., while standing at the supermarket counter) or wait for a better time. Navigation apps already analyze traffic data to find the shortest routes, but giving the user direct access to this data might also let them learn which route would be the most fuel-efficient or the safest. Wearable devices could track a person's heart rate, speed and other factors during a workout, and data analysis could later determine which exercises were the most effective.

Really, the possibilities are nearly endless. Once consumers realize how much they can improve their own lives using data, they will start demanding that manufacturers give them tools that let them analyze the data themselves.

 

In a similar manner, we expect to see an increasing number of products shipping with an embedded analytics feature, presenting a new opportunity for software and application developers and BI vendors alike.

Everything you want to know about Low Code Development

What is a low code platform?

A low code platform is a software tool that lets anyone with minimal IT skills build applications visually, without the need for complex coding. Everything from building the application to deployment, testing and maintenance is taken care of on a single platform. Forrester Research defines a low-code development platform as follows: platforms that enable rapid application delivery with a minimum of hand-coding, and quick setup and deployment, for systems of engagement.

 

How are enterprises meeting the growing demand for applications?

 

When the analyst firm Gartner conducted its study, it found that in-house developers and IT experts are not always able to meet the business's growing demand for applications. The demand for mobile apps and applications is far greater than what traditional methods can deliver. That is where low code development, which is gaining traction these days, comes in. Low code platforms help build applications fast, with great ease and at lower cost. With growing demand and digital transformation, low code platforms are much in vogue. A low-code platform uses simple visual design to replace traditional coding with drag-and-drop functions in a UI. The application is developed using visual tools instead of writing complex code. This in turn has led to the growth of citizen developers: people with limited coding and IT skills who can easily build applications using low code platforms. Low code platforms thus pave the way to digital transformation.

 

Low code platforms and the concept of low code application development are gaining popularity for many good reasons: they are easy to use and backed by enterprise-grade, robust security features. They follow a rapid application development methodology and are easy to prototype. Citizen developers can build custom applications fast and at much lower cost than with traditional application development. An application built on low code can be customised, enhanced and changed to suit business requirements as and when required, making it very business- and customer-friendly. So why create multiple apps when one single application can be customised and reused in multiple ways?

 

The time has come when Gartner's Bimodal IT is the rule of thumb. Enterprises are focusing on business strategies and challenges, and looking at how IT can meet these ever-changing demands quickly and at low cost, while remaining agile.

Business leaders are facing many challenges today

The leaders and decision makers who strive hard to run the business, such as CEOs and CIOs, are finding it hard these days to meet digital transformation and innovation goals. There are hindrances like:

●Evolving customer needs

●Finding the right talent and workforce

●Meeting customer expectations with legacy applications

●Educating employees to embrace the new digital disruption

●Tackling Shadow IT challenges

●Embracing modern application development methods like RAD, Low code development

 

Features of low code development that particularly help meet these challenges are:

 

Develop Visually

 

Low code platforms use simple drag-and-drop functionality, with out-of-the-box themes and templates that can be used as required. Citizen developers can use this easily to build custom apps and meet business requirements.

 

Simplified Integration

 

Data from disparate sources can easily be integrated to create applications on a low code platform. This integration, when done using low code platforms, is expected to provide a visual approach for developers, who can bind data from sources like APIs, external cloud sources, etc. Developers can also design data models and configure business logic in a low code application.

 

Instantly Deployed

 

Low code applications greatly reduce coding effort, which in turn makes application delivery fast. Applications can be deployed instantly with no DevOps. Security, governance, version control and release management can be seamlessly handled on a low code platform.

 

Conclusion

 

CIOs are now looking forward to investing in low code development, as they see in it a positive trend of business-IT alignment. Using low code platforms, business leaders and IT developers can work together, keeping business priorities and IT guidelines in mind. Low code development also helps clear the IT backlog and democratise application development. For building business-ready applications, low code development is the next big thing.

 

Realtime Monitoring and History in Data Server Manager

To understand the difference between historical and realtime monitoring you need to understand the difference between In-Memory Monitoring and Repository Persistence. IBM Data Server Manager stores information in two ways. 
 
In-Memory Monitoring is available with Data Server Manager Base Edition and uses the memory of the DSM server to store relatively short term information about your database history. Repository Persistence is available with Data Server Manager Enterprise and allows you to collect and manage a large amount of detailed information about your databases.
 
In-Memory
You can choose how much data to store in memory by changing your monitoring profile. 

image

In this example, Data Server Manager will collect monitoring information every two minutes and keep the data in memory for one hour. After that time the data is forgotten. If the Data Server Manager service stops running, anything in in-memory monitoring will also be forgotten. When you set the volume of data to collect in the Monitoring Profile, there is an estimate of how much memory is required. In this example it is 3.66 MB of data for each database included in this profile.
 
Repository Persistence
If you want to keep information longer, you need to set up a DB2 database to store the history. You can specify which database to use through Settings->Product Setup.
 
The Repository Persistence options in your monitoring profile are different than the In-Memory Monitoring options and allow you to store data for much longer. 
image

 

In this example we only collect information every 15 minutes but keep it for 31 weeks. We allow Data Server Manager to aggregate data and remove some details after about a week. All these are adjustable but require more storage. 
 
Data Server Manager understands what is "normal"
The Data Server Manager Home page uses In-Memory monitoring to display the current value of key performance indicators (KPIs) for all your DB2 for Linux, UNIX and Windows and dashDB databases. Data Server Manager also calculates what a normal value is for key performance indicators during each four hour window in a week. Normal is within one standard deviation. (About 68% of the values collected fall in this range.) Data Server Manager also saves the historical minimum and maximum values for your KPIs. The most accurate calculation of normal values is available when you have enabled the history repository and have collected a couple of weeks of history. If you only have in-memory data, Data Server Manager will calculate normal based on only the data available in memory. So don't be surprised if you see a lot of out of normal indicators until Data Server Manager has collected enough history. (The Normal Range Bar shown below is new for Data Server Manager 2.1.4.)

image
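The "normal range" statistic described above (mean plus or minus one standard deviation, covering roughly 68% of normally distributed samples) can be sketched in a few lines. This is an illustration of the statistic only, not Data Server Manager's actual implementation, and the CPU samples are invented:

```python
import statistics

def normal_range(samples):
    """Return the (low, high) band one standard deviation around the mean.

    About 68% of normally distributed samples fall inside this band,
    which is how the article describes DSM's "normal" KPI range.
    """
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)  # population std dev over the window
    return mean - stdev, mean + stdev

# Hypothetical CPU-usage samples (%) collected over one monitoring window
cpu = [40, 42, 45, 41, 44, 43, 60, 39]
low, high = normal_range(cpu)
print(round(low, 2), round(high, 2))
```

Values outside the band (such as the 60% spike in the sample data) are the "out of normal" indicators the page flags; the more history available, the more stable the band becomes.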

 
Detailed Monitoring
Both real-time and repository monitoring data are used in the Data Server Manager Monitor page. Select Monitor->Database.

Detailed Monitoring

 

 

 

 

 

 

Realtime
image
Realtime monitoring displays a picture of the current database status. The overview page shows a view of the last hour of time spent and key performance indicators like CPU usage and transaction rates. Everything is either an immediate view of the current state of the database or a view of no more than the last hour. The normal range for the current 4 hour window is shown as a light blue background.
 
History
image
To see data past one hour, select History from the Monitor page. History can provide a much longer look back into the past. It uses both the in-memory monitoring data and the data stored in the repository database. In the example below we are looking at the last three hours of history. The first hour contains samples every few minutes, and you can see the additional spikes from the high variability in this database. After the first hour, data is only collected four times an hour. So while fewer spikes appear in the graph, that is likely only because of the less frequent monitoring.
image
 
Normal in History
In history mode, the light blue background shows the normal range (one standard deviation from the average) of the period being displayed. In this example about three hours. The longer the period being displayed, the more data is used to calculate the normal range. 
 
DSM Base vs Enterprise
If you have Data Server Manager Base Edition you can see historical data, but only what is available through In-Memory monitoring. Use of the repository database is only available with Data Server Manager Enterprise, included with DB2 Advanced Editions and for use with dashDB.

Tourist car rental in Vietnam

Our city tourist car rental service ranges from 4-seat, 7-seat and 16-seat vehicles up to 45-seat coaches, offering affordable premium car rental for tours in Ho Chi Minh City.

As businesses grow and develop, the demand for vehicles for work keeps increasing. Recognizing this, the company has expanded its investment in luxury cars to meet customer needs.

We are a professional unit well known for luxury tourist car rental in Ho Chi Minh City (TPHCM), cheap monthly car rental, wedding car rental and luxury car rental services, and an important transfer partner for five-star hotels in Ho Chi Minh City such as New World, Lotte Legend, Sofitel, Pullman, Caravelle, Park Hyatt, InterContinental, ...

In addition to continually adding new models, we carry high-quality cars from leading manufacturers such as the Mercedes S550, Mercedes S500, 2016 Mercedes E250, Audi A4, Audi Q5, Lexus LS 460L, BMW X6 ...
You will surely be satisfied when using our services.

 

For more information and advice, please call:

Luxury car rental consulting, TPHCM: +84918176999

Tourist car rental consulting, TPHCM: +84913119595

Monthly car rental consulting, TPHCM: +84835114471 - +84835114477

Our company provides professional, honest and high-quality car rental for city tours, serving cheap car rental needs in TPHCM for travel, business trips and excursions in Ho Chi Minh City and neighboring provinces. Our team of English-speaking drivers is well trained, and our high-quality fleet serves hotels and foreign tourists. We always aim to satisfy customers when it comes to car rental services in TPHCM.

With complete quality standards and affordable prices, our fleet includes high-quality and luxury vehicles such as the following: thuê xe du lịch tphcm

    4 seats: Mercedes S500, S400, E250, E200, C200, Lexus IS 250C, Audi A4, Audi A6, BMW 320i, 528i, Toyota Camry 2.4, Camry 2.5Q, Chevrolet Cruze, Mazda 6 ...
    7 seats: Lexus GX 470, Audi Q7, Toyota Sienna, Innova, Fortuner, Everest, Captiva ...
    9 seats: Ford Transit limousine DCAR
    16 seats: Mercedes Sprinter, Ford Transit, Hiace ...
    29 seats: Hyundai County, Samco Isuzu ...
    45 seats: Aero Space, Universe ...

All imported vehicles are high-quality, late models from 2012 to 2017, with reclining seats, quiet air conditioning and shock-absorption systems, making them the preferred rental option for many customers.
Through years of service, our car rental company has become one of the tourist car rental companies most appreciated by customers, holding a stable place in customers' minds when they need to rent high-quality vehicles for travel in TPHCM.

Besides service quality, price is one of the important factors when you rent a car. We ensure that our TPHCM passenger car rental prices are always the most competitive on the market. Our professional staff is always ready to advise customers on the appropriate vehicle to save costs while guaranteeing a comfortable, high-quality journey.

Our TPHCM tourist car rental service meets the following standards:
1. English-speaking drivers, hotel-style service
2. Free Wi-Fi
3. Equipped with an iPad
4. Wet towels
5. Mineral water
6. Music on demand
7. Multi-device charging
8. Magazines
9. High passenger insurance coverage.

Guests can choose a 16-seat vehicle below or contact our hotline for on-demand rental advice: +84913119595 - +84918176999.
Come experience our service; we would be honored to accompany you.

An invitee received an error when opening a meeting invitation

(Q).

An invitee received an error as below when opening a meeting invitation.

Field: 'tmpHideTimeZone': Incorrect data type for operator or @Function: Time/Date expected

(A).

By checking, I found an SPR that reports the same error message. However, I need to examine the problematic meeting invitation document in order to confirm it. Could you please send me the problematic meeting invitation for review? Please put it into a blank, unencrypted database. Please also advise the steps to reproduce, with screenshots, as well as the exact Notes version.

In addition, it looks like a mail template design error, so I also suggest the following actions:
1. If you are using the IBM default mail template, use it to replace this user's mail database design and test again.
2. If you are using a customized mail template, use that customized template to replace the user's mail database design and test again.

Top 5 Tips For Marketing Online On A Budget


 

No one can deny that the number of marketing opportunities on the internet is endless, but that doesn’t necessarily mean that online marketing is easy. On the contrary, establishing yourself as a brand and promoting yourself will be a little more complex than you may have anticipated.

 

Nonetheless, that doesn’t mean that online marketing has to be difficult…or expensive. It’s perfectly possible to effectively promote yourself and your business online without spending a fortune. All you have to do is know what to do and be persistent.

 

Here are the top five tips for marketing online on a budget:

 

1. Set A Budget

 

In order to run an online marketing campaign on a budget, you have to set a budget in the first place, right? When writing a budget plan, be as detailed as possible. Determine how much you are willing or able to spend and then ensure that your expenses never exceed that amount. Check up on your budget monthly to update it.

 

2. Know The Market

 

Who are you marketing towards? Many online marketers who have failed in the past did so for one reason: they didn’t know their target audience. As a result, they wasted literally hundreds if not thousands of dollars in their marketing campaigns. The good news for you is that by figuring out who your target audience and demographics are, you’ll know exactly who to market towards and you can save lots of money.

 

3. Create Infographics

 

Infographics are a superb way to share valuable information in a very visual way. You can utilize an online artwork maker to create anything from infographics to event flyers to business flyers and more. When designing an infographic, focus only on the most relevant and valuable information and experiment with different colors and fonts until you find a style that is easy to follow and visually appealing.

 

4. Utilize SEO

 

Perhaps the cheapest and most effective online marketing tactic of all is simply to utilize the power of SEO (search engine optimization). SEO means making your website more visible and relevant on search engines by incorporating popular keywords into your headlines and contents. You can further improve your website’s SEO through back links and including images in your content.

 

5. Become A Blogger

 

Blogging is an excellent way to get your name out there and expand the visibility of your business because you can include links to your website in your content. Blogging can furthermore create a new source of income for you through ad revenue and affiliate marketing. If you don’t want to write the blog posts yourself, you can easily hire freelancers to do it for you on the cheap for five to fifteen dollars a post.

 

Marketing Online On A Budget

 

These are just five tips you can use for marketing your brand online without spending very much money. Continue to conduct more research to find even more tips and then apply those tips to reap the benefits of them.

 

ProgrammableWebWhen Designing APIs, Good Path Matters

The 1946 biography of Woodrow Wilson includes an interesting quote:

A member of the Cabinet congratulated Wilson on introducing the vogue of short speeches and asked him about the time it took him to prepare his speeches. He said:

“It depends. If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now.”

Similar sayings can be found from other great speakers:

ProgrammableWebDaily API RoundUp: CoinAPI, AerisWeather, ecomdash, TelecomsXChange

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebCA Technologies Adds New Capabilities to Its API Management Portfolio

CA Technologies today announced new capabilities within its API Management portfolio in an attempt to help developers, enterprise architects and digital leaders create and deploy microservices—and manage the APIs that connect and orchestrate microservices to build application architectures. The latest features include:

Amazon Web ServicesDynamoDB Accelerator (DAX) Now Generally Available

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they love how easy it was to add DAX to their applications, and that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use the endpoint to your cluster. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.
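The read-through/write-through behavior described above can be pictured with a minimal sketch. This is plain Python illustrating the caching pattern only, not the DAX SDK; the `backing_store` dict is a stand-in for a DynamoDB table, and the TTL is an assumed parameter:

```python
import time

class ReadThroughWriteThroughCache:
    """Toy illustration of the read-through/write-through pattern DAX uses.
    The backing store stands in for a DynamoDB table; entries expire after a TTL."""

    def __init__(self, backing_store, ttl_seconds=300):
        self.store = backing_store        # stand-in for the DynamoDB table
        self.ttl = ttl_seconds
        self.cache = {}                   # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]               # cache hit: no trip to the table
        value = self.store.get(key)       # cache miss: read through to the table
        if value is not None:
            self.cache[key] = (value, time.time() + self.ttl)
        return value

    def put(self, key, value):
        self.store[key] = value           # write through to the table first
        self.cache[key] = (value, time.time() + self.ttl)

table = {"user#1": {"name": "Ana"}}
dax_like = ReadThroughWriteThroughCache(table)
print(dax_like.get("user#1"))             # miss on first read, cached afterwards
dax_like.put("user#2", {"name": "Ben"})   # written to both table and cache
```

The point of the sketch is why the SDK swap is a drop-in change: the caller issues the same get/put calls either way, and the cache layer decides when the table is actually touched.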

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.
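The cluster-aware round-robin behavior is simple to picture. This is a hypothetical sketch of the node-selection idea, not the DAX SDK's actual code; the node names are made up:

```python
from itertools import cycle

class RoundRobinCluster:
    """Toy sketch of cluster-aware node selection: rotate requests across
    every node so each node's cache resources are used."""

    def __init__(self, nodes):
        self._next = cycle(nodes)

    def next_node(self):
        return next(self._next)

cluster = RoundRobinCluster(["dax-node-a", "dax-node-b", "dax-node-c"])
picks = [cluster.next_node() for _ in range(4)]
# picks == ['dax-node-a', 'dax-node-b', 'dax-node-c', 'dax-node-a']
```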

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.

Jeff;

 

John Boyer (IBM)Release Notes .....Are there any issues

Upgrading

With each new code stream (e.g. 5.2.4.x to 5.2.5 or 5.2.6), IBM provides release notes. Despite this, we often receive PMRs asking if there are any known issues upgrading from one version of IBM Sterling B2B Integrator to another; unfortunately, opening a PMR is not a shortcut to reading the release notes. They contain many important sections, including one called "Known Issues". For example, here are the release notes for 5.2.6:

image

IBM Installation Manager (IIM)

One big change in version 5.2.6 is the introduction of the IIM. The IIM is a graphical interface for installing the product, but it also performs a set of other functions. As the notice indicates, reading the planning and installation sections is important to a successful installation.

 

image

What's new, changed and removed in this release

As you can see, the release notes list several important sections. For example, the "What's new, changed and removed in this release" section shows that AIX 6.1 and the MySQL database are no longer supported. It also highlights three security fixes that have been resolved in this release.

image

 

Disk Space and System Requirements

Next, before upgrading, you should make sure you're aware of the base requirements for the release. For example, periodically a new JDK version is required, or more memory or disk space is necessary for the release to operate optimally.

 

Known Issues

If there are any known issues, they will be listed here. It is rare that support knows of any additional issues that aren't documented here.

 

APARs

This section lists all the fixes that are incorporated into this release.

 

 

Applying Fixpacks

IBM's Fix Central is the location for product fixpacks: https://www-945.ibm.com/support/fixcentral/help?page=start but did you know you can search for fixes by APARs on this site?

 

image

There's even a tab to help you construct search terms:

image

 

Pre-requisites

Fix Central can be configured to include any required software as part of the fixpack.

image

 

John Boyer (IBM)Watson Data Platform: Hybrid Workflow Optimization

This demonstration shows how IBM's Watson Data Platform can be used to optimize a data processing workflow and alleviate a severe bottleneck.

Watson Data Platform components demonstrated include:

IBM Bluemix Lift-cli
IBM dashDB
IBM Bluemix Data Connect
Object Storage
CloudFoundry node.JS
Boilerplate for Node-red

Video Link: http://ibm.biz/BdiCWz

Amazon Web ServicesProtect Web Sites & Services Using Rate-Based Rules for AWS WAF

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.
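The blacklisting behavior just described can be sketched in a few lines. This is an illustrative sliding-window model of the logic, not AWS's implementation; the 5-minute window and per-IP threshold come from the rule definition above, while the class and method names are invented for the example:

```python
import time
from collections import defaultdict, deque

class RateBasedRule:
    """Toy sketch of a rate-based blacklist: an IP is blocked while its
    request count within the trailing window exceeds the threshold."""

    def __init__(self, threshold, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        self.requests = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.requests[ip]
        q.append(now)
        # drop timestamps that have aged out of the 5-minute window
        while q and q[0] <= now - self.window:
            q.popleft()
        # blocked as soon as the rate exceeds the threshold; unblocked
        # automatically once the rate falls back below it
        return len(q) <= self.threshold

rule = RateBasedRule(threshold=3)
verdicts = [rule.allow("198.51.100.7", now=t) for t in range(6)]
# the first three requests pass; later ones in the same window are blocked
```

Note how unblocking needs no explicit step: once the old timestamps age out of the window, the IP's rate is back under the threshold and requests are allowed again.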

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

John Boyer (IBM)Transition to SCRT Java version

Last October I had a blog entry about today's subject, see here.

 

SCRT (Sub-Capacity Reporting Tool) is available as classic and Java version. If you are using SCRT to report your MSU usage, you are already aware about the two versions.
This blog entry is a reminder that the classic version will be replaced by the Java version in October 2017. That is, starting with the November reporting, you have to use the Java version.
Reports from the classic version will then no longer be accepted.

John Boyer (IBM)About IBM Maximo Linear Management 7.6.0.1

In technote Maximo 7.6.0.8 Feature Pack, the list of products includes IBM Maximo Linear Management 7.6.0.1:

image

Q: I have Linear installed. How do I obtain the fix pack? I can't find it in Fix Central.

A: IBM Maximo Linear Asset Management is part of Maximo core. When you install Maximo Feature Pack 7.6.0.8, it updates Linear to 7.6.0.1 Build 20170512-0100 DB Build V7601-06. No additional updates are necessary.

 

Q: I have Maximo 7.6.0.8 installed, and need to add IBM Maximo Linear Asset Management. Where can I get the license for Linear 7.6.0.1? Passport Advantage has 7.6.0, but not 7.6.0.1.

A: Linear is part of core. If you need to add Linear to Maximo 7.6.0.8, go to the download note Maximo Linear Asset Manager 7.6.0 for more information. As with Scheduler and Calibration, the package enables the already-installed product and does not add any code.

John Boyer (IBM)5 Things to Know about Implementing Decision Governance with ODM

<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/F83LChb8GHM" width="560"></iframe>

Introducing the IBM Redbooks publication Governing Operational Decisions in an Enterprise Scalable Way


IBM® Operational Decision Manager (ODM) is a platform for capturing, automating and governing business decisions and helps you make decisions faster, improve responsiveness, minimize risks and seize opportunities.

Any implementation of ODM needs to consider the implementation of Decision Governance to ensure controlled changes and deployments. Here are 5 things to know about the Decision Governance implementation:

  1. Start talking about Decision Governance early

The implementation of Decision Governance takes time. It requires many discussions about the requirements of the organization and how Decision Governance will meet those needs. You may start discussing the topic at the beginning of your implementation or as you are wrapping up your first implementation but the key is to start talking about it early so that it can become a way of working for all your projects.

image

  2. Consider the Roles and Responsibilities

Implementing Decision Governance will likely require new roles and responsibilities. As the initial discussions take place, roles and responsibilities should be a big topic of conversation. Roles and responsibilities will allow you to consider gaps in the skills of your current resources, required training to fill those gaps and also to identify who will actually fill those roles and responsibilities.

  3. Plan organizational changes

Implementing Decision Governance will also likely require some organizational changes. Organizational changes need to be planned properly to be as successful as possible. If you know it will be likely, reach out to your Human Resources department to get some expert help on how to proceed.

  4. Aim for a Center of Excellence

The Center of Excellence (CoE) is the ultimate goal when implementing a technology such as IBM® Operational Decision Manager and Decision Management. Having a CoE in place means that you have a pool of resources that are knowledgeable about the technology and decision management, training resources and experience within your organization. The CoE will become key in the success of expanding the use of the technology and making your projects a success.

  5. Communication is Key

As for any project, communication is key. It is important to communicate clearly with all people so that they know what changes might be coming and what to expect. Change is always difficult and keeping people informed will reduce the uncertainty that some may feel and hopefully help gain some additional support to move things along.

 

To learn more about how IBM Operational Decision Manager enables you to put Decision Governance in place, and how it enables the business to make agile rule changes in a safe and governed way to give your business a competitive edge, view the IBM Redbooks publication “Governing Operational Decisions in an Enterprise Scalable Way”, SG24-8395-00.

image

 

Eric Charpentier is an Operational Decision Manager Level 2 Support Engineer with IBM Hybrid Cloud in Canada. Before joining the support team, Eric was an ODM Consultant and helped clients implement ODM in projects of all sizes and in many different industries. As part of the support team, he helps clients troubleshoot and resolve issues around ODM. Eric co-authored the first version of this book and has presented on multiple topics including decision governance. Eric has a B. Eng. in Computer Engineering from École Polytechnique de Montréal and an MBA from HEC Montréal.

 

image

 

John Boyer (IBM)#CloudHaiku: The Poetry of the Cloud

image 

 Let's step away from serious business for a moment and take some time for a bit of fun.... We recently found this excellent deck on Slideshare and couldn't resist sharing with you all here as well!  

#CloudHaiku: The Poetry of the Cloud - Poetry can be found nearly everywhere you look in the tech world. It can certainly be found in the cloud and today’s thriving open technology communities. Never has there been such an abundance of sophisticated open source projects. These communities are producing some of today’s most important technology, which will serve as the backbone for an exciting future in cloud, cognitive and data/analytics. It is in this spirit that we decided to pose the #CloudHaiku challenge to some of today’s most brilliant minds in cloud and open tech. Enjoy the result below!

 

<iframe allowfullscreen="" frameborder="0" height="714" marginheight="0" marginwidth="0" scrolling="no" src="https://www.slideshare.net/slideshow/embed_code/key/KxVk7g6SgHgMCd" style="border: 1px solid rgb(204, 204, 204); margin-bottom: 5px; max-width: 100%;" width="668"></iframe>

At first glance, the cloud doesn’t lend itself to poetry. If Shakespeare were alive today, the object of his sonnets probably wouldn’t be virtual machines, bare metal, public/private/hybrid or even the workloads they support. But once you scratch the surface, poetry can be found just about anywhere—even a data center. Open technology is certainly one area that can inspire prose. Think about it: Where else in business do you see competitors working side-by-side for the greater good? There’s a certain beauty in the fact that these projects are only as strong as the communities that support them.  

It’s in this spirit that we decided to pose the #CloudHaiku challenge to some of today’s most brilliant minds in cloud and open tech. Industry leaders like Lorinda Brandon (CapitalOne), Kris Borchers (JS Foundation), Al Gillen (IDC), Leslie Carr (Clover Health), Rich Miller (Telematica) and so many more were kind enough to lend their voice to this experiment. We’ve compiled just a sample of our favorites in this eBook. But #CloudHaiku doesn’t stop here. Post yours to Twitter using the #CloudHaiku hashtag and challenge a friend, colleague or fellow open source community member to do the same. Learn more about the work IBM is doing to support an open cloud ecosystem at bit.ly/OpenByDesign

 

 

John Boyer (IBM)[UPDATED] IBM BigFix Patch released Fixlets for the Stack Clash Vulnerabilities

Updated on 22 June 2017 to include the additional sites for CentOS and Ubuntu.

 

IBM BigFix Patch has released Fixlets to address the Stack Clash Vulnerabilities for CVE-2017-1000364, CVE-2017-1000366, and CVE-2017-1000367.

 

The Fixlets for these CVEs are released in the following sites:

  • Patches for Oracle Linux 6 site, version 50
  • Patches for Oracle Linux 7 site, version 82
  • Patches for RHEL 6 - Native Tools site, version 332
  • Patches for RHEL RHSM 6 on System Z site, version 38
  • Patches for RHEL 7 site, version 165
  • Patches for RHEL RHSM 7 on System Z site, version 26
  • Patches for RHEL 7 for IBM Power LE site, version 35
  • Patches for RHEL 7 for IBM Power BE site, version 5
  • Patches for SLE 11 Native Tools site, version 196
  • Patches for SLE 11 on System z Native Tools site, version 26
  • Patches for SLE 12 Native Tools site, version 140

Added on June 22, 2017:

  • Patches for CentOS6 R2 site, version 11
  • Patches for CentOS7 R2 site, version 10
  • Patches for Ubuntu 1401 site, version 206

 

NOTE: The CVEs vary for CentOS 6 and CentOS 7.

For CentOS 6, CVE-2017-1000364 is known as CESA-2017:1486 and CVE-2017-1000366 is known as CESA-2017:1480. For CentOS 7, CVE-2017-1000364 is known as CESA-2017:1484 and CVE-2017-1000366 is known as CESA-2017:1481.

CVE-2017-1000367 is known as CESA-2017:1382 in both CentOS 6 and CentOS 7.

 

NOTE: BigFix is unable to publish the Fixlets for some operating systems because the vendors have not published the patches for these CVEs yet. BigFix will publish the Fixlets for these operating systems as soon as the patches become available:

  • Oracle Linux 6 and Oracle Linux 7: CVE-2017-1000367
  • SUSE Linux Enterprise Desktop 11: CVE-2017-1000364 and CVE-2017-1000366
  • SUSE Linux Enterprise Desktop/Server 11 and SUSE Linux Enterprise Server 11z: CVE-2017-1000367
  • Ubuntu 1404: CVE-2017-1000364
  • Ubuntu 1604: CVE-2017-1000364, CVE-2017-1000366, and CVE-2017-1000367

 

Actions to Take:
Given the serious nature of these vulnerabilities, it is advisable to upgrade your systems immediately or apply the patch as soon as possible.

No other action is required after applying the Fixlets.

 

Additional Information:
For more information, see the following sources:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-1000364
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-1000366
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-1000367

 

Application Engineering team
IBM BigFix Patch

John Boyer (IBM)Are you using MQ Internet Pass-Thru?

MQ Internet Pass-Thru (MQIPT) is an IBM MQ product extension that helps you connect MQ queue managers or clients that are not on the same network securely. It’s free to download from the IBM MQ SupportPac website, and is fully supported when used with a supported version of IBM MQ.

As MQIPT fix pack 2.1.0.3 has just been released, I thought I’d take this opportunity to briefly highlight what this SupportPac offers.

What can MQIPT do?

MQIPT listens on one or more TCP ports and forwards MQ connections that it receives. These connections can be between two MQ queue managers, or an MQ client and a queue manager. The presence of MQIPT is completely transparent to the clients and queue managers.

It runs as a standalone service and doesn’t need to run on the same system as a queue manager or client. In a basic configuration MQIPT just forwards connections to a queue manager, as shown in this diagram.
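The core of that basic configuration, a transparent TCP relay, can be sketched in a few lines of Python. This is an illustrative stand-in and not MQIPT itself: MQIPT additionally understands the MQ wire protocol (enabling TLS termination and HTTP tunnelling), which this sketch does not.

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until src closes, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(listen_host, listen_port, target_host, target_port):
    """Listen on (listen_host, listen_port) and relay every incoming
    connection to (target_host, target_port). Returns the bound port."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((listen_host, listen_port))
    listener.listen(5)

    def serve():
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection((target_host, target_port))
            # one relay thread per direction makes the proxy transparent
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return listener.getsockname()[1]
```

For example, `start_forwarder('0.0.0.0', 1414, 'qmhost', 1414)` would relay MQ client connections to a queue manager listening on port 1414 of `qmhost` (the hostname here is hypothetical). The transparency shown here is the property the post describes: neither endpoint knows the relay is in the path.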

image

As MQIPT understands the MQ network protocol it can perform various transformations on the connection, such as TLS encryption or decryption, and wrapping the connection in HTTP to enable MQ connections to be tunnelled through the firewall using existing HTTP proxies.

For more flexibility, you can use a pair (or more if you need to!) of MQIPT instances. In this example a pair of MQIPT instances is used to secure a connection with TLS between the two instances. The queue managers are unaware that MQIPT or TLS is in use.

image

Note that you don’t have to use a pair of MQIPT instances to use TLS. MQIPT can also communicate directly with MQ using TLS.

Why would I use MQIPT?

There are two main benefits to using MQIPT - improved security, and easier network administration. Let’s look at an example of how you could use MQIPT.

A common use of MQIPT is to place it in the DMZ, so that it acts as a single point of access to your MQ network from the internet. External queue managers or clients connect to MQIPT rather than directly to the queue manager, as shown in this diagram.

image

This has the benefit of pushing security checks out to the edge of your network, as MQIPT can apply rules to connections, such as checking the client TLS certificate, before the channel can connect to the queue manager.

If the MQ channels are using TLS, then MQIPT will also provide a break in the TLS session in the DMZ, which is something that many organizations require.

The other benefit of this configuration is that it reduces the number of firewall rules needed to allow connections to the queue manager, as all external connections to the queue manager will now come from the machine where MQIPT is running.

More information

There’s more information on MQIPT in the IBM MQ Knowledge Center.

If you think you could benefit from using MQIPT, then head over to the IBM MQ SupportPac website where you can download MQIPT.

John Boyer (IBM)Run OS agent report for IPM v8 agent

There is no reporting and warehouse component in IPM 8.1.3, so in order to run TCR reports, you have to set up an ITM environment to hold warehouse data and set up TCR (if you don't have it) to run reports. General guidelines:
1. Set up a TCR 3.x environment by installing JazzSM and TCR. The latest version is 1.1.3.0:
https://www-01.ibm.com/support/docview.wss?uid=swg24042190              
                                                                        
2. Set up an ITM warehousing environment with HTEMS, TEPS, WPA, and the SPA agent configured and running. The latest version is 6.3 FP7:
http://www-01.ibm.com/support/docview.wss?uid=swg24041633
https://www.ibm.com/support/knowledgecenter/SSTFXA_6.3.0.2/com.ibm.itm.doc_6.3fp2/adminuse/history_sumprune_intro.htm                           
                                                                        
3. Configure TCR to connect to the ITM warehouse and install the OS agent reports provided in v6.3:
https://www.ibm.com/support/knowledgecenter/SSTFXA_6.3.0.2/com.ibm.itm.doc_6.3fp2/install/tdw_solutions.htm                                     
https://www.ibm.com/support/knowledgecenter/SSTFXA_6.3.0.2/com.ibm.itm.doc_6.3fp2/install/appendixtcr.htm                                       
                                                                        
4. Enable APM v8 OS agents' history collection to send data to the ITM v6 warehouse:
https://www.ibm.com/support/knowledgecenter/SSHLNR_8.1.3/com.ibm.pm.doc/install/integ_tdw_intro.htm                                             
                                                                        
5. Wait one day until data are warehoused and summarized.
                                                                        
6. Run the report in the TCR 3.x setup from step 1.
                                                                        
If you already have ITM v6 and TCR with the OS agent reports installed, you can start from step 4 to feed history data to the warehouse, then run reports as you do for v6 agents.

John Boyer (IBM)What Are Ladders Used For?

Ladders are the most sensible and practical option for tasks that require reaching and climbing to high places. Although they're not always our first choice, since things like tables and chairs are sometimes substituted for them, using a ladder makes your task easier without putting yourself in danger.

When To Use Ladders?

Use a ladder when your task requires equipment that can provide you with a greater level of fall protection. The best time is when you need to work on or reach something that's more than 2 feet above your head.

When Painting The House

Whether you're applying fresh paint or repainting a house, one of the essential tools you need is a ladder. Most people think that using a ladder to paint hard-to-reach parts of the house is scary, but it's a lot safer than using improvised scaffolding.

If you don’t own a ladder, consider purchasing a new one, since you're going to be working with paint. Don't borrow an old ladder, as this may be unsafe: painting keeps you on the ladder for a longer period of time compared to other tasks.


The ladder you use needs to be in proper working order, which means it doesn't tilt easily, it makes no worrying sounds when you move while on it, and it's robust enough that you can maneuver safely.

The best type of ladder to use when painting a house is one that you can extend. This means you can keep using the same ladder as you paint higher.

When Doing Carpentry Works

One of the most valuable pieces of equipment for the majority of carpenters is their ladder. Carpenters work above the ground most of the time, which is why they build a robust ladder or invest in high-quality ones. Although working on ladders can be daunting for many carpenters, the risks are greatly reduced if they follow the safety procedures that prevent falls and injuries.

Carpenters have different choices when it comes to the type of ladder they want to use, so it's best if they choose carefully. If possible, carpenters should use ladders made from hardwood or metal, since a weak ladder is among the most common causes of accidents on the job.

When Cleaning The Roof

The majority of people clean their roofs themselves, while some call professionals for help. Cleaning the roof is another task where a ladder is essential.

Either way, a high-quality ladder must be part of your equipment. Because you're going to carry cleaning materials up to clear leaves and other foreign objects from your gutter, the ladder you use should be a type that can handle more than the weight of one person.

The perfect types of ladders for this particular task are multipurpose ladders and twin step ladders. Twin step ladders have two parts: one side is the ladder while the other is the support, offering the stability you need while climbing to the roof. Multipurpose ladders, on the other hand, are ladders with several functions, which you can also use to hold some of the tools you're going to use for cleaning the roof.

Uses Of Ladders At Home

Whether you're hanging newly bought decorations, decorating the Christmas tree, painting, or adjusting the picture frames you have affixed to the walls, a ladder can provide the best support while you climb and do your home projects. Fixed ladders are very helpful as well.


  • Use a ladder, without worrying about falling, to place your indoor plants on high ledges.
  • Precious and fragile china is often stored on the highest shelves of our cupboards. Use a ladder so that you have all the support you need when storing it away.
  • Cobwebs, mold, and mildew can form anywhere, including hard-to-reach places, and the best equipment for removing these eyesores within your home is a ladder.

John Boyer (IBM)Do You Know The Different Types Of Ladder And The Best Way To Use One

Ladders are sets of rungs or steps between two rails, used for climbing up and down. They're important in everyday life whenever you're not tall enough to reach something, yet they're one of the most overlooked tools in many homes. There are two main categories of ladders: rope ladders and rigid ladders.

Rigid Ladders

These are sturdy ladders that you can lean against a vertical surface, such as a wall, for support. Rigid ladders are made from fiberglass, metal, or wood, and are used in both industrial and home settings.

Extension Ladders

Straight ladders, or extension ladders, are the first image that comes to mind when you think of ladders. They're what their name indicates: they extend so that you can reach high places. Extension ladders have two main parts: the fly and the base. The base is the part that touches the ground and supports the ladder, while the fly is the part that slides upward.

Step Ladders

Step ladders can be used in the middle of a room because they don't need to lean against any kind of support. A step ladder can be a twin step ladder or a front step ladder.


Folding Ladders

Folding ladders are similar to twin step ladders; the main difference is that they can be folded. That makes them convenient to use, as they can be folded up and stored out of sight when not in use. They take up less space, which means you can fit one beside your drawer or in a small space under your bed.

Step stools

Step stools are small step ladders. The only difference is that they're designed for use around the house, which means they're meant for moderately high places like shelves and cupboards rather than very high places such as the ceiling. Step stools are much safer to use than a wobbly chair or a stack of things to stand on. They're made from plastic or wood and have firm feet to support the weight of the person.

Telescopic ladders

Telescopic ladders are ladders that can slide upward or downward. Their overlapping sections allow the user to set the ladder to exactly the height he needs. Telescopic ladders are also called little giants because they collapse to a compact size when not in use, yet can extend up to 23 ft.

Multi-Purpose Ladders

Multipurpose ladders are basically ladders that combine several functions in one. Some have rollers so that they can be moved easily from one place to another.

Rope Ladders

Rope ladders are flexible ladders that can be rolled up and carried. These types of ladders are part of the essential tools of campers, police, firefighters, and so on. A rope ladder has two long ropes that hold the steps, or rungs. Rope ladders are made from rope combined with metal or wood rungs, or from rope alone.

The Way To Use Ladders Safely

Some ladders deteriorate after going unused for a long period of time, so for your safety it's best to check whether the ladder can still support your weight. Look for broken or bent parts, and avoid using ladders that are damaged.
Don't make temporary repairs. There's a high probability that repaired ladders will weaken again while you're using them.
Make sure your ladder is free from powder, wet paint, oil, grease, and moisture, and check your shoes before you climb to avoid slipping and falling.

Keep your body centered between the rails and hold on as you climb. This is to make certain you have a firm grip.
Don't overreach. If the object you're trying to reach is beyond your grasp, don't stretch or stand on tiptoe; climb down and move or adjust the ladder instead.

Climb carefully. Never rush on the ladder while using it, or skip a step.
Avoid overloading. Most ladders are rated for the weight of one person, so don't climb a ladder while someone else is still using it.

Hang ladders on racks for support if racks are available. Ensure that the ladder's base isn't resting on the ground; this prevents the ladder from bending or breaking.

Store in a safe and dry place. Keep your ladders away from children and direct sunlight. Direct sunlight can make a ladder expand if it's made from plastic or metal, which may weaken it so that it can no longer carry your weight the next time you use it.
Clean your ladder after each use to avoid rusting and buildups.

 

John Boyer (IBM)12 winning social habits for IBM Connections

Out of a decade of social practice at IBM, our most popular social IBMers have been known to apply several of the following habits, which we share with you in this infographic.

Register as a member of this community and you can download a high-quality PDF version of this graphic at this URL.

Stay tuned as translated versions are delivered over the coming weeks.


image

John Boyer (IBM)[June 22, 8 PM] Serverless Architecture and Apache OpenWhisk Webinar Series, Session 4: Building Microservices on a Serverless Platform

Serverless architecture and microservice architecture have much in common. Following sessions on OpenWhisk concepts, its programming model, and its development toolchain, the fourth session of this series brings OpenWhisk committer and senior IBM engineer Ying Chun Guo back to the webinar to compare the two architectures and explain how to build microservices on the serverless platform Apache OpenWhisk.

 

Topic

Building Microservices on a Serverless Platform

 

Time

8 PM, June 22

 

Abstract

This session introduces microservice architecture, compares serverless architecture with microservice architecture, and explains how to develop microservices on the serverless platform Apache OpenWhisk.

 

Speaker

Ying Chun Guo (Daisy), senior IBM engineer. Daisy has many years of experience in open source communities: she joined the OpenOffice community in 2009 and the OpenStack community in 2012, where she founded the OpenStack internationalization project team. She currently works on OpenWhisk development and is a committer on the Apache OpenWhisk project.

 

How to attend

WebEx or Douyu live stream; see the help page for how to join.

 

Registration

Scan the QR code to add "IBM 开源技术" (IBM Open Source Technology) as a contact, then request to register.

 

----------------------------------------------------------------------

The [Serverless Architecture and Apache OpenWhisk] webinar series is the second 2017 installment of the "IBM Open Source Technology Webinar" program and consists of 6 sessions.

>> Click here for the [Serverless Architecture and Apache OpenWhisk Webinar Series] course details

   (video replays, slide downloads)

 

----------------------------------------------------------------------

Past sessions of the IBM Open Source Technology Webinar:

John Boyer (IBM)Host clusters in 7.7.1 and later

In 7.7.1 we added a long-awaited feature that we have called host clusters.   This feature is not exactly rocket science to understand, but here's a breakdown of a couple of useful scenarios that you can use it for, and some information about how to configure it.  Note: host cluster support was CLI-only in 7.7.1; to use the GUI you'll have to be running 7.8.1 or later.

 

 

Major Use Cases

 

1/ You have a cluster of hosts (e.g. a set of SVC nodes in front of a V7000), and you want to make sure that all the hosts have the same SCSI host mappings, but you want the physical machines in different host objects so you know which WWPN belongs to which server.

 

Once you've made your host cluster, you simply map your new volumes to the host cluster rather than the hosts.

 

2/ Sometimes you have a cluster where you need a number of shared mappings and some non-shared mappings.  The most common example of this is a VMware cluster where the ESXi servers are SAN booted.  In this case, you need the datastore volumes shared between all of the ESXi servers, and each ESXi server needs its own SAN boot volume.

 

Therefore you will need to have shared mappings which are used by all hosts in the host cluster, and private mappings which are only mapped to a single (or subset) of the hosts in the host cluster.

 

Tip: 7.8.1 allows you to apply a throttle across the entire host cluster (or a host) rather than on a single volume

Caveats in 7.7.1 and 7.8.0

Basically, this boils down to the fact that you will not be able to manage host clusters, and more importantly map volumes to the host cluster, in the GUI.   Unless you have a lot of scripts/automation in your environment and you never provision volumes using the GUI, it is probably better to wait until you have upgraded to 7.8.1.

 

Creating new Host Clusters when running 7.8.1 or later

This use case is fairly well documented and has full GUI support, so I'm not going to go into it in detail here.   The command line is mkhostcluster and the GUI panel is called Host Clusters.

 

Converting your existing configuration into host clusters

Converting multiple hosts into a single host cluster

Many of you reading this already have hosts with existing mappings, and you will want to convert them to host clusters.   The mechanism to do this in the GUI is fairly good (it basically uses the all-in-one approach below, but also allows you to manually specify some private mappings).   But even if you use the GUI method, you still need to at least follow the "Before you start" section below.

 

Once you've made the conversion, and you are running 7.8.1 or newer, you will be able to add more hosts and map volumes to the host cluster using the GUI.

 

Before you start -

  1. Make sure that no hosts have any throttles associated with them
  2. You need to convince yourself that the SCSI host mappings for your shared volumes are identical on all hosts.  If they aren't, then you will not be able to convert to a host cluster without fixing them first.  

 

How do you check this? 

  • Using the CLI - Run lshostvdiskmap <host ID or name> for each host in your cluster. 
  • Using the GUI - The Host Mappings view shows which volumes are mapped to which host on which SCSI ID.  You can also use the filter and sorting features to make the work a little easier.
  • Once you have the data, all you need to do is validate that vdisk X has the same SCSI ID on all of the hosts in your host cluster.  Repeat this for each shared vdisk. 
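The comparison itself is easy to script once you have collected the mappings. The sketch below is a minimal illustration, not part of the product: the host and vdisk names are made-up sample data standing in for what you would transcribe from lshostvdiskmap output.

```python
# Sketch: flag shared vdisks whose SCSI ID differs between hosts.
# host_maps would be filled in from `lshostvdiskmap <host>` output;
# all host and vdisk names here are hypothetical sample data.

def find_scsi_conflicts(host_maps):
    """host_maps: {host: {vdisk: scsi_id}} -> {vdisk: {host: scsi_id}}
    for every vdisk mapped with inconsistent SCSI IDs."""
    by_vdisk = {}
    for host, mappings in host_maps.items():
        for vdisk, scsi_id in mappings.items():
            by_vdisk.setdefault(vdisk, {})[host] = scsi_id
    # A vdisk conflicts if more than one distinct SCSI ID is in use.
    return {vdisk: ids for vdisk, ids in by_vdisk.items()
            if len(set(ids.values())) > 1}

host_maps = {
    "esx1": {"datastore1": 0, "datastore2": 1, "boot_esx1": 2},
    "esx2": {"datastore1": 0, "datastore2": 3, "boot_esx2": 2},
}

for vdisk, ids in sorted(find_scsi_conflicts(host_maps).items()):
    print(f"SCSI ID mismatch for {vdisk}: {ids}")
```

Note that the private boot volumes each appear on only one host, so they are never flagged - which matches the rule that private mappings on different hosts may reuse each other's SCSI IDs.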

What do you do if they don't match?

This is a bit harder and I can't give you a definitive answer.  The simplest solution is probably to shut down one physical host from the cluster, undo and redo all its mappings, then power it back on again. When you redo the mappings you should manually specify the SCSI ID to ensure that it matches your target. Remember that if you change the SCSI ID of the SAN boot volume then you may need to fix the BIOS to boot from the correct volume.   Other solutions may exist.

Alternatively - just don't use host clusters.

Warning: If the host is SVC - you must make all of your hosts match the SCSI ID configuration as recorded in the lsmdisk output of the SVC.
Warning:  VMware says that it requires SCSI IDs to match on all ESXi hosts as of VMware 6.5 - so it is probably better to fix this sooner rather than later. Here is the VMware document that makes that statement: https://recommender.vmware.com/solution/SOL-12531

 

 

I'll document the two main approaches to converting existing hosts into host clusters here, and the pros/cons of each.    For either method it will probably be best to test it on a test cluster first. 

 

Both methods should complete the conversion with no interruption to the hosts.

 

The one-at-a-time approach

  1. Pick a master host. 
    If there are any differences between the hosts then you will make everyone else match the master host.
  2. Decide whether the master host has any "private mappings".  A private mapping could be a SAN boot lun for that server, or a volume to store paging space on.  The point is that the private LUN is not expected to be mapped to all hosts in the cluster just one (or maybe a few)
  3. Create the host cluster using the master host as the template:
    mkhostcluster -name myfirsthostcluster -seedfromhost <master host ID or name> -ignoreseedvolume <list of volume IDs or names for the private mappings for the master host - separated by colons>

    If all volumes are shared (i.e. there are no private mappings) then simply omit the -ignoreseedvolume option
  4. Validate that the corresponding host cluster has the correct private and shared mapping using lshostvdiskmap <master host ID> The output will list all of the volumes mapped to that host, as well as telling you whether the mapping is shared (one that's really mapped to the cluster) or private (only mapped to this host).
  5. Now that your host cluster exists, you simply need to add the other hosts to the cluster. For each additional host in the cluster, run:
  6. addhostclustermember -host <additional host ID or name> -hostcluster <host cluster ID or name>  
    This command will fail if there are any conflicts in the SCSI host mappings.  But hopefully you already checked that in the before you start section and everything will go swimmingly.   Additional Volumes that are mapped to this host and are not already in the list of shared cluster mappings will be preserved as private mappings

    Note private mappings on host 1 can share the same SCSI ID as  a private mapping on a different host.  But it cannot use the same SCSI ID as one of the shared mappings (for hopefully obvious reasons).
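With many hosts it can help to generate the whole command sequence up front and review it before pasting anything into the CLI. A minimal sketch of that idea follows; the cluster, host, and volume names are hypothetical placeholders, and the script only builds strings - it does not talk to the system.

```python
# Sketch: build the one-at-a-time conversion command sequence for review.
# Substitute your own cluster/host/volume IDs; names here are illustrative.

def one_at_a_time_commands(cluster, master, other_hosts, private_vols=()):
    cmds = []
    seed = f"mkhostcluster -name {cluster} -seedfromhost {master}"
    if private_vols:
        # Keep the master's private mappings (e.g. its SAN boot LUN)
        # out of the shared cluster mappings.
        seed += " -ignoreseedvolume " + ":".join(private_vols)
    cmds.append(seed)
    for host in other_hosts:
        cmds.append(f"addhostclustermember -host {host} -hostcluster {cluster}")
    return cmds

for cmd in one_at_a_time_commands("myfirsthostcluster", "esx1",
                                  ["esx2", "esx3"], ["boot_esx1"]):
    print(cmd)
```

Reviewing the generated list before running it makes it harder to forget a host or accidentally seed with the wrong master.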

 

 

Pros:

 

  • You are only ever operating one host from the host cluster.  So if something goes wrong the host cluster can hopefully failover.
  • You have more control about what volumes will become shared versus private mappings

Cons:

  • This approach requires more CLI commands to complete.

 

The all-in-one approach

  1. Collect your list of hosts
  2. Create the host cluster :
    mkhostcluster -name myfirsthostcluster -seedfromhost <complete list of Host IDs or Names, separated with colons>
    This command will fail if there are any conflicts in the SCSI host mappings.  But hopefully you already checked that in the before you start section and everything will go swimmingly.   Volumes that are only mapped to a single host will be preserved as private mappings, others will be converted to shared mappings

    Notes:
    • private mappings on host 1 can share the same SCSI ID as  a private mapping on a different host.  But it cannot use the same SCSI ID as one of the shared mappings (for hopefully obvious reasons).
    • You can use -ignoreseedvolume if you want to make extra sure that the private volumes are kept private
    • When specifying the list of host or volumes, ideally make sure that they are in ascending ID order, and that there are no duplicates.
       
  3. Validate that the corresponding host cluster has the correct private and shared mappings for each host using lshostvdiskmap <host ID> The output will list all of the volumes mapped to that host, as well as telling you whether the mapping is shared (one that's really mapped to the cluster) or private (only mapped to this host).

 

 

Pros:

  • Simpler solution, fewer commands

Cons:

  • Potential to affect the entire host cluster
  • If you accidentally create a cluster without all of the host IDs (especially if you do it with only one host ID) then the system may not be able to correctly work out which are the private mappings, and may result in a mapping configuration that you don't want.  The only way to fix this will be to unmap/remap - which may require a downtime.

 

 

Converting one host with many WWPNs into a host cluster

 

Warning: This procedure will cause the hosts to see paths to volumes going away and coming back again.  This will put a stress on multipathing drivers, so I don't recommend doing it online if you are in any doubt about your multipathing driver stability.

 

I will assume for this scenario that there are no private mappings and that all mappings are shared mappings.   I'm also going to use FC WWPNs in all the example commands; if you are doing the same thing with iSCSI, the commands are basically identical.

 

This procedure can also be done in the GUI in the host panels.

 

Note:  converting one host into 10 hosts (for example) will increase the number of vdiskhostmaps needed for that collection of WWPNs by a factor of 10.  In a large configuration, please check that you won't run out of vdiskhostmaps before you start.  At the time of writing the limit is 20,000.

 

  1. Create a new host cluster from the megahost (the single host object with all of the WWPNs)
    mkhostcluster -name myfirsthostcluster -seedfromhost <host ID or name>
  2. For each of the sub-hosts (the entities that you want to break out of the megahost into their own host object):
    1. Remove one WWPN/iSCSI name from the original host ID
      rmhostport -fcwwpn <wwpn> <megahost ID or name>
    2. Create a new host object using that WWPN/iSCSI name
      mkhost -name mysecondhost -fcwwpn <WWPN> -hostcluster <host cluster ID or name>
    3. Remove each of the additional WWPNs from the megahost one at a time and add them to the new sub-host
      rmhostport -fcwwpn <wwpn> <megahost ID or name>
      addhostport -fcwwpn <wwpn> <sub-host ID or name>
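Because this is a WWPN-by-WWPN procedure, it is easy to lose your place halfway through a large megahost. As a sketch (not product tooling - the megahost, cluster, sub-host names, and WWPNs below are all hypothetical), you could generate the full rmhostport/mkhost/addhostport sequence first and review it before running anything:

```python
# Sketch: generate the command sequence for splitting a megahost into
# per-server sub-hosts inside a host cluster. All names/WWPNs are samples.

def split_megahost_commands(megahost, cluster, sub_hosts):
    """sub_hosts: {new_host_name: [wwpn, ...]} -> list of CLI commands."""
    cmds = []
    for name, wwpns in sub_hosts.items():
        first, rest = wwpns[0], wwpns[1:]
        # Move the first WWPN out of the megahost and seed the new sub-host.
        cmds.append(f"rmhostport -fcwwpn {first} {megahost}")
        cmds.append(f"mkhost -name {name} -fcwwpn {first} -hostcluster {cluster}")
        # Move the remaining WWPNs across one at a time.
        for wwpn in rest:
            cmds.append(f"rmhostport -fcwwpn {wwpn} {megahost}")
            cmds.append(f"addhostport -fcwwpn {wwpn} {name}")
    return cmds

for cmd in split_megahost_commands(
        "megahost", "myfirsthostcluster",
        {"esx2": ["10000000C9000002", "10000000C9000003"]}):
    print(cmd)
```

Having the whole sequence in front of you also makes it obvious how many vdiskhostmaps the split will create, per the note above.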

 

 

John Boyer (IBM)The Most Popular Programming Language for Machine Learning Is ...

Which programming language should you learn to get a job in machine learning or data science? There is no single answer to this question, and many forums debate it. I will give my own answer here and explain why, but first I want to look at some data. After all, that is what machine learning researchers and data scientists should do: analyze data, not opinions. 

 

So let's look at some data. I will use the trend search available on indeed.com. It looks for job postings that appeared within a selected time range, which gives an indication of the skills employers are looking for. Note, however, that this is not a poll of which skills are actually in use. It is a leading indicator of how a skill's popularity is changing (more formally, it is probably close to the first derivative of popularity, since the latter changes with hiring and retraining minus retirement and attrition). 

 

Enough talk; let's get the data. I searched for "machine learning" or "data science" combined with a skill, where the skill is a well-known programming language such as Java, C, C++, or JavaScript. I also included Python, R, Scala, and Julia: we know that Python and R are popular for machine learning and data science, Scala is associated with Spark, and some believe Julia is the next important language. Running this query gives us the data we were looking for:

image

 

Focusing on machine learning only, we get similar data:

image

 

What can we learn from this data?

First, we see that no single language dominates; several languages can be quite popular in this context. 

 

Second, the sharp rise in popularity of all these languages reflects the surge of interest in machine learning and data science over the past few years. 

 

Third, Python is the clear leader, followed in order by Java, R, and C++. Python's lead over Java keeps widening, while Java's lead over R keeps shrinking. I must admit I was surprised to see Java in second place; I expected R to be second.

 

Fourth, Scala's growth is impressive. It barely existed 3 years ago, yet it is now on par with some far more established languages. This is easier to see when switching to indeed.com's relative view:

image

Fifth, Julia's popularity is far behind the other languages, but it has shown a clear upward trend in recent months. Will Julia become one of the popular languages for machine learning and data science? Time will tell.

 

If we set Scala and Julia aside and turn to the growth of the other languages, we can confirm that Python and R are growing faster than the general-purpose languages. 

image

Given the difference in growth rates, R's popularity may soon overtake Java's.

 

When we focus on deep learning with this query, the data looks quite different:

image

Python is still in the lead, but C++ is now second, followed by Java, with C in 4th place. R is only 5th. This field clearly favors high-performance computing languages. Java's popularity is growing fast, however; as in machine learning, it may soon move up to second place. R will not be anywhere near first place any time soon. I was surprised not to see Lua in the ranking, even though it is used by one of the major deep learning frameworks (Torch). Julia does not appear either.

 

The answer to the original question should now be clear. For machine learning and data science jobs, Python, Java, and R are the most popular skills. If you want to focus on deep learning rather than machine learning in general, then C++ (and, to a lesser extent, C) is also worth considering. But keep in mind that this is only one way to look at the question. If you are looking for a job in academia, or just want to learn machine learning and data science in your spare time out of interest, you may arrive at a different answer.

 

What is my personal answer? I answered that question in this blog at the start of the year. Python works very well for me, not only because it is supported by many of the top machine learning frameworks, but also because I have a computer science background. Having programmed in C++ for most of my career, I am also used to developing new algorithms in that language. But that is me; people with different backgrounds may find another language better. A statistician with limited programming skills will clearly prefer R. A seasoned Java developer may stick with his favorite language, given the wealth of open source Java APIs. Every language on these charts surely has situations where it is the right fit.

 

So before investing significant time in learning one language, I recommend reading other blogs that discuss the same question.

Update, December 23, 2016: this post is being discussed on HackerNews.

Update, January 11, 2017: this post has been republished on KDnuggets.

Update, February 12, 2017: this post has been republished on Silicon Republic.

Update, February 15, 2017: this post has been translated into Spanish by Francisco Martínez Carreño.

Update, March 22, 2017: this post has been republished on Medium.

 

For more comments, see:

https://www.ibm.com/developerworks/community/blogs/jfp/entry/What_Language_Is_Best_For_Machine_Learning_And_Data_Science?lang=en

 

Translated from "The Most Popular Language For Machine Learning Is ..."

John Boyer (IBM)[Late post] Content Released in Patches for Windows - June 2017 Security Content

Content in the Patches for Windows Site has been released.


New Fixlets:

Fixlets for Microsoft Security Updates:

       MS17-JUN

Fixlets for Microsoft Security Advisory 4025685:

[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 10 - KB4018271    (ID: 401827119)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 10 - KB4018271 (x64)    (ID: 401827133)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 8 - KB4018271    (ID: 401827131)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 8 - KB4018271 (x64)    (ID: 401827137)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 9 - KB4018271    (ID: 401827135)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 7 SP1 - IE 9 - KB4018271 (x64)    (ID: 401827125)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 8 - IE 10 - KB4018271    (ID: 401827129)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows 8 - IE 10 - KB4018271 (x64)    (ID: 401827127)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows Server 2008 SP2 - IE 9 - KB4021558    (ID: 402155809)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows Server 2008 SP2 - IE 9 - KB4021558 (x64)    (ID: 402155807)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows Vista SP2 - IE 9 - KB4018271    (ID: 401827123)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows Vista SP2 - IE 9 - KB4018271 (x64)    (ID: 401827121)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows XP - IE 8 - KB4018271    (ID: 401827141)
[Major] MS17-JUN: Cumulative security update for Internet Explorer - Windows XP - IE 8 - KB4018271 (x64)    (ID: 401827139)
[Major] MS17-JUN: Security update for Microsoft Graphics Component - Windows Server 2003 SP2 - KB4012583    (ID: 1701361)
[Major] MS17-JUN: Security update for Microsoft Graphics Component - Windows Server 2003 SP2 / Windows XP SP2 - KB4012583 (x64)    (ID: 1701373)
[Major] MS17-JUN: Security update for Microsoft Graphics Component - Windows XP SP3 - KB4012583    (ID: 1701375)
[Major] MS17-JUN: Security update for the Windows SMB Information Disclosure Vulnerability - Windows Server 2003 - KB4018466    (ID: 401846613)
[Major] MS17-JUN: Security update for the Windows SMB Information Disclosure Vulnerability - Windows Server 2003 SP2 / Windows XP SP2 - KB4018466 (x64)    (ID: 401846609)
[Major] MS17-JUN: Security update for the Windows SMB Information Disclosure Vulnerability - Windows XP SP3 - KB4018466    (ID: 401846611)
[Major] MS17-JUN: Security update for the Windows win32k Information Disclosure Vulnerability - Windows Server 2003 SP2 - KB4019204    (ID: 401920413)
[Major] MS17-JUN: Security update for the Windows win32k Information Disclosure Vulnerability - Windows Server 2003 SP2 / Windows XP SP2 - KB4019204 (x64)    (ID: 401920409)
[Major] MS17-JUN: Security update for the Windows win32k Information Disclosure Vulnerability - Windows XP SP3 - KB4019204    (ID: 401920411)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows Server 2003 SP2 - KB3197835    (ID: 1614303)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows Server 2003 SP2 - KB4025218    (ID: 402521805)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows Server 2003 SP2 / Windows XP SP2 - KB3197835 (x64)    (ID: 1614301)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows Server 2003 SP2 / Windows XP SP2 - KB4025218 (x64)    (ID: 402521801)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows XP SP3 - KB3197835    (ID: 1614305)
[Major] MS17-JUN: Security update for Windows XP and Windows Server 2003 - Windows XP SP3 - KB4025218    (ID: 402521803)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows Server 2003 SP2 - KB4022747    (ID: 402274703)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows Server 2003 SP2 - KB4024323    (ID: 402432305)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows Server 2003 SP2 / Windows XP SP2 - KB4022747 (x64)    (ID: 402274701)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows Server 2003 SP2 / Windows XP SP2 - KB4024323 (x64)    (ID: 402432301)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows XP SP3 - KB4022747    (ID: 402274705)
[Major] MS17-JUN: Security update of Windows XP and Windows Server 2003 - Windows XP SP3 - KB4024323    (ID: 402432303)
[Major] MS17-JUN: Windows search vulnerabilities - Windows Server 2003 SP2 - KB4024402    (ID: 402440213)
[Major] MS17-JUN: Windows search vulnerabilities - Windows Server 2003 SP2 / Windows XP SP2 - KB4024402 (x64)    (ID: 402440209)
[Major] MS17-JUN: Windows search vulnerabilities - Windows XP SP3 - KB4024402    (ID: 402440211)

 

Reason for Update:

Microsoft has released Security Updates for June 2017 as well as Security Advisory 4025685.


Actions to Take:

None


Published site version:

Patches for Windows, version 2778.


Important notes:
None

 

Additional Link:

Microsoft Security Advisory 4025685

https://technet.microsoft.com/library/security/4025685.aspx

 

Application Engineering Team
IBM BigFix

John Boyer (IBM)Scaling agile across the Enterprise

image

 

 

 

 

Have you ever thought about expanding the successes of agile teams in your organization? Or have you considered an agile approach but are not sure if it would work for you?  You are not alone.  Many organizations are adopting and scaling agile and seeing amazing results. 

 

As the largest provider of financial crime, risk, and compliance solutions for regional and global financial institutions and government regulators, NICE Actimize had to improve the quality of their software and accelerate cycle times. Their ad hoc approach to Agile adoption at the team level hadn’t yielded the results they hoped for. Using SAFe and Rational Team Concert, they adopted a holistic approach, backed by strong tooling, to scale agile and lean principles. This allowed them to transform in less than six months while maintaining their development commitments, with no interruption to the business!

 

Join us for the webinar to learn how adopting SAFe practices across the enterprise allowed them to achieve their goals. The webinar will be moderated by Alan Shimel, Editor-in-Chief, DevOps.com, and our featured guests include:  

  • Igal Levi, VP Operational Excellence, NICE Actimize
  • Amy Silberbauer, Executive IT Specialist, Solution Architect, Enterprise Scaled Agile (SAFe) & DevOps, IBM

 

Register and also spread the word with your network:

Learn how adopting #SAFe practices & tooling can help you transform your biz. Join us June 28th: http://bit.ly/2swVyPo  #RTC #DevOps

Retweet: https://twitter.com/IBMDevOps/status/876167571199991808

 

Register Now!

 

John Boyer (IBM)A delayed message comes from Internet

On checking, I found that the message arrived at xxx/yyy at 09:26. The problem was not caused on your side. I recommend checking the intermediate servers, such as the company and customer mail relay servers.

John Boyer (IBM)<IBM z Systems News: 2017 Vol. 6> [Japan Post Insurance case study] Improved system quality and significantly reduced hardware costs

 

━IBM(R) z Systems(R) News 2017 Vol.6━━━━━━━━━━━━━━
* This newsletter is sent to customers who registered to receive it.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
□■□ ◆Contents◆

□■□ [Japan Post Insurance case study] Improved system quality and significantly reduced hardware costs
□■□ [Web seminar] Applying blockchain in practice and the latest technology trends, as seen in early adopters' case studies
□■□ [Press release] Providing hardware technology for Hitachi's new mainframe environment
□■□ [Press release] Meiji Yasuda Life renews its insurance administration system platform to support business growth
□■□ [Product announcements] z Systems-related: May 2017

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━    

                
===================================
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Japan Post Insurance case study] Improved system quality and significantly reduced hardware costs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

By migrating its core systems to IBM z Systems, Japan Post Insurance strengthened disaster recovery, reduced operational issues, and cut hardware costs by 40%.

https://www-03.ibm.com/software/businesscasestudies/jp/ja/jirei?synkey=V450539Q54416E32


━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Web seminar] Applying blockchain in practice and the latest technology trends, as seen in early adopters' case studies
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Blockchain is expected to automate and streamline business-to-business transactions not only in finance but also in a wide range of fields such as distribution and manufacturing, to guarantee traceability at low cost, and to prove the authenticity of documents. Although it is a young technology, verification of its performance and quality and the maturing of development environments are progressing, driven by a global technical community, and enterprise deployments are increasing.

This webinar introduces adoption case studies from companies in Japan and overseas, the latest Hyperledger developments, and IBM's blockchain initiatives.

This webinar is recommended for:

   - Those evaluating blockchain implementations and performance with in-house adoption in mind
   - Those investigating how to apply blockchain technology to their business processes
   - Those looking for blockchain case studies in industries such as distribution and manufacturing
   - Those considering adopting Hyperledger

■ Format: online seminar (watch at your convenience)
■ Fee: free
■ Host: IBM Japan, Ltd.
■ Registration: https://enq.itmedia.co.jp/on24/form/1413343

 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Press release] Providing hardware technology for Hitachi's new mainframe environment
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IBM Japan announces that it will provide the latest IBM(R) z Systems hardware technology for the new mainframe environment that Hitachi, Ltd. plans to offer in the Japanese market from fiscal 2018.

https://www-03.ibm.com/press/jp/ja/pressrelease/52456.wss
 
 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Press release] Meiji Yasuda Life renews its insurance administration system platform to support business growth
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IBM Japan renewed the insurance administration system platform of Meiji Yasuda Life Insurance Company, which went into production on March 27 this year. The new insurance administration system was implemented in a Linux partition added on z Systems, enabling faster processing of individual cases and more advanced process management than the previous system.

http://www-03.ibm.com/press/jp/ja/pressrelease/52380.wss
 
 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
【製品発表】z Systems関連:2017年5月
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
■ Product announcements

● IBM CICS Performance Analyzer for z/OS, V5.4 supports and exploits the new
functions of IBM CICS Transaction Server for z/OS, V5.4
Announcement letter No.: JP17-0140
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/0/760/JAJPJP17-0140/index.html&lang=ja&request_locale=ja

● IBM CICS Transaction Server for z/OS V5.4 delivers groundbreaking
mixed-language application services
Announcement letter No.: JP17-0138
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/8/760/JAJPJP17-0138/index.html&lang=ja&request_locale=ja

● Price change: program number 5698-S48 System Automation for z/OS
 Subscription & Support (S&S) pricing
Announcement letter No.: 317-128
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/a/317-128JNJA/index.html&lang=ja&request_locale=ja

● IBM Enterprise COBOL for z/OS V6.1 releases additional continuous-delivery
capabilities
Announcement letter No.: JP17-0219
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/760/JAJPJP17-0219/index.html&lang=ja&request_locale=ja

● IBM DB2 database tools gain new and enhanced functions
Announcement letter No.: JP17-0191
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/760/JAJPJP17-0191/index.html&lang=ja&request_locale=ja

● IBM MQ for z/OS V9.0.3 delivers new functions to further drive
hybrid-cloud transformation
Announcement letter No.: JP17-0196
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/6/760/JAJPJP17-0196/index.html&lang=ja&request_locale=ja

● IBM z/OS Version 2 Release 2 enhancements
Announcement letter No.: JP17-0253
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/3/760/JAJPJP17-0253/index.html&lang=ja&request_locale=ja
 
 
━━━━━━━━━━━━━……‥・・・・‥……━━━━━━━━━━━
※ This "IBM z Systems News" is delivered to registered customers.
(Please view this email in a fixed-width font such as MS Gothic.)

※ To change your delivery address or stop receiving IBM z Systems News,
please use the URL below:
https://www.ibm.com/contact/jp/ja/update_information.shtml

※ Inquiries about this email: ZSOFT@jp.ibm.com

※ If you no longer wish to receive any email from IBM, reply to this email
with "No email required" in the subject line; if you also do not wish to
receive information by direct mail or telephone, please say so. Note that
registration changes take some time to take effect.

- Address: ZSOFT@jp.ibm.com
- Subject: [No email required]
- Body:    your <name / email address / customer number>

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
◎ If a company name or personal name contained platform-dependent characters,
they have been replaced with displayable characters.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

■□■===========================================================■□■
※ IBM, ibm.com, CICS, DB2, z/OS, and z Systems are trademarks of
International Business Machines Corp., registered in many jurisdictions
worldwide. Other product and service names may be trademarks of IBM or
other companies. For the current list of IBM trademarks, see:
  http://www.ibm.com/legal/copytrade.shtml

======================================================================
Published by: IBM Japan, Ltd.
19-21 Nihonbashi-Hakozakicho, Chuo-ku, Tokyo 103-8510  mailto: ZSOFT@jp.ibm.com
======================================================================

 


 

 

 

Amazon Web ServicesIn the Works – AWS Region in Hong Kong

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris), China (Ningxia), and Sweden (Stockholm).

Coming to Hong Kong in 2018
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Hong Kong in 2018. Hong Kong is a leading international financial center, well known for its service-oriented economy. It is rated highly on innovation and for ease of doing business. As an evangelist, I get to visit many great cities in the world, and I was lucky to have spent some time in Hong Kong back in 2014, where I met a number of awesome customers. Many of these customers have told us that they wanted a local AWS Region.

This will be the eighth AWS Region in Asia Pacific, joining six other Regions there (Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai) and an additional Region in China (Ningxia) expected to launch in the coming months. Together, these Regions will provide our customers with a total of 19 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications.

Today, our infrastructure comprises 43 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018, (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing customers in Hong Kong and working with partners across Asia-Pacific. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Hong Kong. Public sector organizations such as government agencies, educational institutions, and nonprofits in Hong Kong will be able to use this region to store sensitive data locally (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Hong Kong team.

Help Wanted
If you are interested in learning more about AWS positions in Hong Kong, please visit the Amazon Jobs site and set the location to Hong Kong.

Jeff;

 

ProgrammableWeb: APIsBBVA Authentication

The BBVA Authentication protocol offers a secure, BBVA-approved way for third-party enterprises and developers to access API resources. The protocol supports the OAuth2 Authorization Framework. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Authentication, Authorization, Financial
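
To make the OAuth2 flow concrete, here is a minimal Python sketch of a client-credentials token request. This is a hedged illustration: the token endpoint URL and the assumption that the API uses HTTP Basic client authentication with a `client_credentials` grant are hypothetical, not taken from BBVA's documentation.

```python
import base64
import urllib.parse
import urllib.request

# Hypothetical token endpoint, for illustration only.
TOKEN_URL = "https://apis.bbva.example/token"

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build the HTTP Basic credential used for OAuth2 client authentication."""
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

def request_token(client_id: str, client_secret: str) -> bytes:
    """POST a client_credentials grant and return the raw JSON response body."""
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    req = urllib.request.Request(
        TOKEN_URL,
        data=body,
        headers={
            "Authorization": basic_auth_header(client_id, client_secret),
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The returned JSON would normally contain an `access_token` that the other BBVA APIs below expect in subsequent requests.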

ProgrammableWeb: APIsBBVA Customers

The Customers API provides third-party applications with information belonging to the BBVA user associated with the OAuth token used to invoke the API, including the user's unique identifier, name, surnames, gender, and more. It lets you retrieve accurate, up-to-date key user information without endless forms. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Financial

ProgrammableWeb: APIsBBVA PayStats

The PayStats API provides statistical transaction data retrieved through BBVA cards or at BBVA points of sale. BBVA PayStats offers anonymized, aggregated statistical data from millions of transactions performed with any card at BBVA POS terminals. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Aggregation, Data, Financial, Payments, Statistics

ProgrammableWeb: APIsBBVA Business Accounts

The Business Accounts API gives third-party applications access to authorized business bank accounts and transactions. It provides a way to retrieve the balances and transactions of your business users in the market-standard AEB43 format, with automated, native access from your product. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Accounts, Business, Financial

ProgrammableWeb: APIsBBVA Loan

The Loans API lets you see whether a client has a pre-approved loan available with BBVA and the conditions attached to it, and accept it with just one click. It allows third-party applications to retrieve a BBVA user's pre-approved consumer loan details, including installments and amounts. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Financial

ProgrammableWeb: APIsBBVA Accounts

The BBVA Accounts API allows third-party applications to interact with the accounts of a BBVA user. It provides a way to verify ownership, check balances, and retrieve account transactions, including account type, status, balance, transaction history, and more. The BBVA API Market is a platform for global financial services for business.
Date Updated: 2017-06-21
Tags: Banking, Accounts, Financial

ProgrammableWeb: APIsImageOptim

The ImageOptim API allows developers to optimize and resize images so they load faster and save bandwidth. The API uses a custom JPEG decoder that features progressive rendering, overshoot deringing (for better text readability), quantization tables optimized for high-DPI screens, and color profile support. Additionally, the API supports lossy PNG compression to reduce the sizes of PNG images by 75%.
Date Updated: 2017-06-21
Tags: Images, Optimization

ProgrammableWeb: APIsIcons8

Icons8 provides an extensive ISO compliant icon library. The API allows developers to search and retrieve icons that can be used for template customization, build graphic and text editors, and to integrate with any application with customization features. The Icons8 API requires API Keys for authentication. Fees are paid on a monthly basis, and licensing is free for established open source projects.
Date Updated: 2017-06-21
Tags: Images, Colors, Library

ProgrammableWeb: APIsZema OData Web Service

The Zema OData Web Service API is available for data management and analysis solutions. To request access, contact michelle.mollineaux@ze.com
Date Updated: 2017-06-21
Tags: Data, Analytics

John Boyer (IBM)Use cases for the none and host networks - Master Docker container technology in 5 minutes a day (31)

This chapter begins our discussion of Docker networking.

We will first look at the native networks Docker provides and how to create custom networks, then explore how containers communicate with each other and interact with the outside world.

By scope, Docker networks can be divided into container networks on a single host and networks spanning multiple hosts; this chapter focuses on the former. The more complex multi-host container networks are covered separately in the later advanced-topics chapters.

When Docker is installed, it automatically creates three networks on the host, which we can view with the docker network ls command:

146.png

Let's discuss each of them below.

The none network

As the name suggests, the none network is a network with nothing in it. A container attached to this network has no network interface other than lo. When creating a container, you can select the none network with --network=none.

We can't help asking: what use is such a closed network?

There are in fact real use cases. Closed means isolated, so applications with high security requirements that do not need network access can use the none network.

For example, if a container's sole purpose is to generate random passwords, it can be placed on the none network to keep the passwords from being stolen.

Of course, most containers need a network, so let's look at the host network next.

The host network

A container attached to the host network shares the Docker host's network stack, and the container's network configuration is identical to the host's. You can select the host network with --network=host.

Inside the container you can see all of the host's network interfaces, and even the hostname is the host's. So what are the use cases for the host network?

The biggest benefit of using the Docker host's network directly is performance: if a container has demanding network throughput requirements, the host network is a good choice. The trade-off is some flexibility; for example, you must consider port conflicts, since ports already in use on the Docker host cannot be used again.

Another use of the host network is to let a container configure the host's network directly. For example, some cross-host networking solutions themselves run as containers and need to configure the network, such as managing iptables; you will see this in the later advanced-topics chapters.

The next section discusses the more widely used bridge network.
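
If you drive Docker from Python, the --network flag described above can be sketched like this. The helper only builds the `docker run` argument list; the busybox image and the `ip addr` command are illustrative, and actually running a container of course requires a local Docker daemon:

```python
import subprocess
from typing import List, Optional

def docker_run_args(image: str, command: str,
                    network: Optional[str] = None) -> List[str]:
    """Build a `docker run` argv; network may be "none", "host", or "bridge"."""
    args = ["docker", "run", "--rm"]
    if network is not None:
        args.append(f"--network={network}")
    args.append(image)
    args.extend(command.split())
    return args

def run_container(image: str, command: str,
                  network: Optional[str] = None) -> str:
    """Run the container and return its stdout (needs a local Docker daemon)."""
    result = subprocess.run(docker_run_args(image, command, network),
                            capture_output=True, text=True, check=True)
    return result.stdout

# On the none network a container has only the loopback interface, so
# run_container("busybox", "ip addr", network="none") should list just `lo`.
```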
 


John Boyer (IBM)3 Key Factors That Make Social Media a Business Asset vs. a Liability

When what we now call social media crept onto the business landscape a couple of decades ago, many pundits, gurus, thought leaders, and other folks with big foreheads who pride themselves on knowing what the future will hold long before the rest of us mere mortals, viewed this development with a mixture of indifference and irritation.

 

If this sentiment sounds familiar, it should: because it’s precisely how many of them felt during the dawn of ecommerce. After all, who in their right mind would buy something through their computer when they could instead enjoy the comfort, convenience, selection and safety of heading to their favorite shopping mall? Well, given that the global ecommerce market is predicted to hit $4 trillion by 2020, I guess the answer to this question is now “pretty much everyone.”

 

With this being said, it’s a mistake to assume that all businesses are getting the same gas mileage out of their social media bus. Some are zooming right along and picking up a happy caravan of prospects, customers, vendors, suppliers, partners and influencers. But many — and it may even be most — are breaking down and calling a tow-truck. And in some extreme cases, disgruntled CEOs have even commanded a full or partial retreat from the social media world, which is simply not an option. Like the web, social media is here to stay, and there’s no turning back the clock.

 

And so, this begs the question: what are businesses with successful and profitable social media assets doing that their struggling competitors and counterparts aren’t? Here’s a rundown of three key factors that separate winners from losers:

 

  1. Understand that social media isn’t advertising.

 

Because it’s free to post on Facebook, tweet on Twitter, comment in LinkedIn and so on, many executives and managers get excited — and carried away — by the idea of, basically, drilling their (typically non-existent) community with ads, ads and yet more ads. This is an error, error and yet more of an error.

 

Businesses that understand this truth deliver fresh, interesting content that is hyper-relevant to their customer groups and buyer personas. Sure, sometimes this can be an ad. But most of the time, it’s not. Instead, it’s a link to a useful article, a compelling and colorful infographic, a useful video, a pointer to a recently-published research report or white paper, and so on.

 

  2. Be consistent and play the long game.

 

A dead giveaway that a business has left the social media party way too early is if you poke into their blog and see a flurry of posts that are several months — or years — old, and then… nothing. It’s like a virtual catastrophic tornado swept through and all but a few hard core preppers remain.

 

What happened? It’s that the business was excited about getting on the social media landscape, published some posts and sent out some Tweets, and just like Field of Dreams, assumed that “if they publish it, customers will come.” Well, things might work that way on a baseball diamond in Iowa, but it’s a lot less mystical on the business landscape.

 

As such, successful businesses know that social media success is about playing the long game. They establish a foundation and then support it by consistently publishing content through all relevant channels, and cross-linking accordingly (e.g. using Twitter to drive people to Facebook, using YouTube to drive people to Instagram, and so on).

 

  3. Create and enforce social media governance.

 

Businesses that reap the rewards of social media also take proactive steps to mitigate the risks, since there’s a whole new world of threats to deal with, and some of them are quite scary, like malware, fraudulent accounts, reputation damage, and so on.

 

The centerpiece of this proactive risk mitigation strategy is social media governance, a program that addresses key issues like:

 

  • Who are the stakeholders for social media governance?
  • What are the corporate risk priorities?
  • How do corporate goals integrate with social media usage and governance?
  • What metrics and KPIs should be used to monitor social media performance?  
  • What laws and regulations need to be captured through the social media governance approach?
  • What is the link between our marketing/branding strategy and social media governance?
  • How will social media brand audits be performed, and who will do them?
  • What processes are in place to handle risks and crises (e.g. malware attack, reputation damage, etc.)?

 

The Bottom Line

 

Businesses waiting and hoping for this “social media thing” to calm down and go away need to re-boot their paradigm, because social media isn’t going anywhere; on the contrary, it’s getting even more influential and invasive. Naturally, the tools and technologies will change, and a decade from now we might not even be calling it “social media.” But regardless of how things unfold, the moral to the story will remain the same: businesses that position and leverage social media effectively will reap the rewards of a profitable asset. Those that don’t, will be forced to continue paying for a costly liability.

 

John Boyer (IBM)POWER8 Watts, Temp & SSP I/O from HMC REST API - Version 2

This Topic was covered in a Power Systems Virtual User Group webinar on
21st June 2017; you can catch the replay here:
Power Systems VUG website

Please watch the video and/or look at the slides: 1 hour of basics plus 45 minutes of programming details.

What do we want?

Energy

  • Per POWER server a simple Comma Separated Value (CSV) file
    • Electrical consumption in Watts - single number
    • Temperature in Celsius for the machine inlet + system planar + CPUs
    • One line for each capture point
  • Same data but nicely graphed (web enabled)
    • Over time so we can quickly see trends or problems.
    • Example Graphs: energyP8-S824-emerald.html
    • This is a large web page (give it 10 seconds to load and draw the graphs, as there are 5,000 data points): click the buttons for the different graphs. You can also zoom in to the data (see the hints at the bottom).
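
To give a feel for consuming such a CSV, here is a minimal Python sketch that summarises the power draw. The column names (timestamp, watts, inlet_c, planar_c) are illustrative guesses at the shape described above, not the exact layout that energy.py emits:

```python
import csv
import io
import statistics

# Hypothetical sample: one line per capture point, watts plus temperatures.
SAMPLE = """timestamp,watts,inlet_c,planar_c
2017-06-21T10:00:00,700,21.0,35.5
2017-06-21T10:01:00,740,21.5,36.0
2017-06-21T10:02:00,720,21.5,35.8
"""

def summarise(csv_text: str) -> dict:
    """Parse the CSV and report sample count, average and peak wattage."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    watts = [float(r["watts"]) for r in rows]
    return {
        "samples": len(rows),
        "avg_watts": statistics.mean(watts),
        "peak_watts": max(watts),
    }

print(summarise(SAMPLE))
```

The same pattern extends naturally to the temperature columns, or to feeding the rows into a charting library instead of the supplied Google Charts pages.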

Shared Storage Pools

  • From Dec 2016 a SSP can have 24 VIOS (with dual VIOS that means 12 Servers)
  • Overview stats at the whole-SSP level: capacity (size + free space), I/O rates including KB/s and ops/s, and service times
  • The KB/s for each VIOS to make sure the servers are equally loaded for Disk I/O to the pool
  • CSV
    • Whole SSP level stats
    • VIOS level stats
  • Graphs
    • Over time so we can quickly see trends or problems.
    • Example Graphs: ssp_orbit.html
    • This is a live web page: Click the buttons for the different graphs. You can also Zoom In to the data (see the hints at the bottom).

 

A few sample graphs

It's better if you follow the live links above, as you can see lots more detail.

Below: Electrical use in Watts during a workload peak, raising the nominal 700W to 740W at the peak:

image

 

Below: The Temperature in Celsius during the same peak - it shows the computer room getting warmer and that the POWER8 sockets are unevenly loaded (the busy small LPAR was running in only one socket's cores):

image

Below: Shared Storage Pool VIOS stats. Note that reads are shown as positive numbers and writes as negative numbers on this graph.

The left blue spike was a 10 GB data load into PostgreSQL and the middle and right peaks are index builds.

image

Download the Python programs from here:

Energy

  1. energy34.tar
  2. Just 30 KB; extract with: tar xvf energy34.tar
  3. Contains 2 Python programs: energy.py and egoo.py

Shared Storage Pools

  1. sspio34.tar 
  2. Just 30 KB; extract with: tar xvf sspio34.tar
  3. Contains 2 Python programs: sspio.py and goo.py

 

Pre-reqs:

Both Energy and Shared Storage Pool

  • HMC with VERY latest 860+ level with service pack and fixes
  • Network access to HMC (public user GUI network)
  • HMC user id & password with hmcsuperadmin privileges
  • Python 3 – probably installed on AIX or Linux or your workstation

Energy 

  • POWER8 HMC attached Scale-out S8nn(L) or E850         [Not supported on E870 / E880 ]
  • POWER8 running 860+ System Firmware

Shared Storage Pool

  • Shared Storage Pool based on VIOS 2.2.5.20+

 

Using the scripts:

Energy 

  • For my three machines called on the HMC P8-S824-emerald, P8-S822-lime and P8-E850-ruby
    • ./energy.py --hmc hmc14 --username pcmadmin --password SECRET --reuse --server P8-S824-emerald
    • ./energy.py --hmc hmc14 --username pcmadmin --password SECRET --reuse --server P8-S822-lime
    • ./energy.py --hmc hmc14 --username pcmadmin --password SECRET --reuse --server P8-E850-ruby
    • The CSV is called energy.csv
  • Then clean up the data and generate the graphs
    • ./egoo.py

Shared Storage Pool

  • Once only to switch on the SSP data collection
    • ./sspio.py --hmc hmc14 --username pcmadmin --password SECRET --reuse --prefs
  • From then on
    • ./sspio.py --hmc hmc14 --username pcmadmin --password SECRET --reuse
    • The CSV files will be ssp_total_io.csv and ssp_vios_io.csv
  • Then clean up the data and generate the graphs
    • ./goo.py
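
Since the point of the per-VIOS KB/s data is to check that the servers are equally loaded, here is a hedged Python sketch that flags under-loaded VIOS from a CSV. The column names (vios, kb_per_sec) and the 50% threshold are illustrative assumptions, not the exact layout of ssp_vios_io.csv:

```python
import csv
import io

# Illustrative sample only; the real ssp_vios_io.csv columns may differ.
SAMPLE = """vios,kb_per_sec
vios1,1200
vios2,1150
vios3,400
"""

def load_imbalance(csv_text: str, threshold: float = 0.5) -> list:
    """Return VIOS whose KB/s falls below `threshold` x the busiest VIOS."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rates = {r["vios"]: float(r["kb_per_sec"]) for r in rows}
    busiest = max(rates.values())
    return sorted(v for v, kb in rates.items() if kb < threshold * busiest)

print(load_imbalance(SAMPLE))  # vios3 is under half the busiest VIOS's rate
```

A check like this could run right after sspio.py to alert on pool-level I/O skew before it becomes a trend in the graphs.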

 

Debugging or capturing the XML and JSON files

  1. If you create a subdirectory "debug" and add the option --debug to the command line for energy.py or SSPIO.py then  it will output more on the terminal screen and save the files returned in the subdirectory "debug"
  2. Note: the JSON files are many MB with no newlines. To make them human-readable, use the Python module json.tool like this: python3 -m json.tool yourjsonfile.json > readablefile.json
  3. The AIX and Linux vi editor can do syntax highlighting, which helps a lot when writing or changing Python code.
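
The json.tool trick above can also be done in-process. This tiny sketch pretty-prints a one-line JSON blob the way `python3 -m json.tool` does; the blob itself is a stand-in, not real HMC output:

```python
import json

# A one-line JSON blob like those saved in the debug/ subdirectory
# (a tiny stand-in for the real multi-megabyte files).
blob = '{"energy":{"watts":740,"temps":{"inlet":21.5,"planar":36.0}}}'

# Round-trip through the json module to add newlines and indentation.
pretty = json.dumps(json.loads(blob), indent=2, sort_keys=True)
print(pretty)
```

Inside energy.py or sspio.py this is handy for dumping readable snapshots while debugging, without a separate command-line step.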

 

What next?

  1. Looking for volunteers to try out Python programs + report back
  2. Install Python, then use the CSV files or Google Charts graphs
  3. Can enter at three levels
    1. Use the programs “as-is” and report back  - I can assist via debug output
    2. Use and improve/hack the programs to fit your needs - Feedback your ideas
    3. Use them to “hack out” your own tool - I will answer questions
  4. Help working through my To Do list or ideas.
    1. Robustness from data changes due to fixes!
    2. Data for 1 socket Scale-out like S812/S814
    3. VIOS’s added or removed issue
    4. Getting swamped with data – weekly, monthly roll up suggestions.

 

John Boyer (IBM)Samsung Galaxy S8 Plus - A Short Insight On The Phone Next Door

The Samsung Galaxy S8 Plus has recently come to be considered the phone next door. The remarkably designed phone has a nearly bezel-less screen, maximized by the Infinity Display, and reasonable dimensions for a big phone. It also carries a stellar IP68 water-resistance rating. Samsung has been immaculate once again, presenting users with a borderless, gently curved screen, or what it calls the "Infinity Display". The dimensions of the phone are 159.5mm x 73.4mm x 8.1mm, and it weighs 173g. Everything about the design of the Samsung phone points to an amazing star gadget. The phone's AMOLED display does a really good job of appealing to enthusiasts. The screen's aspect ratio has moved from 16:9 to 18.5:9.

As good as this sounds, some recent phones have also started to bring this advantage to their display designs. The screen resolution is 2960 x 1440 pixels, making movies, sports, and the rest of the user interface a dazzling experience. Display aside, the hardware also deserves discussion. The processor of the Samsung Galaxy S8 is extremely responsive and runs pretty fast regardless of which chip it carries, be it the Snapdragon 835 or the Exynos 8895 (depending on the region).

image
 

The battery life of the 3500mAh nonremovable Li-ion battery is considerably better thanks to the efficient processor, which prevents lagging. The phone comes with 4GB of RAM and 64GB of internal storage, expandable up to 256GB via microSD card. The CPU runs a 4x2.35 GHz Kryo and 4x1.9 GHz Kryo with an Adreno 540 GPU.

Human Centred Technology Improves Performance

  • The phone offers a sleek, attractive design with a 6.2-inch bezel-less AMOLED display at a resolution of 2960 x 1440 pixels, comparable to an HDR TV.
  • The phone has been manufactured with a Snapdragon 835 or Exynos 8895 (depending upon the region) regardless of which it will run faster and smoother with an extremely responsive user interface.
  • The CPU runs on a 4x2.35 GHz Kryo and 4x1.9GHz Kryo with a GPU of Adreno 540. The phone comes with a 4GB RAM and an internal memory of 64GB which is expandable up to 256GB via a microSD card.
  • The user interface of the Samsung Galaxy S8 has been optimized with new settings and additional features, such as Always On Display, which shows notifications and the date and time even when the phone is asleep.
  • The user interface can be customised by changing themes, though the Bixby voice functionality is not yet available.
  • The camera comes with a superb 12-megapixel sensor with an f/1.7 aperture, teamed with dual pixels. Since both the software and the sensor have been upgraded, the phone takes far better pictures than its predecessor. Multi-frame capture takes three shots each time the shutter is pressed. The dual-pixel autofocus is extremely fast and includes effective stabilization. The camera adapts to both well-lit and dim areas and still takes great photos. The selfie camera is 8 megapixels, also with an f/1.7 aperture.
  • The phone has an advantage of being water resistant and the user interface uses a haptic technology which also detects accidental touches.
  • The 3500mAh battery powers a 6.2-inch phone effectively. Since the software and processor work immaculately, battery life is ample, and the phone charges fully within 1.5 hours.

Samsung Galaxy S8 Plus: the head-turning facts

  • Attractive Display Screen. The display takes Samsung's slogan, "the next best thing", seriously and takes it to another level. The appearance and resolution of the phone are stunning.
  • Responsive and Extremely Fast. The processor, teamed with the OS, makes the phone run fast and respond easily to actions. It works efficiently and benefits battery life.
  • Great Camera. The primary camera is a superb 12-megapixel unit with an f/1.7 aperture, said to take multiple frame shots: three shots per shutter press. The front camera gives users vivid pictures from its 8-megapixel sensor.

Great overall performance.

100% Efficiency Is A Myth

Bixby Feature. Bixby is basically a theme store that enables a voice assistant as well as a vision analyser. At present the Bixby Voice Assistant is unavailable, and the application has some glitches, so it might not always work properly. The fingerprint scanner is also awkwardly placed.

Final Verdict

The features and functionalities of the Samsung Galaxy S8 Plus are not only mind-blowing but also live up to its slogan. It takes display and hardware to the next level with almost zero complaints. Other than the Bixby feature, which might slow down the system, there isn't another phone that can compete against this marvellous sleek monster!

ProgrammableWebGoogle Releases TensorFlow Object Detection API

Google has released the TensorFlow Object Detection API, a new API that provides access to an open source framework for constructing, training and deploying object detection models. The framework is built on top of Google's TensorFlow, an open source software library that developers can use to deploy (via TensorFlow APIs) numerical computation to mobile devices, desktops, and servers.

ProgrammableWebIn other API Economy News: FreePNs Instead of VPNs and Most Wanted Database Chops

Remember Blackberry? Over a decade ago, one of the chief selling propositions of a Blackberry was the security of its wireless network and its datacenter. Blackberry was essentially offering a tightly integrated mobile messaging solution with a virtual private network (VPN) on steroids, because of its privately operated network. It's one of the reasons that Blackberrys were so successful in the government space. Back then, I kept thinking: why the hell isn't someone offering a FreePN, a VPN that was free?

ProgrammableWebGoogle Adds Semantic Time Capability to Awareness API

Roughly a year after launching its Awareness API, Google is improving the API's time fencing functionality in response to developer feedback. Previously, time fencing was limited to absolute/canonical time (e.g. 10:30 a.m., 11:00 p.m., etc.). However, it became apparent that people refer to time in more abstract terms in the ordinary course of life (e.g. before lunch, in the evening, during the week, etc.).

Amazon Web ServicesAWS Marketplace Update – SaaS Contracts in Action

AWS Marketplace lets AWS customers find and use products and services offered by members of the AWS Partner Network (APN). Some marketplace offerings are billed on an hourly basis, many with a cost-saving annual option designed to line up with the procurement cycles of our enterprise customers. Other offerings are available in SaaS (Software as a Service) form and are billed based on consumption units specified by the seller. The SaaS model (described in New – SaaS subscriptions on AWS Marketplace) gives sellers the flexibility to bill for actual usage: number of active hosts, number of requests, GB of log files processed, and so forth.

Recently we extended the SaaS model with the addition of SaaS contracts, which my colleague Brad Lyman introduced in his post, Announcing SaaS Contracts, a Feature to Simplify SaaS Procurement on AWS Marketplace. The contracts give our customers the opportunity to save money by setting up monthly subscriptions that can be expanded to cover a one-, two-, or three-year contract term, with automatic, configurable renewals. Sellers can provide services that require up-front payment or that offer discounts in exchange for a usage commitment.

Since Brad has already covered the seller side of this powerful and flexible new model, I would like to show you what it is like to purchase a SaaS contract. Let’s say that I want to use Splunk Cloud. I simply search for it as usual:

I click on Splunk Cloud and see that it is available in SaaS Contract form:

I can also see and review the pricing options, noting that pricing varies by location, index volume, and subscription duration:

I click on Continue. Since I do not have a contract with Splunk for this software, I’ll be redirected to the vendor’s site to create one as part of the process. I choose my location, index volume, and contract duration, and opt for automatic renewal, and then click on Create Contract:

This sets up my subscription, and I need only set up my account with Splunk:

I click on Set Up Your Account and I am ready to move forward by setting up my custom URL on the Splunk site:

This feature is available now and you can start using it today.

Jeff;

 

John Boyer (IBM)MQ Appliance upgraded

Today's blog entry may be of interest to z/VSE users who depend on an MQ solution.

As described in earlier posts, WebSphere MQ for z/VSE reached end of service in September 2015.

As an alternative, you may use the MQ Client for VSE together with the MQ trigger monitor and an MQ server on another platform. See my blog entry here.

The MQ for z/VSE migration paper might be helpful. It is on our technical article page here.


Instead of an MQ server on another platform (e.g. on Linux), the MQ Appliance can be an option too. The MQ Appliance has been available for some time and was just upgraded to the latest software level; see the announcement letter for details here.

The MQ Appliance is available in two options:

  • The M2000A for larger enterprise workloads.
  • The M2000B for smaller workloads and lower processing capacity.

John Boyer (IBM)Preparing for Enterprise MacOS App Distribution

Exciting news is coming for MacOS management through MaaS360.  Many of you have already heard of our addition of the MacOS app catalog, and the pre-release feedback from beta testers has been amazing.  We're excited to mark this as GA in the near future, and help take MacOS management with MaaS360 to the next level.

 

If your organization needs to manage applications that do not reside in the App Store (let's be honest: as extensive as the App Store is, there are still an extraordinary number of useful enterprise applications hosted on websites instead), there are some prerequisites that need to be met in order to accomplish this.

 

First, your organization will need an Apple Developer account. In order to deliver applications remotely to devices, and to have the proper permissions to install them with little to no user interaction, a certificate is needed to grant those permissions. The certificate, called a Developer ID Installer cert, can only be obtained via a developer account, and only by the Agent (I'll get to that in a moment).

 

To create the account, head to https://developer.apple.com/ if your organization does not already have one. If it does, seek out the Team Agent; this is essentially the owner of the dev account. If you do not have a developer account, you can start one for an annual fee of $99 (prices subject to Apple's terms and could change), and whoever creates the account becomes the Team Agent. The good news is that this now bundles both Mac and iOS together, so you only need one account for all your needs, including early beta testing of new OS features.

 

The Team Agent is the only one who can generate the Developer ID Installer certificate, and there are two ways to go about this.

Method 1: Via Xcode

Xcode is a free program that can be downloaded from the MacOS app store (it is large, about 4GB, so prepare to make space). Once the program is downloaded, launch it and find the preferences.
image


Once in the preferences, head to Accounts and sign in with the Apple ID that belongs to the Agent. Once signed in, this status should be reflected in the user info. Select the user and click "View Details" in the bottom right corner.

image


This will open a popup presenting the certs available to create (or reset, if already created). Find the Developer ID Installer towards the bottom and select "Create".

image

Method 2: Via the Developer Site

The same cert can also be created directly via the developer website. This requires a few more steps, but it won't take much longer, and the end result is the same. It is a great option if there isn't enough space for Xcode, or if the Team Agent is working remotely without access to a machine with Xcode.

From developer.apple.com, go to Account, sign in, and navigate to "Certificates, IDs, and Profiles."  Make sure to change the platform to MacOS, and click the '+' to create a new cert.

image

Select Developer ID, click Next, then select Developer ID Installer.

image

image

Generate the CSR per the instructions: open Keychain Access, and in the Keychain Access menu navigate to Certificate Assistant, then Request a Certificate from a Certificate Authority:

image

image

image

Upload the CSR and the cert will be generated. You can download the cert on the final page.
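If you prefer the terminal, the same CSR can be produced with OpenSSL instead of Certificate Assistant. This is a sketch, not the official Apple workflow: the file names and subject fields below are placeholders, and only the resulting .csr file is uploaded to the developer site.

```shell
# Generate a private key and CSR -- the command-line equivalent of
# Keychain Access > Certificate Assistant. The file names and subject
# fields are placeholders; substitute your own details.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout developer_id.key \
  -out developer_id.csr \
  -subj "/emailAddress=agent@example.com/CN=Example Corp/C=US"

# Sanity-check the CSR before uploading it to the developer site.
openssl req -in developer_id.csr -noout -verify
```

Keep the generated .key file safe; you will need it later to bundle the issued certificate into a .P12.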

 

Getting the cert and preparing for the Catalog

In the developer console, the certs generated through Xcode or the developer site will show up under Certificates, IDs, and Profiles. Make sure that the device type is set to MacOS (it is not the default view). Find the Developer ID Installer and download the cert. We need the .P12 format, but it downloads as a .CER. The easiest way to change the format is to import the .CER into the Keychain: find the cert under "My Certificates," right-click, choose to export, and make sure to save in the .P12 format. Assign the cert a password and save.

image

image

image
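The .CER-to-.P12 conversion can also be done with OpenSSL instead of the Keychain export, assuming the private key that produced the CSR is available as a file. The self-signed certificate in the first command is only a stand-in so the steps are reproducible end to end; in practice, downloaded.cer is the file you fetch from the developer portal.

```shell
# Stand-in for the Apple-issued cert: self-sign one so these commands
# run end to end. In practice, downloaded.cer comes from the developer
# portal and signing.key is the private key that produced the CSR.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout signing.key -out downloaded.cer -outform der \
  -subj "/CN=Developer ID Installer: Example Corp"

# The portal serves certificates DER-encoded (.cer); convert to PEM.
openssl x509 -inform der -in downloaded.cer -out downloaded.pem

# Bundle certificate + private key into a password-protected .p12.
openssl pkcs12 -export -in downloaded.pem -inkey signing.key \
  -out developer_id.p12 -passout pass:changeit
```

The resulting developer_id.p12 (with its password) is what gets uploaded to MaaS360.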

The certificate is now ready for upload to MaaS360. You'll need to download the MaaS360 package installer through the MacOS app workflows; information on this can be found here.

John Boyer (IBM): Effective Guidelines to Follow when Finding an External Hard Drive

Gone are the days when people had only low-definition multimedia content. Thanks to the boom in technology, high-quality audio and video recorders have appeared, and powerful computers capable of processing these high-definition files have reached the open and online markets. The only drawback of such high-definition quality is the large amount of storage these files demand.

Unfortunately, even these powerful computers often ship with 1TB of disk space, a sizeable share of which is already taken up by the operating system. It is also worth remembering that no electronic machinery lasts forever; computers can fail at the most inconvenient of times. A backup of all your important files, stored on a handy portable device, can save you a lot of trouble. This is where external hard disk drives come in. Offloading game-console storage and large collections of pictures and videos to an external drive also keeps your internal drive from slowing down.

image

 

Leaving your content on your computer without a backup is risky for two important reasons: a full drive can noticeably slow down your computer, and you run the risk of losing everything in the event of a hard-drive crash. In some cases you may not even trust the cloud with your content, so if you want a physical copy of your files rather than something floating in the ether, you will seriously want to consider an external hard drive. Even a relatively small external drive can be your savior if you are just a small-time media collector.

When deciding what kind of external hard drive best suits your needs, consider the following: What will you be using it for? How much space do you need? How often will you back up your files? Here are some guidelines to keep in mind when you intend to buy one:

 

Storage Capacity

The storage capacity of external HDDs can range from 2GB to 4TB. Some drive companies even put two 4TB drives in a single chassis, creating an 8TB unit that is more than enough for most people. Computers these days come with anywhere from 250GB to 750GB of internal hard drive space. Whether you want a mini external HDD or a considerably larger one is up to you; the possibilities are almost endless, and prices for a decent amount of space range from $70 to $3,200.

 

Transfer Speed

If you plan to store large files from your computer, you need the data to transfer quickly. A hard drive with a USB 3.0 interface is the best choice, although USB 2.0 is still common; USB 2.0 copies at roughly one tenth the speed of USB 3.0. Before choosing, make sure that your computer has a USB 3.0 port.
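As a back-of-the-envelope sketch of what that tenfold difference means, using the theoretical bus speeds (480 Mbit/s for USB 2.0, 5 Gbit/s for USB 3.0; real-world throughput is lower):

```shell
# Rough copy times for a 100 GB media library at theoretical bus speeds.
size_gb=100
awk -v gb="$size_gb" 'BEGIN {
  usb2 = gb * 8 / 0.48 / 60;  # 0.48 Gbit/s -> minutes
  usb3 = gb * 8 / 5.0  / 60;  # 5 Gbit/s   -> minutes
  printf "copying %d GB: USB 2.0 ~%.0f min, USB 3.0 ~%.0f min\n", gb, usb2, usb3
}'
# prints: copying 100 GB: USB 2.0 ~28 min, USB 3.0 ~3 min
```

For routine full backups, that difference adds up quickly.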

 

Portability

If your external HDD will mostly stay at home, you can buy one that costs less and weighs more. If it will travel with you, make sure it is pocket-friendly, well protected, and made of durable materials like quality plastic or aluminum.

 

Safety

Since an external HDD travels with you, it is more prone to loss and theft, so make encryption a priority. Look for an external HDD that comes with hardware-based encryption, which can be more dependable than software-based encryption.

 

Reliable and Easy to Use

Choose an external HDD that ships with the software it needs included, so that you don't end up fussing with installation and configuration.

 

Compatibility

Keep compatibility in mind while you are shopping, since some external hard drives come formatted for either PCs or Macs, but not both. If you purchase a PC-specific hard drive for a Mac, you will either have to reformat it or stick to a drive that matches your platform.

 

External Solid State Drives

External solid state drives, or SSDs, are much pricier than HDDs; they remain rather rare and do not boast monster capacities, typically ranging from 64GB to 512GB. Generally, it is better to have an SSD inside your computer than outside.

John Boyer (IBM): How Lawyers Use Technology at Work

No app can make your logical reasoning sound better to jurors. But there are hundreds of apps that can help you do things faster, and make your life as a legal professional easier.

So if you’re still reluctant to embrace technology in your practice, this article will show you how new apps and online services can take the stress out of practicing law.

6 Ways Technology Made a Lawyer’s Life Easier

1. No More Bulky Envelopes and Boxes

Cloud storage services and electronic case management software have changed how law firms handle documents.

 

Before, you’d have to search through cabinets of files just to find one document pertaining to your case. Now, law firms store gigabytes of data on secure data servers so anyone in the firm can search, track, edit, send, or archive the documents they need. What used to take hours can now be done in as little as five minutes.

 

Paperless depo software is also available to help lawyers prepare for mass torts without lugging around box after box of documents. The software allows you to bring exhibits and other documents as scanned PDFs, and share the documents with the court reporter and other participants as needed.

 

2. E-Filing Documents

Federal courts now allow case-related documents to be submitted and accessed online, so counsel can access them without going to the court’s record office.

 

3. Billing Clients and Tracking Billable Hours

Do you still track your billable hours with a spreadsheet? Not only does that waste your precious attention span and time, it’s also prone to errors.

With automated time-tracking tools like Toggl, PayPanther, and RescueTime, you can work continuously without switching to your spreadsheet every 10 or 15 minutes to record your time. These apps work in the background and record your computer activity. Some of them even include screenshots, sub-projects, and timers, so you can track your time even when switching between cases.

 

Online invoicing tools like Freshbooks and Harvest remove the need for clunky MS Word template invoices. You can also bill clients for retainer work via subscription payments or create a one-time invoice for projects. These tools also have a time-tracker option, so you can either use it or export your time tracking data from another app.

 

4. Protect the Privacy of Your Firm and Your Clients

Do you remember shredding old and unnecessary documents as an intern? What about blacking out sensitive information on documents with a redaction stamp? Did you use Bates labels and printer labels to organize documents? It’s all tedious and time-consuming.

 

Now, Adobe Acrobat can Bates-label, redact, and OCR documents electronically, so you or the interns in your team can organize files quickly.

 

5. Talk to Clients and Colleagues in Different Time Zones

Before Skype and the advancement of VOIP technology, lawyers had to go to the office just to talk to clients in different time zones. Otherwise they’d be hit with a huge phone bill for every choppy cross-country or international call they made.

 

Because of VOIP systems, lawyers can take client calls and conference calls even when they’re at home or on the road. All they need is a strong data connection.

Another benefit is the time-tracking feature available in VOIP programs. Now you don’t have to note when you started and ended each call. You can tally all of the calls at the end of the week or month, and then bill each client for the total duration of your conversations.

 

6. Discovery Just Got Easier

Researching documents and relevant cases takes time, which is why lawsuits cost a ton of money. Before, you had to read through archives of old cases just to find useful information.

Now, the online service Docket Alarm helps lawyers build a case profile, then quickly research relevant case files based on their search parameters. It’s essentially a Google for lawyers.

 

You can use Docket Alarm to find out the likelihood of winning a divorce battle, based on rulings from previous proceedings. The search engine can also tell you which judges are biased, based on their previous decisions.

 

Lawyers can also use this website to aid them in advising clients of their potential for winning or losing a case. If you find multiple cases where the defendant of a similar case won, then you can show your client why he doesn’t have to settle quickly.

The Dark (or Gray) Side of Legal Technology

Unfortunately, technology isn’t used only for the advancement of legal practice. It has also threatened the livelihood of many lawyers and misled hundreds of consumers.

 

Consider the existence of template contract websites, where all you have to do is input your name and other pertinent details to get a complete contract or form for just about anything. While such websites don’t claim to provide legal advice, they provide the same deliverable (an executable contract) as lawyers—with one crucial component missing.

These template forms are not loophole-free, and they are not customized enough for the needs of every user. None of these websites can guarantee that the will, document, contract, or bankruptcy filing they provide will accomplish what the user intended.

 

What if a rental contract obtained from the website has a clause that’s not enforceable in a certain state?

While these services are convenient for users, they are also quick to disassociate themselves from the liability and attorney-client privilege that come with an actual legal product or service.

Automation and Technology are Just Tools

Practicing law is easier now because of technology. Solo professionals can now take on more cases because of online services that help them do their work faster, while small firms can better compete with bigger firms because they can now handle huge cases previously exclusive to the big guys.

Automation isn’t supposed to interfere with your legal practice, so embrace it. If technology overtakes one part of your practice, there are still other tasks within the legal profession you can do.

 

  
