Trust theory.

Trust is the foundation of everything that we do.

When we cannot trust, we experience anxiety – and most people will make significant changes in their lives to get trust back. Simple examples include the impact on your daily life of being unable to trust the opening hours of a store, or changes you make to your route choice when you know one means of transport is not trustworthy. In economics, all specialization and trade depend on trust, and all risk management is fundamentally about how we can create trust in circumstances where we wouldn’t naturally find it.

Trust is ultimately about vulnerability. Vulnerability occurs when there is a potential for loss that is greater than the potential for gain. When we trust, we become vulnerable to whatever or whoever we have trusted. If we are not vulnerable, there is no need for trust.

The most useful definition of trust that I can find comes from the work of Aneil K. Mishra, who defines it as “one party’s willingness to be vulnerable to another party based on the belief that the latter party is 1) competent, 2) open, 3) concerned, and 4) reliable.”

Competence

You have to be good at what you do for people to trust you. This means you need to demonstrate expertise – it is also one reason why people from professions that require significant learning (doctors, engineers and so on) score highly on trust: people trust that the significant learning ensures competence. An interesting side point is that you can gain a perception of competence by association – surrounding yourself with competent people can lend you an aura of competence in areas where you lack it. And, obviously, advice from experts is always of value.

Openness

Openness is about people understanding how you are thinking and why you are doing what you are doing – and believing that what you tell them is substantially true. In tough situations, people are more likely to trust those who help them understand how they are thinking about the situation and how they will make decisions about it. That gives people a sense of the likely path forward and some picture of what the future will look like.

Concern

Concern is the belief that the person you are trusting wants you to be successful in your endeavor, and does not want to take unfair advantage of you. Concern is a balancing act – it must always be weighed between your own interests and the interests of whoever you represent and require performance from.

Reliability

Reliability, while last in the list, is built on each of the previous factors. It is simply that people do what they say they are going to do. That is not a difficult concept to grasp – but it runs deeper. Reliability doesn’t mean you can’t change your mind: the perception of reliability is built up over time and operates on many levels. Being unreliable on something you said does not necessarily mean you were unreliable in your concern for someone, or in acting on it – in some situations, quite the opposite.

Conceptually, trust is simple. So why is it useful to understand trust under this framework? Because most truly disruptive organizational breakdowns, failures of brands and failures of people to find needed support are failures of trust. The research here is quite clear: demonstrate that you are competent, open, concerned for the outcome and reliable, and people will trust you, your brand and your product.

* Aneil K. Mishra’s paper “Organizational responses to crisis: The role of mutual trust and top management teams” can be found here – https://www.researchgate.net/publication/35508350_Organizational_responses_to_crisis_The_role_of_mutual_trust_and_top_management_teams

*Sam Crosby’s book “The Trust Deficit” is also a good read and covers much of this subject material in a political context – https://www.amazon.com.au/Trust-Deficit-Sam-Crosby-ebook/dp/B01DYDFNRO


The Checklist Manifesto – it’s worth your time.

I’d had The Checklist Manifesto on my to-read list for about five years. It’s a book that has been recommended to me many times and, unsurprisingly, it’s a manifesto for the use of checklists. Its author, Atul Gawande, is a surgeon with a history of process-improvement projects, and the book is his attempt to convince people that checklists, simple as they are, can massively improve the output quality and consistency of tasks that we repeat frequently. What is more surprising is that his research finds checklists can also help significantly in resolving complex and unforeseen problems – the kind we can’t write a checklist for in advance.

The thesis is simple: checklists raise output quality and consistency. The reason is equally simple – we dramatically overestimate our ability to routinely perform a series of tasks, and in scenarios involving stress or complication, the rate at which we overestimate our ability (and screw things up) rises dramatically. Gawande examines three key settings – flight checklists, large-scale construction projects and his own home turf, the operating theater – to provide anecdotes and then evidence for how effective checklists are.

In each setting, he tackles simple, routine problems as well as complicated and complex ones. What emerges is a surprisingly strong case for checklists as a tool to ensure consistency and to change behavior, and also as an aid to resolution in complex and unforeseen circumstances.

The bottom line is that we are inadequate repeaters of routine tasks. People skip steps for one of two reasons: they forget through distraction or inattention, or they don’t know about a step or don’t believe in its efficacy – so they skip it deliberately. In both cases, checklists function as a kind of spot audit – flagging that a step wasn’t performed and ensuring that it is. In many cases the check steps mattered enough that (just as in the Toyota Production System) people who would not normally have the authority were empowered to stop a process if it wasn’t performed as written. What followed in each case was a dramatic improvement in performance – in one case, more than a thousand fewer deaths a year.

Routine tasks aside, complex tasks were also found to benefit. For routine simple and complicated tasks, the required steps were documented in order so they could be followed directly. For complex tasks this wasn’t possible, because the work was typically an emergent phenomenon – an accident or disaster – that had to be analysed and dealt with on the fly. While task-based checklists couldn’t operate in this environment, what did produce results were checklists mandating communication among team members: simple things, like formal introductions prior to surgery so that people knew each other’s names, and discussions about what the team was about to do. A side point was also made that delegating responsibility away from the centre is of extreme importance in effective crisis response.

The book contains excellent tips on how to make checklists work. It boils down to:

• Keep the checklist short, precise, and practical.

• Don’t over-describe – provide reminders of only the most critical steps.

• The point of invocation of the checklist needs to be clear for it to be useful.

• Keep the checklist between 5 and 9 items.

• Formatting and readability matter.

• Checklists come in two distinct types – do-confirm and read-do. A do-confirm checklist is an audit of what you have already carried out; a read-do checklist gives you steps to follow, which you do in order and tick off. (A small sketch of the distinction follows this list.)
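If it helps to make the distinction concrete, here is a minimal sketch of the two types as a data structure (Python, with names that are mine, not Gawande’s):

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    DO_CONFIRM = "do-confirm"  # do the work from memory, then pause and confirm nothing was missed
    READ_DO = "read-do"        # read a step, do it, tick it off, move to the next

@dataclass
class Checklist:
    title: str
    mode: Mode
    steps: list[str]                          # only the most critical steps, ideally 5-9
    confirmed: set[int] = field(default_factory=set)

    def tick(self, index: int) -> None:
        self.confirmed.add(index)

    def outstanding(self) -> list[str]:
        # For a do-confirm list this is the final audit; for a read-do list
        # it is simply the next steps still to be performed, in order.
        return [s for i, s in enumerate(self.steps) if i not in self.confirmed]
```

For a read-do list you call tick() as you complete each step in order; for a do-confirm list you do the work from memory first, then use outstanding() as the pause-and-audit point.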

Checklists are the simplest way to protect yourself, and others, from the systematic mistakes you make by believing that you’re systematic when you’re not. They’re also the simplest way to get an advantage without being smarter or first: you can be more thorough – every time. The anecdotes that end the book note that despite this gigantic and obvious advantage, checklists are not widely taken up; people look for something sophisticated, because surely it couldn’t be that simple. Unfortunately for them, as the weight of evidence in the book shows, it is – so who’s going to take them up?

Saving time, effort and money in the DA process by moving correspondence digital – overview and case.

Paper approval processes are expensive, and digitising them yields both hard and soft cost savings. The DA (development application) process is a prime example: great work has been done to accept and process applications digitally, but expensive gaps remain. The majority of councils use Pathway or a similar solution to capture and manage the application, and a product like Trapeze to work on and ultimately stamp the plans for release back to the applicant. Releasing the approved plans back to the applicant, though, is typically still done on paper, as is significant correspondence across the lifecycle of an application and the subsequent project. This release is a significant driver of cost and lost time.

Printing, packaging and sending development applications consumes immense amounts of staff time and carries both hard and soft costs: hard costs in the form of printing and delivery charges, and soft costs in the time taken to complete these routine, low-value postage activities and in the time lost to other activities that often fall within the delegations of the same officers.

Hard costs can run to tens of thousands of dollars for paper approvals and correspondence. Standard letters cost a minimum of $1 per sent document; a full development application that includes approved plans can cost as much as $5, and in some cases must be sent to multiple parties. In the case of one moderately sized council, the direct postage costs saved across around 1,200 development approvals were around $18,000. This figure does not include the cost of maintaining printers or of printing supplies.

Soft costs can be significant and can add up to many FTEs across processes. A time and motion study at one of our customers indicated that printing and sending a full development application consumed an average of 45 minutes per application. With 1,200 full development applications per year, the council was expending around 900 hours of staff time – around half an FTE – just to send approved DAs (the arithmetic is sketched below). For an appropriately qualified staff member, this time could instead be used to complete 150–200 building-related incident inspections, significantly expanding the council’s response capacity without incurring extra cost.
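As a back-of-envelope check on those numbers (the application count and minutes per application are from the study above; the hours-per-FTE figure is my assumption):

```python
# Rough back-of-envelope model of the soft costs described above.
applications_per_year = 1200
minutes_per_application = 45        # from the time and motion study
hours_per_fte_year = 1800           # my assumption: ~48 weeks x ~37.5 hours

staff_hours = applications_per_year * minutes_per_application / 60
print(f"{staff_hours:.0f} staff hours/year, ~{staff_hours / hours_per_fte_year:.2f} FTE")
# -> 900 staff hours/year, ~0.50 FTE
```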

Applicants and owners have also reported significant increases in satisfaction. Posting to an address is often problematic, as well as slow and expensive: in one case, a rejection posted to an applicant was not discovered for three months, delaying the application and the subsequent development project. Processes that deliver routine correspondence via email but large or statutorily required correspondence via post often suffer from this problem. Moving to fully digital engagement, while not appropriate in every situation, can significantly increase both the speed of completion and the satisfaction of applicants.

Key Benefits –

  • Confirmed delivery of determinations reduced from a week to under an hour.
  • Significant savings in printing costs – $18,000 for one council across around 1200 applications.
  • Significant savings in staff time – equivalent to 1/2 an FTE for one customer.
  • Significant increase in responsiveness to building incidents.
  • Improved satisfaction of applicants and owners.
  • All content interactions captured in the local authority system of record where supported.

Typically used in conjunction with –

  • Information Management System – HPE TRIM/RM/CM, Objective ECM, SharePoint based systems.
  • Infor Pathway.
  • Objective Trapeze.

What is it used for?

  • Delivery of stamped plans post approval.
  • Secure and private collaboration among the assessment team, contractors and with the applicant and their agents on complex approvals.

The gap in the electronic signature market that blockchain and cryptographic technology might be able to fill

Last year DocuSign IPO’d at a valuation of over six billion dollars. They make a great product that provides a high level of convenience and a degree of certainty for the people who use it. At a recent conference I caught a session on digital signatures by Mark Henderson from Adelaide law firm Kelledy Jones. It highlighted where electronic and digital signatures work well, and where there are problems that make them uncertain. There is also one area in which they are currently unusable – and which I think provides a legitimate opportunity for the most over-hyped technology in the market to do something useful.

The requirements for forming a valid contract are well understood, and the legislation and legal precedents for entering into contracts via electronic means exist. In essence, there is generally nothing stopping an electronic contract in most situations. The quotation from one relevant case, by Justice Harrison, reads: “Mr Stuart typed his name on the foot of the email. He signed it by doing so. It would be an almost lethal assault on common sense to take any other view”. There are, however, a number of areas in which electronic means of signing cannot currently be used.

These areas generally require one or both of two things – a witness, or the physical affixing of some form of stamp or seal. I am not qualified to go into the nuances of which instruments these are, but what intrigues me is that these seem like logical areas for a technology like blockchain.

Witnesses and seals raise the volume and quality of information required to reduce the chance that a contract can be repudiated. As an example, a contract with a single signature on it could be repudiated by a company on the basis that the signature was forged. A contract signed by the chairman and CEO of the company, with the corporate seal affixed and witnessed by a non-interested third party on the other hand is far harder to repudiate. Each step adds information that reduces the chance that an apparently valid contract can be repudiated – and this is why blockchain might provide a technical solution in the presence of enabling legislation.

The centrepiece of blockchain is non-repudiation: technology that proves a transaction has not been tampered with. It does this with cryptographic measures that verify an item’s position in the chain based on information from the items before it – each entry effectively incorporates a hash of its predecessor. Once an item is entered in the chain, it cannot be changed without altering every item after it as well. Even that could be accomplished, so a blockchain is also distributed among many peers and kept synchronised to make tampering more difficult – someone trying to change one transaction would also have to get the majority of peers to change their copies of the chain.
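A minimal sketch of the chaining idea, for the curious – plain hashes only, no real signatures or peer distribution, and every name below is mine:

```python
import hashlib
import json

def entry_hash(prev: str, payload: dict) -> str:
    # Hash covers the previous entry's hash, so each entry chains to the last.
    data = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(chain: list[dict], payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload, "hash": entry_hash(prev, payload)})

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != entry_hash(prev, entry["payload"]):
            return False              # a change here invalidates every later entry
        prev = entry["hash"]
    return True

chain: list[dict] = []
append(chain, {"type": "contract", "parties": ["A", "B"], "signatures": ["sigA", "sigB"]})
append(chain, {"type": "witness", "witness": "W", "references": chain[0]["hash"]})
assert verify(chain)

chain[0]["payload"]["parties"] = ["A", "C"]   # tamper with the original contract...
assert not verify(chain)                      # ...and the whole chain fails verification
```

Note how the witness is just a later entry that references the contract’s hash – the same mechanism the paragraph below relies on.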

This all means that a blockchain could provide a publicly available repository of contract data – kept in public and distributed so that it could not be tampered with, and cryptographically signed by both sides of the transaction. Witnessing of the transaction could take the form of a future transaction by a neutral third party that references the original transaction. This witnessing could include cryptographic information that replaces the current function of a corporate seal using Public Key cryptography.

The simplicity of this idea is that it could scale to encompass scenarios requiring multiple signatures and witnesses, and also the affixing of a corporate seal. It could also include amendments and variations to contracts through self-referencing. Given the flexibility of blockchain, it would also be trivially easy to use a digital identity provider to verify identity and sign the transaction – and include information about liveness tests.

Personally, I’ve used DocuSign and formal digital signatures for many transactions, and I’ve entered into contracts by email and other digital methods. The legal framework makes doing this simple and the actual technology largely irrelevant. For transactions requiring additional information to verify their validity, though, we have both a legal and a technical challenge. I think blockchain provides a useful way to solve the technical side of the issue – which would enable a common law test, and might lead us to a world that no longer requires the signing ceremony.

Autonomous machines and ethics – why we need a comfortable answer to who dies before the era of autonomous anything.

We are a hair’s breadth away from the era of autonomous machines. The possibilities they offer are endless, and the way they will influence everything from how work gets done to city design and transport means a future that is barely recognizable. It looks like the first autonomous machines that society at large sees will be cars – but before they can become a part of our lives, we need to solve the problem of how our software decides who dies when there are no other choices.

The trolley problem is a thought experiment in ethics dealing with situations in which an act, or a failure to act, leads directly to a death. The only choice available is to cause a death by action, or to do nothing and let a death occur. While the problem has been used to study how we weight lives – relatives vs. strangers, old vs. young, many deaths vs. a few – the underlying decision is one that will be a reality for autonomous machines. Which is a problem, because someone has to tell them how to make it.

As an example scenario: a self-driving car senses a pedestrian on the roadway in front of the vehicle. It calculates that evasive action is possible, but that the available evasive action is likely to result in the death of the driver. What should the car do in this situation? More pertinently, how should it be programmed to respond?

Computers are deterministic systems. Their programming encodes rules that they follow exactly; there is no randomness to their behavior (unless caused by bugs or bad input). Mostly, this is a great thing. When making life-and-death decisions, though, it means someone needs to tell the computer how to make the decision that it will then follow, exactly. Someone needs to tell the computer who to kill.
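A contrived sketch of what that means in practice – this illustrates the determinism point only, and is not how any actual vendor programs a vehicle:

```python
# Contrived illustration: the uncomfortable part is that a constant like this
# has to exist somewhere, chosen by a person, before the vehicle ever ships.
PRIORITISE_OCCUPANT = False   # hypothetical policy flag - someone must set it

def choose_action(p_occupant_dies_if_swerve: float,
                  p_pedestrian_dies_if_stay: float) -> str:
    """Deterministic: identical inputs always produce the identical choice."""
    if PRIORITISE_OCCUPANT:
        return "stay_in_lane"   # never trade occupant risk for pedestrian risk
    if p_pedestrian_dies_if_stay > p_occupant_dies_if_swerve:
        return "swerve"
    return "stay_in_lane"
```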

This means that we need to make the decision ahead of time about who will live, and who will die. More importantly, we need a societally acceptable way to program something to make the decision – and it doesn’t exist.

The problem is larger than autonomous machines: there is no societally acceptable way to make a decision about who should die. It’s why legislation doesn’t tell us how much a life is worth – the decision is left to judges and actuaries.

In the context of autonomous vehicles, this means the decision will be left to actuaries to game out and programmers to execute. Ultimately, we won’t decide how to handle it. We’ll let a company make the decision, we’ll let someone die – and then we’ll wait years for a judge to decide who was liable and whether the decision was made acceptably, at which point it becomes an insurance issue. The alternative scenario is that it becomes a political issue – and ultimately, I think that’s worse for everyone.

We don’t have legislation that tells us how much a life is worth because no politician wants to touch the issue. It is a no-win scenario: there is no political upside in putting a price on a human life, or in providing a way to decide who should die. Politicians are also action-oriented people – when was the last time a politician appeared on the news saying they weren’t going to “do something”? Logically, the only thing a politician can do is impose a ban, or conditions that might as well be one.

This is one reason I think it will be quite a while before we see a totally autonomous private vehicle – at least on a road shared with human drivers. At some point there will be a real-life trolley problem, and someone will die. If we have not decided how to make the decision in a societally acceptable way, we leave it to chance that a political process will make it for us – and we may give up the next major advancement in the quality of our lives.

The four questions you need to ask to balance external collaboration and risk management

In Darkest Hour there’s a scene in which Winston Churchill is almost begging Franklin Roosevelt for the warplanes the UK had ordered and now desperately needed. It got me thinking about collaboration: how important it can be for gaining capability, and how badly it can go if we don’t assess the new risks that collaborating entails. I’ll refer back to this scenario throughout the post – and for the history buffs, please note that I know I’m not being historically accurate.

Collaboration is essential to doing business – I don’t think this is in dispute (if you’d like to dispute it, I’ll happily debate you). What I don’t think gets talked about often enough is how we should think about the decision to collaborate. All collaboration should be cost-benefit based, with the decision predicated on whether the benefits outweigh the risks.

I’ve found that the frame below is a useful way to think about the decision to collaborate with another organisation.

There are four basic questions that need answering:

  1. What benefit is there to collaborating?
  2. What new risks does the collaboration expose us to?
  3. Can you effectively mitigate the risks?
  4. Who has the authority to decide whether the balance is right?

1. What benefit is there to collaborating?

The UK needed planes to fight a resurgent Germany – but should they build them, or collaborate to get what they needed sooner and maybe even cheaper?

Stage 1 must be about quantifying the gain from collaboration. Collaboration is always about achieving objectives cost-effectively: if your organisation can already achieve the objective cost-effectively, you don’t need to collaborate. Your thesis should be that collaborating will let you gain access to information, capability or knowledge at a cost that makes the risks of collaborating worth the organisational gain. The deliverable from this stage should be a cost-benefit analysis.

2. What new risks does the collaboration expose you to?

Getting the planes sooner and cheaper gives the UK more capability, and more reserves to buy more capability if they need it – but what risks will relying on the US introduce?

Collaboration will always expose your business to new risks – so stage 2 should be identifying and quantifying them. New risks will principally come from two places –

  1.  The fact that your counter-party will always have better information about what they offer.
  2. The information you will expose about and from your business while collaborating.

To get this assessment right, you should seek input from experts in all areas of your business. Once you have an accurate risk register, and assessment of the magnitude of harm, you can move on to mitigation strategies.

3. Can you effectively mitigate the risks?

The war-winning (or war-losing) question – if the US doesn’t deliver the planes when we need them, what do we do?

Once you understand the risks, mitigation measures can be defined. Risk mitigation will look to balance the incentives of involved parties, minimise the amount of information leakage, make sure adequate information about service quality is available, and ensure that adequate backup plans are in place. Once you understand your ability to effectively mitigate each risk, you can then make a call about whether to insure against or accept any residual risks.

4. Who has the authority to decide whether the balance is right?

Get it right – and the UK gets the planes and has the capability it needs to defend itself. Get it wrong, and the UK faces a resurgent Germany without what it needs to defend itself. Who had the authority to risk the future of the United Kingdom on this collaboration?

Someone in the organisation will need to accept the risks of a new collaboration; how high in the organisation they sit will depend on the impact of loss. If you know who this person is, consider their appetite for risk early on and include it in your early go/no-go decisions. It goes without saying that putting together a great case for someone whose risk aversion won’t let you move forward should be avoided.

Conclusion

We all collaborate many times a day without thinking about it. When we go across the boundary of our organisation, the dynamic changes. We should be doing it because there is a cost-benefit relationship that lets us access information, capability and knowledge at far below the cost of developing it. Benefits, though, are only one side: we also need to consider the risks, and whether we are able to take those risks on behalf of our organisation. If you keep this frame in mind, you should get good outcomes.

Does this reflect your view on collaboration and how it should be approached? If not, I’d love to hear from you and understand why.

The only four reasons why you should collaborate with another organisation

Collaboration is “the act of working together to produce something”. Over the last ten years it has been billed as a magic bullet and, like so many magic bullets, it has become unfashionable to question why, how and whether we achieve value from it. I think that’s a mistake – one that has eroded our understanding of collaboration and led to the current magic-bullet view.

In a previous post, I outlined what you can get access to when you collaborate. Just because you can though, it doesn’t mean that you should. There are four reasons that I think make collaboration worth consideration:

  1. You need something faster than your organisation can deliver it.
  2. Developing or acquiring what you need in-house is too expensive.
  3. Your organisation cannot create what you need because of regulations.
  4. You’re looking for the capability to become a customer.

Under any other circumstances, you should seriously question the need to collaborate. Collaborating is a build or buy decision that has to stand up to a cost-benefit analysis. If your organisation can deliver what you need fast enough and at a market competitive price, without violating any regulations, what justifiable reason do you have for collaboration?

The three things you can get by collaborating with another organization.

Collaboration is “the act of working together to produce something”.

It’s fundamentally about going outside your organization to get access to something without having to build it. There are also times when you can’t get what you need internally – for regulatory reasons, or because building the capability just to become your own customer is a self-defeating cycle: you’d only be buying your own stuff.

When you collaborate with another organization, you should think about the gap you are trying to fill in terms of one of the following three things –

  1. Information – facts about something or someone.
  2. Knowledge – the theoretical or practical understanding of a subject.
  3. Capability – the power or ability to do something.

Once you define your need in one of these terms, you’re much more likely to make the right decision about whether you should collaborate, and then to find an efficient supplier of what you need.

Where is multi factor authentication in the 2017 Australian Government Information Security Manual?

This post is going to be a bit dry: it is written to provide an accurate overview of exactly where you can find multi-factor authentication controls in the 2017 Australian Government Information Security Manual (ISM), and is accurate as at 3 March 2017. If you are in a security or IT decision-making role and are considering whether multiple factors of authentication should be part of your security apparatus, the ISM provides both a minimum standard for accreditation and guidance that can inform your risk assessment. Each control is contextual and doesn’t apply to every situation – you should seek a qualified opinion from a member of the IRAP program to ensure you are assessing the right controls.

The minimum standard is imposed through controls listed as “must” for compliance purposes; where some trade-off between control and ease of access is appropriate, controls are listed as “should”. What is clear from the ISM is that for system administration activities, acting without multiple factors of authentication is not considered acceptable. In some individual user access scenarios, though, multiple factors are listed as “should”. This relaxation for end users gives each agency scope to consider the level of risk associated with access to the system, the burden it is appropriate for users of that system to bear, and the operational complexity that the additional factors add.

As always, before looking at controls, the classification of information the service will carry needs to be decided. The cost of achieving each higher classification rises substantially, and each successive classification weights access control more heavily than ease of access. From a risk perspective, whether a “should” ought to be treated as a “must” also deserves consideration. Appropriately qualified security and risk management personnel should be engaged to advise on these matters.

From a pure control standpoint, the controls focused on multi-factor authentication are listed below, each applies to all classifications –

  • 0974 – “Agencies should use multi-factor authentication for all users.”
  • 1039 – “Agencies should use multi-factor authentication for access to gateways.”
  • 1173 – “Agencies must use multi-factor authentication for” – system and database administrators, privileged users, positions of trust and remote access.
  • 1384 – “Agencies must ensure that all privileged actions must pass through at least one multi-factor authentication process.”
  • 1401 – “Agencies using passphrases as part of a multi-factor authentication must ensure a minimum length of six alphabetic characters with no complexity requirement.”
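To illustrate the must/should split, here is the same list expressed as data, with the accreditation floor being the set of “must” controls. This is my own summary for illustration, not an official ASD artefact:

```python
# My own summary of the controls quoted above, expressed as data.
MFA_CONTROLS = {
    "0974": ("should", "all users"),
    "1039": ("should", "access to gateways"),
    "1173": ("must", "administrators, privileged users, positions of trust, remote access"),
    "1384": ("must", "all privileged actions"),
    "1401": ("must", "passphrases used as an MFA factor: minimum six alphabetic characters"),
}

def minimum_standard(controls: dict[str, tuple[str, str]]) -> list[str]:
    # The accreditation floor is the set of "must" controls.
    return [cid for cid, (level, _) in controls.items() if level == "must"]

print(minimum_standard(MFA_CONTROLS))  # -> ['1173', '1384', '1401']
```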

Some discussion of multi-factor authentication can also be found in the “Access Control” section of the ISM – Principles manual.

All the documentation you need can be found at https://www.asd.gov.au/infosec/ism/index.htm

The 2017 ISM – Controls can be found at – https://www.asd.gov.au/publications/Information_Security_Manual_2017_Controls.pdf

The ISM – Principles (2016 edition, the latest at the time of writing) can be found at – https://www.asd.gov.au/publications/Information_Security_Manual_2016_Principles.pdf

What access security really is, and the myth of “totally secure”

A good definition of security will let you do two things –

  • Make more objective decisions about how much to spend on security.
  • Sort out who doesn’t know what they’re talking about (and who the liars are).

So what is security as it relates to access?

Access security is the process of making access cheap for people who are authorised, and expensive for people who are not authorised.

That’s a simple but objectively useful definition. It’s useful because it can be applied simply to every form of access security – armed guards, door locks, IT security, theft laws – it’s universal. With the definition in place, you can move on to a conversation about how expensive access should be, then think about how you invest. Any investment in security should introduce significantly more cost for someone attempting unauthorised access.

In the light of that definition, it’s also easy to see why “totally secure” is a myth. Any form of access means something is now only secure against a certain amount of expenditure.
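If you want to make that test mechanical, it might look something like the sketch below – the tenfold threshold for “many times” and all of the numbers are mine, purely for illustration:

```python
def worth_buying(control_cost: float, attacker_cost_added: float) -> bool:
    # By the definition above, a control earns its keep when it makes
    # unauthorised access many times more expensive than the control itself.
    return attacker_cost_added > 10 * control_cost   # "many times" is my placeholder

# Hypothetical numbers: a $2,000 control that adds $50,000 of attacker cost
# passes the test; one that adds only $3,000 is a warm and fuzzy feeling.
print(worth_buying(2_000, 50_000))  # True
print(worth_buying(2_000, 3_000))   # False
```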

Next time you’re talking to someone about security and they’re asking you to make an investment, ask them how much access expense the investment will add for an unauthorised person. Then you can compare it to the cost of what you’re securing. If you can’t make sense of that, find someone who can help you. Any investment in security that doesn’t make unauthorised access many times more expensive than its cost is just the purchase of a warm and fuzzy feeling.