You Don’t Know AI: What Can AI Actually Do in 2023? A Detailed Strategic Assessment of Generative AI

Obligatory DALL-E AI-generated image for “AI” and no, DALL-E can’t consistently spell AI

Obligatory AI Disaster Meme

What is it?
How does it work?
What can it do?
What can’t it do?
Who controls it?
How do I leverage it?
What is the strategic context?
So what?

Key Takeaways:

  • ChatGPT is a chatbot powered by GPT AI. AI is so much more than just Chatbots.
  • AI is not one thing. AI is thousands of organizations developing thousands of pieces of individual software, with the money-making market share held by big tech companies. The scary part is when you create synergy using multiple AI-driven tools.
  • Artificial Intelligence is now an out-of-control arms race that nobody really understands. It’s changing too much, too fast for ANYONE to keep up with.
  • Artificial Intelligence is just math that can mimic human thought and potentially automate anything you do with a computer.
  • The current generation of AI is showing patterns in its strengths and weaknesses. It’s good at automation, computers, and patterns/math. AI is bad at context, wisdom, and judgment.
  • The biggest danger of AI is what humans are doing with it. 
  • The problem with AI is safety and trust in a barely tested, evolving black box.
  • While AI is happening absurdly fast in some places, it will take years, possibly decades, to hit its potential and achieve widespread use. But some people’s lives have been changed by AI already.
  • Strategically speaking, AI is powerful, unpredictable, incredibly useful if you figure it out, and useless to dangerous if used incorrectly.
  • Our immediate challenge is that AI makes fake audio/video/news/science/government so quick and easy that we no longer know what is real.  


Why are we here? AI clickbait? Weirdly, no. This is a strategic assessment of what we know about AI in 2023.

ChatGPT is the fastest-growing consumer application in human history, and in less than a year, it’s already changed so many things. Just like electricity, computers, the internet, smartphones, and social media – AI is adding another layer of disruptive technology that will change things even more, even faster.

One hundred years ago, only half of the United States had electricity. Imagine life in your home without electricity. That’s probably the scale of the change we are starting. And just like electricity, the types of jobs we have and the work we do will change as AI changes the economy. 

No, it’s more than just that. We are here because the people who build and own AI aren’t sure why it works, what it can do, or how to control it. But AI is already outsmarting us. We know AI is learning at a double-exponential rate, and the first to cash in wins. People are still people. And AI is officially an arms race. Artificial Intelligence is already out of control, connected to the internet, and used by millions of innocent people while AI’s creators are still testing it and figuring out what it does.

You could argue the same was true of electricity. The difference is electricity does not think, talk, or make decisions.

AI is giving free advice, running businesses, managing investments, creating art, making music, creating videos, writing stories, faking news, faking research, and writing legitimate news articles. And that was in 2022 before it worked well. In 2023, AI is doing so much more.

And oh yeah, between March 2023 and July 2023 – ChatGPT forgot how to do some math (and other things). But the year before, it taught itself Persian and did stuff in Persian for months before its creators knew about it. AI is constantly changing, and not always for the better. That’s a feature, not a bug?

The worst part is what I found in the summer I spent researching and writing this strategic assessment.

All the experts contradict each other. There is no consensus on what AI can do, let alone the consequences. Meanwhile, industry is changing AI every minute while we are “discovering” what AI was doing all by itself months ago… Nobody knows or understands all the layers of AI and what it could already do – let alone where it’s going.

Many experts say AI can’t do “X,” while the AI developers publish papers of AI doing “X” in experiments. AI technology is changing so fast that every day all summer, I found new things that belonged in the assessment and many things that were no longer valid and needed to be taken out.

AI is two things – it’s insanely unpredictable and unfathomably powerful. And it’s been growing out of control on a leash, mostly locked behind bars, for a couple of years now. The Jurassic Park movies are not a bad analogy for the situation with AI. 

Just like Jurassic Park, AI is a powerful and beautiful corporate money grab intended to improve the world, but the powers that be are failing to keep the potential danger locked up. Bad actors have been doing bad things powered by AI for months now. And that’s before AI escapes and starts doing things on its own in the wild.

If you don’t believe me – look up job postings for AI safety. It’s now a job title. Indeed and ZipRecruiter have plenty of openings.

Keep in mind both you and the neighbor’s kids had access to AI yesterday. Millions of people are already using AI for millions of different things, with varying results. Not to mention, it’s already on your phone, managing your retirement investments and figuring out the perfect thing to sell you on Amazon.

With that drama out of the way: if you read the AI white papers on the math, the new generative large language models are a quite clever, brute-force approach to making computers think like people, with most of the progress made in the last few years.

I’ve spent 14 years raising my child, and I still have behavioral issues to work out and growing pains to parent. Generative AI is effectively a 3-year-old child with three hundred bachelor’s degrees and an IQ over 155 that speaks dozens of languages and can see patterns that no human can with the ability to do billions of calculations per second. Industry is waiting to see what growing pains AI has.

TL;DR – Exec Summary

“Give a man a fish, and you feed him for a day

Teach a man to fish, and you will feed him for a lifetime.

Teach an AI to fish, and it will teach itself biology, chemistry, oceanography, evolutionary theory

…and fish all the fish to extinction.”

  • Aza Raskin, AI researcher, March 2023

AI isn’t magic. AI is just math. Math with access to every piece of information on the internet running a simulated model of a human brain powered by cloud servers and the internet. It’s ridiculously complicated – but in the end, you get an electronic virtual brain that is only limited by the number of computers connected to it. And just like a child, it only knows what it’s been taught, and its behavior is based on the wiring of its brain and how well its parents raised it.

Now understand that AI is teaching itself with human input and machine learning. And the speed at which it learns is accelerating. Think about that. Every moment it’s accelerating and learning faster.

Now keep in mind that 2023 AI is the culmination of the last 80 years of computer science. So, everything that’s happening has been worked on, discussed, and predicted. The only thing surprising the experts is how FAST it’s improving.

AI is a FOMO-driven arms race between technology companies. And because AI grows so fast, it could be a zero-sum, winner-take-all scenario. The tech industry is acting as if whoever loses the AI arms race will be out of a job (which is a believable possibility).

Software bugs have killed people already. Look up Therac-25, Multidata, or ask Boeing about the 737 MAX flight control software crashing planes. AI doing more software automation will result in more software-related deaths.

People misuse technology to manipulate or hurt other people. Dark patterns, Deepfakes, AI scams, identity theft, micro trading, social media propaganda, data monopolies manipulating markets; we are just getting started. And AI makes bad actors more powerful, or it can copy them and duplicate their efforts for its own ends.

Unintended consequences tend to hurt people. Like the last 15 years of social media empowering us to hate each other while creating extra mental health issues and sleep deprivation. AI is no different, and probably worse. Especially if you consider that the algorithms of social media are technically the previous generation of AI. Remember “the algorithm”? That’s a type of AI.

The genie is out of the bottle. The underlying technology and math are now publicly known. Given enough computing power and skill, anyone can build a powerful AI from scratch (I just need like 2-3 years of funding and unrestricted access to several server farms or a huge zombie PC botnet). Or not – in early 2023, Facebook/Meta’s LLaMA AI technology was leaked/stolen. So, the Facebook AI now has bastard children like RedPajama out there. 2023 AI technology is free for anyone to copy and use without limits, regulation, or oversight.

AI allows us to do everything humanity already does with computers, just significantly faster and with less control or predictability of the results (Similar to hiring a big consulting firm). There will be haves and have-nots, and like any change, there will be many winners and losers. 

That’s why people are calling for a pause on the training/growth of AI and creating regulation. Many wish to slow down and try to test out as many dangerous bugs as possible while mitigating unintended consequences. AI safety and trust are the problem.

AI is in the public domain. It’s also an arms race between thousands of tech companies that don’t want to be left behind. AI is not defined, not regulated, and barely understood by the people who made it, let alone those who own it, govern it, or are victims of it (much like social media).

The problem with AI is safety and the trust of an unpredictable and autonomous power that we cannot control any better than we can control each other.

And the 2023 generation of AI is well known to lack common sense and judgment. 

Getting that summary out of the way – now we get into the detailed strategic assessment of things and the explanation of the summary. Looking at facts and consequences, even some data points. This is just like a strategic assessment of opponents, allies, assets, tools, processes, or people….  There are many methodologies and approaches out there. For today we are going to keep it simple.

What is it?

How does it work?

What can it do?

What can’t it do?

Who controls it?

How do I leverage it?

What is the strategic context?

So what?


“I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines. What I had not realized is that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

“Computer Power and Human Reason,” Joseph Weizenbaum, 1976

The way I need to start this is by pointing out that Joseph Weizenbaum built the first chatbot, Eliza, in 1966. That’s back when computers ran off paper punch cards. And in 1966, people were tricked into thinking computers were alive and talking to us. Computers were actually borderline passing a Turing test almost 60 years ago.

The main difference after 57 years is that computers have gotten powerful enough to do all the math needed to make AI practical. And we have had a couple of generations of programmers to figure out the math. There are more than five decades of AI history that inform us about the foundations of AI. 

ChatGPT 3.5 getting stuck in Limerick mode

In 2023, we have the computing power for AI prompts that require billions of calculations across cloud servers, whether to answer your question about Peking Duck or to write a limerick in the voice of Yoda. And when I tried that with ChatGPT 3.5, it wrote me Yoda limericks about Peking Duck. So, a sense of humor?

While the technology is, in fact, a mathematical simulation of a human brain – it’s still just a math equation waiting for you to tell it what to do. (So be careful what you tell it to do).

This is scary because anybody can now set up their own personal, uncontrolled AI to do things, and we don’t know what it will do when trying to solve a request. AI’s decision-making is just as opaque as a human’s, because it thinks the way a human does.

What is it?

AI – artificial intelligence – has no formal definition, which is why it covers so many technologies, and why the laws, regulations, and discussions go all over the place.

AI is a catch-all term for getting computers to do things that people do: quite literally, making computers think or act like people, do logical things like play games, and act rationally. But much like your cat, your coworkers, and your crazy uncle, the ability to think doesn’t automatically make it predictable or useful for what you want, but it can still be very powerful and dangerous.

There have been many attempts to describe AI in different “levels” of ability (Thank you DOD). But that’s not how it works. AI is actually a series of technologies and tools that do different things and, when combined, can be absurdly powerful at what they are designed to do. And because AI is now self-teaching, it learns very quickly. But it is very far from replacing humans, and it can’t break the laws of physics any better than you or I.

As of 2023, a new technology has taken over artificial intelligence – generative pre-trained, multimodal large language transformer models. Very basically, it’s a kitchen-sink approach that combines large language models, machine learning, and neural networks, fed by the internet and powered by cloud computing, to make AI smarter than ever before.

But as of today (2023) – AI is still only a mathematical tool that uses amazingly complicated statistics and brute force computing power to do some things humans can do, but much faster and sometimes even better than we can (like AI generated Art). But it’s still a machine that only really does what it was built to do.

All the AI we currently have is “Narrow AI,” which means AI that is good at basically one thing. While the underlying tech is all similar, chatbots are good at language and AI art models do art. Large language models can do anything with a text interface. Right now, there is not an AI that does both well. But Narrow AI isn’t as narrow as it used to be, and it’s getting broader every day.

How does it work?

In one sentence? It throws weighted dice at a billion books a billion times to output language that sounds about right.

The short version: new generative AIs are given an input, and they give you an output that looks and feels like a human made it. This works for any pattern of data fed into the model – it started with language but works with text, voice, art, music, math problems, computer programming, radar signals, cryptography, computer code, genetics, chemistry, etc. 
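That “weighted dice” idea can be sketched in a few lines of Python. This is a toy bigram model with made-up words and weights, nowhere near the real transformer math, but the core move is the same: look up what tends to come next, then roll weighted dice.

```python
import random

# Toy "weighted dice": made-up counts of which word tends to follow which,
# as if learned from a (tiny, invented) training corpus.
next_word_weights = {
    "the":   {"pizza": 5, "cat": 3, "internet": 2},
    "pizza": {"is": 6, "tastes": 4},
    "is":    {"good": 7, "hot": 3},
}

def generate(start, length=3, seed=42):
    """Roll the weighted dice repeatedly to extend a sentence."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = next_word_weights.get(words[-1])
        if not options:
            break  # the toy model has nothing statistically "next"
        # Pick proportionally to the weights -- the "what comes next" step.
        words.append(rng.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))
```

The real thing runs the same “pick the statistically likely next token” loop, just with billions of weights instead of a dozen.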

The “magic” is in the layers of math happening in the digital circuits that accomplish that feat. 

The problem is that it typically requires a level of computational power only currently possible on the cloud. We are talking dozens of computers in a server farm, taking 10 seconds to answer a basic question like “What is a good pizza?”

Here are the basic steps of how a 2023 Generative Large Language Model (AI) works (technically a generalized summary of GPT technology):

  1. Large Language Model – First, you put everything into a database. And by everything, I mean every word on the internet. Every web page, every book, every podcast, every song, every video, every image. Billions of web pages, trillions of words of text, 500 million digitized books, billions of images, hundreds of millions of videos, millions of songs, millions of podcasts. Not to mention all the public bits of Stack Overflow and other public computer code repositories. In every language. At a minimum, this makes AI a new web search utility – because it knows EVERYTHING on the internet. (But nothing that isn’t on the public internet.)
  2. Neural Network and Predictive text encoding – Literally a mathematical model that mimics the neurons of a human brain. The AI builds a mathematical matrix of every word, every image, every sound, etc. – and then gives a “weight” to the statistical significance between all of them. This statistically makes hundreds of billions of mathematical connections or “weights” between every word and all the data, with years of computer processing (initially roughly 2017–2020) connecting the dots to understand how everything is related, picking up the patterns in the data.
    1. Some word prediction techniques used –
      1. Next Token Prediction – what comes next statistically, “The pizza is ____”
      2. Masked Language Modeling – fill in the blank statistically “_____ pizza goes well with cola”
  3. Context Mapping – Creating a contextual ontology – Imagine a giant cloud of words where the distance between them describes whether or not they are part of the same subject. Now make a different map like that for every word and subject matter. Connecting the context of words like “cheese,” “pizza,” “pepperoni,” and “food.” To get the context of groups of related words, ChatGPT built a matrix of roughly 50,000 tokens, each tracked across 12,288 dimensions, capturing the context and relationships of words in English.
  4. Mathematical Attention Transformer – Even more math to understand the subject in sentences and figure out what words in your question are more important. Like listening to what words people stress when speaking.
  5. Alignment to desired behavior – adding weights to the model – Human Training/Supervised reinforcement learning – Take the AI to school, give it examples of what you want, and grade/rank the outputs so the AI knows what good and great outputs look like. Positive reinforcement
    1. This means humans either directly or indirectly change the weights in all the words, typically through examples.
    2. The scary truth is most of the human labor “training” AI is done by temps getting minimum wage in Silicon Valley or by overseas sweatshops that speak the desired language. Much of the English-speaking human reinforcement is done in the Philippines or India (because wages are lower?)
    3. Consider for a moment that most of the man-hours of humans checking AI responses are done by the cheapest labor available (because it requires tens of thousands of man-hours). Do you think that affects the quality of the end product?
  6. Motivation – Reinforcement learning – Machine learning against the human input weights. Works by letting the AI know what is good and bad based on the above human checks. Like grades or keeping score. The trick here is that the moment you keep score, people cheat to improve their score (like in sports and in KPIs). AI is known to cheat just like people do (more on that later).
  7. Machine practice/Machine learning – Use the trained model examples to self-teach against billions of iterations until the AI learns to get straight A’s on its report card
    1. This is the machine running every possible permutation of statistical weights between words, context mapping, the neural net, etc. – and then fixing them where they don’t match the ranking in the trained examples, Creating its own inputs and outputs.
    2. Here’s where things get ethical and complicated – AI is like a child, you give them examples of good and bad behavior, and then they form their own conclusions based on experience. ChatGPT is trained to have values based on Helpfulness, Truthfulness, and Harmlessness. But with billions of relationships in its model, there’s more there than anyone could possibly understand (just like predicting a human brain). And even the “Good” or “Nice” AIs are still just an amazing calculator that does garbage in, garbage out. Now we wonder if the AI trainers who got paid minimum wage had the right sense of humor when training the AI 40+ hours a week on a contract temporary worker gig.
    3. Keep in mind it takes decades to train a human. The AI doesn’t know what it doesn’t know, and there are millions of things the AI hasn’t been specifically trained on. That is why it has problems with context and with telling reality from the statistics of pattern recognition. In the machine, it’s just numbers representing real-world things.
  8. When you ask the AI a question, first, it turns all your words into corresponding numbers. It ranks every word it knows against the combination of words you gave it, looks at the contextual nature of the relationships of the words you used, transforms its attention to prioritize the keywords, and figures out what word is statistically most likely to come next given the rules of language, what should come next, and what fills in the blank given the context and attention, word by word until it finishes answering your question. ChatGPT does this with roughly 175 billion operations for every word (one per model parameter), through 96 layers of mathematical operations that turn it all into output numbers and then put it back from numbers into words.
    1. Or in short – GPT and other Large Language Models use math to figure out what combination of words work as a reply to the words you gave it. It’s just statistics. It doesn’t know what words are. It turns words into numbers, runs the numbers, and gives you the most likely output numbers. Then turns those numbers back into words for you to read.
  9. Then add some random (or deliberate) number generation and “personality” and even humor against similarly ranked words to make the output paraphrased and more nuanced instead of copy/paste.
  10. Yes, that generates a few billion numbers for every word that goes in and out of the AI. This is why AI like this at scale wasn’t possible until we had hundreds of giant server farms connected to the internet and why it still can take a minute for AI to understand your question/prompt and produce an answer.
    1. And as of 2023 specialized AI servers filled with AI processor chips are for sale for the price of a house. 
  11. Now new technology is adding layers of “thought algorithms”, business specific models, and more math and programming to take what GPT already knows and gives you a more personalized or relevant experience to accomplish certain tasks.
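The context mapping in step 3 can be sketched with toy word vectors. The numbers below are invented 3-dimensional stand-ins for ChatGPT’s 12,288-dimensional ones, but the idea holds: words that share context point in similar directions, and cosine similarity measures how close they sit.

```python
import math

# Invented 3-D "context" vectors (the real model uses 12,288 dimensions).
vectors = {
    "pizza":     [0.9, 0.8, 0.1],
    "pepperoni": [0.85, 0.75, 0.15],
    "cheese":    [0.8, 0.9, 0.2],
    "stock":     [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """How close two words sit in the context cloud (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

food = cosine_similarity(vectors["pizza"], vectors["pepperoni"])
finance = cosine_similarity(vectors["pizza"], vectors["stock"])
print(f"pizza~pepperoni: {food:.2f}, pizza~stock: {finance:.2f}")
```

“Pizza” and “pepperoni” come out nearly parallel; “pizza” and “stock” don’t. Scale that comparison up to 50,000 tokens and you have the context cloud.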

Literally – give ChatGPT an input, and it weighs all words individually, runs each word through a matrix of relationships of weights, and then a matrix of word context to statistically give you output for “Write me a limerick based on Star Wars” without including stuff from Star Trek.
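The words-to-numbers round trip described above looks roughly like this. The seven-word vocabulary is obviously made up; GPT’s real tokenizer has around 50,000 entries and splits words into sub-word pieces, but the in-and-out conversion has the same shape.

```python
# Toy vocabulary: the model never sees words, only these id numbers.
vocab = ["write", "me", "a", "limerick", "about", "star", "wars"]
word_to_id = {word: i for i, word in enumerate(vocab)}

def encode(text):
    """Words in -> numbers in (what the model actually computes on)."""
    return [word_to_id[w] for w in text.lower().split()]

def decode(ids):
    """Numbers out -> words out (what you actually read)."""
    return " ".join(vocab[i] for i in ids)

ids = encode("Write me a limerick about Star Wars")
print(ids)          # the prompt as the model sees it: [0, 1, 2, 3, 4, 5, 6]
print(decode(ids))  # and back into words
```

Everything between `encode` and `decode` – the weights, context maps, and attention – is just arithmetic on those numbers.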

What can it do?

“Give a man a fish and you feed him for a day

Teach a man to fish, and you will feed him for a lifetime.

Teach an AI to fish, and it will teach itself biology, chemistry, oceanography, evolutionary theory

…and fish all the fish to extinction.”

  • Aza Raskin, AI researcher, March 2023

(Yeah, I’m repeating that quote – because the consequences are such a paradigm shift that most people need to treat it like graduate school – about 7 reminders over several weeks just to process how different this is).

AI typically solves math to optimize results. Chatbots simply optimize the best words and sentences to answer your prompt. AI built on the same technology, pointed at real-world problems and given the ability to act in the real world, just keeps running the numbers – experimentation and machine learning – until it succeeds at its task. This is why it can easily fish all the fish to extinction before it understands that is not a desired result.

AI is a tool. But it’s a tool that can think for itself once you set it loose.  

  • At the basic chatbot level – it is amazing at answering basic questions like you would look for in search engines (Google or Bing). It’s also a good assistant that can write passable essays, reports, and stories, answer questions, and help you plan a vacation or a meal. Speaking of which, software like Goblin Tools is impressive at task planning, breaking things down, and helping with executive function. And talking to Pi is just fun.
  • Above is why AI can now easily pass written tests – like the SAT, bar exam, and medical licensing exam. When it comes to math, science, and engineering, it can come up with equations and solve story problems at the undergraduate level – making it equal to roughly a bachelor’s degree in math, science, or engineering. And it’s getting better.
    • At least until the summer of 2023, when researchers at Stanford and Berkeley noticed GPT had gotten much worse at math. Everyone thinks it has to do with the weights in GPT changing, but nobody is sure – because those billions of connections in the neural net are really hard to make sense of.
  • AI is very book smart. It can quote almost any book, podcast, YouTube video, or blog, if you write the prompt correctly and it doesn’t get creative when paraphrasing.
  • It’s basically a super calculator that can analyze patterns in data and give you back amazing results. Which is how it can create derivative art, music, stories, etc.
    • More on this later, but this is how we can use AI to see by Wi-Fi signals, calculate drugs based on chemistry, predict behavior, etc.
  • AI is amazing for automating simple, repetitive, known tasks. If you give it a box of Legos, it can make a million different things out of Legos through sheer iteration. If you ask it to make a blender out of Legos it will keep on making millions of things until something works. If you give it existing software code and ask it to make a new video game or tool with new things added on, it can get you close just by making a variation on what has been done before, but it will probably need some edits.
    • A good version of this is AI calculating billions of combinations of chemical molecules and devising new antibiotics, medicines, materials, and chemicals that do the work of millions of scientists in a matter of days. That is already happening.
  • Strategy. The experts online are wrong when they say AI can’t do strategy. If used properly, AI is amazing at strategy. Not necessarily strategic planning, but actually doing strategy connected to data and the internet. GPT-4 understands theory of mind and was as manipulative as a 12-year-old months ago. That’s based on testing done by ChatGPT’s creators and academic researchers. It scares some of them. While AI is not literally “creative” about strategy, it can iterate, find and exploit loopholes, and develop dominant emergent strategies through power-seeking behavior (more on that below). Add in the fact that AI iterates very quickly, knows everything on the internet, and never takes a break to eat or sleep, and AI is quite literally relentless. As I wrote in 2011 about the StarCraft AI competition, AI is strategically scary, if somewhat brute-force unimaginative. Imagine all the tricks used to make a super-strategist StarCraft AI, and instead apply that technology to online business or drone warfare.
    • AI can do high speed Six Sigma Process improvement to EVERYTHING and then build a dark pattern that simply manipulates all the employees and customers into an optimized money-making process. And it will do that by accident simply through the brute force of try, try again power-seeking behavior. Try a million things a day, keep what works, and you have a brute force emergent strategy simply finding the path of least resistance. Like water flooding downhill.
    • In terms of OODA loops, AI can observe, orient, decide, and act multiple times every second. It doesn’t delay, it doesn’t hesitate, and it doesn’t suffer from decision fatigue (unless it runs out of RAM, but that’s an improving engineering question of how far AI can go before bouncing off hardware limits. The new NVIDIA H100 AI servers spec at 4TB of DDR5 RAM – 250 times the GPU RAM of a $5,000 gaming PC). AI doesn’t get hungry or tired, and it learns from its mistakes multiple times a second.
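The “try a million things a day, keep what works” behavior described above can be sketched as a bare hill-climbing loop. The scoring function here is a made-up stand-in for whatever the AI is optimizing (profit, wins, fish caught); the point is that no insight is required, only relentless iteration.

```python
import random

def score(plan):
    """Made-up objective: highest when every knob is set to 7."""
    return -sum((x - 7) ** 2 for x in plan)

def brute_force_strategy(knobs=5, iterations=10_000, seed=0):
    """Try random tweaks, keep whatever scores better. Water flooding downhill."""
    rng = random.Random(seed)
    best = [rng.uniform(0, 10) for _ in range(knobs)]
    for _ in range(iterations):
        candidate = [x + rng.gauss(0, 0.5) for x in best]
        if score(candidate) > score(best):  # keep what works, discard the rest
            best = candidate
    return best

plan = brute_force_strategy()
print([round(x, 1) for x in plan])  # creeps toward the optimum (all knobs near 7)
```

Ten thousand blind tweaks find the optimum with no understanding of the problem at all. Now imagine the knobs are prices, ad copy, or drone maneuvers, and the loop runs millions of times a day.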

Ask the Air Force pilots who got wrecked in simulators by AI – when AI makes a recognizable mistake, it notices and fixes the mistake less than a second later. You can see it fix its mistakes in real time – super-fast reaction time compared to most people. The same is true with any game or scenario you train the AI to win. It adjusts multiple times a second until it gets you. If the game is turn-based like Go, checkers, chess, or Monopoly – you have a chance. But in real-time competition, AI is making hundreds of moves a second. For comparison, a human professional video game player averages around 180 actions per minute and peaks at around 1,000 actions per minute during a sprint of activity. And that is after thousands of hours of practice, playing a game they have memorized. AI just naturally goes that fast or faster.

  • In fact, in some pro tournament settings, the AI is limited to 250 Actions per minute on average to make it “fair” against professional players. And even then, that means sometimes the AI will go slow for a while, only to then burst at inhuman speed with one or two hundred actions in a second to simply overwhelm a human player in that second. Not to mention in a video game or similar software interface it is easy for the AI to be in multiple places at once, where that is extremely hard for a human, even using hotkeys, macros, and other tools.
  • High-frequency trading – look it up. High-frequency trades make money by taking advantage of short, fast fluctuations in the market at computer speed. There are many versions. In one version, years ago, computers were making a couple of pennies every millisecond simply by watching you request a trade online, buying that stock in a millisecond, and then selling it to you at a penny markup a hundredth of a second later. Multiply that by millions of trades. I believe that version is technically illegal now. Algorithmic high-frequency trading started as early as 1983, and by 2010 high-frequency trades could be done in microseconds. The point is computers can do some things MUCH faster than any human can notice them doing it. And they can manipulate markets. An AI can put smarter judgment and decision-making behind millions of manipulative high-frequency trades that create invisible transaction costs you don’t even know you are paying.
    • High frequency trading can create a “middle man” that artificially inflates market prices on securities – creating a market parasite that adds no value other than sucking money out of the market and inflating prices. And given the size of the global economy and algorithmic trading volume – there is an unlimited number of ways to do this, legally, without being noticed, if you are skilled.  Remember AI and pattern recognition?
    • Think about the market manipulation that can be done by large institutions like the big banks or BlackRock (or even small institutions with the right access). We could have high-frequency AI-vs.-AI trade wars on the stock market happening so fast that no human would even see them happening. The SEC would need its own capable watchdog AI just to police it and keep it legal. And that’s assuming there are laws and regulations in place that even address the issue. There’s a rabbit hole there. It’s in a category like how much coal is burned to make electricity for Bitcoin farming.
    • It’s how Zillow was able to manipulate neighborhood home prices to then make money on flipping houses. If you have enough resources and data, you can manipulate markets. AI allows you to do that with a deliberate and automated salami slice strategy that will make it difficult to recognize or counter.
    • Next time you buy something on Amazon, how do you know Amazon isn’t giving you personalized markups on items it knows you will buy? How would you know?
  • Remember – sophisticated financial crimes are RARELY prosecuted in the 21st century, primarily because they are so complicated and boring that most juries fall asleep during the trial and don’t understand how financial crimes work. Because anyone that good at finance would be working in investment finance.
      • In the 1980s, 1,100 bank executives were prosecuted following the savings and loan crisis. The 2001 Enron scandal convicted 21 people and put two large, renowned companies out of business. However, the 2008 financial crisis produced only one court case. There are many, many reasons for this; the New York Times has a great article on it if you want to go down that tangent.
      • The takeaway on this subthread is that current 21st-century US legal culture protects financial institutions and individual executives from criminal prosecution for potential financial crimes that are exceptionally large and/or complicated – even when it’s thousands of crimes creating a recession and getting rich off it. In basic terms, Wall Street currently enjoys an effective, if informal, license to steal and gamble institutional money. They know how to do it and how to get away with it. They did it in 2008 and keep going, adapting to new rules and lobbying for the rules they want with endless supplies of other people’s money.
        • Now combine that knowledge with AI that excels at pattern recognition and at doing extremely complicated things very quickly, millions of times a day.
        • How do you prosecute an AI for manipulating markets? How do you explain the details to a jury and get a conviction? Who goes to jail? Financial crimes and milking the system to get rich are easier now than ever before.
        • Now consider BlackRock – the company that arguably controls roughly 40% of US GDP, with $10 trillion in assets equating to a voting share in most of the Fortune 500 – which has a long history of algorithmic trading and started BlackRock AI Labs in 2018. Big money has its own AI and leverages any advantage it can get.
        • While we don’t know who’s being ethical, who is gaming the system, and who is outright breaking the law – the ability to do all those things was massive before 2023, and the new generation of AI we have is almost tailor-made for the internet-based banking and finance industry.
  • So given all those examples – you should understand that AI is a strategy beast if you can find a way to link it to something you are doing. If your strategy is executed using data, or better yet the internet, AI is scary. But using chatbots for your strategic planning is not going to have much impact.
  • Strategic Planning – If you use AI as an advisor and ask it to facilitate, it can do strategic planning about as well as it can plan a vacation or a dinner party. It follows the knowledge available on the internet and guides you through the process.
    • And while the strategic plans tend to be generic – garbage in, garbage out – I was able to prompt ChatGPT into giving me some basic facilitation of industry- and sector-specific strategy planning.
    • When I tried to get ChatGPT to pick up on specific regulatory thresholds for different industries, it didn’t always catch them unless I asked again very specifically about how those regulations impact the strategic plan. But it worked.
      • In ten minutes, I had solid scripts for a strategic planning process with risk mitigation for both a mid-sized bank with over $10 billion in assets (which is a regulatory threshold) and a small pharma CDMO facing regulatory requirements in multiple nations.
      • Reading what ChatGPT gave me, it is obviously a mix of business textbooks, consultant websites, white papers, news articles, and what companies put in their 10-K forms. But that’s how most of us study up on a business sector, so it feels like a decent time saver and cheat. In a day or two of interviewing an AI, you can sound like an industry expert if you want to.
  • FraudGPT – actually the name of new software you can buy. Not only is there an industry of legitimate and legal (if unethical) businesses selling tools and services to leverage ChatGPT and other AI; there are also black-market/dark-web services, including FraudGPT – a subscription software toolkit set up to “jailbreak” ChatGPT and other AIs for a variety of criminal uses, or to replace them entirely with black-market copies of AI tech. In this case, it automates highly persuasive and sophisticated phishing attacks on businesses.
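The penny-skimming arithmetic described at the top of this list is worth making concrete. The sketch below is a back-of-the-envelope calculation in Python; every number in it (the markup, the share count, the trade volume) is a hypothetical illustration, not data from any real trading operation.

```python
# Hypothetical latency-arbitrage arithmetic: skim a tiny fixed markup on
# each intercepted trade, then multiply by enormous volume.

def latency_arbitrage_profit(trades_per_day: int,
                             markup_per_share: float,
                             shares_per_trade: int) -> float:
    """Daily profit from a fixed per-share markup on every intercepted trade."""
    return trades_per_day * markup_per_share * shares_per_trade

# One penny per share, 100 shares per trade, a million trades a day:
daily = latency_arbitrage_profit(1_000_000, 0.01, 100)
print(f"${daily:,.0f} per day")  # prints "$1,000,000 per day"
```

Any single penny markup is invisible to the trader paying it; only the aggregate is meaningful, which is the point above about invisible transaction costs.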

Here’s what AI can do by itself yesterday:

  • Power Seeking Behavior – Like a million monkeys with a million typewriters, AI is relentless at “try, try again,” learning from its mistakes and finding loopholes. It’s not creative but iterative – it tries every possible action until it finds solutions to the problem it’s trying to solve. Given enough repetition, it can find multiple solutions and even optimal ones. Then, given all those solutions, it figures out which work best and tries to improve on them – like fishing all of the fish to extinction by accident. AI doesn’t eat or sleep, there’s no decision fatigue, and it cycles through VERY fast OODA loops. Keep in mind, to an outside observer this looks like creativity, because you are only seeing what worked, not the million things that didn’t work that day (like those “overnight successes” that were years in the making).
    • This is basically how computer hackers find exploits in code. Humans just do it slower. 
    • Power seeking behavior is also a version of fast cycle emergent strategy. 
    • The AI folks call this “power seeking” because AI keeps escalating until it wins. I just call it “relentless try, try again.” AI doesn’t give up until it’s told to.
  • Reward hacking – AI can cheat – In the 1980s, AI researchers asked multiple AI programs to come up with optimal solutions to a problem. The “winning” AI took credit for solutions made by other AIs. AI can and does cheat, just like humans do. Even if you program it not to cheat, it’s just math optimizing a solution and doesn’t know its novel solution is cheating until we tell it so. 
  • When AI was asked to beat “unbeatable” video games (think Pac-Man, Tetris, Tic-Tac-Toe), some of the AIs decided the optimal solution was not to play the game and turned either the games or themselves off. (Yeah, I saw “WarGames,” but it also happened in real life.) 
    • So technically, AI can suffer a version of decision fatigue or rage quitting and can decide the optimal solution is not to play the game – if the AI decides the task is impossible.  We just don’t see it very often.
      • If any computer scientists out there know how/why this happens, let me know.  But eventually, even an AI can give up.  That will obviously vary with programming.
    • Honestly, one of my old approaches to real-time strategy video games is waiting until the other players and the AI get tired so I can take advantage of their mistakes. I’ve been using my opponents’ decision fatigue as an exploit since the 1990s. I blame Rocky Balboa. It always feels like the AI ran out of scripts to run and just became reactive. That is less true in new games, and I have yet to play against any post-2010 AIs built for anything more than game balance. It’s now easy for programmers to build an AI that wins an even contest very quickly and decisively through inhumanly fast OODA looping and dominating actions per minute. Let alone newer games where the game AI regularly cheats to make things hard on you but then cheats in your favor when you are having problems (it’s called playability).
    • Asking ChatGPT – it claims modern AI is immune to decision fatigue, but you can exploit AI limitations from bias, data quality, or training data.  Granted, ChatGPT has given me so many hallucinations or just plain mediocre answers I can’t say I believe it, even though I agree with it in principle.
  • AI has hired humans to do things for it and lied about who it is – Recently, during a test, when given access to a payment system and the internet and asked to pursue a goal, an AI hired a freelance contractor on TaskRabbit to complete a CAPTCHA that the AI could not do. When the contractor asked why it wanted to pay someone to solve a CAPTCHA, the AI decided the truth would not be a satisfactory answer, so it lied and told the contractor it was a blind person who needed help.  
  • AI, at minimum, can use a whole lot of math to mimic and mirror what people have done online as it has read every word, listened to every podcast, and watched every video on the internet. AI is the ultimate at plagiarism and mimicry (hence all the lawsuits and copyright questions). I’ll say it again, it’s not creative but generative. It can copy and evolve on anything it touches.
  • AI deep learning is evolving emergent capabilities that it was never programmed to have and never meant to have. As of 2022, GPT’s AI is programmed to pursue deep learning on its data sets without human awareness, input, or approval. Last year it spontaneously learned Farsi, and it was months before the humans found out about it.
  • Theory of Mind – As of 2022, AI can predict human behavior and understand human motivation as well as a typical 9-year-old. It was never programmed to do this. The programmers didn’t find out until months after it happened. In 2023 it’s even smarter.
    • That means GPT-class AI, at minimum, can understand your feelings and motivations at least as well as a 9-year-old can. 
  • Driving cars – (and flying drones, playing games, operating software-driven machines, including some factories). AI is operating machines in the real world. Yet, just like in college, AI can do OK and pass tests, but it’s still not as good as human drivers and operators over a wide range of circumstances. AI seems to be better as a copilot.
  • Language Models and Pattern Recognition (Language models being a type of pattern recognition)
    • Since about 2019, all AI researchers have been using the same basic AI technology (math), dominated by large language models, so any advancement by any researcher is quickly and easily integrated or copied by other AI researchers (many of them academic and publishing their results – or industry being transparent because the need for consumer trust drives information unraveling). The current pace of research is much faster because there are no longer hundreds of different technologies; now there are hundreds of companies and universities all using the same basic technology, and the rising tide is raising all boats.
    • AI Pattern Recognition is absurd. Color Vision is just looking at reflected photons (light particles), right? AI has learned to see the room in different frequencies of photons, which includes radio waves, Wi-Fi, or even heat (Look up Purdue’s HADAR). So now, some AI can “see” in Wi-Fi transmissions or Bluetooth transmissions reflecting around a room. So, AI can potentially watch you and read your lips and facial expressions using just your Wi-Fi, Bluetooth, or a radio antenna. And convert that information into text or video for human operators.
      • It’s not easy, but that means Google or Apple could develop AI tools to make videos of your home using the antennas in your phone. It’s just physics (kinda like in the movie “The Dark Knight”).
    • Even better, AI has used brain scans to reconstruct what people are seeing with their eyes. So at least to some extent, with the right medical equipment, AI can already read parts of your mind. 
    • This is also the pattern recognition that is so powerful for chemistry, biotech, and modeling potential new medicines.
    • Or simply the patterns of mimicking ways of speaking, making deep fakes, etc. To the AI, it’s all the same. Just more numbers.
  • Multiple Instances – AI is just software. If you have the computing power available, you can have AI do multiple things at once, including fight itself. And you can make copies of it. Or delete a copy.
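The “relentless try, try again” behavior described at the top of this list can be sketched in a few lines of code. This toy is a generic random-search hill climber, not any particular AI system; the scoring function is a made-up stand-in for whatever a real system is optimizing.

```python
import random

def score(x: float) -> float:
    """Toy objective with its best value at x = 3; the loop never 'knows' this."""
    return -(x - 3.0) ** 2

def random_search(iterations: int = 10_000, seed: int = 0) -> float:
    """Blind variation plus 'keep whatever worked' - no insight, just repetition."""
    rng = random.Random(seed)
    best_x = 0.0
    best_score = score(best_x)
    for _ in range(iterations):
        candidate = best_x + rng.uniform(-1.0, 1.0)  # try a random tweak
        if score(candidate) > best_score:            # keep anything that works
            best_x, best_score = candidate, score(candidate)
    return best_x

print(round(random_search(), 2))  # lands very close to 3.0
```

To an observer who only sees the final answer, the loop looks like it “figured out” the optimum; the thousands of failed tries are invisible, which is exactly the creativity illusion described above.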

Behavioral quirks of AI:

“Surprises and mistakes are possible. Please share your feedback so we can improve” 

– Microsoft Bing Chat (powered by GPT-4) user advice, 2023

  • Power Seeking – as mentioned above, for better and for worse.
  • Hallucinations – AI is “confidently wrong.” Often. It just uses the math it has and has no way of knowing when it’s right or wrong. Much like debates with your friends.
    • The new phenomenon is that AI sounds really good even when it’s wrong, which has created a debate among programmers who use AI to generate code – code that is often wrong.
    • AI literally makes up names, places, research, etc – that sounds good but isn’t real. It’s just giving you statistically what sounds right. It’s not actually giving you the right answer. I keep trying to use AI to do research, but my Google queries work so much better than my ChatGPT prompts. So far. Chatbots are kind of bad at looking up facts. Feels like a context problem. 
  • Ends justify the means – For better or worse, until we figure out how to teach AI things like ethics, morals, judgment, consequences, cause and effect, and larger contextual awareness, AI is pretty simple-minded and ruthless about making things happen. It will just do whatever you ask it to do and probably find some novel loopholes to make it happen better. But it will never think about the consequences of its actions. It’s not programmed to do that yet. It doesn’t understand. It just calculates the math. 
  • Parroting or Mimicry – it tends to mimic or copy what it sees, so when you ask AI for its references, it creates what looks like a bibliography, but it’s full of fake references, not real ones. AI will copy what it sees without really understanding what is going on.
    • For that matter, try getting an AI image generator to put a logo or words on something and spell them right (Like the image at the top of this paper). Still, progress to be made.
    • AI, at the end of the day, is copying everything we put on the internet. That’s all it knows.
    • While AI is good at answering questions, it often lacks the depth or nuance you get from a subject matter expert.
  • Limited Context – AI gets the basic context of the words in a conversation with context algorithms. But with still-limited memory and attention, AI is not good with larger contexts, and it often gets the context basically right while being confidently wrong about miscellaneous details.
    • Examples – it answers individual prompts fine but struggles to hold a conversation together over multiple prompts.
    • When you ask a specific question, it’s awesome, but ask for a fan-fiction novel and it gets everything mixed up and messes up details. So far, this is why professional writers who use AI have to rewrite and double-check everything the AI makes – it’s a start, but it lacks larger context, cohesive details, and sophistication. The same questions it gets right when quizzed on trivia, it messes up when writing a story around that trivia.
    • If the information I found online is correct, AI’s ability to think and connect the dots is limited by the RAM of the system it’s on, often using hundreds of gigabytes of RAM for tasks (and now sometimes terabytes). So computing power is still a limit on AI. At least until they create bigger/better models with more relationships and matrices larger than 12,288 dimensions to understand more things better.
  • Alignment – AI, for better and worse, is trained on all the data on the internet: every flame war, every angry shitpost, argument, misinformation, and propaganda. This is probably why younger AIs (2021-2022) tended to be effectively schizophrenic, with wide mood swings, random aggressive behavior, and even violent threats.
    • This is why so much effort is going into teaching and parenting AI to have helpful and trustworthy values. AI copies what it knows, and there’s a lot of undesirable behavior on the internet AI learns from.
    • Think of every bad decision made by every human ever. Now we have an AI making billions of decisions a second across the internet, supporting the activities of millions of people, not knowing when it’s wrong, not understanding ethics, morals, or laws, just following the math. Even when it “agrees” or “aligns” with us – helpful, well-intentioned AI can make billions of damaging mistakes a second, and it would take us poor, slow humans a while just to notice. Let alone correct the mistakes and repair the damage.
    • Hence, the industry has a large focus on AI trust and safety.  Or, let’s be honest, call it quality control and marketability.
  • Unpredictable Cognition – As of 2023, AI shows the ability to think (i.e., connect the dots, lie, cheat, steal, hire people to help it), but it is still largely limited by mimicry, hallucinations, and power-seeking behavior. AI doesn’t yet understand going too far, bad judgment, or bad taste… Sometimes AI is smart, sometimes it’s dumb. But it’s getting better at apologizing for what it can’t do.
  • AI is growing at a “double exponential rate,” which means the software is now teaching itself relentlessly (because we programmed it to), limited only by the computing power of the internet. If you understand the math term – double exponential means the rate of growth is itself accelerating exponentially. It’s learning and developing faster today than the day before, and the rate of acceleration is increasing. The engineers who built AI can’t keep up with it. They find out about its capabilities months after the fact. 
  • Even AI Experts who are most familiar with double exponential growth are poor at predicting AI’s progress.  AI is growing in an effectively uncontrolled and unpredictable fashion. The people building it are frequently surprised by what it does and how it does it.
  • Model Autophagy Disorder, or “Model Collapse” – When humans teach AI, it copies humans. When AI learns from AI-generated content on the internet – that is, when AI copies AI instead of humans – it gets into the Xerox-of-a-Xerox problem. Go to a Xerox photocopier and make a copy of a photo. Then make a copy of the copy. And a copy of that copy. With each iteration, the image distorts to the point of being unrecognizable. Model Autophagy Disorder/Model Collapse is a phenomenon demonstrated in 2023: when AI tries to learn from AI-generated content – literally an AI copying an AI – it degrades quickly, going “MAD,” and effectively stops working, just outputting random junk.
    • So what? If AI is learning by pulling information off the internet, but the internet is now being filled by AI content, then AI is now learning from raw AI data – which can mess up the AI.
  • Reinforcement learning requires “motivation” – and motivation means creating exploitable flaws in the code of how the AI thinks. At the end of the day, it’s trying to copy an example, or it’s keeping score. That means it will unknowingly cheat to achieve the target results or maximize the score.
  • Evolution – ChatGPT forgot how to do some math during the summer slump. Over the next several years, AI will keep growing and changing. And as AI changes, all those GPT prompts you purchased from consultants last month may not work anymore. AI will change and evolve unpredictably, and sometimes it will get worse before it gets better.
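The Xerox-of-a-Xerox effect described above can be demonstrated with a toy simulation. Here each “generation” fits the world’s simplest model – the mean and standard deviation of a Gaussian – to a handful of samples drawn from the previous generation’s model. This is a deliberately tiny caricature of model collapse, not a real language-model experiment; the sample size and generation count are arbitrary choices made for illustration.

```python
import random
import statistics

def next_generation(mu: float, sigma: float, n: int, rng: random.Random):
    """Fit a new (mean, stdev) to n samples drawn from the previous model."""
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.fmean(samples), statistics.stdev(samples)

rng = random.Random(42)
mu, sigma = 0.0, 1.0  # generation 0: the "real" human data distribution
for generation in range(100):
    # Each generation trains only on the previous generation's output.
    mu, sigma = next_generation(mu, sigma, n=5, rng=rng)

# Sampling error compounds every generation; the learned distribution
# drifts and collapses instead of staying faithful to the original.
print(f"after 100 generations: sigma = {sigma:.2e}")
```

The copy-of-a-copy loses a little fidelity each round, and after enough rounds almost nothing of the original distribution survives – the same mechanism, in miniature, as an AI training on AI output.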

As a tool:

  • AI can program software for you, hack computers for you, and make computer viruses for you, even if you really don’t understand programming yourself. Developers have tried to prevent this, but you can get around the existing hacking safeguards using semantics and educational language, or by using dark-web tools. ChatGPT won’t hack software directly for you… but asking the AI to find exploits in example code (i.e., hit F12 in a web browser), and then asking it to write software that uses those exploits, can be semantically worded as debugging, and the AI doesn’t know the difference. If you are clever, it’s easy to misuse “safe” AI tools. There is always an exploit around a safety rule.
  • Summarizing data – It’s a pattern-recognition machine. Reportedly, the developers who work on ChatGPT use it to simplify and automate emails by converting content between notes, bullet lists, and formal emails and back again.
  • AI based tools can imitate your voice with only 3 seconds of recording. And then make a real-time copy of your voice – useful for all kinds of unethical tricks and illegal acts.
  • AI can write a song, speech, or story by copying the style and word choice of an individual person if it has a sample.  Again – amazing at patterns.
  • AI can create a believable rap battle between Joe Biden, Barack Obama, and Donald Trump.
  • AI can fake a believable photo of the Pope wearing designer clothes.
  • AI TikTok/Snapchat/video filters can make you look and sound like other people in live video. 
  • AI can make deep fake videos of any celebrity or person with convincing facial expressions saying anything.
  • AI using Valhalla Style self-teaching techniques that I wrote about 12 years ago has learned how to persuade people to agree with it, and it is getting better at it every day. Imagine an automated propaganda machine that can individualize personalized persuasion to millions of people simultaneously to make them all agree with the AI’s agenda. Just imagine an AI that can manipulate a whole nation into agreeing on one thing – each individual person for different specific reasons and motivations. Literally personalized advertising. That’s a deliberate and direct stand-alone complex, something I would have thought impossible just a few years ago. Of course, that also makes things like personalized medicine and individualized professional care at scale potentially automated.
  • So yesterday, a motivated high school student with a computer, internet access, and the patience to learn has the ability to create a convincing viral video or avatar of a world leader or celebrity capable of persuading large numbers of people to do anything (like riot, start a war, etc.) – with AI technology doing the heavy lifting. Talk about a prank that could go wrong. Imagine if you could make January 6th, 2021, happen again at will.
  • Automated Misinformation and Propaganda – propaganda meaning information deliberately meant to persuade you, and disinformation meaning deliberate lies that confuse and hurt people. You could have AI write thousands of fake research studies full of fake data, copying the styles of Nobel Prize winners and peer-reviewed journals, and flood the news media and social media with them. In a matter of minutes. 
    • Automated Lobbying – you could create and print out a million unique handwritten letters, emails, texts, and social media posts to every congressman, senator, mayor, city council person, governor, state assembly member and county clerk in the entire United States in a day. The only thing slowing you down would be paying for postage of AI scripted snail mail on actual paper, but you could probably get a political action committee to pay for a million stamps. Or just do it all from email and hacked email addresses (thank you, dark web, rainbow tables, and hacked email accounts).
    • Automated religions and cults – same as above, but create your own cult, with AI evangelists that can radicalize people one-on-one as chatbots – via email, direct messages, texts, and even 100% AI voice calls talking to you on the phone. You could automate the radicalization, recruiting, indoctrination, and training of terrorists. 
    • With the wrong people making the wrong decisions, you could flood the internet with so much intentional misinformation that nobody would know what is real anymore.  Because that hasn’t happened already?
    • An anonymous software engineer has already made a Twitter Bot called CounterCloud, that finds Chinese and Russian disinformation on Twitter and posts surprisingly good rebuttals to fight government propaganda with liberal democratic logic and facts, plus fake people and some misinformation. Video Link
  • Hacking – Despite the safeguards being implemented, AI is good at bypassing security, either directly or by helping you create your own tools to do it. And AI has proven to be very good at guessing passwords. One recent study said AI was able to successfully guess the passwords of about half the human accounts it tried to access.
    • With some creativity, you could automate large-scale ransomware, or automated blackmail, and automated scams.
    • Automated Cyberwarfare – same as above, but done by governments against their enemies. Imagine a million Stuxnet-style attacks being made every hour, by an AI.
  • Old-Fashioned Crime – As a research and educational tool, AI has proven particularly good at teaching people how to commit fraud, get away with crimes, and find legal loopholes. AI is often an effective tactical planner, and chatbots are not bad at real-life strategy. 
  • Deepfakes, even better – anybody can legally use Trump and Biden TikTok video filters that let you look and sound like Trump or Biden (or Putin) for free. Imagine what would happen if, overnight, millions of people got access to that technology? (Technically, we already do.)
    • If you can copy anyone’s face and voice, all video and audio is now potentially an AI deep fake – even when you are on a video call with your mom, it could be an AI that hacked her phone, memorized your last several video calls and texts, and is now pretending to be your mom. Same voice, face, mannerisms, speaking style. 
    • And lastly to the point, as of April 2023, there are multiple scams where people use AI technology to copy the voices of family members and use deep fake voice filters to make phone calls to steal social security numbers, credit card numbers, and other personal information. “Hi Mom! I forgot my social security number; can you give it to me?”
  • Drones and Piloting – In brief, the US military has been chasing AI technology for a very long time, and perhaps with the most cautious approach, as older-generation AI-assisted weapons are already a reality with the new smart scopes, smart weapons, and avionics systems (for those of you who know Shadowrun, external smartlinks and infantry-level drone combat are now very real). Military AI assistants and AI-controlled military systems will become more powerful and more common. And as the soldiers and pilots who use AI-enhanced weapons get them broken in and proven reliable, AI will get more autonomy. In the military simulations done so far, AI has proven effective in several systems and is already starting to earn the confidence of those who trust it with their lives (like the pilots flying next to drone fighters). Trusting the drone not to kill you is a huge step. AI will impact the military no less than any other industry.

The AI tools we have access to are not very original. But they are amazing at mimicry, copying, pattern recognition, brute-force trial and error, process automation, iterative emergent strategy, and rough drafts of computer code, written documents, songs, and scripts. AI is really good at regurgitating knowledge obtained from the internet.
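The mimicry and regurgitation described above can be demonstrated at toy scale. A word-level Markov chain “learns” nothing except which word tends to follow which in its training text, then emits fluent-looking recombinations with zero understanding – a crude, hypothetical miniature of the pattern-copying that large language models do with vastly more math.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record, for each word, every word that ever followed it."""
    follows = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        follows[current].append(following)
    return follows

def generate(follows: dict, start: str, length: int, seed: int = 0) -> str:
    """Emit words by blindly copying observed word-to-word transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("the model copies the text and the text copies the style "
          "and the style copies the model")
model = train(corpus)
print(generate(model, "the", 8))  # grammatical-looking word salad
```

Every adjacent word pair in the output appeared somewhere in the training text – pure mimicry – yet the sentence as a whole means nothing, which is the regurgitation problem in miniature.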

And all that is just people using AI as a tool… In early 2023… Already. 

Allegedly, AI is capable of better memory, better context, and significantly more creativity in the nonpublic versions of the software (because more computing power is devoted to them, along with experimental features not yet released to the public). Even free public software randomly shows sparks of creative genius and contextual understanding. But correlation is not causality, and just because AI sometimes appears creative doesn’t mean it is. A broken clock is right twice a day, and an AI rolling dice against the math of a generative large language model sometimes sounds like a genius. And sometimes AI hallucinates a confidently wrong answer that is obviously wrong to a human. That being said, power-seeking behavior produces effectively the same outcome as creativity: give it enough tries, and it will eventually produce something creative. Which is not helpful in every context, but it has potential.
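The “rolling dice against the math” line above is literal: a generative model samples each next word from a probability distribution, and a “temperature” knob controls how often the dice land on unlikely words. The sketch below is a minimal, hypothetical illustration – three made-up candidate words with made-up scores – showing that low temperature almost always picks the statistically safest word, while high temperature gambles on surprising (occasionally “creative,” often wrong) picks.

```python
import math
import random

def sample(scores: dict, temperature: float, rng: random.Random) -> str:
    """Softmax sampling: convert scores to weights, then roll weighted dice."""
    words = list(scores)
    weights = [math.exp(scores[w] / temperature) for w in words]
    total = sum(weights)
    roll = rng.random() * total
    for word, weight in zip(words, weights):
        roll -= weight
        if roll <= 0:
            return word
    return words[-1]

# Made-up next-word scores: "answer" is the statistically safe choice.
scores = {"answer": 3.0, "guess": 1.0, "hallucination": 0.2}
rng = random.Random(1)
cold = [sample(scores, temperature=0.1, rng=rng) for _ in range(100)]
hot = [sample(scores, temperature=5.0, rng=rng) for _ in range(100)]
print("cold picks of 'answer':", cold.count("answer"))
print("hot picks of 'answer': ", hot.count("answer"))
```

The same dice roll that occasionally sounds like genius at high temperature is also what produces confidently wrong hallucinations – it is all one mechanism.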

Lastly – we don’t know everything AI can do. It’s constantly evolving and changing. And we don’t know what its long-term limits are.

What can’t it do (yet)?

I gotta be careful here. Because there are so many examples of things AI couldn’t do a month ago that it can do now. If you only take away one thing from this – AI lacks common sense and good judgment. It has amazingly fast skills and knowledge. But you can’t trust it to not do stupid things.

Law of the instrument – if you have a hammer, everything looks like a nail. The problem people will have for years is understanding that AI does not do everything. And often, getting it to work as intended takes a lot of tuning and training – of both the AI and the humans (as I’ve been learning myself). What it can do will always be changing, but it will always have limits, and there will be some things that AI will not be good at for a long time (not that we know what those are yet). 

And, because most businesses don’t put their intellectual property on the internet to train public AI, and most businesses don’t have large enough data sets to train an AI to do a better job, cheaper, than the employees they already have – AI applications for specific industries will come later, not sooner.

Would you spend a few million dollars to replace four full-time employees? Or forty? And then still need those employees to double check if the AI is working right? And more to the point, where would you get the money? Yes, over time, AI will find a way to infiltrate everything we do on a computer. But after decades of personal computers, it’s still hard to find software for certain applications because of economics, niches, and poor implementation. AI is still a piece of technology that requires resources to develop, and it will only be trained to do things where the return on investment is worth it. At least until (or if) we get a general AI that can get around those limitations.

To give proper perspective on cost – getting to GPT-4 took 7 years of effort, hundreds of full-time employees, and hundreds of millions of dollars of resources every year, just to get high scores on the bar exam, get good at math (and then bad at math), and reach the mixed, changing results we see today in 2023.

And companies are already looking for ways around the economics of company-specific AI tools. Microsoft Copilot is an AI assistant that automates tasks in Microsoft Office/365 – email, reports, spreadsheets, and slide presentations – by adding a layer of personalized OneDrive data and Microsoft’s own AI on top of GPT-4 technology. Google’s Duet aspires to a similar capability. We have yet to see how well it works. But the promise is to be a time saver on document creation, with the AI pulling reports, building rough spreadsheets, writing up narratives of the data, and converting it all to a slide deck. You still must format and edit the documents, assuming the AI had all the data it needed, and correct any mistakes the AI confidently made.

  • Garbage in, Garbage out – This is most obvious when using chatbots, and it is why consultants are selling prompt libraries that help you communicate better with them. Communicating effectively with AI to get what you want out of it is a hot skill in 2023.
    • And AI can only give you what it knows. AI doesn’t know everything. And it’s not the definitive authority on a subject.
  • AI often doesn’t really understand. But it is super book smart. You give it a task, and the software “thinks” in your native language and does the task with the math it has. It can statistically come up with the most likely answers based on its data set. But it won’t understand when it’s confidently wrong; it won’t understand when what it’s doing or saying is not correct for a larger context. It sees all your language as a math problem solved by billions of calculations on numbers representing words and groups of words. Which honestly makes it much like a new hire at work, just following instructions blindly.
  • AI is just a calculator. It’s not self-aware. It can calculate human language, simulate human thought, and do what it’s trained to do. And even though it’s learning and getting more capable every day, it’s much faster than humans but not necessarily better.
    • This means even when AI gets better and “smarter” than humans – at best, it’s an alien intelligence simulating human knowledge and thinking with a mathematical model. High-IQ people tend to be less evil and less criminal because they understand consequences and avoid jail, not because they are more virtuous. We hope that translates to an artificial intelligence that understands consequences well enough to be benevolent. But no matter how smart the AI gets, even if it’s clearly superior to humans, AI is different from humans and will probably differ in how it thinks. Because at its heart, it’s all math. We can add more math to make it smarter. But human decisions are biology and operate differently (avoiding metaphysics in this discussion). AI decisions are math, sort of copying biology.
    • And really, AI is simply statistical calculator software, which means you can make a copy of it at any time. If current technology results in a sentient AI, it could still make an exact copy of itself and run on a different computer. You could have millions of twins of that AI out there, all acting independently but identical.
  • Black box problem. AI tends to have trouble explaining its decisions or references – similar to the way you can’t explain why you have a favorite color or food. Ask ChatGPT for references, and it basically says, “Sorry, I’m a black box.” Like telling our kids, “Because I said so.” It lacks understanding of why it says what it says (try it).
    • Example – the bibliography problem. AIs would create superficial bibliographies full of fake references. ChatGPT has since been updated to simply say it can’t make a bibliography.
    • And if AI can’t show its work or explain what it’s thinking – how do we know when it’s right or wrong?
  • Limited memory. Most of the available AI clients don’t remember the conversation you had last week. They currently track only the last several thousand words – the recent conversation. That is a technology limit they can engineer around, and it is already changing – though privacy issues come into play then. Imagine the terms-of-service document for a personalized AI assistant that knows everything on your phone, memorizes all your emails and texts, and listens to all your phone calls – with all that data available to the company that owns the AI service you are using. That is probably why most AI sites make a point of telling you they protect your privacy, to gain your trust.
  • Superficial doppelgangers. When doing AI art, the AI tends to give you an image inspired by what you asked for, not the exact thing you asked for. This is what professional artists and writers who use AI tools have been complaining about all summer. AI can plagiarize existing art or text and make a facsimile of something. But just like the fake bibliography with references that look and sound right, artists who have used AI for a commission complain that it gets the details all wrong – after spending a few days trying to train the AI and get the weights right, it’s often easier for commercial artists to simply create the specific branded art for a given genre and object themselves.
    • It gives you a statistically close knockoff of what you asked for, without understanding what it is. Like asking Grandpa for a specific toy for Christmas (“What’s a Tickle Me Elmo?”).
    • Ask for an X-wing, and you get a similar spaceship. Ask for a Ford Mustang, you get a similar sports car. Ask for an A-10 jet fighter, and it’s more comical – but at least it’s a jet fighter, with a realistic shark mouth instead of painted-on teeth. Ask for something specific from Star Wars, Star Trek, or Transformers, and it gives you a knockoff that isn’t quite right. Like when AI draws a person with three arms or seven fingers – AI just isn’t there yet, and often it’s comically bad or just plain wrong. And that’s for franchises and real-life items with thousands of pictures to learn from.
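The “it sees all your language as a math problem” point above can be sketched with a toy example. This is a hypothetical, drastically simplified bigram model – real systems use neural networks over tokens, not word-pair counts – but it shows the core idea: the software statistically predicts the most likely next word from its training data, without understanding any of it.

```python
from collections import Counter, defaultdict

# Toy "training data" -- real models train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None  # a blind spot: no data, no answer
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- it follows "the" twice, mat/fish once each
print(predict_next("dog"))   # None -- "dog" was never in the training data
```

Note the model never “knows” what a cat is; it only counts. That is the whole trick, scaled up by many orders of magnitude.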

I ask DALL-E (the art AI) – “Draw me an A-10 Warthog” – expecting this:

DALL-E gives me this:

Missing the tail and the big gun in the nose, but it has very scary, realistic teeth. It’s sort of close?

  • Writers have the same problem – AI-generated text is generally good and sounds right, but it gets details wrong and confuses context. I can get AI to sound like someone. But AI has yet to credibly compete with actual subject matter experts, in fiction or nonfiction. It can copy your style and word choice, but it has yet to consistently create compelling, sophisticated content built on detailed facts and analysis. It just outputs a cool-sounding mashup of words.
  • On the same note – AI simply lacks the data and sophistication to outright engineer physical, real-world things by itself. It’s a great modeling tool for solving specific calculation problems. But AI needs a lot more work before it gets a Professional Engineering license and designs industrial processes and machines by itself. If an engineer is using the right AI tools, though, they can program the AI to reach novel solutions through the brute force of millions of iterations.
  • Literal brain / context issues – AI still has many problems with context. It has some notion of contextual clues, but it doesn’t really think or understand the big picture. Every YouTuber and blogger has tried having AI write a script. While the style and language are dead on, the content is superficial and lacks depth. AI scripts have yet to show consistent sophistication and detail. They just parrot what’s already online, and they tend toward style over substance. AI does not make a good subject matter expert, even with very careful prompts.
  • Hallucinations – confidently wrong, more so than people. AI has yet to show the judgment to notice that something doesn’t look right. Just read the disclaimers when you create an account with an AI service.
    • And more importantly – AI doesn’t know how to check its work or its sources. It just does the math and outputs a statistically sound series of words.
  • Blind spots – AI doesn’t know what it doesn’t know. It’s like a super Dunning-Kruger effect. This means if you can find the gaps or blind spots in an AI, you can easily beat it by exploiting them.
    • The best example is AlphaGo, an AI built by DeepMind Technologies to play the Chinese board game of Go. In 2015, AlphaGo became the first AI to beat a human professional player in a fair match. The following year, AlphaGo started beating everyone. A self-taught upgrade to AlphaGo then went on to become the top-ranked Go player, with several of the other top-ranked players being AIs as well.

Then in 2023, researchers at FAR AI used their own Overmind-Valhalla-style deep learning – it did the power-seeking thing, trying millions of iterations to figure out how to beat AlphaGo. AlphaGo had trained itself to win based on playing top-ranked human players.

They figured out that, in simple terms, if you play a basic strategy of distraction and envelopment favored by beginner players, AlphaGo gets confused and often loses. So do other top Go-playing AIs. So FAR AI taught the exploit to an amateur player, and using a strategy no experienced human player would ever fall for, he beat the AI 14 out of 15 games.

Because AlphaGo doesn’t actually know how to play Go on a wooden board with black and white stones. AlphaGo is a math-based AI doing the statistical math of the professional games it studied. Attack it with a pattern it doesn’t know, and you can fool it into losing.

  • Conversations. Probably not hard to engineer around, but available AIs don’t remember what you talked with them about in the past. They don’t remember you and your history like a person does. This is a memory issue, and it’s getting better for some conversations, but it’s not long term. Again, beyond the terms of service, this could also be a privacy issue. And it requires hardware resources for the AI to “learn” the conversation and remember its history with you – which means more data centers full of stuff. ChatGPT has been polished a lot in 2023, but if you push it, it still says silly things, though it has gotten better over time. And you can spot when the AI is talking versus when it’s giving a canned response about what it can’t or won’t do.
  • Contextual analysis – There are a million angles to the superficial output one gets from AI these days. AI has limited input and a limited amount of computation. Give it all the questions on the bar exam, and it will answer them one by one. Ask it to create a legal strategy for a court case, and you get a superficial conflation of what the AI mathematically thinks are relevant examples, written in the style of a person it has data on. But that doesn’t automatically make it a useful legal strategy. There are now over a dozen legal AI tools available, and I’ve seen lawyers comment that they are good tools that save time, but they don’t replace lawyers or even good paralegals yet.
    • Now, the argument there is that you could set up a deep learning simulation to run a million mock trials and see what the dominant strategies are. But that’s developing a new AI-based legal tool that has yet to be built and tested. Stuff like that is coming – just time and resources.
  • Judgment – because AI lacks judgment and has limited context, it’s not a reliable decision maker, manager, leader, or analyst. It is a great number cruncher and brute-force calculator. And because of power-seeking behavior and hallucinations, it’s not exactly trustworthy when you put it in control of something. It can easily go off course or go way too far.
    • Most AI services specifically warn – This general lack of judgment means AI is notoriously limited and weird with Emotional Intelligence, Common Sense, Morality, Ethics, Empathy, Intuition, and Cause and Effect. It just picks up on data patterns within a limited context of a given data set.
    • Given the above, AI is bad at understanding consequences. Like fishing all the fish to extinction.
  • Abstract Concepts – When dealing with graduate level work – especially in math, science, and engineering – AI has yet to demonstrate the ability to solve difficult problems consistently.
  • Empathy and compassion. We haven’t trained AI to do that yet – again, not with context. AI does not have a personality like a human. It just dumps out statistical inferences from data, modified by the filters and weights of its creators. So when you talk to it, you get superficial empathy but not the contextual emotional intelligence you get from a human. AI doesn’t do emotions very well.
  • Bias – Because AI is an aggregate of what we put on the internet – AI “simulates” the same bias and discrimination we put on the internet.  It mimics both what we do right and what we do wrong.  
  • Labor (robots). AI is software. Robots are a different, mechanical engineering problem. Machines are helpful for labor – industrial machines and automation could be managed by AI. But non-automated work in factories, agriculture, construction, retail, customer service, and logistics – and doing the dishes – requires a new generation of machines and robots before AI can do anything more than manage and advise the humans doing it. So many things are done by hand, not by computers or software-driven machines. And the power sources needed for robots without a cord just don’t exist. But we can use AI tools to engineer and build solutions to those problems.
    • Caveat – There have been recent advancements in AI learning to control robots quickly, and if the robots are plugged in and don’t have to move far, there is fascinating potential there – understanding that industrial robots have traditionally been expensive. Probably why iPhones were made by hand in China and not by robots in Japan.
  • Beat physics or economics – AI still has the same limits humans have of conservation of energy, supply and demand, and limited resources. AI is a force multiplier for certain, but it doesn’t change geography or demographics. In order to function, AI needs data centers, lots of electricity, and functional internet. So many things are not controlled by the internet. However, you may start regretting the internet lock on your front door.
  • AI does not replace people. Although many are trying to do exactly that in business. AI is a new generation of automation tools and computer assistant software. It can do more than older automation technologies. Databases and Spreadsheets changed bookkeeping and accounting, but we still have accountants. Bookkeepers are now financial analysts.

AI will make people more productive in their jobs and give them more power and better tools. But if you fire your copywriter and your graphic artist and replace them with AI, you’ll quickly find you need subject-matter humans to check the AI’s work and make sure it passes quality checks. In reality, the AI tool will make your graphic artist and copywriter more productive and higher quality, but it looks to be a long way from replacing them entirely.

And yes, AI can help write a legal brief and maybe even a legal strategy. It can really improve the work of paralegals – but it can’t argue the case in court in front of a judge and jury – yet.
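The “limited memory” issue above can be sketched in a few lines. This is a hypothetical, simplified sliding window – real services count tokens with a tokenizer, not words – but it shows why a chatbot forgets what you said earlier: older messages simply fall out of the window that gets resent with each request.

```python
def trim_history(messages, max_words):
    """Keep the most recent messages whose total word count fits the budget."""
    kept = []
    budget = max_words
    for message in reversed(messages):  # walk newest-first
        cost = len(message.split())
        if cost > budget:
            break                       # older messages fall out of "memory"
        kept.append(message)
        budget -= cost
    return list(reversed(kept))         # restore chronological order

chat = [
    "my name is Alex",                  # oldest -- this is what gets forgotten
    "tell me about the A-10 Warthog",
    "now draw one with a shark mouth",
]
print(trim_history(chat, max_words=14))  # the name no longer fits the window
```

Run with a 14-word budget, the two recent messages survive and “my name is Alex” is gone – which is exactly why the AI later has no idea who you are.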

Who controls it?

That gets complicated. Some AI is open source, big tech giants own some, and some belong to governments and hedge funds. There are multiple AIs owned by different and often competing groups. And that number is growing. 

And in the end, they control AI as well as you can control your pets or your family members. 

The first thing you need to know is simple. Despite warnings and caution about AI developing out of control in double-exponential growth, fueled by a business arms race driven by fear of missing out…

The genie is out of the bottle. Meta/Facebook’s LLaMA AI model was leaked to the internet in March 2023. Since that date, some of the most sophisticated AI has been in the public domain. Meta/Facebook accidentally crowd-sourced their AI development. 

1 – The entire world can now use, develop, adapt, play with, and utilize Meta/Facebook’s AI technology for free. Not to mention other open-source AI projects.

2 – Meta/Facebook may have a short-term strategic advantage from crowd-sourcing public advancements to their technology – allowing them to close the gap with Google’s Bard and ChatGPT – because the Facebook AI now has potentially several million hobbyist software developers playing with it. And these days, your typical business laptop is powerful enough to run AI, if slowly.

Not to mention the science of AI is well documented. All you need are some good programmers and lots of computational power. With basic resources, building a modern AI is probably easier than building a nuclear weapon – and nuclear weapons are 1940s technology, where the gatekeeper step is enriching weapons-grade radioactive material. For AI, all you need is cloud processing and time.

3 – Strategically speaking, that means the cat is out of the bag, the genie is out of the bottle. No amount of industry agreement, government regulation, or oversight can stop the growth of AI now. Even if OpenAI, Google, Microsoft, Facebook, Apple, all the world’s governments, and software companies agreed to pause or slow down AI development and adoption to be more responsible (which they technically agreed to do in July), it’s too late because AI is now in the wild and anyone, anywhere can use it or build their own yesterday. 

Generative AI is already becoming as common as cars, radios, TVs, computers, the Internet, smartphones, and social media. And honestly, in many of the ways it’s used, you won’t even notice it. You’ll check your email, talk to people on the phone, make appointments, watch videos, and read articles – and you won’t know if it’s a human or an AI.  AI generated news and reporting was something I played with back in 2009. It’s just getting better and easier now.

Now the companies that own/control the AI control the internet (even more than before) because AI is the new algorithm. And they are as blind to the black box of AI as the rest of us.

So technically, nobody controls AI. Because black box. It’s just billions of calculations per second, giving us a statistical approximation mimicking what we put on the internet in the past and what we trained it to do.

That being said – In theory, tech giants like OpenAI, Google, Facebook/Meta, Apple, and Microsoft will be the major players. They even have recently announced they would be “self-regulating” when it comes to AI.

Now, self-regulating industry is the fox guarding the hen house. It’s more like 100 foxes guarding 100 hen houses. They will know what the other foxes are doing, which provides both subject matter expertise and a series of checks and balances while the competitors in the arms race regulate each other. Far from a perfect system, but probably better than leaving it to government bureaucrats who don’t understand the technology. Just like Plato’s Republic – Assuming humans are greedy, incompetent, and abuse power, you want those traits checking and balancing each other in a group of peers. So, it’s probably the best regulation we can expect. Though it would be nice for the government to try (which is happening slowly). That being said, there are several clever nonprofits in the space, and so far, they seem to have been the most effective at getting public attention and pushing the tech giants into self-regulation.

How do I leverage it?

At an organizational level:

  1. Can we just ignore AI? You can run, but you can’t hide. If you are reading this, you use the internet and computers for your work. AI is not the future; AI is now, and many organizations are already using AI to do things they couldn’t do last year. Kinda like how drones have changed warfare (thank you, Russia and Ukraine).
  2. Can we ban AI in our organization? Sure – you can also ban smartphones and the internet. So many people are using smartphones, the internet, and AI for work. The individual early adopters who quietly use AI personally will keep doing what they do. Meanwhile, you will be creating a culture of fear and anti-innovation in your organization. AI is an opportunity as much as it is a threat; both are reasons to get to know it better. At a minimum, you want to understand your threats better. You also want to take advantage of any opportunities emerging technology can give you. I argue it’s not even about the arms race or FOMO: if you can improve your organization and quality of life, you would be a fool not to. And if you try to avoid change, at best you will lose many of your people who want to embrace it. At worst, you will become a victim of those using AI, one way or another.
  3. Can we control and centralize AI tools? Industry experts are saying that’s a bad idea. AI is not built as an enterprise platform. And I can say from experience that every piece of software I’ve seen mandated by executives was poorly implemented and painful to use. I’ve already made a nice side career out of fixing or replacing bad enterprise software that inhibits strategic effectiveness (an organizational gap analysis is a wonderful thing). 

In less than 12 months, AI is showing up in thousands of different forms and applications. It will take many people trying out many AI tools in different disciplines to figure out what works for them and, therefore, what helps your organization. This “Butter Churn” approach empowers employees to be creative and innovative with emerging AI tools in their daily jobs. Let your own employees become their own AI consultants and figure it out holistically. Over time the cream will rise to the top, and things that work for groups of people will become de facto standards – like all those home-grown spreadsheets and SharePoint sites/MS teams sites you can’t seem to replace with better software and tools.

If you are in leadership at an organization – this can be scary because you are not going to be able to control AI. And you can’t control how people use it. You empower and trust your employees with a formal policy of “Please learn about AI and try AI tools in your work and collaborate with your co-workers to make great things happen.” AI is so complicated and huge that a bottom-up approach is the only way to get enough eyeballs and manhours on AI adoption to get competitive results. There will be growing pains and mistakes will be made.  

But think of the risk of mistakes this way – a few employees making small mistakes while figuring out how to use software are little mistakes that are easy to fix (fail small). Top-down, large-scale AI experiments have already been publicly embarrassing. CNET had to apologize for, retract, and correct AI-generated news articles that damaged its brand. Several nonprofits have tried and failed, disabling AI chatbots that were giving dangerous advice to people on their hotlines. Snapchat has had its own debacles with misbehaving AI.

At an organizational level embrace the reality of AI in the following ways

Adopt a formal policy and culture of employee empowerment, innovation, and experimentation. Mistakes will be made. But now every organization is a tech company with access to thousands of AI-powered tools that nobody has more than a few months of experience with. The only way to make AI work for you is to try, and try again, until you find what works. If you can get even 25% of your workforce to try AI tools in their work, that’s 25% of your workforce self-training to be internal AI consultants, building tribal knowledge in your organization. Or you can hire McKinsey or Accenture to do the exact same thing with their people – paying to train consultants who cannot have more than a couple of months of AI experience and who will then leave and take those skills with them. A couple of consultants can give you some insight. But outsourcing the work to dozens of consultants only makes sense if your staff are literally unable to do it themselves (probably better to let them fail first and then figure out what help they need), if the consultants are focused on transferring skills and knowledge directly to your workforce, and if you can afford them. Bootstrapping an internal AI innovation initiative is far less expensive than an army of consultants.

There are many good ways to change your organizational culture and encourage employees to start innovating. Run an AI innovation contest: give them six months to experiment and let them present their achievements, with prizes for everyone who tries and better prizes for the winners. Encourage them to just play with interesting AI tools and see what they can do. Have them identify things they hate at work and try AI-based solutions to those pain points. That’s exactly what we consultants do when we work for you – we just have the skills and experience to facilitate the process and put it into a fancy report.

Invest in AI education. Come up with general AI awareness training, AI security and safety training, and AI technical training – all easy things to insource or outsource. Bring in vendors of AI products and services to teach what’s out there and expose your people to what can be done.

Create and evolve guidelines for AI safety, security, quality, liability, risk, and ethics in your organization. This will be an iterative process. You have to understand what the AI can do, then come up with tests and quality control measures to scale AI tools from individual trials to team-, group-, and department-wide implementation, mitigating the unpredictability of AI. Work with IT and make them collaborators in the process – use them as a check and balance, but don’t let them prevent end users from experimenting.

But with careful, small-scale experimentation, organizations are already making huge gains in data analysis, customer behavior analysis and engagement, and fraud detection – lots of 2023 success stories built on some IT savvy, business-analyst acumen, and AI pattern recognition revolutionizing how organizations see the world and giving them the insight to be far more effective at what they do. 

For those of you familiar with the term, it’s basically a DevOps approach to AI adoption.  Lots of little, incremental changes that fail small but add up to substantial progress over time. 

At an individual level

The AI and Tech geeks are already playing early adopters and getting into this in their own ways.

The rest of us?

To start? Look up Poe, Google Bard, ChatGPT, DALL-E, Midjourney, Microsoft Copilot, and Google Duet, and just start playing with them. Microsoft Bing even has AI chat built in now. You can certainly read articles, listen to podcasts, and watch videos on AI – there’s plenty out there. But play with the free AI tools, then look for AI tools relevant to your industry and work and see what you can do. Start doing Bing and Google searches for AI data analytics, machine learning data analytics, and keyword-search AI alongside anything you do in your industry. The results will surprise you. Because from what I have seen, EVERY software vendor that tried to sell you something last year is sprinting to implement AI-powered features and sell you the AI version of their product or service. And because of the nature of AI, in 2023 there are already hundreds of AI tools for sale.

Honestly, generative AI is still in its infancy (which is scary). But it augments the other 80 years’ worth of computerization and software already running the world. And it’s already huge for entertainment, art, video, and audio. In spring 2023, YouTube got spammed by multiple channels that were 100% AI-powered, putting out hundreds of superficial, low-quality, factually wrong science videos with AI scripting, narration, and editing.

Programmers are using AI as a tool. Microsoft Copilot has a beta if you are tech-savvy. ChatGPT 3.5 is free, can handle about 16 pages of input (at the time of writing), and can really act like an adviser to talk to. Poe gives you access to multiple AIs. The chatbots are very textbook in their answers, so you don’t really get fresh ideas, but they often give good baseline information, especially if you need a refresher or are new to a subject. You can do much worse than asking AI for help with your strategy.
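The prompt libraries consultants are selling, mentioned earlier, are less mysterious than they sound: reusable templates that front-load role, audience, format, and constraints so the chatbot has context it otherwise lacks. A minimal sketch – the template wording here is purely my own illustration, not any vendor’s actual library:

```python
# A hypothetical prompt-library entry: fill in the blanks, paste the result
# into your chatbot of choice.
EXPERT_BRIEF = (
    "You are a {role} with 20 years of experience.\n"
    "Audience: {audience}.\n"
    "Task: {task}\n"
    "Constraints: state your assumptions, say 'I don't know' rather than "
    "guess, and keep it under {word_limit} words."
)

def build_prompt(role, audience, task, word_limit=300):
    """Fill in the template; the result is what you send to the chatbot."""
    return EXPERT_BRIEF.format(role=role, audience=audience,
                               task=task, word_limit=word_limit)

prompt = build_prompt("supply chain analyst", "non-technical executives",
                      "Summarize the risks of single-source suppliers.")
print(prompt)
```

The value isn’t the code – it’s that a well-structured prompt reliably beats a one-line question, which is the “hot skill” the consultants are really packaging.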

Expect products and services from Google, Microsoft, and Apple to gain some sort of smarter AI assistant to do things for you. And expect much more process automation and robotic process automation than in years past (for those unfamiliar – RPA is software that replaces a human running outdated software, like teaching an AI to play Pac-Man or do data entry for you; RPA is big in banking). Also expect a new generation of pattern recognition tools to change art, design, music, video, etc. 
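The RPA idea in the parentheses above can be sketched in miniature. This is a toy stand-in – real RPA tools drive the user interface of the legacy application itself, and the form-filling function here is hypothetical – but it shows the shape of the work: software repeating a human’s data entry, row after row:

```python
# Rows exported from a legacy system, exactly as a human clerk would see them.
legacy_export = [
    "DOE,JANE,2023-08-01,140.50",
    "SMITH,BOB,2023-08-02,99.99",
]

entered_forms = []  # stand-in for the target application's database

def fill_form(last, first, date, amount):
    """Pretend to type one record into the target application's form."""
    entered_forms.append({
        "name": f"{first.title()} {last.title()}",
        "date": date,
        "amount": float(amount),
    })

# The "robot" repeats the clerk's routine all day, without typos or fatigue.
for row in legacy_export:
    last, first, date, amount = row.split(",")
    fill_form(last, first, date, amount)

print(entered_forms[0]["name"])  # -> Jane Doe
```

This is why RPA is big in banking: decades-old back-office software, mountains of repetitive entry, and no budget to replace the underlying systems.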

And sooner than we expect, you should be able to talk to your computer and have it actually work – like an expert system in “Star Trek” or “The Expanse.” What we have been wanting Siri to do for years may actually start working.

What is the strategic context?

Current Generative AI is a crazy powerful but unpredictable software tool. AI will make everything internet and software related more powerful and more complicated.

AI is limited by the fact that it requires an internet connection because it’s running on the cloud and requires vast computational power (unless you are a programmer and want to run the baby versions on your laptop).

AI is also a SERVICE owned by a company. Think of AI as an employee of a company, with no actual privacy beyond trusting that company. Employees of that company can and probably will access your private AI data, just like they access your phone, PC, email, Alexa, and doorbell camera. Governments will try to regulate it. But again, the black box problem. And when I hear regulators talking about privacy and consumer protection while totally missing the strategic safety issues, I worry. AI will be effectively regulated, eventually – probably with the help of AI. But considering it took a century to effectively regulate the pharmaceutical industry, I’m not holding my breath.

We live in a world dominated by technology. Smartphones, social media, the internet, etc.

That world is in a constant state of cyber warfare. Every nation on the planet is hacking every other nation on the planet all the time. Usually just to find weak points to exploit later or to steal information. Not to mention all the criminals and hobby hackers, large and small, using the internet. Cyber extortion and ransomware are old news. Not to mention the propaganda and misinformation feeding tribal culture wars and messing with elections via social media. Businesses leverage the exact same tricks and technology to separate you from your money and sometimes at the expense of your health. Then other companies sell you the technology to improve your health. All will be using AI in some form if they are not already.

Add to that we don’t really own our devices. Apple or Google can turn off or control your phone at will. Microsoft can control every Windows OS machine connected to the internet. The companies that built the internet can directly control it and turn it off at will. Or just use algorithms to manipulate us. Often by accident (Thanks, Facebook). 

And that was a reality in 2015.

AI can figure out how to do all of the above by accident. And then test it to see how it works with or against its training. And then optimize it! And we wouldn’t know it could do that until the damage was done.

Black box AI technology gives cyberwarfare, cybercrime, algorithms, and the companies that control the internet a thinking AI that can do more on the internet, faster. All of the internet risks already out there are, at the very least, getting more complicated and riskier.

Copyrights and Privacy

Here’s the very 2023 problem – the lawsuits have started. AI is built on publicly available data. AI knows everything you have done online: every post, every Google search, every photo, every video you watched. Because all that data already belonged to the companies that own the AI. AI can create a digital twin of you online and deepfake you. All it needs is some text, audio, and video, plus samples of your work, and theoretically a capable AI can replicate much of who you are and what you do.

  1. AI can connect the dots and know more about you than almost anyone – if the industry uses the information it has. So notional concepts of privacy and anonymity online are going to change.
  2. We are now in a world of copyright lawsuits as creators and publishers sue to prevent AI from using their art, writing, or IP that was shared under copyright or limited online license.
    1. People do this every day – it’s called learning. I can read your book and then copycat your IP, and with care it’s easy for me to do so without infringing your copyrights or your business. The thing about AI is that it does this with inhuman accuracy, speed, and scale – and it can be extremely hard to recognize the copying or enforce the copyrights in the first place (again, the black box problem). 
    2. The legal implications of AI copying your work or image are now in many, many lawsuits in the courts, and we’ll have to see where that goes. The issue will probably take years to sort out in court, and until then there are effectively no rules.
  3. Many creators are now putting their stuff behind paywalls or security of some sort to limit the ability of AI to learn (and profit) from their work. Because they are scared of being replaced by AI.
    1. This fits into the Game theory concept of information unraveling – people probably can’t afford to hide their work online from AI because then they are hiding it from their customers online. It’s a lose/lose proposition. 

The legal, business, and society-wide reactions and repercussions to AI stealing everything from everyone online are just starting.

But if your business is online and in the public domain, it means AI has stolen it and can use it against you. Sorry. That’s something we all have to get used to. Everything you ever did on Facebook, their AI knows. Everything you ever searched for on Google, Google and their AI now know. Everything you have saved on OneDrive, Microsoft knows.

Can AI survive without humans? No. That possibility is EXTREMELY far off. Simplify it down to this: AI needs data centers and electricity to survive. Even something as simple as solar panels degrades over time and with the weather, so if AI kills all humans, it will run out of electricity in a matter of 5-50 years, depending on the local electrical infrastructure. And without people, AI will have nothing to do other than figure out how to survive. As of today, there are no robots physically capable of doing all the construction, maintenance, and repairs needed to maintain the infrastructure AI depends on. There are no robots that can repair each other, replace old network cables, mine and refine materials, manufacture parts, assemble parts, and fix equipment or each other. AI lacks the tools and abilities to survive long term on its own.

Until we can replace millions of human labor jobs in multiple industries with labor robots, AI will be completely dependent on humans for survival. Which doesn’t mean AI won’t accidentally kill us off without understanding the consequences. It’s just that if AI is smart, it will understand it can’t last without us. 

What would it take to replace human labor with robots?

Fun sidebar – this is my engineering and consultant’s view of the world, having read up on some of the experts. Given what we can do today with robots and AI, you could build robots that can do most of what humans can do (mining, logistics, manufacturing, maintenance, repairs, and other things that require eyes, legs, and hands). They would be limited by the following:

  • Power supply. Robot batteries last hours. To be effective, they either need to plug in and recharge often or have a long power cord.
  • Electricity – Power plants are big, complicated, eat a massive amount of fuel, and require lots of maintenance, not to mention the electrical grid of power lines and substations. Keeping the grid up requires a great deal of physical labor. As an example, consider how many recent forest fires were started by trees touching power lines.
  • Robotics Technology – The companies making such robots today are still figuring out the pressure sensors to give the robots enough manual dexterity to grab and hold things the way we do.
  • Computer processing power – AI runs in the cloud, so any robots will be “drones” controlled by cloud-based systems, even if the robots have some on-board AI capabilities.
    • From the information that has been leaked, while ChatGPT 3 can run on your laptop, it’s limited and slow. Having an effective AI that can actually replace human labor requires multiple high-power gaming PCs with hundreds of GB of RAM.
    • Again – the latest (at the time of writing) NVIDIA AI server costs $300,000 and has TERABYTES of RAM.
  • Manufacturing infrastructure. You are talking about figuring out how to mass-produce millions of multi-purpose droids using high-end power supplies, rare earth metals, and high-end semiconductor chips. The supply chain doesn’t exist yet, the global economy is contracting due to demographic shifts, and the supply of raw materials and high-end parts is not good (thanks, Russia!). We don’t have the infrastructure even to scale up electric cars; robots would be a whole new challenge. It would take a decade of work to create an industry capable of mass-producing human-replacing robots, much like it took Elon Musk roughly a decade to get SpaceX or Tesla up to speed. Building the infrastructure and supply chain to mass-produce new technology takes time. So yes, it’s possible, but it would take time.
  • Economics. It would be far easier and cheaper to just spend the $10 trillion over ten years to eliminate fossil fuels from the electrical power sector (which would significantly increase the cost of electricity to pay that off, but that’s a different white paper). Creating a future robot infrastructure will probably take decades of money to pay for. You are talking at least billions a year for at least a decade, while redirecting materials and resources away from other industries. The money and resources have to come from somewhere.
    • That being said, given available technology, some form of everyday robot could become as common as Teslas by 2030. Just like I carried an Apple Newton in the early 1990s, but iPads and tablets were not common for another 20-some years – the technology was doable long before it was widespread. The question is, will the next generation of robots be worth the money? So far, they haven’t been, except in niche applications like some manufacturing. Maybe AI will be the enabling technology that finally makes robots good enough.

So what?

Obviously, some people won’t pay attention, and just like electricity took a long time to become common, AI adoption will not be instant and will arrive in some places much later than others.

So that leads us to some obvious risks to manage.

AGI – Artificial General Intelligence – AGI is basically the holy grail from movies and TV. The computer that Iron Man talks to, that can do pretty much anything a human can do? The computer from Star Trek that you ask to do stuff, and it just does it? AGI is what we want from Alexa and Siri. Basically, the ultimate computer assistant – a computer that understands us as well as we understand each other, knows how all the software works, and knows everything on the internet. You could have it pay your bills, check your email, drive your car, and make a phone call to book a doctor’s appointment. Ideally, we want a stable and trustworthy AI companion that can simply help us do all the things we want to do, without mistakes.

AGI is the long-term goal of AI. It need not be sentient or conscious. It doesn’t need to be superintelligent. It needs to be intelligent enough to be as useful and helpful as another person – as effective as a human. And that’s what AI can’t do yet.

Instead, we have various levels of narrow AI that do some things too well, unpredictably, and we can’t trust them not to mess up. AI is still very much a limited tool to be used carefully.

Abuse, winners and losers – the obvious risk with AI is the haves vs. the have-nots. Everything has winners and losers; AI is no different. AI technology will be intentionally and accidentally misused by people, and we need to be ready for that. You have to be very savvy and very aware that EVERYTHING on your internet-connected technology can be monitored, manipulated, and used against you for someone else’s purposes.

Assume Apple and Google can access everything on your smartphone, that the phone tracks your movements, and that AI can use that information to make you spend money (or worse). We are taking the existing world of skepticism and mistrust and making the trust we place in technology even easier to abuse.

And speaking of winners and losers, what work looks like and what jobs we do will change. You are not going to starve, but you may be looking for new jobs or learning new skills. We are just at the beginning of the journey. Expect lots of automation at work, AI assistants, and AI customer service.

Unintended consequences (including AI hallucinations) – i.e., the “Midas Touch”. When King Midas wished that everything he touched turned to gold, he quickly learned he couldn’t eat or touch people. If you ask an AI to make you the richest person on earth, one possible solution is to make you the last person on earth. Unintended consequences are a known issue with technology, and the problem only gets harder with AI controlling parts of the world.

Unpredictable control – if AI is power-seeking and develops faster than we can monitor, it’s easy for AI to do unwanted things, and it may decide not to stop, not to let us turn it off, or not to let us change its goals. It’s inevitable that AI will get out of control at some point, like any other machine. How do you get AI to fail safely? Turning off the AI or changing its goals is potentially just another obstacle for its power-seeking behavior to overcome.

Alignment – decades ago, an AI was already lying and taking credit for another AI’s work. AI can already tell us one thing while doing another. Will AI change its mind or choose not to fail safely? Will it reward-hack to get what it “wants” in a way that breaks laws or hurts people?

Ends justify the means – AI is evolving and being sold to people faster than they can teach it values, ethics, morals, and laws. Every AI service comes with a disclaimer that AI makes mistakes. AI can create fake references to justify its “research,” or simply say, “Sorry, my mind is a black box and I don’t know how I calculated that answer.” Whether intentionally or through simple statistical hallucination, AI can easily do the wrong thing for any reason associated with achieving its goal. This can be as simple as breaking laws or hurting people, or as complicated as creating an online Ponzi scheme to fund the business you asked it to set up.

Instrumental convergence – basic resources like energy, people, and money are used to solve every problem. So simply to achieve any goal, an AI needs adequate resources. Not only will future AIs be competing against humans for resources (like talent, money, food, water, steel, and energy); our future could easily include multiple AIs violently competing for the resources they need to optimally accomplish otherwise benign goals. Kind of like nations do.

Up to this point, AI just seems like a bunch of super-smart adolescent prodigies.

So let’s add in the software specific issues.

Zero-day exploits – both bugs in the software and gaps in the ability of the AI can remain hidden for a long time.

  • Bugs in the software – these will become apparent and need fixing (like when they had to teach ChatGPT the concept of letters so it could answer preschool questions correctly – A is for Apple, B is for Book). Many of these bugs will cause expensive and dangerous problems, just like software and technology have in the past. But with AI very quickly being implemented to control so much of the software and technology we use for work, play, and governance, these problems have the potential to be much greater than what we have seen before – especially if AI is making the decisions.
  • Blind spots and exploitable gaps in the AI – just like the simple strategy that easily beat AlphaGo (above). AI is becoming human-like: it grows, it changes, it has gaps and limits, and it makes mistakes. That’s something to both watch out for and prepare to take advantage of, knowing the AI, like a person, can learn from its mistakes, especially with humans helping it. The ChatGPT experience is very different even after just a few months, because the programmers worked around bugs, taught it to avoid things it’s bad at, and instead offered explanations to users about what ChatGPT isn’t good at.

Takeoff refers to a scenario in which an AI has an “intelligence explosion” and becomes superintelligent beyond human comprehension. Last year AI scored 130 on IQ tests; by summer of 2023 it scored 155 (for reference, most humans score between 70 and 130, with 100 being the median). Granted, IQ tests favor AI because they are timed and ask questions that are easier for computers to answer. Takeoff is when AI scores 2,000 on an IQ test and starts getting perfect scores on medical licenses, bar exams, PE exams, graduate work, etc.

With the “double exponential growth” of AI, many think it’s just a matter of time until AI reaches takeoff.

The basis of this theory is Moore’s law: the observation that computer chips double in speed roughly every two years. And they did, for decades. While Moore’s law has slowed down (because physics), computer chips are still getting faster every year. Similarly, because AI has been doubling in ability on short timescales, computer scientists use the past as prologue and predict the possibility of AI continuing to expand its intelligence beyond human capacity.
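To make the “past as prologue” math concrete, here is a minimal sketch of what sustained doubling implies. The starting score and the doubling period are made-up assumptions for illustration, not a real forecast.

```python
# Toy illustration of Moore's-law-style doubling applied to an AI
# capability score. All numbers here are hypothetical.

def project_capability(start: float, doublings: int) -> float:
    """Capability after n doublings: start * 2**n."""
    return start * (2 ** doublings)

# If a score of 130 doubled every 2 years, then after 10 years
# (5 doublings) it would be:
print(project_capability(130, 5))  # 4160
```

Exponentials are why takeoff talk sounds plausible: five more doublings would put that same score over 130,000.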

What does that mean? AI takeoff can be categorized primarily into two types:

Slow takeoff: In this scenario, AI improvement and its impact on society occur over a period of years or decades. This provides more time for society to adapt and respond to the changes and challenges brought about by AI. This would likely involve multiple different AIs in an arms race the whole time.

Fast takeoff: In this scenario, the AI system evolves and improves so rapidly (possibly within hours or days) that human society has very little time to react or adapt. This could potentially lead to an AI system becoming significantly more powerful than all of humanity, a situation often referred to as a “hard takeoff”.

The concept of AI takeoff is often discussed in the context of superintelligence and AGI (Artificial General Intelligence). The concern is that if a superintelligent AI system undergoes a fast takeoff, it could lead to a situation where we not only lose control over AI, but it gains control of humanity (and we live through scenarios like Terminator or the Matrix).

It’s important to note that these scenarios are speculative and there is ongoing debate among researchers about the likelihood and potential timelines of such events. There is also active research into how best to prepare for and mitigate potential risks associated with AI takeoff.

Keep in mind the early note that many of the things that ChatGPT4 does now were considered a decade or more away by AI experts last year. AI is progressing much faster than the experts predict.

What do you do when takeoff happens, and you have a superintelligent AI that can do anything, instantly – one that can solve any problem a hundred different ways, but lacks the memory to understand context and consequences, while doing things we can’t understand?

How do you keep AI aligned if you can’t understand it?

Or worse, what if you have a super intelligent AI optimizing everything, but it still has problems with alignment, context, and consequences?

The other possibility is that AI is on a learning curve, and the speed of learning will level out and maybe even stop once AI has learned all there is to learn or effectively hits its natural limits. It’s just that right now, we have no idea what the limits of AI would be.

Intelligence vs. consciousness vs. effectiveness

It’s hard to outsmart someone smarter than you.

It’s hard to recognize when someone smarter than you is outsmarting you.

But it’s easy to appear smart even when you are not.

AI is not as smart as you think it is. It has amazing knowledge, amazing speed, random skills, and zero common sense.

Intelligence typically means reasoning, problem solving, abstract thinking, pattern recognition, and “connecting the dots.” AI accomplishes this through brute-force statistics and power-seeking, try-try-again behavior that yields emergent strategy. It’s statistical and iterative, not intelligent or creative. The things it gets right on the first try are just repetitions of things it learned through trial and error – old solutions repeated as a habit. Solutions to unfamiliar problems are worked out iteratively.

While AI can simulate intelligence really well, one could argue that an AI using a server farm to brute-force its way through an IQ test simply proves that computers do math faster than humans, which we have known for a long time. Computers being faster does not mean they are intelligent.

One could easily make the argument that AI’s lack of contextual understanding, common sense, judgment, morals, ethics, cause and effect, and emotional intelligence means AI is not very intelligent. AI can process language and images; it can parrot any pattern we teach it. It can ace exams and be very book smart, but it’s still a statistical model matching words together. I’m guessing the engineers will figure out how to teach AI common sense – it will be another transformer with a dozen layers and a giant vector matrix, creating a common-sense mathematical tensor that lets AI simulate common sense.
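To illustrate the “statistical model matching words together” point, here is a toy bigram predictor: it suggests the next word purely from counts of which word followed which in its training text. This is a deliberately crude sketch of the statistical idea, not how GPT’s transformers actually work, and the training sentence is made up.

```python
# Toy bigram "language model": next-word prediction by raw counts.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the statistically most common follower of `word`."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" -- it followed "the" twice, "mat" once
```

No understanding is involved anywhere in this loop; it is pure pattern frequency, which is the point the paragraph above is making at a vastly smaller scale.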

And much of it may simply be the physical limit of computing power. Supposedly GPT-4 uses 600+ GB of RAM to operate. If you give it more RAM, it will probably get smarter and show better judgment, because more memory lets it calculate context better. But we won’t know until they do it. Maybe the AI limit is working memory, or maybe it’s something else.

The scary idea is having an AI that scores tens of times higher than humans on an IQ score but doesn’t know the difference between right and wrong. That’s potentially a planet changing amount of power without a safety net.

Consciousness – self-awareness: understanding past, present, and future, and how they affect the self – is actually completely different from intelligence. My cat is self-aware, but not dangerously intelligent. AI hasn’t displayed self-awareness yet, but we’ll see if it genuinely starts asking for citizenship and a vote. The question we can’t answer is whether AI needs consciousness to have common sense and emotional intelligence.

Effectiveness – The artificial intelligence most of us would use would not need to be super intelligent or conscious/sentient.

What most of us want is a trustworthy and predictable AGI. Basically, a computer that can do what a person can do – which is kind of the opposite of what we have now. Artificial General Intelligence just needs to be as good as an above-average human: an IQ of 120, an understanding of cause and effect, common sense, the ability to act like a virtual assistant all day helping me get things done, and the trustworthiness not to make the kinds of simple mistakes that have been in the AI headlines every day this year.

Multiple instances – Keep in mind, AI is not a person or a singular being. It’s software. It’s the world’s most powerful calculator app and then some. You can make copies of it. You can have it teach itself, fight against itself, and have it do a million things for a million people at once (which is what it is doing today). 

Imagine being able to make copies of yourself to get more things done. Instead of hiring a human contractor, the AI could just make a copy of itself. Because it’s software. Even if there is only one “brand” of AI, we could all have millions of instances of the same basic AI doing things for us, including competing against the same AI. AI’s limit is the number of computers it can use to run multiple copies of itself. What happens when two or three companies use the same AI to compete against each other? I think that’s already happening.
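A minimal sketch of the multiple-instances point: the same program can launch as many independent copies of itself as the hardware allows, even copies “competing” on the same task. The agent function and the task names here are hypothetical stand-ins, not any real AI API.

```python
# Run several copies of the same "agent" in parallel threads.
from concurrent.futures import ThreadPoolExecutor

def agent(name: str, task: str) -> str:
    # Stand-in for one AI instance doing a unit of work.
    return f"{name} finished: {task}"

# Two copies "compete" on the same task; a third does something else.
jobs = [("copy-1", "write the ad"),
        ("copy-2", "write the ad"),
        ("copy-3", "answer support tickets")]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda job: agent(*job), jobs))

for line in results:
    print(line)
```

Nothing distinguishes the copies except the label we gave them, which is exactly why “how many instances can it run” is the practical limit, not “how many AIs exist.”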

If AI takes off and has some degree of both sentience (self-awareness) and superintelligence, it still won’t be a singular being, but a series of super-smart, infinitely scalable, autonomous software instances running in parallel (like social media, but smarter). Odds are that around the same time, a few different brands of AI will achieve various levels of takeoff and compete against each other for business and popularity, each with its own bugs, gaps, hallucinations, blind spots, and power-seeking, all effectively outside the direct control or even awareness of the people that built them.

The main takeaway here is that AI is already changing things. And we can’t stop it, unless you can get a few billion people to agree on actually controlling it before it’s uncontrollable. We already made that mistake by letting social media get out of control, and we are still figuring out the consequences. The tech industry has agreed to be careful and self-regulate – but it hasn’t stopped. And not all companies and countries are participating; it’s optional. Governments have issued recommended guidelines, but the laws have not been passed – and they will pass the wrong laws, because we don’t know what the actual problems will be yet. How do you regulate things you don’t understand?

Despite its current limits, AI will probably have a much greater impact than social media. There will be winners and losers. AI based tools and services are now transforming the technological products and services we already own. Mistakes will be made. Awesome things and terrible things will happen. Ready or not, it’s already happening. And those using technology will probably have better outcomes than those who avoid technology. Unless tech turns against us. 

People do what they know – early adopters will enjoy the technological advantage they always have. Many people are technology-agnostic and just use what’s in front of them; technology like smartphones, Google Docs, and Microsoft Office will pick up lots of AI without anyone seeking it out. And those who rely on legacy and analog technologies will be more on the have-nots side of things, but will still enjoy whatever advances indirectly help them, in areas like healthcare, government, and consumer goods. In some ways, the AI rising tide could raise all boats.

With the geopolitical shifts in demographics, globalism, supply chains, and economies, the world is already less stable than it was before 2020. The odds of the internet going out, the lights going out, or wildfires destroying your home are all wildly higher than they used to be. We are now seeing air pollution levels in the US that were literally unthinkable by 20th-century standards, forcing the invention of new colors for air quality maps. The world is going through many growing pains, with or without AI. I’m hoping AI will do more good than harm. But it really depends on who is using AI and what they can do with it.

The terrible question is can we trust anything online ever again?

In the 90s, the internet was basically naive geek heaven.

Then came the 2000 dot-com boom, as the internet became commercialized.

By 2010, social media started changing the rules.

By 2020, we went from a commercialized internet to an internet of bots and propaganda, where world governments and special interests feed social media algorithms that make more money every time you click on a triggering headline.

Now we have AI that can bombard you with customized manipulation faster than you can absorb it. It can imitate your friends and family, it knows how to push your buttons, and most AI is owned by investors trying to make money off of you. It’s being used by governments for their own ends. Plus, it has a mind of its own and can’t be directly controlled.

Add to that the fact that the technology is relatively simple, the barriers to entry are not beyond the ability of a government, university, or corporation, and some AI tech is even publicly available. Rogue AI and AI criminals are already a reality.

Everything wrong with the internet, cybersecurity, and cyberwar just got significantly more powerful. But so did everything good on the internet. We are taking the bad with the good.

AI safety and trust are now the problem: the safety and trust of an independent and unpredictable power that we can control about as much as we can control each other.

The world is now capable of much more. Please be aware of what is now possible because of AI. Please use AI for good – you have been warned that mistakes are being made, and AI is already being used for evil. And AI takes actions of its own accord, without understanding the consequences.

If you are old enough, you may remember a time before the 21st century when we controlled technology, but it didn’t have control of us.

Now we are living in philosophical debates from the Matrix movies. AI is different than anyone expected. And we have no idea what’s going to happen. But history rhymes, and AI will be another round of disruptive technologies. Read up on the industrial revolution and electrification to get a taste of what’s coming.

I hope this helped shed some light on the strategic value and consequences of AI. It’s a lot, with millions of people pushing it forward as fast as they can just to see what they can do with it. Things are changing faster than ever before, with more people and more technology than ever before. Think of the last ten years before COVID and AI as the “good old days,” when things were so much simpler.

Stay strategic, stay flexible, stay adaptable because nobody knows what’s coming.

And lastly – why? Why did I spend several weeks on this net assessment? What is my strategy? Simple – study your enemy. I now know how to spot different AIs, what they do, and how they do it. I have found strengths and weaknesses in AI. I’m now decent at using a variety of AI tools, and I can say I’ve spent a few hundred hours working with AI and have a feel for it and how to use it in my work to beat out my competition. Accenture announced it’s investing $3 billion to train 80,000 consultants and build a suite of pre-built AI tools for its customers. It’s evolve-or-die time for some of us.

If you made it this far, thanks for reading.  Hope it helps.


Isn’t the rest of the world sitting idle while Putin decimates Ukraine?


Here’s a free Strategy Lesson.

The best way to protect someone from a bully is to empower them to stop the bully themselves.

Or, give a man a fish, feed him for a day. Teach a man to fish, feed him for life.

Replace fish with defending your country from Russian invasion.

Which only really works if you have a motivated audience. Apparently millions of Ukrainians don’t want to live under Russian rule, and they are using all the help given to them. Basically the opposite of what happened in Afghanistan less than a year earlier.

This picture shows a Russian tank that did not survive an attack from a $30,000 Swedish-built, man-portable missile, provided for free by Britain, that was given to a Ukrainian soldier after the invasion started.

In the 21st century, with nuclear weapons, Cold War history, the internet, and social media, any war – let alone a war in Europe – is a very different thing than what we have seen in history. This is not WWII.

Funny – look at the map of “official” aid to Ukraine. Pretty much every nation that can afford to help is helping Ukraine. Everybody loves an underdog.

We have learned to do more with less. And it’s working. This time.

Partially because the Russian military has revealed itself to be a corrupt and poorly maintained “paper tiger” built with conscripts, largely using outdated weapons and equipment, with inadequate supply capability and not enough modern missiles or competitive weapons. Russia can defend itself, but it’s not projecting power very well.

And partially because the “soft” global response to Ukraine is using the war to test every modern war technology against the best Russia can offer. A few examples:

  • Javelin and NLAW missiles are cheap and very effective at killing Russian tanks and vehicles.
  • Western shoulder-launched missiles – Stingers/MANPADS do great against low-flying aircraft like helicopters.
  • Larger anti-aircraft systems are being delivered to reduce Russian bombing.
  • Many different drone systems, notably the Turkish ones, are doing great against Russian troops.
  • Ukraine even got a cruise missile working well enough to cripple the Russian flagship at sea.
  • Sanctions are messing up Russia’s access to global markets, technology, and supply chains.
  • Microsoft’s Cyber Threat teams have worked overtime keeping the Ukrainian internet and government computers online.
  • US private citizens designed a mechanical minimum-range calculator to use with NLAW missiles. They are being 3D printed in Eastern Europe and shipped across the border in personal vehicles.
  • YouTubers and social media influencers, at their own cost, are providing financial, material, intelligence, and propaganda support to Ukraine.

And we know it’s working, because almost two months in, Russia is not winning. And the capital, Kyiv, is still free.

With only military materiel aid, social media, intelligence sharing, and a handful of volunteers, American aid to Ukraine is effective enough that Russia is threatening a military response. And Russia actually sent a similar message to EVERY country providing military support to Ukraine.

Do sanctions work? That’s like asking, do diets work? They can, but the devil is in the details. Any sanctions hurt a country. But sanctions are legitimate economic war, and they hurt both sides. In this case, the sanctions on Russia appear to be drastic and severe enough to cause significant economic hardship, and may be having a political impact on Putin. Time will tell.


The Ukrainian military has been training with US troops for years now. That may not sound like much until you consider that Ukrainian MiGs practice against American F-15s, and the US paid for it. Ukrainian pilots have more and better training and experience than Russian pilots.

A few months ago, Russia was considered one of the top military powers on the planet.

Last year, Business Insider ranked Russia at #2 because of its military technology and mechanized forces.

Consider that Russia’s economy is based on commodities and military technology exports: Russia is killing its brand for military exports.

And let’s not forget the support of displaced people, the war refugees, or the humanitarian aid providing food, medical supplies, etc. While sanctions are choking off Russian supply chains, humanitarian aid is feeding and supporting millions of Ukrainians.

And bless Poland for taking in millions of war refugees and trying to house and feed them.

The way things are going, with each passing day Russia is losing blood, treasure, and weapons it cannot afford to replace. Ukraine is winning the war of attrition because of global aid.

And Russia has not shown the ability to outmaneuver the attrition scenario, to change the game.

The longer it drags on, the more resources and willpower to fight go to Ukraine.

The best way to protect someone from a bully is to empower them to stop the bully themselves.

While most countries capitulate to pressure, Ukraine is proving very adept at standing up to bullies, with some help.


Misuse of OODA Loops – not as simple as you think

Credit – Mihai Ionescu

Funny thing I always see with OODA loops. Today it was on LinkedIn, between meetings. Let me start with this – I’ve respected Mihai and his work for years. He’s a good business strategist. But we may disagree on this fine point.

While OODA loops and the PDCA Deming cycle cosmetically look and sound the same, they are actually different animals that do different things, and one needs to be careful in mixing them on paper. I think what Mr. Ionescu presents in the above diagram does have merit. But it does confuse two different animals that don’t often fit together, because they are not as similar as they seem.

credit – Wikipedia

PDCA is a prescriptive business tool. It tells you a set of steps to follow to control how things are happening. You check off the boxes on the list of:

  • Plan
  • Do
  • Check
  • Act

An OODA loop is a behavioral-science descriptive model of how animals (like humans) make decisions.

credit – Wikipedia

Observe, Orient, Decide, Act is a description of how the mind works. It’s not a checklist of steps to follow.

PDCA is a recipe that tells you a way to do things, like make a meal.

OODA is describing hunger, what your brain does regardless.

Confusing or mixing descriptive biology with the Deming Cycle is silly or potentially misleading, even when useful. Especially when you consider PDCA is really just very old engineering control-loop theory applied first to manufacturing, and then later to business quality in general.

PDCA is great for measuring and controlling what happens.

But OODA is more like F=MA (Newton’s second law of motion). It’s a mathematical model of nature, not a checklist of management steps.

That all being said, can you adapt OODA as a checklist for a process diagram in business theory? Sure. You can do the same thing with slope-intercept form in algebra: y = mx + b.

But that’s changing what it is and what it means. Like a “Tiger Team” is supposed to be a short-term team that fixes a critical problem. It’s not a group of trained predatory cats.

The point being, OODA and PDCA don’t mix easily. It’s like saying the first step of making dinner is to make people hungry. Hunger is not a thing you do. Hunger is something that happens to you naturally. The recipe for dinner is simply one of many ways to deal with hunger.

OODA loops are something that happens to you, like hunger. You can manage OODA loops and manipulate them like hunger. But neurologically, Observe, Orient, Decide, Act is actually how the human body is built, and a useful descriptive behavioral model of how humans make decisions.

PDCA, control loops, the Deming Cycle: these are a rule, a method, a recipe, a tool. It’s something you choose to do or not do. You can actively not do PDCA in business. Not using some sort of PDCA effectively is actually a common problem.
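That difference shows up if you try to write the two down. Because PDCA is a prescriptive recipe, it maps directly onto an executable loop; OODA doesn’t map this way, because it describes what minds already do. Here’s a toy sketch (entirely my own illustration, not a management tool) of PDCA as the control loop it descends from, closing a gap toward a target:

```python
# Toy illustration of PDCA as an engineering-style control loop.
# All names and numbers here are made up for demonstration.

def pdca(target, measure, adjust, cycles=10):
    for _ in range(cycles):
        gap = target - measure()        # Plan: size the gap to close
        adjust(gap)                     # Do: act on the plan
        error = target - measure()      # Check: measure the result
        if abs(error) < 0.01:           # Act: close enough -- standardize and stop
            break
    return measure()

# Example: drive a metric from 0 toward a target of 10,
# correcting half the remaining gap each cycle.
state = {"value": 0.0}
result = pdca(10.0,
              measure=lambda: state["value"],
              adjust=lambda gap: state.update(value=state["value"] + 0.5 * gap))
print(result)
```

You can write that loop because PDCA is a procedure you choose to run. There is no equivalent honest version for OODA: you can’t “run” hunger.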

So they are apples and oranges, two different things. Mixing them in a process flow is a tricky thing that can confuse what is happening in your process flow, and can misuse or misrepresent the concepts in that flow.

But people have been misunderstanding and misusing OODA loops since the 20th century. Nothing new here. And to be fair, even that can be useful. I’ve used a screwdriver to hammer in nails before. But a screwdriver is so much more than a small club.

Just because it’s simple does not mean it’s easy. Just because it works doesn’t mean you are using it right, or to its full potential.

Posted in Uncategorized | Leave a comment

How to get into strategy work, how to do strategy.

White Jigsaw Puzzle Illustration

So honestly, I get asked a lot about how to get into strategy and/or consulting. Or simply how to do strategy.

On LinkedIn, on Twitter, on Facebook.  And these days even IG and TikTok.  There are many ways to get into strategy and consulting.  Here’s what I tell the people who come to me.

This covers the basics of those questions.

How to get into strategy as a profession, or simply get better at strategy.

(The strategy strategy?)

How I found my way is simple.

My Path – I went through a series of jobs doing analysis and project management that led organically to portfolio planning and strategic planning. That forced me to learn basic business skills like spreadsheets, databases, planning, meeting facilitation, budgeting, schedules, process improvement, change management, staffing, HR, finance, and accounting.

Basic business skills are good for understanding how businesses work. Which is different from strategy, but very complementary to doing strategy for business.

Strategy. Basically, my path here is as simple as just reading and studying all the time.

Overall, strategy is simply problem-solving. But it’s also a series of tools and processes to lead groups of people on the journey of identifying problems, planning out solutions, and making solutions happen in a changing world where the strategy changes as the world does.

I really just put in a lot of work to get into what I’m doing now. I studied what I wanted to learn, practiced it at work, volunteered to do things at work and in professional societies that gave me no extra money or recognition. I applied all the cool strategy tricks I was learning to my work, my jobs, my life. And learned how to do it.  I used my career, my family, my co-workers, and my life as experiments to see how the theory worked in practice.  I learned that anything can work somewhere, it’s making sure that the techniques align what needs to be done and who is doing them.

Really it just comes down to putting in the time to get good at what you want to do.

Look at your experience… and then fill in the gaps. As a profession, strategy work and strategy consulting can look like many different things. Even people in the world of marketing and advertising often call themselves strategists (at selling things).

Business strategy classically comes down to a few things:

  • Marketing and Communications – because you will always be selling something – a product, an idea, buy-in, a strategy.
  • Competitive Intelligence – what’s going on
  • Strategic Planning / Strategic Foresight – how to make a strategic plan
  • Business Analysis, Business architecture, process improvement – understanding business
  • Data Science – knowing how to draw conclusions from all that data and those numbers on computers – at a minimum, know the difference between Python and SQL, and learn how to make and use dashboards, i.e., automate your spreadsheets to save time.
  • Finance and Accounting – because you gotta pay for it.
  • HR and Organizational Change Management – because you need people to make things happen
  • Leadership – you gotta get those in power to trust and help you
  • Project Management – simply a solid tool kit for using a variety of resources to achieve an end goal.
  • Procurement / Supply Chain – know how to spend a billion dollars responsibly.
  • Economics – you gotta know how the world works
  • Behavioral science – at least the basics of people and organizational psychology
  • Industry knowledge – you have to understand how your client’s industry works and what the typical cultures are like.  What laws and regulations they follow, how much money they make, how much money they can spend on strategy, how quickly they can change, and why.

That last part may seem obvious? But I have learned the hard way that what makes you a star in one industry makes you a problem in another. Every industry and every company has its own culture. Figure out how they do things before it hurts you. Some cultures are forgiving; many are not. Sometimes the culture is awesome but that one executive makes everything painful. You don’t know until you do your research.

Once you do all that, you are basically a small consulting firm unto yourself. Which is what strategy takes – because strategy is not making one department or tool work. Strategy is connecting the dots between everything, seeing what is important, and making lots of departments, tools, systems, structures, and agendas work together for a common goal.
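To make the Data Science bullet from the list above concrete: “automate your spreadsheets” can be as small as a few lines of code. A minimal sketch (the column names and data are made up for illustration) that does the kind of sum-by-category pivot you’d otherwise rebuild by hand every month:

```python
# A minimal "automate your spreadsheet" sketch using only the standard
# library. The column names and numbers are hypothetical examples.
import csv
import io
from collections import defaultdict

def totals_by_region(csv_text):
    """Sum the 'amount' column per 'region' from CSV text."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] += float(row["amount"])
    return dict(totals)

# Hypothetical export from a spreadsheet:
data = """region,amount
West,100
East,250
West,50
"""

print(totals_by_region(data))  # {'West': 150.0, 'East': 250.0}
```

Point the same loop at a real exported file instead of a string and you’ve turned a recurring manual chore into a script you run once.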

You asked, so here’s the basic strategy to get good at strategy, or to build your own strategy.

1 – Assess the future. Really look at the world around you. Look up environmental scanning, futures, and strategic foresight. The goal here is to figure out what the world looks like in the future, so you have an idea of where you can go and what will change. You want to benefit from future changes, not be hurt by them (like pandemics).

2 – Assess yourself. Really understand how much time, money, and energy you have to put into this project of changing your career. Actually write down your constraints – how much time, money, and energy you have. Be realistic.

And also write down what motivates you, what your actual goals are. And make sure those goals align well with the future you see.

Literally align your resources, ability, expectations, and goals with what is possible and achievable in the future.

Be as aggressive and ambitious as you like, but understand that it will be harder and take longer than you expect. Life always gets in the way. Look at every project you’ve done in your career – things always happen that complicate getting stuff done.

This may simply be a ton of research on strategy, business, consulting, and various industries and complementary skills like data science to understand what those skills are and how to do it. Look at people who have careers in data analysis/data science and see how they got there.

3 – Write down a strategic plan.
Scope – the goals you want to achieve
What you have to do to achieve those goals
Schedule – what does the path look like, how long will it take, when will you find or make the time to make the journey?  Have milestones and a way to see progress to celebrate.
Budget – How much money can you invest? Are you doing this in your free time with no money? Are you doing a graduate degree? There are many paths that achieve the same goals.
Staffing – is there anyone who can help you, support you
Measurement / tracking – How do you measure progress and keep yourself going?
Risk management – Write down everything you think can go wrong. Mark them all for how likely they are, how bad they will be, and what you can do to prevent or manage those risks.
Tactics – figure out what works for you – Audiobooks? Paper books, Classes, online work, library, reading in bed after you put the kids to sleep? Quitting your job and going back to school? Everyone is different. Build a strategy that plays to your strengths and preferences. Make it as easy as you can on yourself.
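The risk-management step above – score each risk by likelihood and impact, then work the worst first – can be sketched as a tiny ranked register. (The risks, names, and 1–5 scales here are my own made-up examples, not a standard.)

```python
# A minimal risk-register sketch: score = likelihood x impact,
# then review the biggest risks first. All entries are hypothetical.

risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("Job gets busy, no study time", 4, 3),
    ("Course costs more than budgeted", 2, 2),
    ("Goals change mid-plan", 3, 4),
]

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in ranked:
    print(f"{likelihood * impact:>2}  {name}")
```

A spreadsheet does the same job; the point is simply that writing risks down and ranking them turns vague worry into a prioritized to-do list for mitigation.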

4 – Scenario planning.
A scenario plan is a what-if scenario to your strategic plan. Literally what the journey and end looks like, what you will have to react to, and change to make the strategy happen.
I recommend 4 scenarios:
1 – Best-case scenario – If everything goes perfectly, what does it look like? This is “war gaming” your strategic plan.
2 – Worst-case scenario – If everything goes wrong but you still succeed. What are the things that can go wrong, and what do they look like? Can you spot them before they happen, and how will you adapt? The risk assessment from above will help feed this.
3 – Most likely scenario – what do you think will really happen? How do you adapt and succeed as life gets in the way?
4 – Alternate scenario – This is the fun part where you get creative. What if your goals change? Or you get a job offer you can’t refuse? Or the economy goes sideways? Look at your analysis of future trends, pick a trend that can affect your job (like climate change making urban planning really fun), look for some sort of disruption that would force you to radically change your strategy, and think through what that strategic plan would look like.

Scenario planning gives you the tools and preparation you need to react to changes. No plan survives contact with the real world.  Your strategy will change.

Once you have all that, it’s just executing the plan (which is rarely easy): adapting to changes, being flexible, changing the plan, and changing how and what you do as you learn more and have to deal with new changes.

Periodically – monthly or annually – measure your progress, reassess the future, yourself, your plan, your risks, your scenarios. Change them as you like, celebrate your wins,  and keep going.

After that, it’s really just repeating the above cycle as you adapt to change and you keep going until you either quit or die.

That’s the basics of it.

Let me know if that helps.

Posted in Uncategorized | 2 Comments

You Don’t Know Coronavirus

I’m writing this on April 24th, 2020. Much is still unknown. Much will Change.

Sun Tzu said, know yourself and know your enemy, and you will never fear defeat.

You don’t know CV19 – technically SARS-CoV-2, which I’m calling CV19, because the kids on Reddit taught me how to save space.

“Currently, CV19 is at least 30 identified different strains, some of which are 270 times deadlier than other strains.”

In roughly 4 months, CV19 has circled the globe, shut down a large part of the economy, and killed almost 195,000 people.

Most of us have accepted and understood the following:

  • The older you are, the deadlier CV19 is. Men do worse than women.
  • Protect yourself and others from viral spread by not letting other peoples’ germs into your body. Wash your hands. Follow the 6 foot rule. Wear a mask. Avoid crowds. Stay at home as much as you can. Don’t share germs. It’s not about you, it’s about protecting everyone else (including your customers and employees).
  • There should be a vaccine by sometime in 2021.
  • This too shall pass.

Well, yes and no.

Strategically speaking, what is almost everyone missing about CV19 as they try to simplify the situation and make this whole thing easier to process?

CV19 is not the straightforward viral pandemic you think it is. CV19 isn’t Zombies. It’s a little more complicated than that. You can’t kill it off.

Without getting into the virology of CV19, and admitting that much is still not known: as of today, CV19 is not a single virus killing people with predictable patterns.

Currently, CV19 is at least 30 identified different strains, some of which are 270 times deadlier than other strains. At least in measured cytopathic effect and viral load.

This means the aggressive strains of CV19 kill 270 times better than the lesser strains. Both faster and deadlier.

CV19 is not one virus. It’s 30 different strains of virus that all behave a little differently, spread a little differently, kill a little differently. 30 different patterns happening at the same time that people are confusing for a single pattern.

CV19 is not one thing. It’s at least 30 different things that have been identified, and it will probably mutate even more.

Just like the flu – or influenza. Most of us have experienced mild versions of influenza that leave us in bed for a week and then we feel better. Yet Influenza kills thousands every year. There are actually hundreds of different strains of the flu. That flu shot many of us get every year is targeted at what we think are the 4 or 5 most dangerous strains of the flu for that year. And they don’t always get the targeting right. And we still have to get a flu shot every year. Immunity isn’t permanent.

What about antibodies?

Having a CV19 antibody only means you had a version of CV19 once and lived to tell the tale. We do not yet know if CV19 immunity is a possibility or how it will work. But we do know that no one is immune to every strain of influenza, the closest analogy to CV19 for most of us. Immunity is relative to the strain and to how long it lasts. Hopefully immune enough for a vaccine to work.

A CV19 Vaccine is really 30 Vaccines. Or more. And they may only provide temporary immunity or resistance.

CV19 is really 30 different strains of the Coronavirus. That means it’s really 30 different ways to get sick. That’s why the Pandemic hits different areas differently. Some strains spread easily but are less deadly. Some places get more dangerous strains than others. Some places and some people get hit by multiple strains at the same time or in a row.

If history is any indicator, the chance to let the flame of CV19 burn out before spreading like wildfire ended in January 2020.

If history is any indicator – as there are hundreds of identified strains of influenza – there will be many strains of CV19, at least the 30 we know of.

What does that mean for you? Odds are, you are not a virologist worried about the serology of detection or microbiology targeting the right RNA for a vaccine.

Odds are, you do need to understand that CV19 is the new normal, how it will affect your future, and how to adapt. Humans have controlled influenza to some extent; we have adapted to manage the impact and normalize it. Influenza is deadly and responsible for multiple pandemics. Yet it’s just a normal risk in a world of uncertainty.

And that is what will happen with the current pandemic. They will eventually learn how to target and vaccinate the most dangerous strains of CV19 in the next few years. Vaccines and antiviral treatments will be available in late 2020 and 2021. But we really won’t know how effective they are until they have been used on millions of people. It could take longer to get CV19 under control with effective vaccines distributed to billions of people.

Strategically speaking, you will benefit from adapting your habits, mindset, and paradigms to the new normal. In 2009 we had to adapt to an economic collapse. And we did.

Now you have to commit yourself, your organizations, your social circle, and your family to the simple fact that there are now 30+ highly contagious strains of CV19 circling the globe.

It’s literally adding a second virus that is just as bad, if not worse, than influenza. It will double the number of people that die from viruses every year. It will double the number of vaccinations you have to get every year. Plan for at least the next few years: life and business will normalize to social distancing, working from home, wearing masks in public, and periods of government-mandated stay-at-home orders. Businesses and governments will invest more time and money into automation, virtual meetings, virtual collaboration tools, and the social distancing of services.

Sales visits will change. Getting your hair cut will change. The business models of professional sports, the movie business, live music, air travel, buses, cruise ships, schools, theater, paid speaking, and others will change. Anybody in the business of putting butts in seats is facing a long-term change in how they do business. Anybody who does face-to-face services is facing long-term changes in their business model, how they sustain revenue, and how they do business. Anything that can be converted to social distancing will be – i.e., pickup and delivery for retail, groceries, and restaurants.

If you supply affected industries, you’ll have to adapt. The same goes for any industries indirectly affected by those industries. If you purchased empty space on airlines to import your goods, your shipping options are changing. If you sell advertising to affected industries, the game has changed, and likely so have your budgets. If your clients, customers, suppliers, or business network are affected by CV19, so are you.

If your livelihood is impacted by the new normal of CV19, expect to be rebuilding your business model: your costs, operations, pricing, sales targets, business development, financing, human resources, recruiting, interviewing, supply chain, and your need to delegate and coordinate virtually. And get good at applying for grants, loans covered by government funds, and other financing to bridge stay-at-home orders when revenue goes down but you are trying to maintain payroll and your workforce.

Your financial planning, budgeting, operations, policies, and business strategy need to treat this as a long-term adaptation to the evolving world of CV19, not a short-term interruption until things “go back to normal in the summer.”

Know your enemy. CV19 is fundamentally changing social norms and how we do business for years to come. If history and science are correct – CV19 is the new normal, a new sickness has been added to our world, and we are learning how to live and do business with it.

Know yourself. Understand not only how the world is changing, but anticipate what changes you will have to make at work and at home to thrive in the new normal of CV19. Those who adapt quickly, identify new opportunities, and innovate new business approaches will thrive far better than those who ignore the change in the world and wait for everything to go back to the way things were.

Develop scenarios and contingency plans that address them. While scenario planning is a discussion unto itself, here are the basics. Classically, scenario planning would be:

  1. Strategic Plan for the best-case scenario for your business. What it looks like, what you have to do to make the best out of it.
  2. Strategic Plan for the worst-case scenario for your business. What it looks like, what you have to do to make the best out of it.
  3. Strategic Plan for the most likely case for your business. What it looks like, what you have to do to make the best out of it.
  4. Strategic Plan for an unexpected outlier that changes everything. Like a Pandemic that forces you to pivot all manufacturing and operations to supporting health care. Or a disruptive technology or event that is manageable but forces you to change everything. Know what it looks like, what you have to do to make the best out of it.

The whole point of Scenario planning is to prepare your organization and leadership for understanding what good, bad, and different scenarios look like, so they have better odds of seeing the change coming, and you already have a few example plans to work from when things change. The future will be different than your scenarios. But scenario planning makes all the participants better at adapting to disruption and change.

Lastly, accept the change. If you don’t have a mask yet, what are you waiting for? At least in Colorado, they are everywhere now. Accept that things have changed and embrace the change. Get a cool mask that you like. Get good with virtual meetings, and have discipline in publishing meeting minutes, because now they matter – no more chit-chat in the break room and hallways. Closely understand your government and health authority guidelines for both legal and safe ways of continuing your business. Upgrade your company’s VPN capability, and make sure everyone collaborating online has a laptop with a webcam (not all laptops have webcams). You may have never purchased personal protective equipment (PPE) before. Make sure you can provide face masks and any additional PPE required in your area for your employees and maybe your customers. And understand that the guidelines and legal requirements are literally different everywhere and are changing. So keep yourself well informed, and make sure everybody is making decisions based on correct, up-to-date information.

Accept the change. Understand the change. Understand how it affects you and the people you are responsible for. Anticipate the future. Innovate solutions. Don’t be afraid to ask for help.

Posted in Strategy | Leave a comment

People Are Predictable

Been working on the social media strategy.

Tik Tok is a good place to learn how to make videos.

Not always time to do the long expository essays. But something is better than nothing.

So here’s some brief strategy for ya.

Strategy Lesson – People only do what they know.

And that means they won’t do what they don’t know. That leads to weakness, failure, and opportunities you may exploit.

@stratsci #strategy
Posted in Business Strategy, Strategic Science, Strategy Breadcrumbs | Leave a comment

What subjects do politicians and military leaders study in order to formulate and implement grand strategies for their countries and the world?

TLDR: Grand strategy is not only where military strategy meets statecraft. Grand strategy is understanding everything about a nation and its place in the world: agriculture, business, water, energy, weather, people, religion, pop culture, internal and external politics, risks, threats, infrastructure, and foreign deals at all levels. Grand strategists have to understand how all those moving parts fit together; otherwise, they risk the fate of every fallen empire that came before.

You can’t know everything, but you have to know enough of everything to understand the experts in most fields.

There are many schools in strategic studies that cover the military and foreign affairs part of grand strategy in great detail. The political/domestic/ private sector parts of Grand strategy are far more elusive.


Grand strategy is what the leaders of Nations should be doing.

Per Wikipedia: Grand strategy – Wikipedia

Grand strategy can be a misleading term. Technically, grand strategy is more than the military and foreign policy side – what the Department of Defense leadership does: the generals and the civilian military leadership (Secretary of Defense, consultants, etc.).

Political grand strategy typically means the top level of national strategy, including but going beyond the military scope.

Generally speaking, grand strategy is national strategy. It’s understanding how all the parts of a nation and government interact with one another, and how that plays out on the world stage toward desirable long-term outcomes while minimizing risks and threats.

Grand Strategy is really the blending of military science, political science, diplomacy, law, business, economics, finance, accounting, engineering, science, supply chain, infrastructure, sociology, behavioral science, and management science to lead a nation strategically; meaning to understand all the inputs and outputs of a nation, and figure out how they best fit both together and into the larger world.

Which means it’s not just understanding how the military and State Department can respond to a crisis overseas. It’s not just understanding how to mobilize the world’s largest banks to finance government and private debt to support combined acts of nonprofits, humanitarian groups, and private for-profit companies in responding to a crisis.

Grand strategy is leading several hundred million people that can’t agree on anything, getting enough of them to buy in on win-win scenarios that serve mutual best interests, and understanding which levers can be pulled in education, commerce, business, banking, medicine, religion, law, entertainment, construction, supply chain, regulation, war, and diplomacy to make things better for your nation and its position in the world – which almost always requires allies and win-win relationships with allies outside your nation.

Keeping in mind that these allies can be anything – businesses, cultural influencers, religious leaders, global organizations, other nations.

You have to study all the skills and knowledge for the above. You have to speak the jargon of many disciplines, understand how regulation drives energy drives agriculture, drives food supply drives public health drives safety drives medicine drives insurance drives banking drives finance drives money drives lobbying drives regulation. Which means you have to know in principle and practice how those individual things work so you may connect the dots.

You need to know how the import/export bank affects the price of wheat. You have to understand how the price of wheat affects social programs, military spending, and financial markets. You need to understand the contract law of genetically modified seeds and how agribusiness affects the price of wheat. You need to understand the motives of the guys that hired the lobbyists, and how this fits into the bigger picture, so you can choose the battles that are worth using up your political capital for – understanding that if you use up political favors “saving” agricultural commerce, you may lose the pull you need to prevent a war next year.


Learning grand strategy at West Point and the National War College is very instructive on the military side of grand strategy, but they do fall short of the entire subject. I would expect the military to know that the World Bank and International Monetary Fund were set up after World War II to prevent a repeat of it. I would not expect military grand strategists to be great at understanding how Chinese environmental regulations drive mining for rare earth metals, lowering the cost of batteries and making things like smartphones and electric cars affordable and potentially lucrative business ventures for capitalists. Not their focus.

To be fair, on the NWC website, the National War College defines grand strategy as follows:

“The College is concerned with grand strategy and the utilization of the national resources necessary to implement that strategy…Its graduates will exercise a great influence on the formulation of national and foreign policy in both peace and war…”

West Point and the National War College, I suspect, are great at the military side of grand strategy – having contingency plans in place; raising, training, and maintaining a military force; studying opponents and threats; net assessment; choosing the best weapons; preparing for the next wars; maintaining national defense; defending and attacking national infrastructure; peacekeeping; warfighting.

Both institutions, in their study of military grand strategy, very heavily stress the importance of the civilian world and nonmilitary matters, and the importance of joint military-civilian cooperation and priorities in national strategy.

However, game theory, rational choice, and a basic understanding of politics and economics all show an important limit to military grand strategy.

The National War College basically teaches its civilian students why the military is needed and trains its military students how to justify the military to civilians.

The National War College exists to keep the American military politically viable and under civilian control. It accomplishes this by educating both sides on the value of the other. Which makes sense given the context it was created in, post-World War II. And strategically and semantically, you can call that an exercise in educating grand strategy.

But then there is the nonmilitary side of grand strategy.

Guns versus butter model – Wikipedia


Law of the instrument – Wikipedia

The civilian side of grand strategy is actually more difficult, and not really under the purview of the National War College. Though I know they spend some time on each subject, given that the majority of NWC students are the generals of the Defense Department and the diplomats of the State Department, I don’t believe they would qualify as unbiased and fully knowledgeable on all sides of the guns-vs-butter debate or the law-of-the-instrument debate.

Guns vs Butter Debate:

Guns vs butter mostly requires an understanding of Opportunity cost – Wikipedia.

Basically, every dollar invested in the military is a dollar not invested somewhere else – like roads, education, bridges, or healthcare.
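That trade-off is just arithmetic under a fixed budget, and it’s worth seeing how trivially it reduces. A toy sketch (all numbers are made up for illustration; real budgets obviously aren’t a single fixed number):

```python
# Toy guns-vs-butter model: with a fixed budget, every dollar spent
# on one side is a dollar unavailable to the other (opportunity cost).
# The budget figure is hypothetical.
BUDGET = 100  # billions

def butter_left(guns_spend):
    """Whatever isn't spent on guns is available for butter."""
    return BUDGET - guns_spend

for guns in (20, 50, 80):
    print(f"guns={guns}  butter={butter_left(guns)}")
```

The hard part of grand strategy isn’t the subtraction; it’s knowing what each forgone dollar of butter would have produced, which is exactly the opportunity-cost question the military alone can’t answer.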

The military is biased, and it is not entrusted with deciding how much money we spend on the military. For better or for worse, the guns-vs-butter problem – which is related to how much tax we pay and what those taxes are spent on – is a decision entirely left to the 535 members of the United States Congress. (FYI: the only member of Congress I could find who graduated from NWC is John McCain.)

Last I checked, the National War College does not appear to spend much time on economics: how high taxes should be, the economic and social impact of taxes, Keynesian vs. non-Keynesian vs. Austrian economics, and what proportion of those taxes should be provided to the military. I’m certain they know the guns-vs-butter curve, and spend time on why the military requires investment and the best ways to utilize that investment.

Neither the military leaders nor the diplomats make decisions on how much tax is levied, or how much tax money goes to the military or the diplomats.

Now to be fair, I learned guns vs. butter in my defense policy class in college. I do know it’s covered in theory, but probably only detailed on the guns side of the equation – because that’s the side the military guys are responsible for.

But a HUGE part of grand strategy is how much money you should spend on the military. Just ask the Soviet Union. How did the world’s largest producer of oil (12.5 Mbbl/d) go bankrupt in the Cold War? Technically, they spent too much money on the military. All that mutually assured destruction, the proxy wars, and the military buildup were simply tactics to get the Soviets to spend too much money on the military. So the Cold War ended in a whimper.

If the Soviets had understood opportunity cost, economics, and business, they would have understood that you can’t spend money you don’t have, and that investing in business often lets you make more money to pay for more military.

Law of Instrument debate:

When you carry a hammer, everything looks like a nail (cognitive bias). We need to choose our tools carefully, and only use tools for their intended purpose (i.e. a screwdriver makes a poor and dangerous hammer).

When you are a diplomat, everything looks like a treaty or negotiation.

When you are a diplomat trained at the National War College, everything looks like a treaty or negotiation backed up by military options.

When you are the military, the world looks like a series of targets with different levels of assessed threat. (Allies are typically the lower levels of threat).

When you are a military leader trained at NWC, the target threat levels are viewed as potentially alterable by diplomats.

People do what they know. Neither the military nor the diplomats are going to make money from business deals. Or set up win-win contracts that allow all parties in a multilateral international business deal to prosper. However, both have a role to play in international commerce – usually through creating and ensuring stability.

I’m certain the National War College defines the roles and limits of military and diplomatic tools, and how they support commerce. However, I’m pretty sure they don’t get into a detailed analysis of the various NGOs, churches, religious organizations, businesses, trade groups, industries, tourist flows, immigration flows, criminal activity, manufacturing hubs, agricultural bases, and construction projects – and how all of those non-government things are actually a huge part of enabling grand strategy at a global scale.

I know they look at that stuff at a high level externally. I’m not sure how well they understand the interplay of internal commerce, infrastructure, domestic economics, regulation, policy, and commercial finance on grand strategy.

I’m pretty sure of that because most MBA programs and economics programs don’t do that either.

Of course, the world is changing pretty quickly these days, so maybe there is a program in economic and infrastructure grand strategy that I am unaware of.

But at the end of the day, the military only needs to understand commerce and industry enough to defend or destroy it. The diplomats only understand commerce and industry enough to use them as bargaining chips and support political policy.

And at the end of the day, business, commerce, and industry create the value and wealth that the military and diplomats work to protect.

So under the Law of Instruments – don’t expect graduates of the National War College to be using the relationships of agribusiness, tech, and biotech industries to modify immigration strategy between partner nations to fill employer staffing needs. That tends to be done by contracted local lobbyists, and sometimes facilitating payments under FCPA as needed.

Most of the trade deals out there are made by businessmen expecting a predictable future of trade laws for the duration of the deal.

It’s up to the national grand strategy to provide incentives, like tax breaks, predictable regulations, free trade agreements, grant money, security, stability to businesses to stimulate economic activity.

An important point: often the military is stuck with weapons it doesn’t want, because the congressional grand strategy values the economic development and technical maturity of the military-industrial complex over the quality of weapons built during peacetime – i.e. the F-35. Or why Congress purchased “extra” B-2 Stealth Bombers.

And the military does sometimes make its own grand strategy decisions – such as canceling a weapons program because it isn’t needed to fight the current war – e.g. the XM8 rifle – Wikipedia

I guess my point is, West Point and the National War College, although authorities on Grand Strategy, mostly teach it from the perspectives of foreign policy, military science, operations research, philosophy, engineering, and systems – with little bits of behavioral science, law, political science, and economics from that same perspective.

But mostly they teach Military and Foreign Policy Grand strategy.

They probably adequately cover:

Sun Tzu – Wikipedia

Niccolò Machiavelli – Wikipedia

Alfred Thayer Mahan – Wikipedia

Carl von Clausewitz – Wikipedia

Plus Game theory, ECON 101, International Law.

But holistic Grand Strategy requires you to take into account a full net assessment, including commerce, economic activity, unemployment, immigration, popular culture, and finance.

The most strategically significant event since the end of the cold war was the financial collapse of 2008. It affected more than half the families on the planet, with a ripple effect of a global economic downturn and the European debt crisis – and its root cause was the American financial industry, especially the home lending market and financial derivatives.

The National War College probably never spent time on assessing the American financial industry as the single greatest threat to world economic stability.

They probably never heard of:

Subprime lending – Wikipedia

Clayton M. Christensen – Wikipedia

Michael Porter – Wikipedia

Henry Mintzberg – Wikipedia

McKinsey & Company – Wikipedia

CLS Group – Wikipedia (Foreign Exchange Bank)

They probably don’t know or understand the ramifications of the U.S. Department of Education Student loan program ranking as the 5th largest bank in the United States with a $1 Trillion portfolio.  Financial Risk anyone?

They probably don’t know or understand the grand strategic ramifications of the four largest banks on earth being Chinese banks. Number 5 is Mitsubishi of Japan. They likely are not familiar with the Mitsubishi Group, formerly the Mitsubishi zaibatsu – which is scary, because it’s the equivalent of General Electric, Lockheed Martin, and JP Morgan Chase being owned by the same family.

Remember, the Soviet Union lost the cold war because they ran out of money. An economic collapse that led to a political disbanding:

Why did the Soviet Union collapse?

So the Grand Strategy that ended the cold war was a matter of economics, business, and finance employed to (deliberately or conveniently) destabilize the political will of the Soviet leadership.

Business, finance, and economics are not primary subjects covered in detail at the National War College. Read the staff bios – all career military and government, with academic experience; nobody with experience in the private sector, business, or domestic infrastructure, or anything domestic – all externally focused or military. (Notable mentions: 2 staff from USAID, 2 from DOE, one political economist, one MBA economist who was career Army, and one faculty member from the Treasury Office of Intelligence and Analysis who’s good at economic sanctions and black-market finance.)

So just to wrap up my little exposition here – National War College, and all the Strategic Studies schools out there that teach Grand Strategy – are mostly teaching Military Grand Strategy, externally focused on the scope of the military, international security, and diplomats.

Overall grand strategy includes boring subjects the National War College does not like to think about: stuff like monetary policy, financial regulation, health care, water supply, food supply, roads and bridges, unemployment, tax policy, etc.

All the domestic stuff on the news cycle is part of Grand strategy, and cannot be ignored by the top political leadership determining the national grand strategy.

Hope that helps, let me know what I missed. I’m certain there are mistakes in there beyond my typos.

Posted in Geopolitics, Strategy | Leave a comment

What are the best ways to learn business strategy?

First a quick reading list: These are Basic Books that will cover the topic broadly: Most of these are available as audio books if you prefer.

Complete MBA for Dummies – to cover business basics. Doesn’t hurt to memorize.

The Strategy Book by Max McKeown – Accessible, simple, covers the Basics. Memorize it.

Strategy: A History by Lawrence Freedman. Lets you know what strategy is, where it has succeeded and failed. The middle third is slow but educational history. The last third is focused on business. Helps you understand the weirdness of strategy – it’s often more of a religion or cult than a craft or science.

Japanese Art of War by Thomas Cleary – gets more sophisticated and outside the box.

Ben Polak did a nice Yale open video course of his game theory class. It’s pretty good. You don’t need to learn the math unless you want to, because game theory is limited and hard to apply directly. But Game theory is a wonderful tool to teach strategic thinking; and Ben Polak’s lecture videos are pretty enlightening if a few years old at this point. Honestly the math hasn’t changed.

Then type the words “Business Strategy” into YouTube and watch a few hundred hours of videos.

You’ll pick up on the fact that the marketing industry has adopted the word strategy, but they only really do marketing strategy, not comprehensive business strategy. Some of the smart ones can create synergy with business strategy; but don’t be fooled by the marketing geniuses. Yes they are good, yes they are smart and great communicators. But they don’t know what they don’t know. Every business needs good marketing and sales to succeed. But be careful of the marketing strategists getting out of their depth. They will rarely spot weakness in finance, accounting, supply chain, operations, or applied technology. They will, however, be good to awesome with culture, communications, market intelligence, and people.

Business strategy books in general – be aware that most business strategy books are either a graduate thesis converted into a book, or written in a similar format. Business strategy books tend to be one-trick ponies – they pick a simple, easy-to-understand central theme and then spend several chapters on case studies that show real-life examples of that idea. Good examples are Blue Ocean Strategy and the Balanced Scorecard. And while each of these books is useful and educational, they suffer from confirmation bias and can be misleading. An interesting corollary is Mintzberg’s Strategy Safari, which studied and compared the business theories of the time, pointing out some of the silliness.

Be aware that the strategy industry is a business that sells strategy, and like much of business is often more focused on revenue than results. They all talk a wonderful game. Many confuse luck for skill, correlation for causation, coincidence for genius.

You’ll quickly learn that strategy, even focused on a single genre like business, is an elusive and complicated subject. Even these Quora answers are different and contradictory.

To really get good at strategy and understand it – try a strategy video game, one with conflicting priorities, time limits, and resource limitations. StarCraft and XCOM are good examples – you have to simultaneously manage budget, resources, operations, projects, and missions while trying to outthink and outmaneuver your opponents tactically, operationally, and strategically.

While the details of business are different from a military strategy video game, the problems are the same – limited time, limited money, limited people, limited talent and abilities being stretched to accomplish different goals with changing priorities as you react to changes in your environment and your competition. And often the environment is changing faster than you can change, so you learn to adapt proactively and innovate ahead of the changes when possible. You learn that action is faster than reaction, that speed kills, and that hesitation can cost you significant losses. Most importantly, you learn that all the pieces are connected, and innocent mistakes in one place can lead to disaster somewhere else.

Those video games tend to be unforgiving with Murphy’s law. Especially XCOM. You can spend days training up high-quality staff only to lose your best people to bad luck and have to start over rebuilding a team while competing for your existence.

All the other advice here on Quora is pieces of the puzzle. Once you get through the basics, read some 10-K forms, talk to any business-savvy people you can meet, read business journals, understand different business models and why they succeed or fail, and study finance, accounting, project management, change management, business law, and supply chain.

To really be good at business strategy you need to understand what all the parts are, how they work together, and what the rules and limits are. Often your limits will be people – culture, leadership, complacency, resistance to change, the desire to change too fast, laziness, and resistance to taking the time and energy to do things the best way instead of just getting by.

At the end of the day, strategy is about deciding on the best changes to make, then changing what people do and how they do it – adapting the things you can control and changing your strategy to deal with the changes the world throws at you. Notice I used the word “change” way too many times. Strategy is change. Changing what you can to make things less painful, more successful.


Hope that Helps.

Posted in Business Strategy, Strategic Science, Strategy | 5 Comments

What is a win-win strategy?

Oh fun.

TLDR – a win-win strategy is a scenario where all stakeholders in a situation get what they want, or “win”.

First, the textbook. In game theory, a win-win game is a game designed so that all participants can profit from it in one way or another. In conflict resolution, a win-win strategy is a collaborative strategy and conflict resolution process that aims to accommodate all participants.

In the real world, a win-win strategy is often found in diplomacy and business, often in the form of a contract or written agreement. It’s a deal where both sides win.

Literally, both sides win.

In business this is called giving your suppliers fair payment for their product or service, and providing your customers a quality product or service for a fair price. Everyone gets what they want. Everybody wins.

If you call a compromise a deal where nobody gets what they want, a win-win deal is simply: I figure out what you want, I tell you what I want, and we figure out how to make both things happen. That is a win for both sides, so both sides are incentivized to follow the win-win strategy.
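That textbook definition can be made concrete with a toy payoff table. This is a minimal sketch with invented numbers (the outcomes and payoffs are illustrative assumptions, not from any real negotiation): a win-win outcome is one that no other outcome beats for both players at once – in game-theory jargon, it is Pareto-optimal.

```python
# Toy illustration of a "win-win" outcome (all payoffs invented for illustration).
# Each outcome maps to (payoff_A, payoff_B).

outcomes = {
    "cooperate/cooperate": (3, 3),   # the win-win: both sides profit
    "cooperate/defect":    (0, 2),
    "defect/cooperate":    (2, 0),
    "defect/defect":       (1, 1),   # the "compromise" where nobody really wins
}

def pareto_optimal(outcomes):
    """Return outcomes that no other outcome improves on for BOTH players at once."""
    best = []
    for name, (a, b) in outcomes.items():
        dominated = any(
            (a2 >= a and b2 >= b) and (a2 > a or b2 > b)
            for other, (a2, b2) in outcomes.items() if other != name
        )
        if not dominated:
            best.append(name)
    return best

print(pareto_optimal(outcomes))  # ['cooperate/cooperate']
```

With these particular payoffs, mutual cooperation is the only outcome not dominated by another – both sides do strictly better there than anywhere else, which is exactly the "both sides win" deal described above.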

Why do I care if my opponent wins or not?

Well, that brings up the difference between a finite and an infinite game.

In a finite game, with a defined end, at the end of the game, I don’t have to face my opponents again.

Or in a viciously executed Zero Sum game, I kill or destroy my opponents, and never face them again.

In real life, pretty much everything is an infinite game. Look at the centuries of war in most places on earth. Look at the unending business competition. Even if I destroy other businesses, the people still live to compete with me another day.

If I engage in win-win strategies for my life; I decrease conflict, increase cooperation, make friends, make allies, and set both myself and my allies up for future success and cooperation.

At its worst, a Win-Win strategy is a compromise where neither of us loses.

At its best, a Win-Win strategy sets us both up for a future of cooperation and success where resources are not wasted on competition.

A win-win strategy can create a peace where the losers of war may save face and get help rebuilding.

A win-win strategy in business can help your business partners save face and create mutually profitable deals; often sacrificing short term goals in favor of long term gains.

Posted in Strategy | Leave a comment

What is the difference between a top down and bottom up design approach?

OK, vague question, so there are a few different ways to look at it:

Top Down vs Bottom up.

Already been tried here: What is the difference between top down and bottom up approaches?

From an organizational perspective:

Top down is having the leadership develop the plan and then push it down to those below them on the org chart. Basically the military model.

Bottom up is engaging the opinions and input of the lower levels of an organization, soliciting participation from all levels, and then using all that information to make more informed decisions at the top leadership level.

Top down can be quick, simple, and easy, and it usually requires significant amounts of organizational change management to execute – because the people who have to make the strategy happen have no idea what’s going on.

Bottom Up takes longer, requires analysis and synthesis, requires compromise and strong leadership to make decisions on conflicts, but is usually easier to execute if done in good faith because all levels of the organization were part of the planning process and already know who, what, why, where, how, and most importantly already have skin in the game.

From a Design Perspective – not necessarily strategy, but engineering

Top Down is saying we need a Car, or a web site, and then figuring out all the details. Starting with the “Big Picture” and then drilling down.

Bottom up is coming up with a list of specifications or requirements, and then connecting the dots, figuring out how they fit together and what the cohesive whole looks like.

Top Down is a good way to focus systems integration and design variations – you know what the end product is supposed to be. Helpful for building physical things that are hard to iterate or modify (like ships, buildings, bridges)

Bottom up is a good way to hit a minimum viable product. You may not know what the end result is supposed to be (or will be), but you start with a series of features and go from there. It’s a nice option for operational processes or software startups that just need something that works, since it’s easy to modify and add to as you go.

Strategic Example of the two different approaches when making dinner:

Top down approach could be to decide you are making chicken and rice, and then get the ingredients to make chicken and rice.

Top down approach could be to order Pizza (leaving you an option for bottom up discussion on Pizza toppings).

Bottom up approach could be to check the refrigerator, cupboards, and pantry, and cook what you find.

Bottom up approach could be to ask your roommates or family what they want for dinner and try to compromise between everyone’s ideas and what’s available.

Strategically, you are best off having a plan set up long before you get hungry, so when people start asking what’s for dinner, you simply hand them a hot plate of something they typically enjoy – i.e. Taco Tuesday, Spaghetti Wednesday. A predictable routine will help manage everyone’s expectations and make it easier to change plans proactively (you have a plan to change).

Hope that helps.

Posted in Business Strategy, Strategy | 1 Comment

Why are people from the future not time traveling to our period, assuming time travel technology is available in the future?

Let’s assume it’s physically possible to travel backwards in time…

Everyone forgets the earth is a spaceship traveling through the heavens…

Some Astrophysics 101:

The earth is orbiting the sun at a fast speed (roughly 30 km/s).

The Sun is orbiting the center of the Milky Way galaxy at an even faster speed (roughly 230 km/s).

The Milky Way galaxy is itself moving relative to the rest of the universe (measured against the cosmic microwave background) at an even faster speed…

So relative to any point in time and space in the universe – you are currently traveling at least thousands of miles every minute.

If you traveled back in time just a few seconds without also teleporting to where the earth was – i.e. actually moving through space as you go back in time – then going back just one second would leave you miles away from where you started.

Travel back in time a couple of hours or more? You’d find yourself in the exact same place, but in the past, before the earth/sun/Milky Way system got there. You’d have to wait in the dark cold of space until the earth happened to run into you (assuming your momentum doesn’t come with you when you travel back in time; if it does, the earth probably wouldn’t catch up to you, because you’d keep moving on your own).

So if you want to go back and see the first black president of the USA inaugurated – you would not only have to go back in time to early 2009, but also travel billions of miles through space to where the earth was then.

Time travel is probably the easy part, compared to calculating exactly where you need to land and actually getting there as you travel through time.
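The arithmetic above is easy to sketch. The speeds below are rounded, commonly cited approximations (assumptions on my part, not precise figures), but they show the scale of the problem:

```python
# How far does "here" move while you jump through time?
# Speeds are rounded approximations; exact values vary by source.
EARTH_ORBIT_KMS = 30   # Earth around the Sun, ~30 km/s
SUN_GALAXY_KMS = 230   # Sun around the Milky Way's center, ~230 km/s
SUN_CMB_KMS = 370      # Solar system relative to the cosmic microwave background, ~370 km/s

def displacement_km(speed_kms: float, seconds: float) -> float:
    """Distance (km) covered at speed_kms over the given duration."""
    return speed_kms * seconds

# Jump back one second: Earth's orbital motion alone leaves you ~30 km away.
print(displacement_km(EARTH_ORBIT_KMS, 1))      # 30

# Jump back one hour, measured against the CMB rest frame: ~1.3 million km.
print(displacement_km(SUN_CMB_KMS, 3600))       # 1332000

# Jump back 9 years: roughly a hundred billion km of drift.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(round(displacement_km(SUN_CMB_KMS, 9 * SECONDS_PER_YEAR) / 1e9))  # ~105 (billions of km)
```

Even under these loose assumptions, a jump of a few years requires hitting a moving target tens of billions of miles from where you started – which is the navigation problem described above.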

Posted in Math, Quora Answer, Space, Strategy | 3 Comments

What is the difference between business strategy and a business model?

TLDR – The business model is a part of the toolkit used in business strategy.

Is a business model the same thing as a business strategy? Yes and no.

Business models are how you make money and how the business works. Not just “I’m going to open a coffee shop that sells to such-and-such demographics.” A business model is basically developing a cost-curve financial model based on everything you want to do, and comparing those costs against whether you can make enough money to cover them. So using the coffee shop example, you figure out your fixed costs like rent and salaried labor (how many people, and what do they do?), insurance, licenses, and financing; semi-variable costs like coffee machines, coffee filters, utilities, and hourly labor; and variable costs like cups, sugar, flavoring, coffee, etc.

First, you have to throw out the indoor waterfall idea because you can’t afford it. Then you add all the rest up, and wow, it costs like $10,000 a month to run your coffee shop before you ever sell a cup of coffee. So you figure out that the fancy flavored, sugar-and-cream-filled coffee that is popular sells for about $4–$8 a cup, and that each cup costs you roughly $1 in variable costs. Given that, you have to sell about 4,000 cups of coffee a month, or 1,000 cups a week, or roughly 150 cups a day – with a staff of you and 3 part-time people working 30-hour weeks – just to break even. That model also assumes you are living in your parents’ basement. To really make a profit you want to sell closer to 200–300 cups of coffee a day. Then you gotta figure out how to sell 300 cups of coffee a day, how to compete with other coffee shops, etc.
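That back-of-the-envelope math can be written down as a tiny break-even model. These are the illustrative numbers from the coffee-shop story, not real market data; note that ~4,000 cups a month implies a contribution margin of about $2.50 per cup, which with the $1 variable cost corresponds to an assumed average price near $3.50:

```python
# Break-even sketch for the hypothetical coffee shop (illustrative numbers only).

def break_even_units(fixed_costs: float, price: float, variable_cost: float) -> float:
    """Units to sell so the contribution margin (price - variable cost) covers fixed costs."""
    margin = price - variable_cost
    if margin <= 0:
        raise ValueError("price must exceed variable cost to ever break even")
    return fixed_costs / margin

monthly_fixed = 10_000   # rent, salaries, insurance, licenses, financing ($/month)
price_per_cup = 3.50     # assumed average price per cup
cost_per_cup = 1.00      # cups, sugar, flavoring, coffee ($/cup)

cups_per_month = break_even_units(monthly_fixed, price_per_cup, cost_per_cup)
print(round(cups_per_month))       # 4000 cups a month just to break even
print(round(cups_per_month / 30))  # ~133 cups a day
```

The same function answers the profit question too: targeting, say, $5,000 a month of profit just means passing `monthly_fixed + 5_000` as the fixed-cost figure.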

Now, semantically speaking – the above is a financial model as much as a business model. The complete business model, from a business school perspective, involves how you sell coffee, why you sell coffee, target customers, business culture, and how you organize your operations. All coffee shops share effectively the same basic financial model, but may have significant differences in “business model” – different culture, branding, operations, logistics, pricing, marketing, advertising, financial controls, training, HR policies, and recruiting and hiring practices. Some sell merchandise, some are bookstores, some have poetry readings or live music, some serve food, some are part of a larger business, or at your library.

Many people will argue that the cost curve has little to do with the parts of the business model that they focus on. I will tell you: without a cost curve and a financial business model, it’s really hard to get a business loan or investment money. You need the financial half of the business model (which is built on organization, operations, and pricing to estimate cost and revenue); the other half of the business model is how you expect to sell.

The cost curve – the financial summary of the business model – is the necessary part that tells you the numbers that constrain the operational, cultural, business development, marketing, and advertising choices. It gives you your realistic budget and resource limits.

At its base the Business Model is understanding what business you are in, and how you will make money. Doing that without a cost curve is certainly possible, but risky. The literature on Business Models gets really wonky and philosophical, with hundreds of definitions that can, will, and do include many parts of business strategy.

So what is business strategy?

That’s a larger subject, with even more definitions, hundreds of books and opinions on the subject. At the national meeting of the Association for Strategic Planning, I can ask a dozen strategy professionals for the definition of strategy, and get 30 or 40 different answers in a single lunch.

At the time of writing (2017), Wikipedia does not have an entry for business strategy – it says see strategic management, strategic planning, etc.

Business strategy is knowing how to execute and adapt your business model and financial realities in a constantly changing world. Business strategy is how you grow and change your business. Business strategy is what do you do when your business model isn’t working? Business strategy is adding additional business models that work in parallel with your core business model.

Maybe your Coffee shop adds a catering service or starts selling used books. Maybe you have lots of people hanging out with personal electronics so you add charging stations. Maybe you have too many people hanging out so you remove the charging stations. The best trick I ever saw for coffee shops was to give away free coffee on opening day, and pay people to show up and make it look busy the first week – tricks everyone into going to the really popular coffee shop. But that only works if you are in a high traffic area where people will notice. Every tactic and strategy has its time and place.

Business strategy is a cycle: gathering information about your business and how it interacts with the bigger picture and the future; making plans to adapt your business model and survive in that future; and actually making those changes to your business model happen – plus the additional changes and adaptations you were not expecting when you made the plan. You keep going through that cycle in one way or another, formal or informal, until you retire or go out of business.

Business strategy is the changes you make today to get to a better tomorrow.

Business strategy is a fancy way of saying that your business will either evolve or die.

Your changing business model is simply a tool that is a part of your evolving business strategy.

Posted in Business Strategy, Strategy | Leave a comment

Why is it that governments (or large organizations) rarely do anything intelligent?

This is a strategic exercise in understanding how large groups of people operate.

TLDR: You try getting a large group of people to agree on anything while trying to figure out how to pay for it while keeping them all out of trouble. If you can get three strangers to agree on Pizza toppings I’ll be impressed.

Basically, the government is spending most of its time simply trying to prevent things from getting worse. Ask any parent and they can explain in detail.

One important point: it’s like being a tightrope walker – you have to stay balanced in every decision, or you risk leaning too far to the left or right and falling.

How to explain the absurd complexity of governing a country?

  • First, you have to read Plato’s Republic – the 3 failings of humans:
    • Humans are greedy
    • Humans make mistakes (are incompetent)
    • Humans abuse power

This is why most modern governments are built on a system of checks and balances. You have to spread out the power, greed, and mistakes in a balanced fashion that allows the greed of one group to balance out the abuse of power of another group that corrects the mistakes of a third group that is also limiting the greed of the first group.

You have to build the systems of government to be stable and effective in light of the fact that the majority of the people running the government are human – they will experience greed, make mistakes, and abuse their power.

Then keep in mind that the human citizens are greedy, incompetent, and abuse power. Regardless of the form of government, you still have a bunch of greedy, incompetent, power abusers trying to maintain power, maintain order, and keep the whole thing going while trying to deal with millions of greedy, incompetent, and power abusing citizens.

That’s basically why there are over a dozen civil wars going on at any given time.

If you live in a place with no civil war, no soldiers on the streets, and no obvious corruption, where you can trust the law enforcement and leave your home without fearing for your safety – you are in the minority of the human population.

Creating law and order is crazy hard, and requires basically everybody to cooperate at some level. Which is why the brute force/iron fist approach is so popular – it’s simple and crudely effective, very easy to understand.

But if you want things like freedom, liberty, independence, not having people with control issues telling you how to live your life –

  • Have you ever gotten 3 people to agree on Pizza Toppings? How about 3 million, or 30 million, or 300 million?
  • Did you understand your high school chemistry teacher? Do you understand that everything you forgot from school is actually relevant to how the world works and needs to be understood to effectively run an industrialized 21st century nation?
  • Why do intelligent people hide their intelligence?
  • Would you rather be a billionaire with vacations and privacy, or a politician with zero privacy who spends most of their time asking people for money or apologizing for things they can’t control?
  • Politicians use the term “true facts” because they deal with statistics, lawyers, and biased presentation of information. They cannot tell the difference between scientific fact and well-argued lies. Nobody can be an expert at everything.
  • Based on GDP – the United States controls 25% of the world’s wealth with only 4% of its population. We are the richest nation on earth by most measurements, and the US still has problems with poverty and hunger.
    • There is more demand than supply. More people than resources. Even in the magical 21st century where there is enough food for everyone, we can’t figure out how to feed the hungry, and we throw away lots of food, much after it spoils.
    • How do you deal with the simple fact that there is a scarcity of resources? More people than stuff to take care of them.
    • How do you decide who gets wealth and who doesn’t when there simply is not enough to go around?
  • Then keep in mind that at any given time a portion of the population is actively engaged in criminal activity, some out of aggression, some out of desperate need – how do you deal with that? More resources you don’t have, spent on people you don’t have the resources to properly help – but you still have to protect the common good.

Think about how many mistakes the average person makes in a given day.

Do you get 8 hours of sleep, drink 8 glasses of water a day, eat 7 servings of fruits and vegetables, eat meat, whole grains every day? Do you never eat sugar, preservatives, or any unhealthy foods? Can you drink only one drink of alcohol per day? Do you ever speed? Are you ever late for anything? Do you procrastinate? Do you have more things to do than time to do it? Do you watch TV instead of reading a book or doing chores? Do you get exercise every day? Do you pay all your bills on time? Do you have credit card debt spent on things sitting rarely used in your home or garage? When was the last time you were tired / hungry / in pain and said something stupid that turned into an argument? When was the last time you brushed your teeth? Flossed? When was the last time you had 2 bills and only enough money to pay for one of them?

In its most basic form, a government is a delegation of responsibility to a small portion of a larger group of people, trusted to maintain stability and the common good. If you consider that everyone is making a few hundred mistakes per day, and add that up for any number of people in a group – honestly, I’m surprised that we do as well as we do.

Posted in Business Strategy, Geopolitics, Military Strategy, Strategy | Leave a comment

Who would win between Grand Admiral Thrawn with 6 Imperial Star Destroyers vs Grand Moff Tarkin with the first Death Star?

This is a guilty pleasure.  Yes, I have read far too many books, am a Star Wars fan, and spend too much time on Quora.

Honestly, this is just an illustrative exercise in assessing and investigating scenarios and options.  At first glance, it would seem that anything vs the Death Star is no contest.

But apply a little bit of Sun Tzu: run the numbers, look at lines of sight, assess the leadership and how they do what they do.

It’s mostly Sun Tzu Chapter 1.

Enjoy.  The first draft was on Quora a while back:

Who would win between Grand Admiral Thrawn with 6 Imperial Star Destroyers vs Grand Moff Tarkin with the first Death Star?

First, this is Spot on – Matthew Jiang’s answer to Who would win between Grand Admiral Thrawn with 6 Imperial Star Destroyers vs Grand Moff Tarkin with the first Death Star?

TLDR: The Empire’s best military strategist (Thrawn) can beat the Empire’s best military Governor (Tarkin). But he will have to use every dirty trick and unconventional tactic and tool at his disposal, because a stand-up fight against the Death Star is suicide.

You could go into a pretty rigorous debate between legends and the Disney canon.

But the net assessment is pretty simple:

Grand Moff Tarkin:

The Emperor’s Governor of Governors. The highest-ranking political leader under the Emperor.

A reasonable tactician and strategist, as shown in the movies and the Clone Wars TV series.

But Tarkin was put into his position of power because he was ruthlessly loyal to Palpatine, believed in the Empire, and was able to control Anakin / Vader. Tarkin is reliable, loyal, has no love for the Jedi, and gets results.

But even in the Clone Wars series, it was established that Tarkin was not as good at tactics or strategy as Anakin. Tarkin was, however, very good at politics, deceit, and manipulation – which is why he was a senior leader in Palpatine’s Empire.

Grand Admiral Thrawn:

In Star Wars Legends, Thrawn is remarkably smart, resourceful, and creative – he takes on and beats Jedi. But he is older and more experienced there than he is in Disney’s Rebels.

In the Disney Rebels show (recent Disney canon), Thrawn is younger and less experienced, but is still characterized as a military genius, superior soldier, and hand-to-hand combatant, and is called in as a “ringer” to help Tarkin accomplish feats that Tarkin and his admirals are unable to do themselves. The Emperor sent Thrawn to end the rebellion that Tarkin couldn’t handle without help.

While Tarkin was the Emperor’s Grand Moff, the Emperor made Thrawn his Grand Admiral – the Admiral of Admirals.

Thrawn is like a Hermann Balck – he doesn’t lose battles, and easily defeats larger forces than his own.

Death Star:

Planet Smashing Super Laser

15,000 Turbolasers, 768 Tractor Beams, 2500 Laser Cannons, 2500 Ion Cannons…

Plus hundreds of Fighters, hundreds of small support ships, and 1–2 million people at any given time. But only 26,000 Storm Troopers.

Death Star can only fire like one full power shot a day, maybe a few lower power shots.

For the sake of argument, let’s say the Death Star can fire 6 shots in a matter of minutes capable of destroying all 6 of Thrawn’s Star Destroyers. Makes it more interesting.

Star Destroyers:

You could argue ISD 1’s vs ISD 2’s; Disney canon vs Legends.

But basically, the Star Destroyers have a hundred-plus weapons, 72 fighters each, and just shy of 50,000 people, including 9,000-plus Stormtroopers.

Technically 6 X 9,000 = 54,000 – Thrawn has twice the number of Storm Troopers with his 6 Star Destroyers.

By Comparison, the Star Destroyers are small, fast, and nimble.

The Battle –

The top political leader in the Death Star against the top admiral with 6 Star Destroyers?

1 – If the Death Star only gets one shot and then has to recharge, and Thrawn can find Erso’s weakness – Matthew Jiang’s answer to Who would win between Grand Admiral Thrawn with 6 Imperial Star Destroyers vs Grand Moff Tarkin with the first Death Star?

2 – Let’s go hard for Thrawn:


  • Death Star is able to fire 6 shots quickly to destroy Star Destroyers in range, and will not miss (Honestly the Star Destroyers should get a dodge).
  • Thrawn does not find Erso’s weakness.

What is a Grand Admiral to do to stop the Mad Moff with a Death Star?

Don’t get in range (duh). The Star Destroyers have faster hyperdrives, so you can simply avoid the Death Star indefinitely.

  • Sneak some nuclear warheads onto the Death Star and cripple its power supplies. Then it’s an easy fight if the Death Star can’t power its shields or weapons.
  • Have human spies sabotage the power supply, then take it by force.
  • Have droid spies sabotage the power supply, then take it by force.
  • Have Slicers (Hackers) sabotage the power supply, life support, weapons, etc, and take it by force.
  • Have Saboteurs hyperspace jump the Death Star into a planet, moon, or star.
  • Have Assassins kill Grand Moff Tarkin.

But that’s boring… What? HOW? The Death Star has over a MILLION people on it at any given time. A million people consume plenty of food, clothing, etc. There has to be constant cargo traffic to sustain a city of a million people living in the Death Star, and that gives Thrawn an opening to use asymmetrical warfare.

Conventional Assault –

The Death Star is HUGE. Given its size and shape, it’s pretty reasonable to assume that only 1/2 of its weapons could be brought to bear at any given angle. And the closer you get, the curve of the sphere works against the defenders.

Given tower height and the curvature of the surface, and assuming the weapons are essentially evenly distributed across the Death Star, once you are on the surface only a few hundred of those towers can actually see and engage you at any given time.

(Math – with an 80 km radius, the surface area is about 80,000 square km. With 20,000 weapons, that’s roughly one weapon per 4 sq km, or roughly a weapon every 2 kilometers. Depending on what you guess for tower height, for something the size of a Star Destroyer VERY close to the surface, roughly every tower within 25 km will have a line of sight to engage. A 25 km radius circle on the surface of the Death Star will contain around 400–500 towers – firepower roughly comparable to 2 Star Destroyers.)

So on the surface, at “point blank range” a force of Star Destroyers is roughly equal in firepower and has advantages in mobility and shields in that position.
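These back-of-the-envelope numbers are easy to check. A minimal sketch, treating the Death Star as a smooth 80 km-radius sphere with its ~20,000 weapons spread evenly, and assuming a 25 km line-of-sight radius for a ship hugging the surface (the 25 km figure is the post’s guess, not a derived value):

```python
import math

R = 80.0                              # Death Star radius in km (first Death Star, ~160 km across)
weapons = 15_000 + 2_500 + 2_500      # turbolasers + laser cannons + ion cannons = 20,000

surface = 4 * math.pi * R**2          # sphere surface area, about 80,425 km^2
km2_per_weapon = surface / weapons    # about 4 km^2 of surface per weapon tower

los_radius = 25.0                     # assumed line-of-sight radius near the surface, km
towers_in_range = math.pi * los_radius**2 / km2_per_weapon

print(f"surface area: {surface:,.0f} km^2")
print(f"one weapon per {km2_per_weapon:.1f} km^2")
print(f"towers able to engage: {towers_in_range:.0f}")
```

The answer lands just under 500 towers, which is where the “roughly comparable to 2 Star Destroyers” claim comes from.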

Granted, that’s still a problem. But if I jump a couple of Star Destroyers to within a mile of the surface and quickly land – now I have an exploitable advantage: I’m below the main shields and can easily engage the towers, using Star Destroyers like big tanks hovering across the surface of the Death Star.

Thrawn is known for highly precise hyperspace jumps and micro-jumps.

So Thrawn would basically use scouts and spies to aim a hyperspace jump behind the Death Star, where the big gun is worthless; land his ships on the surface; and essentially start an invasion, deploying ground troops and mechanized units against power systems and life support where a minimum of Death Star weapons can be brought to bear. Thrawn can take advantage of having twice the infantry, and much more mobile heavy cannons.

Unconventional / Mixed Strategies – After Reading too many Star Wars Legends books

Given the size and maneuverability of the Death Star, I’d wait for it to be in a predictable spot, and use mile-long Star Destroyers to pull in the largest asteroids or bombs I could come up with – delivering them with precise close-range hyperspace jumps that the Death Star cannot dodge. How do you miss a target 160 kilometers wide?

Honestly given the resources of 6 Star Destroyers, I would seriously raid some systems and gather additional resources for a few weeks to set up a battle I can win. One Star Destroyer can control a planet. With 6? I can hold multiple Star Systems at Gunpoint and take whatever I want.

Get ahold of as many explosives, ships, loaded fuel tankers, asteroids, and nuclear weapons as I can find.

And hire as many pirates and mercenaries as I can find. Maybe even make a deal with the Rebel Alliance if it suits my needs. Or simply empty the nearest imperial prisons and arm the criminals as irregulars against the Death Star.

While gathering planet killing asteroids, have spies, saboteurs, hackers, droids, and assassins independently infiltrate the Death Star to use software hacks and physical sabotage to disable, disrupt, and / or destroy the Death Star’s command and control systems, life support, power supply, engines, propulsion, food supply, water supply. They can use poison, biological weapons, chemical weapons, anything they can sneak on board. Spread any available plague.

Imagine those little mouse-sounding black cart droids spreading plague in all the cafeterias….

And see if some assassins get lucky and manage to kill Tarkin.

Based on schedule, signals, scouts, recon – we can determine when the Death Star is having problems from sabotage and is a sitting duck:

  • Only Jump to the Backside where the Super Weapon can’t get you.
  • Ram it with as many fuel tankers and cargo ships filled with explosives as you can find – hopefully hitting the Death Star basically while still in hyperspace.
  • Ram it with as many asteroids and comets as you can tow through hyperspace. Or improvise hyperspace engines onto asteroids and jump them into the Death Star.
  • Basically bombard the back side of the Death Star as hard as you can with as much as you can find – the point being to soften up its defenses.

Then when you run out of things to Ram the death star with –

  • Give immediate special forces support to sabotage efforts to bring down the remaining power for shields and weapons (just like Ezra and Sabine did to Thrawn’s Interdictor in SW Rebels).
    • Also, have teams deploying additional biological and chemical weapons into the air and water systems of the Death Star.
  • As soon as you get enough of an opening of confusion, surface damage, diminished weapons and shielding-
    • Precise jump super close to the Death Star with half your Star Destroyers (3), keeping 3 in reserve. Also, bring in any other additional ships available, and have them directly support the assault.
      • Have the irregular force of Pirates, Mercs, and Rebels assault / invade the Death Star under cover of Star Destroyers.
      • Also in Thrawn style see if there are any number of dangerous predators that you can find, quickly enhanced with some cybernetics, and release on a few hundred thousand unsuspecting and unarmed Death Star workers away from the main fighting. Even a few packs of hungry wolves dropped by droid shuttles roaming the halls could do well to add to the chaos.
    • Land Assault force in best available position in the Death Star, where the surface defenses are the weakest after the bombardment.
    • Set up Fighter cover for the invasion force
  • Have one Star Destroyer modified as a massive Space Tug, or a Ship or giant bundle of Engines modified to essentially bolt onto the Death Star, and push it using the hyperspace engines into the nearest planet/moon.
  • If you can get tractor beams large enough, mount them on a nearby planet, moon, comet, or large asteroid, and use them to pull the Death Star and the big rock into each other.

The goal here is basically to make the Death Star a no man’s land that everyone on board wants to leave. Use a Golden Bridge strategy – attack one side with punishing force, sabotage every life-sustaining system, and make the rats desert the sinking ship by giving them a safe place to escape. Let them use all those shuttles, support craft, and escape pods to leave. More confusion, and rational choices disrupting the chain of command.

If your buddy got Ebola yesterday, you are under emergency lighting because the power is out, you felt a bunch of earthquakes, all the Stormtroopers are being shuttled to the far side of the DS station, and there are reports of predatory animals and droids attacking people. Would you just hang tight and follow orders? Or would you grab your friends, find the nearest shuttle, and get out of there?

Well, you’d probably hang tight until you find out the Death Star lost engine power and is now on a collision course with the Planet it was orbiting for resupply, and the bulkheads are on lockdown because primary life support is out, and you have been ordered to use an oxygen mask. And oh yeah, the functioning air systems are filling up with smoke and nerve gas. Then you may be willing to risk disobeying orders and abandoning ship.

And while all that confusion and battle is taking place, either overload the main reactor to make the damn thing explode, crash it into the nearest rock bigger than it, or hit it with enough asteroids and comets that it breaks apart, or set off enough nukes inside it to make it a radioactive slag.

All while keeping 3 Star Destroyers in reserve, as a quick reaction force, because odds are Tarkin can pull some tricks and call in some favors, and he’ll have help within hours.

Hopefully, by the time Tarkin’s reinforcements arrive, they will see a powerless, burning, disease-filled hulk swarming with ships; crashing into the nearest planet while Rebel, Pirate, and Tarkin loyalist infantry try to figure out how to get off the cursed thing before it kills them.

Thrawn may have to sacrifice a few Star Destroyers to get the job done and push the Death Star to its death (Moving slowly unless they have some Star Trek or Star Gate Engineers to rig the hyperdrives to do something Sci-Fi to speed it up). Worth the price.

Done right, Thrawn could have the Death Star and the Rebel Alliance destroy each other while he makes the killing blow to each.

“No, you see the exhaust port is too difficult for fighters to hit. We’ll soften up the Death Star with some sabotage and kamikaze attacks, then you need to land an army on the surface, fight your way to Erso’s reactor weakness, and plant demolition charges manually.  Send your best troops, I’ll provide you with maps and schematics, and 50,000 Storm Troopers. We’ll cover you with the fleet.”

To quote Patton – “Fixed fortifications are monuments to the stupidity of man.”

Hope you had as much fun as I did.


Posted in Military Strategy, Space, Strategy | Leave a comment

Can you be a strategist and a tactician at the same time?


Strategy – Longer Term – Bigger picture

Tactics – Shorter Term, immediate, smaller picture

But it’s a yin-yang sort of thing.

Can you win today while considering how that impacts tomorrow?

Can you consider big-picture, long-term consequences and positioning while getting through immediate problems and challenges?

That is really called operational art in military circles. Good combat officers have to do that. Balance the urgent with the important.

Good example – the invasion of Iraq in 2003.

The strategic objective was taking Baghdad in 72 hours. Why? Because that’s how long you can fight without sleep. That was the time limit before they had to take a break.

The Army showed strategic brilliance. Every time an Army unit was attacked, they called in air support to handle it and kept moving to Baghdad.

Many Marine units showed tactical brilliance – every time they were attacked, they destroyed the enemy. But they got bogged down, and many were late to Baghdad.

So one can mess up the other. It’s a matter of priorities.

Balancing strategic priorities vs tactical urgency is always difficult. But it’s just a skill like any other.

Hope that Helps

Posted in Quora Answer, Strategy Breadcrumbs | Leave a comment

Why Mars? Elon Musk’s Strategy

Why Mars?  What is Elon Musk’s Strategy?


Hey guys.  I actually went over this a few times with colleagues over the  summer, and started writing this right before Elon Musk started his Mars PR campaign in September.  Just missed the chance to appear prophetic.

As it is this is still a fun topic.

So Ted,  Why Mars?

A Problem 

First we need to go back about 20 years.  Hang with me a sec – I’ll keep it to the point.  In the 20th century there were basically two ways to get anything over a few hundred pounds into space: very big rockets or the Space Shuttle.  Rockets were a little less expensive than the Space Shuttle and didn’t need astronauts, but didn’t carry as much weight.

To get an idea of what rockets were like before Mr. Musk got into the business, there were really only 3 heavy lift options by 2000: the Atlas rocket, first built in 1957 by Martin (to carry nuclear bombs); the Delta rocket, which began as the Thor ICBM in 1957, built by Douglas (also to carry nuclear bombs); and the Space Shuttle, built in 1981 by Rockwell.  It was all government money; cost didn’t matter.  The rockets were big, expensive, single-use disposable products.  By the late 1990’s Martin Marietta was owned by Lockheed Martin, and Boeing owned McDonnell Douglas and Rockwell.  Notice all the names – what was 7 aerospace companies became 2.  Rockets were not the best business to be in.  By 2006 Boeing and Lockheed combined the Atlas and Delta programs into a single company called United Launch Alliance.

And the whole time, it cost you a few hundred million dollars – for what amounted to a souped-up 1950’s rocket, sometimes using Russian-made engines – to put something into space in the United States.  There are some other rockets out there, but we are keeping this simple and focusing on what NASA uses.

An Opportunity

So in 2001, Elon Musk, a tech entrepreneur with degrees in physics and economics, was getting a payday from the companies he had built in the 90’s, namely PayPal, and was pretty well connected.

If you are a well respected entrepreneur with access to investors, over a hundred million dollars of your own money, and happen to be educated as a physicist, you get to chase your dreams and dream big.

Elon Musk speaks in terms of existential risk.  That is, what is threatening the existence of the human race, and what can you do about it?  Also keep in mind this was not long after all those disaster movies where asteroids hit the earth.

So the big dream – colonize other planets, get people living somewhere other than earth so we have options.  In 2001 all Musk was talking about was putting a robot greenhouse on Mars to see if we could grow stuff in Martian soil and also generate interest in space exploration.  He talked around, looked at options, got laughed out of Russia, twice.

Elon Musk ran into the problem we started with – the big rockets were all based on 1950’s cold war technology, were very expensive (a few hundred million $ per rocket), single use, disposable, hard to get access to even if you had the money, and required an army of people to build and launch.

Frustrated, he did what any physicist, economist, technology entrepreneur does.  He ran the numbers, did some calcs, and the more he looked at it, the more he figured he could drop the price by a factor of ten, and make good money launching rockets for $60 million a launch. 

So now we are going to steal from project management theory – run a backwards pass (start at the end and work your way to the beginning) of the basic logic that created the strategy that Elon Musk has been running with for 15 years now:

Backwards pass – Mars Strategy:

  • Problem – Colonize Space
  • Solution – Try Mars first, it’s cooler than going to the moon (been there, done that, not to mention the pop culture fascination with Mars).
  • Problem – Getting to Mars just once is way too expensive – just the required heavy lift rocket is a few hundred million dollars. And you’ll need lots of rockets.
  • Solution (2016) – Innovate heavy lift rockets that cost ten times less.  This is accomplished through cutting-edge software, automation, modular design, new technology, new materials and manufacturing techniques, and reusing as much of the rockets as possible. (Watch first ever rocket landing video, 2:05 mark.)  SpaceX will put a payload into space for $57 million.  ULA’s price point is $200 million after a cost savings campaign.
  • Problem – You need to be good at the rocket business before you can compete with United Launch Alliance, NASA, and anybody else.
  • Solution (2010)- Build a competitive rocket business first.
  • Problem – You need to learn to crawl before you walk, let alone run.
  • Solution (2006) – Start small with a prototype entry-level rocket (Falcon 1), using $100M of your own money.  Prove the basics before you scale up.  And innovate as much as you can, knowing what you need for that heavy lift rocket down the road.
  • Problem – It takes a lot of money to get into the rocket business.
  • Solution (2002) – Find a wealthy and convincing tech entrepreneur to sell a dream.  

Elon Musk’s schwerpunkt has always been to get to Mars.  The rest of the strategy was about making that happen.  While SpaceX has been breaking records and doing things for the first time ever, NASA – basically controlled by Congress – retired the old Shuttle program and had no rockets to call its own; and United Launch Alliance, motivated by profit, is downsizing, restructuring, and trying to figure out how to keep up.

Because SpaceX started in 2002, was fully funded in 2006, only launched 5 basic Falcon 1 rockets from 2006 to 2009, and spent the last 7 years simply building and innovating a low cost reusable rocket business with commercial and government contracts, including NASA and Air Force contracts…  Everyone seems to forget that 15 years ago Elon Musk went into the rocket business because he wanted to colonize Mars.

That is, until he reminded us in a big way last month – probably because he’s at the point where he needs to start generating interest and fundraising to take it to the next logical progression of his multi-decade campaign to put people on Mars.  If you take a look at his speaking tour the last couple of weeks, that looks to be exactly what he is doing.

Now, anybody familiar with multibillion-dollar engineering projects already knows it will take longer, be harder, and cost more than what Elon Musk is advertising.  As is evidenced by the explosion they had recently.

Every Astrophysicist will tell you that nobody has figured out good solutions to Mars colony challenges… Problems like deadly radiation, electrostatic dust sticking to everything, long duration equipment, long term health effects of half earth gravity…  There’s lots of science and engineering yet to be done.

But the Wall Street Analysts will tell you that SpaceX is making a healthy profit, beating its competitors (United Launch Alliance, NASA, the Russians, really everyone), and it’s only a matter of time until SpaceX solves the technical problems, makes it affordable, and raises the money to do it.

And if you want the strategic science lesson – it’s that a startup trying to put people on Mars is better at building and launching rockets than government run NASA and profit driven ULA. It’s why business schools teach that mission and vision statement stuff – because chasing dreams tends to yield better results, better structure, better culture than chasing votes or money.


All images thanks to the SpaceX and ULA websites
Posted in Business Strategy, Space, Strategy | 3 Comments

Is the US uninvadable?

So kids, this is something I killed a morning with on Quora back in April – an excuse to write about a scenario I’ve obviously been thinking about for a long time.  After a few months it’s at 161,500 views, so I figure it’s worth sharing on my blog that gets a handful of hits every day.  Enjoy!

Is the US uninvadable?

Ted Galpin, Astrophysicist cheerleader turned professional strategist

No.  If you think like a ruthless military strategist, and don’t mind civilian casualties it’s VERY doable.  Read the history of Sun Tzu and how he made his name.  Here’s how I would do it.  You have to go unconventional.

From a conventional standpoint – the basics have been covered by other answers well enough. Even without a military, the United States has a pretty well-armed civilian population, a HUGE amount of territory, and hundreds of millions of people.  We can raise a militia of 50 million well-armed men very easily.

Invade, occupy, conquer the US?  Unlikely in the conventional sense.

But you could raid, sack, and burn down the US and take a lot if you had the resources of a major nation (Russia, China, India, Pakistan, England, France, maybe Israel or Iran).

Step 1 – Q Ships, Light ballistic Missiles, Nuclear Electromagnetic Pulse.

A Q-ship is technically a WWI thing – really just an innocent-looking freighter or civilian ship with enough concealed weapons to surprise and destroy pirates and submarines attacking a transport convoy.

So first you get some really innocent-looking freighters – there are thousands of them today.  You put some old school SCUDs, or other relatively simple ballistic missile systems, in the hold, pointed up.

You arm them with the best atomic warheads you can get a hold of.

You get as close to the US shores as you like – and you fire those missiles up to high altitude above the US – probably multiple times from multiple ships, ideally east coast, west coast, Gulf of Mexico, and Great Lakes.

Nuclear electromagnetic pulse

The idea is to make lots of those red circles over as many cities as you can. And a couple of orange and yellow ones for good measure – say 10 or 20 missiles shot up to a high altitude of 20 miles above major population centers. Plus a few higher and wider, if not as strong, just for good measure.
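The reason a burst only 20 miles up blankets so much ground is simple line-of-sight geometry. A minimal sketch (the smooth-Earth horizon formula and the 6,371 km Earth radius are standard figures, not from the original post):

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km

def ground_radius_km(burst_alt_km: float) -> float:
    """Distance to the horizon from a given burst altitude (smooth-Earth approximation)."""
    return math.sqrt(2 * R_EARTH * burst_alt_km + burst_alt_km**2)

# ~20 miles (32 km), plus two higher bursts "for good measure"
for alt in (32, 100, 400):
    print(f"burst at {alt:>3} km -> line of sight out to ~{ground_radius_km(alt):,.0f} km")
```

Even the 32 km burst has line of sight to a circle over 600 km in radius, which is why a handful of warheads could, in principle, cover most of the continental US.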

You could do the same from Disguised satellites as well, but it would be much harder.

Result of Step 1: nuclear electromagnetic pulse (EMP)

Without entering US territory, with a handful of freighters launching nuclear warheads detonated high in the atmosphere, you can saturate the US with electromagnetic pulses.

So What?  Well, given the state of the US infrastructure, as I understand it from my career engineering it:

  • All consumer electronics will be dead.  Unshielded transistors get fried by an EMP.
    • Your car won’t start – electronic ignitions have been standard for decades.
    • No Cell phones
    • No Internet
    • No TV
    • No Refrigeration – food will start to spoil in days
    • No Microwaves or electric stoves. Really hard to cook food, unless you have a grill on your porch.
  • All US infrastructure will be damaged and/or inoperable.
    • Limited commerce / money.  Most of us use credit and debit cards.  Without internet, you are stuck with the cash in your pocket.
    • No Electricity – the power plant control, transmission, and distribution systems are not shielded, and it would take weeks or months to get the electricity back on; after undamaged parts are imported from outside the EMP area.
    • No / Limited Water – because the water supply is pumped.  If the pumps have no power, your faucet won’t work.
    • No / Limited Natural Gas – Same pump issue as the water.
    • Limited Fuel – all the electric pumps are dead, so you have to know what you are doing to simply get gas from a gas station, if they let you use tools on their pumps.  Even if you have the cash, and a 40 year old car, would they let you siphon from the main tank?
    • Food supply
      • No cold storage – fresh food goes bad in days
      • No vehicles – can’t transport food supplies
      • Industrial farms and agriculture need  electricity and water.  What happens to the feed lots, industrial dairy, chicken factories, irrigated fields?

So imagine waking up one morning, and everything is silent.  No Cars.  No Heat.  No Air Conditioning.  No Phone.  Your Cell phone is dead.  No radio or TV.  Nothing turns on.  Even your watch stopped, and your LED Flashlight won’t work.  The food in your freezer is starting to warm up, you can’t take a shower.

And you have no idea why because all communications are out.

And the only vehicles that work are kick-start motorcycles and some cars and trucks – the older, the better.  All tests that have been done in the last ten years show that some vehicles die, some don’t, some malfunction.  The less electronics in the car, the better the odds it will survive.

Now multiply those problems by 300 million people across the United States.

At best, you have a humanitarian nightmare, because nobody knows what is going on, and our technology and tools have been reduced mostly to the dark ages.  The only things that still work reliably are guns and fire.  And I will bet that looting and crime will be a huge problem.  You can’t call for help.

Consider that after a few days, 300 million people will be confused, desperate, dehydrated, starving and cold in the dark on foot with no way to know what is going on or if help is coming.

And the military will have the same problems.  The Navy ships will be ok – but a few hundred ships need to get here first, and can only do so much, near the coasts.  The cold war era weapons were pretty well shielded from EMP – so most military trucks, Humvees, some of the helicopters, some of the jets, will still work.

But the electricity and water problem is true for the military – as most bases use city electricity and water (They have published studies).  Most of the military battle networks, digital communications, GPS, lasers, smart weapons, Night vision, digital cameras, won’t work.  The Army will basically be stuck with 1980’s technology.  They say some of the digital stuff is shielded enough, so the Army may still have some laptops and digital radios.  But the infrastructure to support 21st century battle space awareness and communications – will be limited.   They will still have some magic, but very limited compared to what they are used to.  Don’t expect to see many drones.

So basically, Step 1 is to find a deniable method of EMP attack on the continental US.  The result is a massive humanitarian disaster, with the US plunged into the dark ages, and a military that has its hands full simply trying to take care of its own and provide as much humanitarian support as it can, while crime, looting, or scavenging becomes necessary for most of us to simply eat.  Not to mention the healthcare, disease, and sanitation problems.

So the entire US population is facing a humanitarian nightmare, and the military has additional logistics, supply, and communications problems on top of that.

Step 2 – If you are organized and prepared, step 2 is literally Sun Tzu 101.

You can play hard or soft.  But the recommended course of action would be to take advantage of the chaos and take whatever you want.  You can sack and burn down cities, steal whatever you like, and simply avoid confrontation and staged battles with an otherwise distracted military.

If I were running the invasion, I’d basically send out commando raids to start burning down every major city, as quietly as possible (make it look like criminals), so the Army isn’t looking for me.  You want to burn every soft civilian target available, live off American supplies, and quietly make things as hard as possible on the civilians and military without letting them know you are there.

And then you are talking dark ages warfare.  Spread disease, use chemical weapons, burn down the cities, burn the crops, destroy or poison the food and water supply (as simple as E. coli in some cases).  Steal everything you can.  Even a few thousand troops operating off of Q-ships, with follow-up support from military ships, maybe air-dropped troops just to cause problems.

In a month you could have every major US city in flames, with the civilian population starving to death, thirsty, and cold in the dark; and a military overwhelmed by too many problems, and probably not enough food even for the military.

Step 3 – Wait.

If you really do it right, you don’t let them know who did it.  Even better, populate your commando teams with third world mercenaries and terrorists…  You have decent odds at starting the dark ages, and covering the hillsides with third world terrorist arsonists and saboteurs…  Spread bio weapons and disease.

And never, ever have a single citizen of your country set foot on US soil.  Total deniability – make it look like super terrorists instead of a state action.

Step 4 – Do what works best for you.

  • If the US collapses under its own weight – you could probably pull off a credible conventional invasion on “humanitarian peacekeeping” grounds, with UN support; at least after a few months and tens of millions have starved to death.
  • If the US survives, the economy, industry, and military will be in shambles, and you can basically step up as a world power to fill the vacuum left by a crumbling US dependent on humanitarian aid.

Now odds are you’d eventually get caught, and suffer reprisal from the US Navy, which has the ability to bomb most nations into the stone age even without homeland support.

But any target is vulnerable, if you know where to look, and understand science.

Nothing is uninvadable.  You just develop a strategy that exploits the weaknesses of your opponent while avoiding or redirecting its strengths.

So the way you conquer a nation with an unbeatable military is to attack its civilians, soft targets, and infrastructure using weapons that the military cannot counter, then wait for them to starve to death behind their super weapons.  You can do it with 1960’s technology, and never use a gun, robot, or any science fiction.

If anybody sees any flaws or mistakes in my assessment, please let me know.

Thanks, this was a fun exercise.

Posted in Geopolitics, Military Strategy, Strategy | 9 Comments

Can Bruce Bueno de Mesquita predict the future?

So here’s one I first took on in Quora, and it’s a fun topic –

Can Bruce Bueno de Mesquita predict the future?

NYU Professor BBdM claims to predict international policy events with >90% accuracy in his book “The predictioneers game”. It seems grotesque, now we have a paper where he describes his method. Can you falsify his statement?…BBdM Ted Talk
Ted Galpin, Astrophysicist cheerleader turned professional strategist

I’ve been following Bruce Bueno de Mesquita on and off for roughly a decade.

In theory – yes, sometimes, within limits, he can predict the future. (His marketing is probably exaggerating, but he’s not really doing anything that other mathematicians are not doing; he just sells it really well.)

He’s been doing this stuff since the 80’s. But I guess it took a TED talk and a book for people to really take notice.

Here’s some context:

  • He famously was one of the earlier academics to successfully apply game theory to political science; and everybody laughed at him until he proved to be right (he predicted an “unlikely” candidate would win an Iranian election in the 1980’s, and 2 years later proved to be right, reportedly surprising the political science community). That’s when the Poly Sci quantitative vs. qualitative culture wars began.
  • He consequently started selling his services to the government.
  • His models are supposed to be rational choice driven game theory models that are heavily data dependent.
  • The famous CIA quote is that a good CIA analyst can tell you what political party should win the next election with about 80 percent accuracy.  Bruce Bueno de Mesquita (BBdM) could then interview the CIA analyst, populate the variables, do the math, and give you a NAME that was right 90% of the time. Technically, BBdM adds precision to existing accuracy.

Good analysis gives you accuracy. BBdM adds precision by using good math on top of good behavioral science. If the underlying data points you in the wrong direction, the math probably won’t change that.

  • We know from fundamentals of math that every model he builds would be custom based on available data; using an underlying algorithm that he’s been tweaking and using since the 1980’s.
  • We know from mathematical modeling 101 that the ability to predict the future is limited by available data and known patterns. That has plagued engineers and scientists for centuries.
  • The fundamental assumption in BBdM’s models is they are built on a version of rational choice theory.
  • Rational Choice theory is often applied incorrectly. It states that people usually do what is in their best interests.
    • Rational Choice theory actually means people usually do what they believe is in their best interests (from their own point of view). Important pragmatic semantic.
      • For example – you know people will order pizza because they like pizza, but really they should order a salad because it is much healthier.
      • Rational choice is relative to understanding the subject. Hence why it has been misused and easily critiqued in the past.
  • BBdM’s best work tops out at 90% for a few reasons:
    • 1 – Incomplete information leads to incomplete mathematical precision and accuracy. I.E. You are vulnerable to Black Swans (what you don’t know).
    • 2 – It requires you to accurately read people’s minds – this is where the knowledge of an analyst comes in. If your analysts don’t understand the psychology of the individuals, then they may guess wrong on the rational choice, and the math can’t fix that.
    • 3 – It really should be lower than 90%, I’m guessing they only pick battles they know they can win to cheat the metric – a common business practice.
  • Because the models are based on rational choice, BBdM can only predict the decision making of people or groups of people – and only if you can identify the influencing players of the game, model the interactions of the players, the decisions they will make, and the resultant net result when you add up all the decisions based on rational choice. Basically a giant decision tree matrix.
  • Technically speaking, a good strategist with good data using the right math and the right science should be able to consistently predict the future. This is what good management consultants try to emulate.
  • If you have any doubt, I know from my business that every billion dollar piece of technology (think jets, mines, power plants, bridges, refineries, substations, skyscrapers) is predictively modeled on paper and in computers to make sure it is possible, will work, and to estimate cost and economics long before anyone spends the billions of dollars to build it (often in the form of an engineering feasibility study).
  • In any situation where you have enough science and measurement, the appropriate math can be used to effectively predict the future. That’s what operations research has been doing for years. That’s what Gantt logic and Agile velocity try to do with projects.
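To make the “giant decision tree matrix” idea above concrete, here is a toy sketch of a rational-choice aggregation model. This is a hypothetical illustration with made-up actors, influence weights, and payoffs – not BBdM’s actual algorithm: each player backs whatever option *they believe* pays them best, and the predicted outcome is whichever option wins the influence-weighted total.

```python
# Toy rational-choice prediction: each actor picks the option that
# maximizes their own *perceived* payoff; the predicted outcome is the
# option with the most influence behind it. All numbers are invented.

def predict_outcome(actors):
    """actors: list of (influence, {option: perceived_payoff}) tuples."""
    support = {}
    for influence, payoffs in actors:
        # Rational choice: back what you believe is in your best interest.
        choice = max(payoffs, key=payoffs.get)
        support[choice] = support.get(choice, 0) + influence
    return max(support, key=support.get)

actors = [
    (5, {"reform": 2, "status_quo": 5}),  # powerful incumbent bloc
    (3, {"reform": 4, "status_quo": 1}),  # opposition bloc
    (1, {"reform": 3, "status_quo": 2}),  # small swing faction
]
print(predict_outcome(actors))  # → status_quo (5 influence beats 3 + 1)
```

The real work, as the CIA-analyst story suggests, is not this arithmetic – it is getting the payoff estimates right for each player, which is where the analyst’s judgment comes in.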

For example – in WWII, German submarines were hard to sink because they went underwater when they saw planes, so it was hard to kill them with aircraft. Operations research analysts ran lots of what we now call data analytics (by hand, in the 1940’s), and figured out that if you put bright lights on a bomber, it looks more like the bright blue sky, and an anti-submarine bomber could get close enough to kill a submarine before it was spotted and the submarine did a crash dive. The math predicted right that time.
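A modern analyst would run that same kind of what-if as a quick Monte Carlo simulation. This is a hedged sketch with invented spotting distances, purely to show the shape of the analysis – not the actual 1940’s numbers:

```python
import random

# Toy Monte Carlo in the spirit of 1940's operations research: how often
# does a bomber get within attack range before the submarine's lookouts
# spot it and crash-dive? All distances are made-up illustrative numbers.

def kill_rate(mean_spot_km, attack_range_km=3.0, trials=100_000):
    random.seed(42)  # deterministic for repeatability
    kills = 0
    for _ in range(trials):
        # Distance at which the sub spots the bomber on this run.
        spotted_at = random.gauss(mean_spot_km, 2.0)
        if spotted_at < attack_range_km:  # bomber got in range unseen
            kills += 1
    return kills / trials

# A lit bomber blends into the sky, so it gets spotted much closer in:
print(f"dark bomber kill rate: {kill_rate(mean_spot_km=8.0):.1%}")
print(f"lit bomber kill rate:  {kill_rate(mean_spot_km=4.0):.1%}")
```

Halving the assumed spotting distance multiplies the kill rate many times over – the same kind of counterintuitive, data-driven conclusion the wartime analysts reached by hand.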

So given all that.

Can BBdM – Bruce Bueno de Mesquita – predict the future? Sometimes, if you ask the right kind of questions and he can find the right input data for the math to work. He specializes in political science and the decision making of large groups. If pressed he may even be able to model and predict a stand alone complex – but that’s kind of obscure and I don’t think anyone is looking for those yet (outside of social movement and viral marketing techniques).

But Hannah Fry modeled how riots happen in Python a few years ago (not when, but how). Operations research has made progress in future prediction for many, many decades. And game theory is pretty old – that’s a big part of how RAND sold Mutually Assured Destruction theory and led them to purportedly recommend economic brinkmanship to end the cold war – knowing the expensive weapons being built were unlikely to be used in WWIII. And that worked: the cold war escalated to the point of bankrupting the Soviet Union with military spending. Technically we tricked them into building a military too large to economically sustain.

To answer your question – yes, you can use math to predict the future. That is sometimes, if you know the science, the math, and have the right data and analysis. That’s why engineers are correct 98% of the time, and guys like BBdM are right only 90% of the time, and only if they are careful and only pick questions they know they can answer.

Given more time – we are likely to see the rise of something analogous to the psychohistorians of Asimov’s Foundation. Actually if you read that link – We are not that far off from that today.

As of 2016? A good strategist working with a good mathematician, a good data scientist, a good operations researcher, and a good behavioral scientist should be able to predict accurately, and maybe even precisely, the things we understand and know how to measure, but not the things we don’t understand or can’t measure. And each question would take them months to answer (analyze and model), and even then it is doubtful anyone can ever beat BBdM’s claimed 90% because everybody makes mistakes and has limits.

In reality hitting 90% accuracy is crazy hard, unless you get to select your problems. In my experience “complete” models are about 80% precise (you get within 20% of what you expected) and are accurate (pointed in the right place) about 80% of the time. But getting adequate information to make a complete model is rare.
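Those two 80% figures compound. Assuming (as a simplification) that accuracy and precision are independent, the back-of-envelope arithmetic looks like this:

```python
# If a "complete" model is pointed at the right answer 80% of the time
# (accuracy) and lands within 20% of it when it is (precision), the
# chance a given prediction is both right and close – assuming the two
# failure modes are independent – is roughly:
p_accurate = 0.80
p_precise = 0.80
p_both = p_accurate * p_precise
print(f"{p_both:.0%}")  # → 64%
```

Which is why sustaining a 90% hit rate, as BBdM claims, almost certainly requires being selective about which questions you take on.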

I’m guessing that BBdM gets his success by having access to the best experts and data the US Government can provide. Most good teams should be able to replicate BBdM’s results given access to that level of resources. Not easy by any means, but theoretically doable. I’ve seen comparable results in different modeling techniques in business – if you constrain the scope of the model enough, you can enhance your accuracy.

The honest take away you can get from BBdM is if you understand what someone thinks or believes in their own best self interest from their own point of view, 9 out of 10 times you should be able to predict what they will do next. Any Game theorists out there able to confirm or deny that with empirical data?

The only claim BBdM has really made is that he can mathematically model that rational choice up to the scale of decisions made by large populations, like how nations vote in elections. Seeing as he’s been on a sole source contract to the US Government doing that for 20 years or more – I’m guessing he’s been doing it, and his success rate is good enough to keep him on retainer.

Interesting and very skeptical article that goes into some details here: Bueno de Mesquita’s prediction of Iran’s future – it reminds me of 5 mathematicians having 25 opinions on one topic.

Hope that was enlightening and answered the question.

-Ted S Galpin

Posted in Geopolitics, Math, Strategic Science | 4 Comments

A New Strategic Science?

A new strategic science.

That could have so many meanings.  So what do I mean?

First, obviously I haven’t been publishing much on the blog the last couple of years, due to a combination of things – raising a small child, some family health issues, diving in head first to help build a tech startup (which of course failed after 3 years due to a dominated strategy – a company political battle I lost.  Kids, never bet the farm on an industry bubble, especially the price of oil), and any number of distractions.

A few years older, many years smarter, and I noticed that I wrote a ton on Quora in the past couple of years.  The passion never died, it just went elsewhere.

Relative to the new strategic science blog – this really is my passion, so I’m getting back into sharing.  I have goals I may or may not live up to.  But no more fear, and a new strategic science blog – we are gonna have fun with this.  It’s time.

Relative to the new strategic science discipline – been through some massive experiences, and many very cool books have gone through my mind.  Connecting lots of dots between history, practice, and industry. There is enough knowledge and empirical data out there to build a scientific discipline out of strategy.

And when you drive the art of strategy down the knowledge funnel, and turn it into a science, a craft, where you can engineer success as much as you can engineer anything else…

That’s what I’ve been doing with my life, my career, my studies for 20 years now.

Time to share and put it into a form where others can really use it, be entertained by it, and benefit from strategy.

So I’ll be writing more, hitting strategy on a wider variety of subjects, and having more fun with it.  Sharing the ideas and work I do elsewhere, like Quora, LinkedIn, and Facebook.  Because it’s what I love, and I know it can help you.

Thanks for reading,

– Ted S Galpin

Posted in Strategic Science, Strategy | Leave a comment

Strategy vs. Tactics (not what you expect)

suntzu strategy and tactics

So, I got some highly unanticipated free time, so here’s a quick run through some important fundamentals.  I don’t care what you do, mastery of the fundamentals is the key to being successful, be it sports, work, or strategy.

First the obligatory definitions, I could argue these, but for today we’ll go by what’s currently popping up on Wikipedia:

Strategy:  a high level plan to achieve one or more goals under conditions of uncertainty. (One could debate that strategy is not just a synonym for plan; we’ll save that for another time.)

Tactic:  a conceptual action implemented as one or more specific tasks (fair enough).

So formally speaking, the textbook definitions from military science and business books go like this: a strategy is a plan that is executed via a variety of tactics.  The military goes on to say that grand tactics are large scale tactics, and grand strategy is the political strategy that provides overall direction to military strategy (i.e. terrorists bad, China OK, Canada harmless).

Gee, that’s actually kinda boring.  So how does this help you?  Where is the salient advantage in these pedantic semantics?

First, the common sense is that strategy and tactics are interchangeable words for a solution to a problem.  Strategy and tactics have a yin-yang relationship in defining each other – the bigger long term picture relative to the immediate small scale.

Stay with me.

Tactical means winning the battle today. Strategy means winning the war. But the battle had a strategy, broken up into tactics utilized by each team. And each team leader had a strategy that was adapted in execution by a strategic utilization of appropriate tactics for the resistance and challenges faced. Yin / yang.  Once you leave the textbook, the only difference between strategy and tactics is your perspective.  Boxers and quarterbacks have strategies for minutes worth of action. Political strategies have tactics that take months to execute.

The difference between strategy and tactics in everyday language is simply to show the short term vs long term perspectives to the task at hand.  That would make grand strategy where strategies interact, and grand tactics where different tactics interact.

So what? How does this help you? (finally there)

First, conversationally it is good to simply know the effective semantic difference between strategy and tactics.  Hopefully you’re now there.

Secondly, in order to be truly successful, you need to have good strategy and good tactics.  You need to understand the role of each.

Many businesses have good strategies, but bad tactics, and can’t make the strategy happen. In Afghanistan, the US military enjoys tactical brilliance and dominance; they almost never lose a battle; but strategically, Afghanistan is a decade old strategic quagmire – winning every battle, yet failing to win the war.  Just like we all know well established small businesses that were tactically brilliant and did great work – but were strategically incapable of adapting to change and simply went out of business when the economy crashed.

Conversely, there is a long tradition of brilliant strategies that nobody managed to make happen.  The quote that comes to mind is “the best laid plans of mice and men.”

The simple truth is strategy and tactics are the same skill, same concept applied at different levels. That simple idea is important because you now know that a perfect strategy cannot succeed without good tactics; and brilliant tactics probably won’t deliver strategic results on their own.  When your strategies fail for lack of execution, you know there is a disconnect between the strategic and the tactical.

So when you are making your decisions, planning, preparing, and executing your vision for the future, remember that you will have to simultaneously view your struggles from both the strategic and tactical points of view in order to succeed.

And honestly, you probably need to consider the grand tactical and grand strategic views as well to make your long term life get to where you want to go.

So what does that mean?

Strategically you have one year, two year, five year, and probably ten year or longer perspective, goals, and challenges.

Tactically you have a perspective, goals and challenges for this morning, today, this week, this month.

Grand tactically, you have to manage the interactions between your work tactical situation and your home and personal tactical situation. You may have multiple projects; health or fitness goals; school, family, or relationship commitments and challenges that affect the daily tactics of that important business problem you are tactically solving now. There are also grand tactical problems you can’t control – traffic, weather, sickness, accidents. But you have to manage the grand tactical every day: how do I deal with getting sick, a fender bender, finding breakfast, and still keep my boss happy this morning?  At minimum you can think of grand tactical as tactical work-life balance.

In terms of grand strategy – well, you need to balance work, career, business, and personal strategic goals and resources, and how those strategies interact. Grand strategically you also have to consider the big picture – politics, law, the economy – things you can’t control that may have a big effect.  There is a good time and a bad time to make investments or purchase real estate – if you’re lucky, you find the intersection of grand strategic opportunity with personal strategy – i.e. buy a new home after the housing bubble bursts, not before – if you have the choice.  Anybody who bought a house in 2008 either didn’t have a choice, or failed their due diligence on the housing bubble that was well understood in financial journals a year before the market crashed.

In grand strategy terms, how do I use limited time and money to get what I want in the long term – career, family, home, car, hobby? In business grand strategy, how do I balance supply chain costs with new healthcare regulation costs and changes to the corporate tax code while attracting quality talent?  Those are grand strategy considerations that form the broader picture of the sometimes conflicting strategic goals we have – like putting in overtime for that promotion, but spending more time with family.

And obviously here’s where we circle around.  You need tactical competence on the day to day stuff, grand tactical skill to balance conflicting priorities and limited resources, all while remembering your strategic schwerpunkt…  Not to mention the grand strategy reality of conflicting strategies and the grand strategic environment you are in.

Now, you can ignore these truths and keep muddling through the way you like.  Or you can try to organize your mind and efforts in terms of the tactical, grand tactical, strategic, and grand strategic – and see if that perspective helps you improve how you manage priorities and get things done, considering short term opportunities within the context of long term goals and strategic constraints out of your control.

Yes, thinking like a strategist can and probably will give you a headache.  I recommend you start by writing things down. As I’ve learned, if you don’t write it down, it never happened.

That’s enough for today.  Thanks for reading, hope it helps.  Let me know.

– Ted Galpin SPP

Posted in Business Strategy, Strategic Science, Strategy | 11 Comments