
BY: HANS ALBRECHT, CIM®, FCSI, VICE-PRESIDENT, PORTFOLIO MANAGER AND OPTIONS STRATEGIST, HORIZONS ETFS

February 26, 2019

When I was a youngster, I loved the movie WarGames, an early-1980s Cold War thriller starring Matthew Broderick. At the time, the premise that a generally unmotivated teenager could hack into the school database to change his grades was indeed exciting to me! The story gets much more serious when the protagonist believes he's playing a computer game called Global Thermonuclear War, not realizing he has actually found a back door into a NORAD supercomputer that is running these war simulations.

The simulations are misconstrued by military personnel at NORAD as real attacks, and chaos ensues. I won't ruin the rest, but I will say the movie was well ahead of its time in depicting many of the processes and the overall philosophy behind artificial intelligence ("A.I.") today. The computer in the movie learns and draws conclusions from self-play – the kind of bottom-up, deep-learning neural network process that today's best A.I. agents employ.

Combining A.I. with game-playing – computer-based or otherwise – has become a go-to method for what is called reinforcement learning. Video games in particular offer up countless complex scenarios that serve as a great training ground for A.I. So adept are today's A.I. agents that machines are quite literally learning by playing simulation games on their own. The rate at which A.I. systems are learning to play well, even in games like poker (which involve behavioural subtleties like unknown cards and bluffing), is startling onlookers. Google DeepMind's Deep Q-Network ("DQN") mastered 49 classic Atari video games without ever being given instructions on how to play them. But let's dial back the clock before continuing.
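For readers curious about what "reinforcement learning" looks like under the hood, here is a minimal, purely illustrative sketch of tabular Q-learning in Python. The names and numbers below are hypothetical, and DeepMind's actual DQN replaces this simple table with a deep neural network – but the core idea of nudging value estimates toward observed rewards is the same.

import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (illustrative only; not DeepMind's DQN code).
ALPHA = 0.1    # learning rate: how strongly new experience overwrites old estimates
GAMMA = 0.99   # discount factor: how much future rewards matter
EPSILON = 0.1  # exploration rate: how often the agent tries a random move

q_values = defaultdict(float)  # maps (state, action) -> estimated long-run value

def choose_action(state, actions):
    # Mostly pick the best-known action, occasionally explore a random one.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[(state, a)])

def learn(state, action, reward, next_state, next_actions):
    # Nudge the value of (state, action) toward the reward plus discounted future value.
    best_next = max(q_values[(next_state, a)] for a in next_actions)
    q_values[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_values[(state, action)])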

In 1997, IBM's Deep Blue supercomputer beat the world's greatest chess player, Garry Kasparov – and Garry was none too happy about it. The match was hyped as a man-versus-machine encounter that seemed to put the weight of humanity itself on Kasparov's shoulders. It was, in fact, a rematch of a battle a year earlier in which Kasparov was the victor. The world then waited to see whether Deep Blue's developers could improve the system's A.I. sufficiently in a year's time. Indeed, the computer "took home the hardware". It was impressive, to say the least, but it represented an earlier form of A.I. – one that I refer to as a top-down approach. The system was coded by strong chess players and developers to recognize the rules and evaluate every possible move. There was no learning involved – just a ruthless, emotionless chess-playing engine that could evaluate 200,000,000 positions per second and search up to 40 moves ahead.
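To make that "top-down" contrast concrete, an engine of that era can be thought of as a hand-written evaluation function plus a deep brute-force search. The sketch below is a generic minimax search in Python – an illustration of the general technique, not IBM's actual code – where evaluate() stands in for the hand-coded chess knowledge the developers supplied.

# Generic minimax search sketch (illustrative; not IBM's implementation).
# The "intelligence" lives in evaluate(), a hand-written scoring rule,
# and in searching many moves ahead. Nothing here is learned from experience.
def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # hand-coded rules score the position
    results = (minimax(apply_move(position, m), depth - 1, not maximizing,
                       evaluate, legal_moves, apply_move) for m in moves)
    return max(results) if maximizing else min(results)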

Fast-forward to 2016, when Google DeepMind's AlphaGo beat Lee Sedol, the world's second-ranked professional player of Go – a Chinese strategy board game. The fundamental difference in AlphaGo is that it uses a bottom-up approach. It was fed a great deal of data in the form of a hundred thousand Go games, but the crucial difference that set it apart from previous A.I. was its incredible ability to learn. AlphaGo honed its skills by playing thousands of Go games against itself. Before 2016, no one had thought it remotely possible for a computer to beat a top player at the complex game of Go – but it happened.
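A rough sketch of that "bottom-up" recipe, continuing in Python: first imitate recorded human games, then keep improving by playing against yourself. The function names below are hypothetical placeholders rather than DeepMind's API, and real systems add search and neural networks on top, but the shape of the training loop is the point.

# Illustrative self-play training loop (hypothetical names; not DeepMind's code).
def train_by_self_play(policy, human_games, play_game, update, rounds=1000):
    for game in human_games:            # step 1: learn from recorded human play
        update(policy, game)
    for _ in range(rounds):             # step 2: generate fresh experience by self-play
        game = play_game(policy, policy)    # the agent plays both sides
        update(policy, game)                # learn from the outcome, win or lose
    return policy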

The amazing thing about this approach is that the A.I. can learn to adapt to unexpected situations, and can therefore potentially learn beyond what a human can accomplish. Deep-learning networks are showing us not only improved ways of doing things, but entirely different ones too. Go players have carefully studied the strategies AlphaGo employed in the tournament, and they're marvelling at moves they had never even considered. Some of those players have vastly improved their Go rankings as a result. A.I. is now teaching us to perform better. It's hard to believe that there could be moves a Kasparov or a Sedol hasn't contemplated, but I believe we only think this because we're human and inherently skewed by our own humanness.

We are emotional creatures of habit, seeking the familiar even at a subconscious level. As much as we resist, we do tend to repeat behaviour, good or bad, effective or not. But AlphaGo doesn't forget what it has learned. It doesn't conform to prevailing popular methods. It doesn't fear hurting its reputation should it misstep. It only cares about improving and winning. A generalist approach to A.I. is what will ultimately lead to improvements on a level that Deep Blue's framework could never have achieved. In fact, the newest iteration of AlphaGo relies on learning alone, with no previous dataset at all – it improves purely through experience gained over time. Learning from experience is a game-changer of the utmost importance to A.I. It will forever change the way we work and live our lives.

In 2017, Garry Kasparov, in a 180-degree change of heart, wrote that we should now embrace the A.I. revolution and the progress that it promises. I say we should invest in it too. FOUR and RBOT invest in the leading companies in the space, as well as in related beneficiaries of the theme – big data, smart robotics, autonomous vehicles, cybersecurity, healthcare and e-commerce applications. The next A.I. revolution has begun and investors should consider being a part of it.

The views/opinions expressed herein may not necessarily be the views of Horizons ETFs Management (Canada) Inc. All comments, opinions and views expressed are of a general nature and should not be considered as advice to purchase or to sell mentioned securities. Before making any investment decision, please consult your investment advisor or advisors.


Commissions, management fees and expenses all may be associated with an investment in exchange traded products managed by Horizons ETFs Management (Canada) Inc. (the "Horizons Exchange Traded Products"). The Horizons Exchange Traded Products are not guaranteed, their values change frequently and past performance may not be repeated. The prospectus contains important detailed information about the Horizons Exchange Traded Products. Please read the relevant prospectus before investing.

