Signals


We’re tracking notable developments in the democratization of AI, open source supply chain attacks, brain-computer interfaces, and more.

Radar trends began as an internal resource for O’Reilly. It’s a monthly list of things that I find interesting or important—possibly not “trends,” strictly understood, but ideas that might become trends.

Most items on the list have links to media sources—some original sources, more frequently other reporting, whichever I think is more informative. Some items are personal observations or summaries of interesting conversations.

Many items are about technology, conceived rather broadly. Over time, topics will include biology and biotech, design and user experience, ethics, open source communities, energy, and more.

We hope you find these observations useful and informative!

AI trends

Germany, France, and Japan have formed an alliance for “human-centered” AI, with Canada a potential member. The move is partly to give them “critical mass” in AI research rather than leaving them a distant third behind the US and China, and partly because they don’t trust the ethical stances of the US and China and see a market opportunity for ethical AI.

RunwayML is yet another entry in the “create a deep learning model with minimal programming” sweepstakes. Jeremy Howard’s platform.ai sounds the most radical; there’s also AI2GO (from xnor.ai), AutoML (from Google Cloud Platform), and others. Are we on the edge of programmerless AI? Is this Software 2.0?

Cerebras has announced a 1.2-trillion-transistor chip. It’s by far the largest chip ever built (roughly the size of a sheet of paper), and it’s designed for training AI systems. Although it has huge power requirements (15 kW), it probably makes AI development less power-hungry, and certainly less time-consuming. In the long run, this probably doesn’t address the problem of power consumption; given faster, more powerful processors, people will create bigger, more complex models.

We’ve seen Snorkel (now a startup) and the other tools from Chris Ré’s lab at Stanford. Scale AI appears to be doing something similar (partially automated image tagging, though it still uses contract workers on the back end). This is an important step in the democratization of AI. As Ré said at the O’Reilly Artificial Intelligence Conference in New York, data collection and model building have largely been automated, but data tagging and cleaning are stubbornly dependent on human labor.
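
The core idea behind these labeling tools is easy to demonstrate. Below is a minimal, library-free sketch of programmatic labeling: a few heuristic “labeling functions” vote on each example, and a simple majority vote produces noisy training labels. It is only an illustration of the concept, not Snorkel’s or Scale AI’s actual API; the function names and heuristics are invented.

    # Toy illustration of programmatic data labeling ("weak supervision").
    # Each labeling function is a cheap heuristic that may abstain; a
    # majority vote over the functions that fired produces a noisy label.
    ABSTAIN, NOT_COMPLAINT, COMPLAINT = -1, 0, 1

    def lf_mentions_refund(text):
        return COMPLAINT if "refund" in text.lower() else ABSTAIN

    def lf_says_thanks(text):
        return NOT_COMPLAINT if "thanks" in text.lower() else ABSTAIN

    def lf_has_exclamation(text):
        return COMPLAINT if "!" in text else ABSTAIN

    LABELING_FUNCTIONS = [lf_mentions_refund, lf_says_thanks, lf_has_exclamation]

    def weak_label(text):
        votes = [lf(text) for lf in LABELING_FUNCTIONS]
        votes = [v for v in votes if v != ABSTAIN]
        if not votes:
            return ABSTAIN                       # no heuristic fired
        return max(set(votes), key=votes.count)  # majority vote

    if __name__ == "__main__":
        for text in ["I want a refund now!", "Thanks, that fixed it.", "Where is my order?"]:
            print(weak_label(text), "|", text)

Real systems replace the majority vote with a learned label model that estimates each labeling function’s accuracy and correlations, which is where most of the value comes from.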

Security trends

The Linux Foundation’s Confidential Computing Consortium aims to protect data in use, i.e., data that is being computed on, as opposed to data at rest (in storage) or in flight (being transferred). This requires a combination of hardware and software to build a trusted execution environment. It has significant backing from Intel, Microsoft, and Red Hat/IBM.

Supply chain attacks aren’t entirely new, but they’re becoming more common. The idea is to attack the open source supply chain: find a project that isn’t well managed (of which there are many), and submit changes that create a backdoor that can be exploited in any project that includes this one as a dependency. Backdoors (and other security problems) can be very subtle, and easy to introduce into a project that isn’t being watched carefully. Recent examples include the Webmin backdoor and malicious RubyGems packages.
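
A first, purely defensive step is simply knowing what your projects actually depend on, transitively. Here is a minimal sketch that enumerates every installed Python distribution and its declared requirements so that unexpected packages stand out; real supply chain defenses go much further (pinned hashes, signed releases, reproducible builds).

    # List every installed distribution and its declared dependencies.
    # The point is visibility: packages you never chose directly are
    # part of your attack surface too.
    from importlib.metadata import distributions

    def dependency_report():
        dists = sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower())
        for dist in dists:
            print(f"{dist.metadata['Name']}=={dist.version}")
            for requirement in dist.requires or []:
                print(f"    requires {requirement}")

    if __name__ == "__main__":
        dependency_report()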

Google’s proposal for controls on cookies and browser fingerprinting is interesting on several levels. It establishes a privacy budget: a publisher can make a limited number of calls asking for information about the browser, enough to give it partial but not complete information, and the publisher decides which information it wants. This isn’t as extreme as Apple’s restrictions, or what Mozilla is likely to implement in Firefox, at least in part because Google is dependent on advertiser income.
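
Conceptually, a privacy budget is just an accounting scheme: each attribute a site reads is assumed to reveal some number of identifying bits, and once the total passes a threshold, further requests get a generic answer. The sketch below is purely illustrative (the attributes, bit costs, and API are invented here); it is not how Chrome’s proposal is specified.

    # Conceptual sketch of a per-site "privacy budget." Each browser
    # attribute costs an estimated number of identifying bits; requests
    # beyond the budget get a generic answer instead of the real value.
    BIT_COST = {                 # hypothetical entropy estimates, in bits
        "user_agent": 4.0,
        "screen_resolution": 3.5,
        "timezone": 3.0,
        "installed_fonts": 8.0,
    }

    class PrivacyBudget:
        def __init__(self, budget_bits=8.0):
            self.remaining = budget_bits

        def request(self, attribute, real_value, generic_value="generic"):
            cost = BIT_COST.get(attribute, 1.0)
            if cost > self.remaining:
                return generic_value          # budget spent: partial info only
            self.remaining -= cost
            return real_value

    if __name__ == "__main__":
        site = PrivacyBudget()
        print(site.request("user_agent", "Mozilla/5.0 (X11; Linux x86_64)"))
        print(site.request("timezone", "UTC-8"))
        print(site.request("installed_fonts", ["Arial", "Helvetica"]))  # refused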

Google is talking about replacing passwords with biometrics. They already have fingerprint recognition on Android, but they are also starting to use biometrics for access to other services. We’ve heard for some years now that passwords would disappear; maybe it’s finally time? On the other hand, fingerprints are ultimately just objects in databases, and like anything else, databases can be attacked. Once a fingerprint is compromised, you can’t change it.

Brain-computer interface trends

It’s not just Elon Musk who wants to put wires in your brain. Facebook does, too. Here’s Facebook’s take on brain-computer interfaces. The ostensible motivation is pretty much the same as Musk’s: helping people with disabilities first, but then a big step forward in user interfaces.

One researcher is building neural networks with real neurons (and putting neurons into chips, where they live for a couple of months). It’s very futuristic research, but biological computation could become something worth following. Biological circuits aren’t terribly fast, but they’ve certainly solved some connectivity, power, and density issues.

Other trends

I’ve seen recent interest in web frameworks within frameworks: frameworks within React, frameworks within Vue, etc. I think this says something about the fragility of the current state of web programming. React, Angular, and their peers are just too complex. I don’t know whether metaframeworks are the answer (I suspect they’re not), but their existence is certainly a signal of the problem.

RISC-V is an open source instruction set architecture for building CPUs, up to and including high-performance designs. It competes directly with the proprietary architectures behind Intel’s chips and the other mass-market CPUs. It’s interesting to see that Adafruit and Raspberry Pi are getting involved.

The Federal Reserve has announced that it will create its own electronic system for clearing payments. This is effectively a government “blockchain” (though who knows whether it will use blocks or chains). My guess is that it’s too little, too late: it won’t launch until 2023 at the earliest, and it competes with projects already underway at major banks.

Attacking NLPs, Data Kit, Quantum Computing and Computation, and What-If Question Archive

  1. Universal Adversarial Triggers for Attacking and Analyzing NLP — WARNING: This paper contains model outputs which are offensive in nature. How short, input-agnostic trigger phrases, when added to a model’s input, cause large downstream changes in its predictions. (via Nasty Language Processing: Textual Triggers Transform Bots Into Bigots)
  2. AP DataKit — an open-source command-line tool designed to better structure and manage projects. It makes it easier to standardize and share work among members of your team, and to keep your past projects organized and easily accessible for future reference. AP DataKit works off a basic framework that includes the core product and a few key plugins to help you manage where your data files and code are stored and updated.
  3. Quantum Computing and the Fundamental Limits of Computation (Scott Aaronson) — three lectures: The Church-Turing Thesis and Physics (watch, PPT), The Limits of Efficient Computation (watch, PPT), and The Quest for Quantum Computational Supremacy (watch, PPT).
  4. WIQA — the first large-scale dataset of “What if…” questions over procedural text. WIQA contains three parts: a collection of paragraphs each describing a process, e.g., beach erosion; a set of crowdsourced influence graphs for each paragraph, describing how one change affects another; and a large (40k) collection of “What if…?” multiple-choice questions derived from the graphs. […] WIQA contains three kinds of questions: perturbations to steps mentioned in the paragraph; external (out-of-paragraph) perturbations requiring commonsense knowledge; and irrelevant (no effect) perturbations.

Bayesian Philosophy, Combining Features, Quantum INTERCAL, and Universal Decay of Memory

  1. The Philosophy and Practice of Bayesian Statistics — A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism […] Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
  2. Erlang, or How I Learned to Stop Worrying and Let Things Fail (John Daily) — talk from 2014 that highlights how it’s not just one killer feature that makes Erlang work so well for its problem domain, but rather it’s how features work so well together. A thought for language and product designers everywhere.
  3. Quantum INTERCAL — INTERCAL is a parody programming language (The full name of the compiler is “Compiler Language With No Pronounceable Acronym,” which is, for obvious reasons, abbreviated “INTERCAL.”), and this page proposes quantum computing extensions for it. I love this kind of whimsy, and hope our industry doesn’t lose it.
  4. The Universal Decay of Memory and Attention (Nature) — Our results reveal that biographies remain in our communicative memory the longest (20–30 years) and music the shortest (about 5.6 years). These findings show that the average attention received by cultural products decays following a universal biexponential function. Mysteriously fails to cover the incredible vanishing act of new people’s names in my brain. (And previously-known people’s names)

Distributed Consistency, Face Anonymization, Game Mechanic Discovery, and Images of Images

  1. Waltz: A Distributed Write-Ahead Log — Waltz is similar to existing log systems like Kafka in that it accepts/persists/propagates transaction data produced/consumed by many services. However, unlike other systems, Waltz provides machinery that facilitates serializable consistency in distributed applications. It detects conflicting transactions before they are committed to the log. Waltz is regarded as the single source of truth rather than the database, and it enables a highly reliable log-centric system architecture. (A toy sketch of the conflict-detection idea appears after this list.)
  2. DeepPrivacy — a generative adversarial network for face anonymization. A first attempt at an interesting line of privacy provision.
  3. Automatic Critical Mechanic Discovery in Video Games — We present a system that automatically discovers critical mechanics in a variety of video games within the General Video Game Artificial Intelligence (GVG-AI) framework using a combination of game description parsing and playtrace information. Critical mechanics are defined as the mechanics most necessary to trigger in order to perform well in the game.
  4. tiler — Build images with images. This seems like it could be useful, but I can’t immediately think how.
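
To make item 1 concrete, here is a toy version of the conflict-detection idea behind a log like Waltz: each transaction records which keys it read and at what log position, and the log rejects it if any of those keys were written by a later committed entry. This is optimistic concurrency control in miniature, not Waltz’s actual protocol or API.

    # Toy write-ahead log with optimistic conflict detection. A transaction
    # declares the keys it read and the log position it read them at; it is
    # rejected if any of those keys changed after that position.
    class ConflictError(Exception):
        pass

    class ToyLog:
        def __init__(self):
            self.entries = []        # committed transactions, in order
            self.last_write = {}     # key -> position of last committed write

        def head(self):
            return len(self.entries)

        def append(self, read_position, read_keys, writes):
            for key in read_keys:
                if self.last_write.get(key, -1) >= read_position:
                    raise ConflictError(f"{key!r} changed since position {read_position}")
            position = self.head()
            self.entries.append(dict(writes))
            for key in writes:
                self.last_write[key] = position
            return position

    if __name__ == "__main__":
        log = ToyLog()
        snapshot = log.head()
        log.append(snapshot, read_keys=["balance"], writes={"balance": 100})
        try:
            # This transaction read "balance" before the write above committed.
            log.append(snapshot, read_keys=["balance"], writes={"balance": 90})
        except ConflictError as err:
            print("rejected:", err)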

Mapping Values, Crawlers are Legal, Laser Tripwire, and Coercion-Resistant Design

  1. From Values to Rituals (Simon Wardley) — interesting to see him taking mapping into the world of culture and values.
  2. HiQ Labs v LinkedIn Decision — not only is scraping legal, LinkedIn can’t put barriers in the way of HiQ’s crawlers. (!) (via Hacker News)
  3. daytripper — a small hardware box with a laser tripwire that triggers actions on your computer, e.g. to hide the game you’re playing and replace it with a work app.
  4. Coercion-Resistant Design (Eleanor Saitta) — a security architect looks at how to protect the privacy and security of your users in the face of a malicious state.

Secure Android, Group Chats, Ethical Location Data, and Philosophy of Computer Science

  1. Secured Android Phone Spec — I want to build a secured phone that can be used as either a hardened comms device, or even as a daily driver. I have a decade of experience in practical applied operational security, and over 5 years of experience working on secured Android phones. This project is my last attempt to make a secured phone for everyone.
  2. China is Cashing In On Group Chats (A16Z) — interesting for the mechanisms to keep it manageable and free of buttheads: QR codes to join and capped group sizes; anonymous usernames (but WeChat knows who you are, for police purposes); no visible history before you joined, etc.
  3. Ten Years On, Foursquare Is Now Checking In To You (NY Mag) — But even if Crowley and Glueck have the best intentions, until there is federal oversight, they are a cork in a dam, accountable to themselves, investors, and one day, with a potential IPO looming, shareholders. This x1000.
  4. Philosophy of Computer Science — long and good. I love that it starts with “what is Computer Science?” and works up to AI and ethics through “do we compute with symbols or with their meanings?” and “copyright vs patents.” I like his questions that CS is concerned with: “What can be computed?; How can it be computed?; Efficient computability; Practical Computability; Physical Computability; Ethical Computability.” The author is the CS professor who gave us the wonderful valid sentence “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo”. (via Hacker News)

Code Reviews, Dogfooding, Deobfuscation, and Differential Privacy

  1. How to Do a Code Review — Google’s guidelines. Encourage developers to solve the problem they know needs to be solved now, not the problem that the developer speculates might need to be solved in the future. The future problem should be solved once it arrives and you can see its actual shape and requirements in the physical universe. Let the church say Hallelujah!
  2. The Work Diary of Parisa Tabriz, Google’s ‘Security Princess’ (NYT) — Grab my iPhone and Windows laptop for the day. Neither is my primary device, but I like to use them on Wednesdays. Thursdays, I try to mostly use my Mac, and the rest of the week I’m on my Chromebook or my Pixel Android phone. I’m responsible for Chrome across every operating system, so I try to use all the different Chromes each week to catch the subtle and important differences, and give feedback or file bugs if something isn’t working right. Yes, this is something product managers should do.
  3. SATURN: Software Deobfuscation Framework Based on LLVM — We show how binary code can be lifted back into the compiler intermediate language LLVM-IR and explain how we recover the control flow graph of an obfuscated binary function with an iterative control flow graph construction algorithm based on compiler optimizations and SMT solving. Our approach does not make any assumptions about the obfuscated code, but instead uses strong compiler optimizations available in LLVM and Souper Optimizer to simplify away the obfuscation.
  4. Google’s Differential Privacy Library — I particularly liked: This project also contains a stochastic tester, used to help catch regressions that could make the differential privacy property no longer hold. (via Google Developer Blog)
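
To make item 4 concrete: the basic primitive in a library like this is a noisy aggregate, for example a count with Laplace noise calibrated to the query’s sensitivity and the privacy parameter epsilon. The sketch below illustrates the mechanism with the standard library; it is not Google’s API, and production code also has to worry about floating-point issues, clamping, and budget tracking.

    # Minimal sketch of an epsilon-differentially-private count using the
    # Laplace mechanism. A count has sensitivity 1 (one person changes it
    # by at most 1), so noise is drawn from Laplace(scale = 1 / epsilon).
    import random

    def dp_count(records, predicate, epsilon=0.5, sensitivity=1.0):
        true_count = sum(1 for record in records if predicate(record))
        scale = sensitivity / epsilon
        # The difference of two exponential draws is Laplace-distributed.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_count + noise

    if __name__ == "__main__":
        ages = [23, 35, 47, 51, 62, 29, 44]
        print(dp_count(ages, lambda age: age >= 40, epsilon=0.5))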

Cultural Competency, Computer-Generated Sound, Bottom-Up CS, and Continuous Compliance

  1. TikTok is Fuelling India’s Deadly Hate Speech Epidemic (Wired UK) — “All platforms including TikTok lack the cultural competency to enter our market with a clear understanding of the volatile nature of its internal dynamics,” says Soundararajan. “There is not a single platform that has cultural competencies related to caste and religious extremism.”
  2. Stanford Music 220A: Fundamentals of Computer-Generated Sound — “Fundamentals”. I see what you did there.
  3. Computer Science from the Bottom Up — reminds me of NAND2Tetris.
  4. Continuous Compliance — automate the testing of all your invariants. This works for security, privacy, reliability, usability, and more.
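
In practice, “continuous compliance” usually means writing the invariants as ordinary tests that run in CI against live configuration. A hedged sketch in pytest style follows; the services.json inventory file and its fields are hypothetical stand-ins for whatever source of truth you actually have.

    # Sketch of "compliance as tests": invariants over a (hypothetical)
    # service inventory, written as plain pytest tests so they run on
    # every commit and fail loudly when an invariant is violated.
    import json

    def load_inventory(path="services.json"):
        # Expected shape: [{"name": "billing", "tls": true, "logs_pii": false}, ...]
        with open(path) as f:
            return json.load(f)

    def test_every_service_enforces_tls():
        for service in load_inventory():
            assert service["tls"], f"{service['name']} does not enforce TLS"

    def test_no_service_logs_pii():
        for service in load_inventory():
            assert not service.get("logs_pii", False), f"{service['name']} logs PII"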

iOS Security, IOT Wifi Attacks, Interactive SSH Programs, and Replacing Faces in Video

  1. A Secured Android Phone is Safer Than an iOS Device (thegrugq) — The iOS ecosystem is a monoculture, where security is tied to latest hardware and latest software. If you’re behind on either one? Vulnerable to commercial exploit chains. Multiple chains. Android has become incredibly more resilient, and due to diversity much harder to attack.
  2. ESP32 and ESP8266 Attacks — This repository demonstrates 3 Wi-Fi attacks against the popular ESP32/8266 IoT devices. They’re the go-to Wi-Fi chips for Arduino-type projects, and so are in a startlingly large number of IoT devices.
  3. Building Interactive SSH Programs — Writing interactive SSH applications is actually pretty easy, but it does require some knowledge of the pieces involved and a little bit of general Unix literacy.
  4. DeepFaceLab — open source tool that uses machine learning to replace faces in video.

Crummy Translations, Synthetic Datasets, Building Communities, and Deleting Accounts

  1. Classifying Topics in Speech When All You Have is Crummy Translations — While the translations are poor, they are still good enough to correctly classify one-minute speech segments over 70% of the time—a 20% improvement over a majority-class baseline. Such a system might be useful for humanitarian applications like crisis response, where incoming speech must be quickly assessed for further action.
  2. Synthetic Data Sets: A Non-Technical Primer For The Biobehavioral Sciences — respecting confidentiality by generating data sets that preserve their statistical properties and relationships between variables while varying the specific data. (A minimal numpy sketch of the idea appears after this list.)
  3. Get Together — a book with how-to advice from real community builders, from the team behind The Get Together podcast.
  4. Just Delete Me — A directory of direct links to delete your account from web services.
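
To make item 2 concrete: the simplest version of the idea fits a distribution to the real data and samples new rows from it, so the synthetic data preserves means and correlations without reproducing any individual record. The numpy sketch below assumes roughly normal, numeric data; real synthetic-data tools handle categorical variables and disclosure risk far more carefully.

    # Toy synthetic data: fit a multivariate normal to the real columns and
    # sample new rows that preserve the mean and covariance structure.
    import numpy as np

    def synthesize(real_data, n_rows, seed=0):
        rng = np.random.default_rng(seed)
        mean = real_data.mean(axis=0)
        cov = np.cov(real_data, rowvar=False)
        return rng.multivariate_normal(mean, cov, size=n_rows)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Pretend "real" data: height (cm) and weight (kg), positively correlated.
        heights = rng.normal(170, 10, size=500)
        weights = 0.5 * heights + rng.normal(0, 5, size=500)
        real = np.column_stack([heights, weights])

        synthetic = synthesize(real, n_rows=500)
        print("real corr:     ", np.corrcoef(real, rowvar=False)[0, 1])
        print("synthetic corr:", np.corrcoef(synthetic, rowvar=False)[0, 1])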

Enigma Simulator, Robot Startups, Code as Type, and Conversational Modeling

  1. Visual Enigma Machine Simulator — this is the triple threat: interesting, of historic interest, and PURTY. (via Tom MacWright)
  2. Move Fast and (Don’t) Break Things: Commercializing Robotics at the Speed of Venture Capital (YouTube) — interesting talk from a conference whose theme was “Notable Failures.”
  3. Leon Sans — a geometric sans-serif typeface made with code. It draws on an HTML Canvas so you can futz with the typeface yourself to get nifty effects. The demos are cool.
  4. Microsoft Icecaps — an open-source toolkit with a focus on conversational modeling.

Multi-Language Teams, AI Release Models, Security Myth, and The Internet is for End Users

  1. Lessons From Working With Teams Who Speak English as a Second Language — We normally “take notes in public” by writing down key points in a shared document or chat. One of us will act as a contemporaneous note taker, capturing key points that anyone makes so that everyone can read the gist in addition to hearing it. This “real time subtitles” approach also communicates that we are actively listening and if we have misunderstood something, or they realize they misspoke and want to rephrase, then they can revisit that point immediately. This can be a superpower.
  2. Release Strategies and the Social Impacts of Language Models — a paper on OpenAI’s strategy for releasing their language model, which is scary-good at generating text. We chose a staged release process, releasing the smallest model in February, but withholding larger models due to concerns about the potential for misuse, such as generating fake news content, impersonating others in email, or automating abusive social media content production. We released the next model size in May as part of a staged release process. We are now releasing our 774 million parameter model.
  3. The Myth of Consumer Grade Security (Bruce Schneier) — that distinction between military and consumer products largely doesn’t exist. All of those “consumer products” Barr wants access to are used by government officials — heads of state, legislators, judges, military commanders and everyone else — worldwide. They’re used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They’re critical to national security as well as personal security.
  4. The Internet is for End Users (IETF Draft) — As the Internet increasingly mediates essential functions in societies, it has unavoidably become profoundly political; it has helped people overthrow governments and revolutionize social orders, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying others’. All of this raises the question: Who do we go through the pain of gathering rough consensus and writing running code for?

Debugging a Scale Problem, Verifying Cryptographic Protocols, Remote Team Stress, and PAC-MAN Source

  1. 6 Lessons we Learned When Debugging a Scaling Problem on GitLab.com — When you choose specific non-default settings, leave a comment or link to documentation/issues as to why; future people will thank you. This.
  2. Verifpal — software for verifying the security of cryptographic protocols. Building upon contemporary research in symbolic formal verification, Verifpal’s main aim is to appeal more to real-world practitioners, students, and engineers without sacrificing comprehensive formal verification features.
  3. Stress in Remote Teams — features a good list of the causes of stress in remote teams. The section on work-family conflict struck close to home (so to speak).
  4. Atari PAC-MAN Source Code — original Atari 8-bit PAC-MAN source code. You can even compare versions with and without use of the macro assembler.

Tech and Politics, Crypto-Mining Malware, Cost of Securing DNS, and Anti-Fuzzing Techniques

  1. Summer School Presentations — a great selection of talks on technology and political structures.
  2. A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth — In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.4 million malware samples (one million malicious miners), over a period of 12 years from 2007 to 2018. We then analyze publicly available payments sent to the wallets from mining-pools as a reward for mining, and estimate profits for the different campaigns. Our profit analysis reveals campaigns with multi-million earnings, associating over 4.3% of Monero with illicit mining.
  3. Analyzing the Costs (and Benefits) of DNS, DoT, and DoH for the Modern Web — Two new protocols have been proposed: DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT). Rather than sending queries and responses as cleartext, these protocols establish encrypted tunnels between clients and resolvers. This fundamental architectural change has implications for the performance of DNS, as well as for content delivery. In this paper, we measure the effect of DoH and DoT on name resolution performance and content delivery. (A rough DoH timing sketch follows this list.)
  4. Fuzzification — anti-fuzzing techniques.
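
As a rough illustration of item 3, the sketch below times an ordinary lookup through the operating system’s resolver against a DNS-over-HTTPS query to Google’s public JSON endpoint (dns.google/resolve). Single samples with warm caches prove nothing on their own; the paper’s measurements are far more careful.

    # Rough comparison of classic DNS resolution (via the OS resolver)
    # with a DNS-over-HTTPS lookup using Google's public JSON API.
    import json
    import socket
    import time
    import urllib.request

    def classic_lookup_seconds(name):
        start = time.perf_counter()
        socket.getaddrinfo(name, 443)
        return time.perf_counter() - start

    def doh_lookup(name):
        url = f"https://dns.google/resolve?name={name}&type=A"
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            answer = json.load(response)
        elapsed = time.perf_counter() - start
        addresses = [a["data"] for a in answer.get("Answer", [])]
        return elapsed, addresses

    if __name__ == "__main__":
        name = "example.com"
        print(f"classic DNS:    {classic_lookup_seconds(name) * 1000:.1f} ms")
        doh_seconds, addresses = doh_lookup(name)
        print(f"DoH (JSON API): {doh_seconds * 1000:.1f} ms -> {addresses}")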

Personal Information, Research Data, Massive Lambda Scale, and The Moral Character of Cryptographic Work

  1. Presidio — recognizers for personally identifiable information, assembled into a pipeline that helps you scrub sensitive text such as credit card numbers, names, locations, social security numbers, bitcoin wallets, US phone numbers, and financial data.
  2. Microsoft’s Academic Knowledge Graph — a large RDF data set with over eight billion triples with information about scientific publications and related entities, such as authors, institutions, journals, and fields of study. The data set is based on the Microsoft Academic Graph and licensed under the Open Data Attributions license. Furthermore, we provide entity embeddings for all 210M represented scientific papers.
  3. GG — code from the paper From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers, describing a framework and a set of command-line tools that helps people execute everyday applications—e.g., software compilation, unit tests, video encoding, or object recognition—using thousands of parallel threads on a cloud functions service to achieve near-interactive completion times. In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy. (via Hacker News)
  4. The Moral Character of Cryptographic Work — Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but, also, to the societal implications of our work.

Avoiding Sexual Predators, YouTube Radicalization, Brian Behlendorf Interview, and Cyberpunk Present

  1. How to Avoid Supporting Sexual Predators (Valerie Aurora) — Your research process will look different depending on your situation, but the key elements will be: (1) Assume that sexual predators exist in your field and you don’t know who all of them are. (2) When you are asked to work with or support someone new, do research to find out if they are a sexual predator. (3) When you find out someone is probably a sexual predator, refuse to support them.
  2. Auditing Radicalization Pathways on YouTube — the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. [Intellectual Dark Web] content in the past. And recommendations steer people to more extreme content.
  3. Brian Behlendorf Interview — Where a distributed database that was not just “Here is a master MySQL node and slaves that hang off of it,” was not just a multi multi-write kind of system, but one that actually supported consensus, one that actually had the network enforcing rules about valid transactions versus invalid transactions. One that was programmable, with smart contracts on top. This started to make sense to me, and was something that was appealing to me in a way that financial instruments and proof-of-work was not. Hyperledger was announced by a set of large companies, along with the Linux Foundation to try to research this space further, and try to figure out the enterprise applications of these technologies.
  4. Employees Connect Nuclear Plant to the Internet so They Can Mine Cryptocurrency (ZDNet) — on the one hand I’m “we’re living in a Cyberpunk novel!” and on the other hand I’m “oh god, we’re living in a Cyberpunk novel!”.

Open Source Economics, Program Synthesis, YouTube Influence, and ChatBot Papers

  1. The Economics of Open Source (CJ Silverio) — I’m going to tell you a story about who owns the Javascript language commons, how we got into the situation that the language commons is owned *by* someone, and why we need to change it.
  2. State of the Art in Program Synthesis — conference, with talks to be posted afterwards, run by a YC startup. Program Synthesis is one of the most exciting fields in software today, in my humble opinion: Programs that write programs are the happiest programs in the world, in the words of Andrew Hume. It’ll give coders superpowers, or make us redundant, but either way it’s interesting.
  3. Alternative Influence (Data and Society) — amazing report. Extremely well-written, it lays out how the alt right uses YouTube. These strategies reveal a tension underlying the content produced by these influencers: while they present themselves as news sources, their content strategies often more accurately consist of marketing and advertising approaches. These approaches are meant to provoke feelings, memories, emotions, and social ties. In this way, the “accuracy” of their messaging can be difficult to assess through traditional journalistic tactics like fact-checking. Specifically, they recount ideological testimonials that frame ideology in terms of personal growth and self-betterment. They engage in self-branding techniques that present traditional, white, male-dominated values as desirable and aspirational. They employ search engine optimization (SEO) to highly rank their content against politically charged keywords. And they strategically use controversy to gain attention and frame political ideas as fun entertainment.
  4. Chatbot and Related Research Paper Notes with Images — Papers related to chatbot models in chronological order spanning about five years from 2014. Some papers are not about chatbots, but I included them because they are interesting, and they may provide insights into creating new and different conversation models. For each paper I provided a link, the names of the authors, and GitHub implementations of the paper (noting the deep learning framework) if I happened to find any. Since I tried to make these notes as concise as possible they are in no way summarizing the papers but are merely a starting point to get a hang of what the paper is about, and to mention main concepts with the help of pictures.

I Don’t Know, Map Quirks, UI Toolkit, and Open Power Chip Architecture

  1. I Don’t Know (Wired) — Two per cent of Brits don’t know whether they’ve lived in London before. Five per cent don’t know whether they’ve been attacked by a seagull or not. A staggering one in 20 residents of this fine isle don’t know whether or not they pick their nose. (via Flowing Data)
  2. Haberman — interesting research into one way that online maps end up with places that aren’t places.
  3. Blueprint — a React-based UI toolkit for the web. It is optimized for building complex, data-dense web interfaces for desktop applications which run in modern browsers and IE11. This is not a mobile-first UI toolkit.
  4. IBM Open Sources Power Chip Instruction Set (Next Platform) — To be precise about what IBM is doing, it is opening up the Power ISA [Instruction Set Architecture] and giving it to the OpenPower Foundation royalty free with patent rights, and that means companies can implement a chip using the Power ISA without having to pay IBM or OpenPower a dime, and they have patent rights to what they develop. Companies have to maintain compatibility with the instruction set, King explains, and there are a whole set of compatibility requirements, which we presume are precisely as stringent as Arm and are needed to maintain runtime compatibility should many Power chips be developed, as IBM hopes will happen.




