Since July, I’ve been periodically dipping into an old folder on my hard drive that contains electronica tracks I put together back in my early 20s (circa 2008-10).
I’d more or less forgotten about most of them, as it was around that time that I was starting grad school and the time I could devote to my more creative hobbies started to dwindle.
(And if I recall correctly, it was also when I was suffering from horrible insomnia and carpal tunnel pain, which caused me to quit making music on the guitar as well.)
In any event, I’ve been steadily uploading these more or less “lost” tracks under the moniker Mike Ensign since then.
One of the tracks I published just the other day, for reasons known perhaps only to the deities running and managing the “suggested videos” algorithm, more or less exploded, gaining over 1,000 views in 24 hours.
Considering most of my other uploads have earned roughly 1-2 views (or, at best, 50-60), breaking 1K feels like a lovely little milestone.
To celebrate, I decided to do some shameless self-promotion.
Here’s a link to it on YouTube:
Here’s a listing I also quickly put together on Bandcamp for it, where it can be purchased for $0.99 or whatever people want:
Perhaps this was just one anomaly, or maybe I hit a good stride in optimizing my videos to get some views on them. We’ll see!
Imagine you’re on a website, reading the news about the recent passing of a major figure in the tech industry.
You discover details about when they passed, the names of some companies they were attached to, and which famous people shared their heartfelt condolences.
It’s only when you reach the end of the article that you realize the author never mentioned their name.
Stories like this have started appearing on sites across the internet. At first glance, they seem like normal news pieces, but when you look closer, you realize something is off with the details.
The reason, of course, is that it was written by an AI.
Do Journalists Use AI?
AI has been participating in the newsroom for longer than we might expect.
According to What’s New In Publishing, newspapers such as the Washington Post have been using generative AI for the past several years to write hundreds of articles.
To be clear: rather than write whole articles, in-house bots have been condensing articles, creating short alerts of high impact news, and even composing the equivalent of Tweets about sports games.
In these cases, their use has largely been like that of a copy editor, helping churn out micro-content at a rate faster than most people could. Most of us have likely never noticed because the use was subtle and, if I may say so, appropriate.
After all, AI can be a powerful tool in the hands of the right person.
Due to its limitations at the present moment, there is a constant need for a human to either supervise the delivered product or set its operational parameters in such a way as to constrain it appropriately without letting it go off the rails.
Where things get tricky is when AI is used to write articles wholesale – either from a prompt and headline provided by the editorial team, or to generate everything (topic, title, content) from scratch.
For anyone who has played around with ChatGPT, we all know what can happen if we give an AI too much room to be “creative” – we end up with a hallucination, as these missteps are often called.
In sum, we can easily get bonkers results.
These mishaps can range from misunderstanding the prompt and producing something irrelevant to making things up – such as imaginary guitar chords.
Of course, terming these missteps hallucinations is a coy euphemism for what’s really going on: the large language model simply making something up because it sounds, well, like something that a person might write.
Will AIs Be Responsible for Fake News?
At the moment, I’m hesitant to call hallucinations and made-up nonsense produced by AIs fake news.
Fake news is commonly used to describe what happens when “news” sites either wholesale make up stories or write them in a way that the takeaway is entirely misleading.
There’s an intentionality in fake news – whether it’s for creating divisions ahead of a political campaign, driving up support among tribal lines, or simply for exploitatively generating clicks and more ad revenue based on sensational headlines.
The AIs used for generating text do not have intentionality, so it is not as if these tools are responsible for everything they produce.
However, their operators certainly might be inclined to produce fake news, and in their hands these tools have the potential to supercharge their output.
A lot of the fake news out there is produced by the equivalent of sweatshops, where a team of numerous writers in a country with loose labour laws and low costs sits around and hammers out as much content as they can – effectively throwing all kinds of nonsense at the web and seeing what sticks.
Whereas the worst-case examples were essentially limited by how fast a person can type, a bot such as ChatGPT could reduce the time to produce a fake article from, say, 30 minutes to 30 seconds.
There are some limitations on most large language models that mean a human editor has to go in and shape the output up afterwards. But there is already an “unethical ChatGPT” available on the dark web that can likely be used to produce more harmful content faster than before.
Will AI Replace Journalists?
The truth of it is that AI is already being used to write news articles and replace journalists.
If it seems like more and more of the content out there is AI-generated, you’re not crazy – and the rate at which this content is published is likely accelerating.
The Guardian recently identified 50 news websites that appear to be composed of entirely (or close enough) AI generated content. Many of the articles are top 10 listicles, but others also contain advice columns and “news articles” that may not be grounded in actual events.
While most of the sites on that list look like quick traffic grabs or the kind of mid-to-low quality content a typical affiliate site offers, more reputable publishers are also posting articles written by machine learning tools.
The Daily Mirror and Express have both begun publishing articles written entirely by AI, though both claim they don’t intend to replace their human writers. German newspaper Bild, however, has begun laying off staff as it transitions to more AI.
Technology providers are also doing little to stop the trend.
We’re not at the point where a bot can replace investigative journalism or original on-the-scene reporting, but the other duties that make up a journalist’s day – such as picking up stories from a wire or commenting on last night’s sports game – are already going the way of the dinosaur.
These fears aren’t hypothetical either. Unlike the television industry, where there hasn’t (as far as we know) been any well-received rival production that was entirely AI generated, the music industry has already had a couple.
Back in April, an AI-generated track that mimicked the vocals of Drake and the Weeknd went viral and was found on multiple streaming services such as YouTube and TikTok before being taken down.
So, will AI be replacing our pop stars and starlets in the near future? In this case, I would go with most likely, and there are a few reasons why.
1) Some music is already being replaced by AI and likely won’t go back to being produced by musicians.
If you’ve ever been put on hold during a call and heard a jingle, or noticed a certain nondescript pop track in the background of an audio ad, that was likely stock music available from one repository or another.
Historically, stock music has been its own thing separate from the kind of tunes we hear being released by record labels. Most stock music is produced by musicians either creating tracks on commission, as freelancers, or just releasing tracks with a copyright that allows it to be used for commercial purposes.
While these compositions tended to be simple arrangements, often only a minute or two in length, they’ve still historically needed someone to handle the composing, arrangement, recording, and all the rest.
Nowadays, that’s already changing. While most stock music might not currently be AI produced, it almost certainly will be in the next year or so.
One reason for this is because there’s no shortage of AI music generators available on the web. Sites such as Boomy, Soundful, and Soundraw, among others, offer users the tools to generate unique AI produced tracks in a variety of genres.
To be honest, some of what is produced by these isn’t half bad and doesn’t sound all that different from royalty free stock music that’s already available out there. The main difference is that where an artist might need an afternoon to put together a stock music track the traditional way, with AI it takes seconds – and each piece is unique.
And so, while you can still go to a stock music site and browse what’s available by genre, length, and all that, paying a subscription fee as is usually the case, a producer could also simply sign up for an app and create it all themselves – cutting out the middlemen.
It’s not hard to see the appeal in that, especially as it cuts down the cost and time for finding the right track. However, it does mean that the folks who have traditionally been producing royalty-free stock music are going to be facing stiff competition from AI tools and tracks alike.
2) AI is already able to produce backing tracks and instrumentation on command.
Similar to how we have tools to produce stock music, there’s little standing in the way of using those tools to create tracks that accompany an artist’s lyrics, or of just having someone sing over tracks spun from an AI tool.
However, the real deal is the ways in which AI plugins and tools are becoming available for the tools that producers are already using to put together their tracks.
If you do a quick search for AI tools available for FL Studio (formerly known as Fruity Loops), you’ll come across a couple of dozen plugins already available – not to mention a whole ton of YouTube videos giving tutorials on how to blend AI into your productions.
The shortcuts these tools offer are pretty obvious – no more writing out a catchy bassline or piano part to accompany your chord progression. In a couple of clicks you can have an AI churn out a few options and pick the one that goes best with what you had in mind.
As someone who creates music electronically (I’ve been using Reason on and off for the past 20 years to self-produce electronic tracks), it’s both a little disheartening and somehow empowering at the same time.
On the one hand, all the work that goes into the discovery process of putting a track together can be automated with little blood, sweat, and tears.
On the other hand, as someone who also has less free time than they used to, the idea of generating the basics with the click of a button is an appealing (if also kinda frightening) shortcut.
In any event, if tools like this exist, people will use them.
3) AI generated music is starting to flood streaming services.
Up-and-coming music producers, indie artists, and DIY folks like myself have been using platforms like Soundcloud, YouTube, and even MySpace back in the day to release music, give it a home, and hopefully find an audience.
It’s tough enough to get any attention even in a niche genre as at any point there’s hundreds, if not thousands, of others also releasing their own tracks.
AI generated tracks are going to create pure pandemonium on these platforms.
Giving anyone the ability to generate a track in under a minute means that on any given day, we’re likely going to be seeing the number of new releases skyrocket.
Sure, most AI tools generally urge users to keep this music for their own projects, but there’s little to stop someone from labelling a track as something they self-produced and then promoting it far and wide.
Worse, while the current tools are a little predictable in their output, and hardly as creative as even your average suburban garage band, AI is driven by machine learning – these tools will learn and become more advanced, likely more quickly than we realize.
I wouldn’t be surprised if by the end of the year, half the streaming platforms become flooded with AI compositions not clearly marked as such – and whether we will be able to even tell the difference will be up for debate.
4) The deepfakes are real, and only going to become more common.
Much like how the recent Black Mirror episode “Joan Is Awful” played with the notion that actors might soon be giving up rights to their voice and image (mirroring an actual concern of SAG-AFTRA in their strike), it’s hardly a remote possibility that musicians will face the same concern.
We know that AI can already be trained to mimic an artist by listening to their recordings and can be used to create convincing new ones (to circle back to the case of Drake, the Weeknd, and John Lennon). So far it’s mostly been either industrious folks on the internet creating deepfakes, or surviving bandmates making the call, but how long before studios begin deciding when AI gets involved?
Let’s not forget that a few years back, 2Pac was “resurrected” as a 3D projection. It won’t require leaps and bounds to have a more autonomous 3D rendering of an artist start to compose new songs.
I strongly suspect it’s only a matter of time before contracts with record companies also start including clauses about who controls ownership not just of an artist’s current discography, but also their likeness and vocal styles.
When (and not if) we get to that point, will the listening public even care if a catchy song was produced by the original artist or an AI bot?
And even if there is an initial outcry, will it last or (like most things) will anybody even care?