Batlamok

AI is making me obsolete as a musician (and I'm helping it)

My experiments with Suno V5.5, prompt engineering for music, and turning a Kreol song into an opera


JP · 9 min read · March 28, 2026

I've been making music for years. Guitar, Ableton, Reaper. Ten instrumental tracks under the name John Fish. I play in a band. I help run a rock festival. I know what a compressor does and when to use parallel processing on drums.

And now an AI can generate a song that sounds better than most of my demos in about 30 seconds.

That's not an exaggeration. That's Suno V5.5, which dropped a few days ago and is genuinely mind-blowing. Full voice cloning (still rough, but it works), better mixing, better instrument separation, and output quality that jumped noticeably from V5. You can literally feed it your own voice and it will sing your lyrics back in it. We're not ready for this conversation.

What Suno Is

Suno is an AI music generator. You give it a text prompt describing the style you want, paste in lyrics, and it generates a full song with vocals, instruments, mixing, and mastering. The output isn't perfect, but it's shockingly close to what you'd get from a decent home studio session.

The free tier gives you a handful of generations. The paid tier lets you do custom mode: you control the style prompt, the lyrics, the structure, what instruments to include, what to exclude. That's where it gets interesting for someone who actually knows music production.

The Problem With Most Suno Users

Most people treat Suno like a vending machine. Type "sad love song" and hit generate. The results are generic because the input is generic.

But Suno's custom mode is actually a production interface disguised as a text box. It has a layered signal system: the Style field controls genre, tempo, key, and texture. The Exclude field suppresses unwanted elements. Meta tags inside the lyrics control section structure, vocal delivery, instrument cues, and dynamics. There's a whole syntax for background vocals, screams, atmosphere, ad-libs, and build-ups.

The problem is none of this is well documented. Suno's official help is sparse. The community knowledge is scattered across Reddit threads, YouTube videos, and random blog posts, half of which are wrong or hallucinated by people who tested once and declared victory.

So I Built a Reference Guide

I'll be honest: this started from a Suno V5 guide I found on Reddit. It was one of the better community resources out there. But after testing it extensively, I found gaps, inaccuracies, and tags that flat-out didn't work. So I rebuilt it from scratch.

I cross-referenced every source I could find: the original Reddit guide, Suno's help center, Jack Righteous's meta tag series, HookGenius, the Suno Wiki, TagASong, sunometatagcreator.com. I tested each tag myself and marked them with confidence ratings: CONFIRMED (multiple sources + my own testing), LIKELY (community consensus but limited testing), or EXPERIMENTAL (inconsistent results).

The result is a 600-line field-tested guide. Here's the full thing:

SUNO V5 CUSTOM MODE REFERENCE
Field-Tested Guide · Cross-Referenced Against Suno Help Center, Jack Righteous,
HookGenius, Suno Wiki, TagASong, sunometatagcreator.com

PART 1: HOW SUNO PROCESSES YOUR PROMPT

Suno uses a layered signal system, not a command parser. Tags are weighted
probability signals, not guaranteed instructions.

Layer                   Location                   Controls
Style of Music field    Top style field            Genre, tempo, key, texture
Exclude Styles field    Below style field          What to suppress
Meta Tags               Inside lyrics field [ ]    Section identity, vocal delivery
Lyric Writing           Body of lyrics             Phrasing, hook structure
Formatting Symbols      Inside/around lyrics       Emphasis, stretching, layers

The #1 rule: Tags shape probability. Generate 4-6 variations minimum.
The #2 rule: Test one variable at a time.

PART 2: THE STYLE OF MUSIC FIELD

Format: [Primary Genre], [Secondary Genre], [Mood], [Key Instruments], [BPM],
        [Key/Scale], [Texture]

Rules (CONFIRMED):
- Front-load the most important tags (early descriptors carry heaviest weight)
- Maximum 2 genre anchors (dominant genre first)
- Maximum 3-4 instruments named
- Maximum 2 mood/energy descriptors
- 4-7 strong descriptors outperform overloaded prompts
- BPM, key, and genre belong HERE, not in the lyrics field

PART 3: THE EXCLUDE STYLES FIELD (Pro/Premier only)

One of the most powerful controls. Tells Suno what NOT to do.
- Solo instrument isolation: sandwich method + expanded Exclude list
- Genre purity: exclude neighboring genres that bleed in
- Vocal control: "no female vocals" or "no male vocals"

PART 4: GENRE-SPECIFIC GOTCHAS

- "punk" causes shorter songs — use "post-hardcore" instead
- "progressive metalcore" triggers power-metal shredding
- Suno adds its own dynamics unless you say "no lulls, full intensity"
- "emo" works best paired with "emotional"
- "cinematic" fights with specific mood tags

PART 5: LYRIC FORMATTING SYMBOLS (CONFIRMED)

( )         Background/backing vocal layer (SUNG, not instructions)
[ ]         Structure and production cues (NOT sung)
- Hyphen    Stretch syllables: lo-o-o-ove
ALL CAPS    Louder, 1-3 words max per section
...         Pauses and trailing effects (NOT sustain)

CRITICAL: Parentheses are NOT for instructions. They are sung.

PART 6: SONG STRUCTURE TAGS (CONFIRMED)

[Intro] [Verse] [Pre-Chorus] [Chorus] [Post-Chorus] [Bridge] [Outro]
[Build] [Drop] [Breakdown] [Break] [Instrumental] [Solo] [Interlude]
[Final Chorus] [Chorus x2] [Outro: Fade out] [Beat switch]

PART 7: VOCAL TAGS (CONFIRMED)

Gender:   [Male Vocal] [Female Vocal] [Duet] [Choir]
Style:    [Whisper] [Spoken Word] [Rap] [Falsetto] [Belting] [Growl]
          [Operatic] [Screaming] [Autotuned delivery] [Raspy lead vocal]
Emotion:  [Crying voice] [Angry tone] [Vulnerable] [Defiant] [Intimate]
Effects:  [Reverb] [Delay] [AutoTune] [Distorted Vocals] [Vocoder]

PART 8: INSTRUMENT TAGS (CONFIRMED)

Keyboards:   [Piano] [Rhodes] [Organ] [Synth] [Analog Synth] [Moog Synth]
Guitars:     [Acoustic Guitar] [Electric Guitar] [Distorted Guitar] [Bass Guitar]
Drums:       [808s] [Drum Machine] [Blast beats] [Double bass drums]
Brass/Wind:  [Saxophone] [Trumpet] [Flute] [Harmonica]
Orchestral:  [Orchestra] [Full Orchestra] [Choir Vocals]

PART 9: META TAG STACKING (pipe syntax)

[Verse | raspy lead vocal | overdriven guitar | light reverb]

Rules: Lead with section label. 3-4 modifiers is sweet spot (max 6).

PART 10: GENRE RECIPE STACKS (copy-paste ready)

EDM:     [Drop | sidechained synth bass | sub drop impact]
Pop:     [Chorus | anthemic chorus | stacked harmonies | wide stereo pads]
Trap:    [Drop | 808 sub bass | off-beat hi-hats | snare rolls]
Metal:   [Breakdown | heavy distortion | half-time | palm-muted guitars]
Country: [Chorus | anthemic chorus | pedal steel guitar]
Gospel:  [Chorus | SATB | stacked harmonies | choir vocals]
Emo:     [Chorus | emotional | stacked harmonies | driving]

TAGS THAT ARE FAKE (DO NOT USE):
[Callback: Chorus melody]  [Hook Loop]  [Hook delay]
[Band drop-out before final chorus]  [Emotional release]

The guide covers a lot more (tempo/key reference, atmosphere tags, special techniques, slider settings) but that's the core. The key insight: most "advanced Suno guides" online are full of unverified tags that someone tried once and declared as features. I stripped all of those out and kept only what consistently works.
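Those fake tags are easy to reintroduce by accident when copying from other guides, so it's worth making the check mechanical. This Python sketch is my own illustration, not part of any Suno tooling: it scans a lyrics block for bracketed tags and flags the fake ones from the list above, including when they're buried in a pipe stack.

```python
import re

# Tags the guide flags as fake (hallucinated by other guides;
# Suno either ignores them or sings them as lyrics).
FAKE_TAGS = {
    "callback: chorus melody", "hook loop", "hook delay",
    "band drop-out before final chorus", "emotional release",
}

TAG_RE = re.compile(r"\[([^\]]+)\]")

def lint_lyrics(lyrics: str) -> list[str]:
    """Return a warning for every fake tag found in the lyrics block."""
    warnings = []
    for match in TAG_RE.finditer(lyrics):
        tag = match.group(1)
        # Pipe-stacked tags are checked segment by segment.
        for part in (p.strip().lower() for p in tag.split("|")):
            if part in FAKE_TAGS:
                warnings.append(f"fake tag, remove it: [{tag}]")
    return warnings

sample = "[Chorus | stacked harmonies]\nLa la la\n[Hook Loop]\n"
for warning in lint_lyrics(sample):
    print(warning)  # flags [Hook Loop], leaves the chorus stack alone
```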

Then I Built a Claude Code Skill

Because I have a problem with automating things, I turned the guide into a Claude Code slash command. In Claude Code, you can create custom skills (commands) that act as specialized prompts with full access to your files.

My skill reads my lyrics file and reference guide, then outputs three copy-paste blocks ready for Suno:

1. Style of Music field:

Symphonic opera, cinematic orchestral, D minor, 66 BPM,
full orchestra, operatic tenor, timpani, dramatic, grandiose

2. Exclude Styles field:

no electric guitar, no drums, no electronic, no autotune,
no rap, no pop, no rock, no synth

3. Tagged Lyrics field, with pipe-stacked production cues on every section:

[Act I - Verse 1 | operatic tenor | sparse strings | piano | intimate]
[legato]
Létan  pé passé zordi pli vite pli vite
Létan  pé allé
[rising orchestral tension]
Dans to lesprit to en orbite orbite
et to planéte àcôté
[fortissimo]
RÉAZIR VITE
Pa azir brite

The skill follows all the rules from the guide automatically. Section tags on every section. No verbose tags that get sung as lyrics. Parentheses only for backing vocals, never for instructions. Emotion tags on their own line before the lyric. ALL CAPS limited to 1-3 words for impact. It detects the language, mood, energy arc, and implied genre from the lyrics, then picks the right BPM, key, and tag presets.

I just type the slash command in Claude Code, point it at my lyrics file, and get production-ready Suno prompts back. No manual tagging.
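The real skill is a Claude Code prompt, not code, but the assembly logic it follows can be sketched in Python. Everything below (the function name, its parameters) is a hypothetical illustration of the guide's rules: front-loaded genre anchors, BPM and key in the Style field rather than the lyrics, and pipe-stacked section tags capped at a few modifiers.

```python
def build_suno_blocks(genres, mood, instruments, bpm, key, excludes, sections):
    """Assemble the three copy-paste blocks from song metadata.

    sections is a list of (label, modifiers, lyric_text) tuples.
    """
    # Style field: max 2 genre anchors first, then key/BPM,
    # then up to 4 instruments and 2 mood descriptors.
    style = ", ".join(genres[:2] + [key, f"{bpm} BPM"]
                      + instruments[:4] + mood[:2])
    # Exclude field: one "no X" entry per unwanted element.
    exclude = ", ".join(f"no {e}" for e in excludes)
    # Lyrics field: section label leads each pipe stack, 3-4 modifiers max.
    lines = []
    for label, modifiers, text in sections:
        lines.append("[" + " | ".join([label] + modifiers[:4]) + "]")
        lines.append(text)
    return style, exclude, "\n".join(lines)

style, exclude, lyrics = build_suno_blocks(
    genres=["Symphonic opera", "cinematic orchestral"],
    mood=["dramatic", "grandiose"],
    instruments=["full orchestra", "operatic tenor", "timpani"],
    bpm=66, key="D minor",
    excludes=["electric guitar", "drums", "electronic", "autotune"],
    sections=[("Act I - Verse 1",
               ["operatic tenor", "sparse strings"],
               "Létan  pé passé zordi pli vite pli vite")],
)
print(style)
```

Run on the opera metadata above, this reproduces the Style field from the example almost verbatim, which is the point: the skill is just these rules applied consistently.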

Writing Kreol for AI Pronunciation

Here's something nobody else is dealing with: getting AI to pronounce Mauritian Kreol correctly.

Kreol isn't a language Suno was trained on. If you write standard Kreol orthography, the AI mangles the pronunciation because it tries to apply French or English phonetic rules. So I developed my own way of spelling Kreol phonetically so that Suno produces the closest possible pronunciation.

Things like doubling vowels for the right length, using accents to guide stress, spacing words so the AI doesn't merge syllables. It's not linguistically "correct" Kreol, it's Kreol spelled for AI consumption. Trial and error over dozens of generations until the output sounds like someone who actually speaks the language.
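To give a flavor of what "spelled for AI consumption" means, here's a deliberately tiny Python sketch. The two substitution rules are illustrative stand-ins, not my actual respelling table, which exists only as trial-and-error notes from dozens of generations.

```python
# Illustrative stand-in rules, NOT the real respelling table.
RESPELL = [
    ("an", "ahn"),  # keep the nasal vowel from collapsing into English "an"
    ("ou", "oo"),   # steer away from the French/English "ou" ambiguity
]

def respell_for_ai(word: str) -> str:
    """Apply naive left-to-right substitutions to one Kreol word."""
    out = word
    for src, dst in RESPELL:
        out = out.replace(src, dst)
    return out

print(respell_for_ai("Létan"))  # → Létahn
```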

The Kreol Opera Experiment

I had lyrics from a song called "Rézin" by our band K'Fouyaz, written by our drummer at the time. The original style prompt was:

an atmospheric fusion of progressive metal, ambient pop, and R&B. Dynamic contrasts: soft piano or electronic or synth passages that build into explosive, heavy guitar sections with polyrhythmic drums and deep, emotional vocals. The singing should shift between soulful and intense, screamed or distorted moments. The mood is cinematic, ritualistic, and deeply emotional, blending beauty and aggression with layered textures and live in studio

Good results. But then I thought: what if I pushed it somewhere completely different?

I used the Claude Code skill to reformat the same Kreol lyrics as a symphonic opera. D minor, 66 BPM, full orchestra, operatic tenor, timpani. Excluded all electric guitars, drums, electronic, synth. Restructured the song into three acts with an overture and curtain call. Added orchestral dynamics: pianissimo to fortissimo, string swells, brass fanfares, SATB choir on the choruses.

The result was a Mauritian Kreol opera. An AI singing "Létan pé passé zordi pli vite pli vite" over a full orchestral arrangement with choir harmonies and timpani rolls. It sounded ridiculous and genuinely moving at the same time.

Same lyrics. Same language. Completely different universe of sound. That's the power of knowing how to prompt the tool.

What I Actually Learned

Suno is a production tool, not a toy. The gap between a default generation and a custom-mode generation with proper tagging is enormous. Same lyrics, same AI, completely different output. The knowledge of how to prompt it is the instrument.

The Exclude field is everything. Most people never touch it. But telling Suno what NOT to do is often more effective than telling it what to do. Want a clean acoustic track? Don't just say "acoustic." Exclude electric guitar, synth, drums, electronic. The sandwich method (instrument name at start and end of the Style prompt) plus a comprehensive Exclude list is how you isolate instruments reliably.
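The sandwich technique is mechanical enough to script. A minimal Python sketch (the helper name and example values are mine, not from Suno):

```python
def sandwich_style(instrument, descriptors, excludes):
    """Build a Style prompt with the target instrument at both ends,
    plus a matching Exclude list, per the isolation technique."""
    style = ", ".join([instrument] + descriptors + [instrument])
    exclude = ", ".join(f"no {e}" for e in excludes)
    return style, exclude

style, exclude = sandwich_style(
    "solo acoustic guitar",
    ["intimate", "fingerpicked", "70 BPM"],
    ["electric guitar", "synth", "drums", "electronic"],
)
# style → "solo acoustic guitar, intimate, fingerpicked, 70 BPM, solo acoustic guitar"
```

Repeating the instrument at both ends exploits the front-loading weight twice; the Exclude list then closes off the escape routes Suno would otherwise take.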

Tags are probability, not commands. This is the number one thing people get wrong. Suno doesn't execute your tags like code. It treats them as weighted probability signals. That's why you generate 4-6 variations minimum. Serious creators go 6-10+. You're shaping the probability space, not writing sheet music.

Genre names have side effects. "Punk" makes songs shorter. "Progressive metalcore" triggers power-metal shredding when you wanted thick rhythm guitars. "Emo" works best paired with "emotional." These aren't documented anywhere official. You learn them by generating hundreds of variations and noting patterns.

The musician's advantage is real. Knowing music production makes you better at prompting Suno, not worse. Understanding what a pre-chorus does, knowing that palm-muted guitars create a specific texture, knowing when to use half-time feel. That knowledge translates directly into better tags and better results. The AI handles the execution. The human handles the taste.

Am I Obsolete?

Half joking with the title. But honestly, the question is real.

For demo-quality production, yes. Suno can produce in 30 seconds what used to take me a weekend in Ableton. For quick iterations on song ideas, for testing arrangements, for hearing what a genre shift would sound like before committing hours to it, Suno is faster than I am. And with V5.5's voice cloning, it can now do it in my voice.

For the thing that actually matters? No. The lyrics are still mine. The creative direction is still mine. The decision that this Kreol song should become an opera came from a human brain that thought it would be funny and interesting. The AI didn't have that idea. It executed it.

The tool changed. The craft didn't. Anyway, here's the song.