Dataset columns:
content: string, length 1.02k to 10k
untruncated_content: string, length 1.02k to 557k
category: string, 10 classes
date: string (date), 2025-12-01 02:18:43 to 2025-12-17 00:00:00
url: string, length 32 to 213
metadata: string, length 2 to 106k
Understall "I don't know how you can keep your hands to yourself." Seiya was finally beginning to like drinking with Haruka when she said that. "What?" The straw he was chewing on fell out of his mouth when he responded. Haruka was shameless when she wanted to be. "Well, if it were me who could—" She mimed Seiya's transformation sequence instead of saying 'grow a penis' because they didn't quite have that relationship yet. "I would've been all over—." "Woah. Wait. With them? They're basically family. I'd—. No... Never." Haruka's eyebrows shot past her bangs. "You have." Seiya dropped his head onto the sticky bar table. He didn't want to have this conversation right now. No, no, no. Just, no. "When? The tall or short one? Both? Was this while you were all over our Princess or after?" Seiya kept his head on the table, because if she could see him, she'd know. She'd never, ever, shut up about his sex life. He hated Earth. The last time he'd used his magical human-boy penis had been in a public restroom. It probably wasn't the kind of activity Haruka had in mind when she'd asked him. Taiki was an experimental fellow (not-so-secret pervert), and Seiya just liked touching things in places he shouldn't touch them (normal pervert). The combination had led them to weird places. They hadn't considered anything public before this, and Seiya wouldn't say he was an exhibitionist, per se... The possibility of being caught nixed the fun. His career would never allow such risks. However, Seiya trusted Taiki implicitly. There was very little Seiya wouldn't do at Taiki's command these days. Seiya pulled his winter coat across his chest, hoping no one would notice him as he made his way to the men's room. It was in the parking level of a mostly defunct mall, empty except for those in the know. Or so he'd been told. His heart started thudding in his chest the closer he got, and he found himself adjusting his pants before he stepped inside. Taiki had told him it'd just be them, but there was a possibility he would run into a perfect stranger. When Seiya pushed the door open, its hinges squealed in protest. If anyone decided to follow, he'd hear them before they saw him. All he could smell was antiseptic. The floor was nearly spotless, aside from the faintest dusting of road salt. Whoever had cleaned the restroom in the morning had left the cleaning supplies on the sink. There simply weren't enough visitors to bother with hiding human effort from them, and those who did need the restroom obviously did not care about the presence of (gasp) window cleaner. The bathroom seemed empty at first glance. After turning around the partition, he saw that no one was standing at the urinals. And while all the stall doors were closed, none seemed to be locked... except one. Seiya could just barely make out a pair of Derby shoes under one of the stalls. Once he saw the shoes, Seiya shoved his way into the stall to the left of them. On Earth, everyone seemed to know his name. It was nice to pretend he could be nobody, touching someone whose name he'd never know. These shoes were all he knew about the other person. Maybe they were a stiff businessman, or they just liked to look fancy. Someone who'd never heard of him, or his band, or his droves of fan girls. Seiya was wearing ratty sneakers. They were dirty even before he'd worn them out in the dreary weather they'd been having. The laces were loose and frayed, trailing behind and catching all the slush. He looked like a mess. Maybe the person on the other side was as much of a mess as he was, and they'd worn the shoes in hopes of impressing an interviewer.
In any other situation,he’dask. When hesat down, heunzipped his jeans and rolled them half down his thighs.Seiya looked down at his crotch and shuddered at the sight of his damp briefs.Hedidn’tdareto pullthem down.He just rubbed acrosstaut cotton, circling the wet spot. Then he waited. His eyes drifted down to the gap between the stall and the floor.On the wall, written in black sharpie, wasknock4 fun. It was the kind of graffiti he wouldn’t pay attention to in any other situation. It was locker room talk that’d come out of completely, 100% heterosexual jocks. He thought this stuff was a joke. He still thinks it is, but someone else had clearly been in Seiya’s spot, waiting to do the exact same thing. How long did Taiki know about this place before they’d brought it up? Have they... with other people...? Probably not. Taiki wasn’t that adventurous. Just... strange. They'd probably just seen something they shouldn't have, or read something in a naughty magazine. TheDerbyshoesedgedtowards the divider, stopping whenthey peaked out.They were dirtier thanhe’dfirstthought;saltand melted snowturning shiny black into a matte grey.The winter weatherhadn’tbeen kind to them.Seiyafelt unfocused and fuzzywhen he nudged them with his own sneakers, scuffing them just a little more. They tapped once more. He tapped back. The stranger stood up, and their jeans dropped to the floor. The belt looped around it clattered as the buckle hit the tiles. Seiya could see theirshadowon the floor. Their shoes, their legs, their arms, theirhanddrifting closer to their midsection and wrapping aroundsomething solid. They shuffled until they faced the divider, letting him see as much as the shadows would let him. Seiya swallowed.He could only see the suggestion of it, but he could hear it. The soft huffing, the sliding sound when your hands are dry against it.A single strand of pre-cum hits the tile underneath them, and Seiyacan’ttake his eyesoffit. There itwas;proof that this was really happening. Seiya stuck his hand under the divider, holding his palm up.The ring on his finger caught the light of the light fixture above them. Their pace picked upfor a second before letting go of their cock. The strangergot to theirknees, shuffling so theirstomachwasagainst the divider. Seiya could see muscled thighs, littered withscarshe’dknown sincethey’dbeen bleeding.In their human forms, Taiki was softer aroundthelegs. It’shard to pretendit’sa stranger when you know their body so well.He wanted to pinch the skin andpullit.Maybe he’dfind something newunderthefluorescentlighting. Seiya joined them. The pants he’d kept around his thighs is pulled down to his ankles, and he rolls down his briefs too, just for the hell of it. He might as well show off what he's packing. He grunted as he sat on hishaunches.Thepantsaround his ankles really made it hard to move, huh.His knees popping sounded impossibly loud in the quiet stalls.He briefly skimmedthe inside of Taiki’s thigh, squeezing the soft flesh.Then his fingers continued tothecrux. And then... Taikireached their own hand under the stall, but Seiya pushed it back. If anyone even brushed against hisdickright now,it’dbe over.No one wanted to deal with him afterhe’dhad an orgasm. He got all soft and (occasionally,briefly) weepy. Seiyacurls his fingersagainst them,marvelingat the contrast between thesilver ring he wearsandTaiki’s flesh.He gives it a squeeze,smiles when he hears them whine on the other side. 
“Sorry,” they whisper.There’sa quavery undertone in their voicethat makes Seiya feel wild. He gives them an experimental tugbefore stumbling his way into a steady rhythm.Taiki’s started to pantalready.Their hips try to follow his hands, fucking his loose fist. For a short while, they both lose themselves in it. It's a familiar push and pull that relaxes him despite the public setting. It's something they've been doing for a long time together. Seiya looks down at his hand, and admires the cock sitting in his palm. He doesn't see it very often. The times they fool around with Taiki's pants off are few and far between, and they have the slightest preference for their other form. It's pretty. Taiki's just so pretty. The hair is neat and trimmed. It's larger than he is by a decent amount, but thinner. Seiya takes a second to collect the pre-cum beading on their tip. Taiki doesn’t warn them before coming. They usually grab him, a shoulder or a shirt, lost for words. With a wall between them, Seiya doesn't know what's happening until he has white dripping down his fingers. He slowly pulls for a second longer, only stopping when Taiki starts hissing in overstimulation. Seiya scoots closer to the divider and pulls his hand to his own crotch. Taiki’s spend drips onto him. He hopes they see it, a pearl of white sitting on top of the flushed, angry skin. The low, tortured sound he gets in response puts a smile to his face. He cups his fist around his cock, watching the way his length disappears under his hand. It almost looks like there's nothing there at all. There’s enough lubricant to make jerking himself off quick and (almost) painless. Seiya’s own cum mixes with Taiki’s in less than 20 seconds. It spills out of his fist and dribbles onto the floor. They should clean that up... Seiya hastily wipes his hand off on a wad of single ply toilet paper and pulls his pants on in a daze. He’s leaving sticky hand prints on his clothes, but finds he doesn't care. Shit, his hand still looks like a mess. He gives the gap between the divider and the floor one last look, and right under theknock 4 fungraffiti is a puddle.Impulse drives Seiya to stick the toe of his sneaker in it. Nothing magical happens, and now he hascumon his shoes.There’sno way to tell ifit’sTaiki’s (embarrassing) or his (embarrassing and loser-ish). Well. He feels sticky and a little gross when he walks out of the stall, and his legs wobble from being on the floor for so long. Actcasual. He goes over to the sinks, eyes on the window cleaner while he lets the tap run. Hedoesn’tput his hands under the water.A few seconds later, Taiki joins him,nearly shoulderto shoulder. “Nice seeing you here,” Seiya says, casually. “You’re not very good at washing your hands,” Taiki responds. They point down at the milky gloss on Seiya’s fingers and streaked across his palm. God, he hates that Taiki looks put together right now. There’s not a single hair out of place or anything. They've got a long coat that makes them look even taller, and the stiff fabric accentuates their shoulders... and it all looks good, despite being on the floor for ages. He grumbles. “I’ll fix that for you,” andsticks his fingers in his mouth. (It didn’t matter that it wasn’t just Taiki’s, just that some of it was.) The sight was fucking nasty, not that Taiki seemed to hate it. They exhale sharply out of their nose and stare at his lips like they wanted to bite him. They pull him in for a brief kiss, and Seiya melts into them. No biting involved, but now they both have dick breath. 
Nice. "I drove here," Taiki says against his lips. "I'm assuming you don't want to take the bus home." He pulls away from them after a second and digs his head into their shoulder, breathing in their scent. "Yeah, that'd be cool." Seiya doesn't quite get weepy, but he leans against Taiki for a long time. Taiki eventually gets the message, and draws him into their arms. They don't let go until the door hinges squeak, letting them know someone else was coming in.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75744656
{"authors": ["MarineCathedral"], "language": "English", "title": "Understall"}
My everything One day, on the top of an old mountain, something happened. What happened chichi, what happened??? Heh... You all are not ready for the story im about to tell you... It was around 7am. Nature slowly awakened from its slumber and went about its business, the sky was so bright it could turn alive and start singing niccori survey team at any given time, and the rustling of the plants betrayed the unexpected visit of the spring wind. However, something disrupted this peaceful start to the morning. Between two rocks, under a huge toona sinensis, and next to the most dehydrated river in the whole region, a little ball appeared. What?? A ball?? What kind of story is this... But listen! It wasn't just any ball... This ball was medium small: it could fit in the hand of a high schooler. And fluffy!! Soooo fluffy, im confident this ball could win against the fluffiest hamster, dog, cat, bunny, your partner, a foot, your partner's feet, whatever you want in a "Who's the fluffiest of them all" contest. But mind you, that wasn't the ball's weirdest attribute. This ball was.... Purple (the #BB88EE way). With a smug cat face on it (like.. this --> (–⩊–)). It had two cyan lines similar to pencil strokes I could've drawn with my mouse during class, one between its eyes and one descending along the left side of its.. face?... .... Completely stupid, isn't it? Anyway... The ball quietly spawned in the middle of nothing, around one meter from the ground, and levitated for two solid seconds before landing all softly on the grass (peter how are you doing that). ... So that's it? A purple ball?? My balls could be purple too after a long night with your mo- oi oi that's rude. Back to the intriguing sphere please. Strangely enough, the ball had a consciousness. It could think, move, but not speak (yet). Yeah, have you ever seen a ball with a mouth? That's what I thought. The cute orb looked around and started to feel for the first time what it was like to exist. The caress of the wind on its surface, the peaceful atmosphere of this place, everything was so... Alive. Since the ball was just born, it didn't have any life goal. To be able to feel its weight pressing on the land was already enough. Time passed, and the ball progressively explored each corner of the mountain as the years went by. Day after day, month after month, year after year. The mountain couldn't have been better, dressing in its finest clothes depending on the weather (diva, I could never) and seeming to show off its rain coat, snow scarf and flower cardigan at every given opportunity. Our little thing, on the other hand, was kind of lost. Well, it was indeed lost. But we're talking here about the type of lost that goes... Deeper within. 18 years passed, and no goal. No ambition. No friends either. The animals were too scared to approach him (switching to he/him pronouns from now on bc the poor little thing has a consciousness). One time, a passing fox even called him a "zest fest". Can you believe that?? What did he do to deserve this? (everything). Actually, there is only one memory that the sphere cherishes. At that time, the ball must have been around 15 years old, and as usual, he was feeling pretty lonely. Everything happened in summer, in the middle of a cloudy yet hot day. "Another great time watching the sky," he sighed thoughtfully in his little hea- uuh, ball? Brain? Ball brain. You all must be wondering. "IDIOT WHY DON'T YOU MOVE FROM THIS MOUNTAIN" Savannah, slow down. He indeed understood very early (precocious child ahh) how to wal-..
roll, but he also understood that moving out of his comfortable and convenient birthplace without knowing anything about the outside world was a really, really stupid idea. Purple ball was not afraid, he was just aware of the potential dangers. "Maybe i should wait a little –⩊–" was his excuse. Anyway, back to the memory. Our sphere was lost in his thoughts, when suddenly... A giant, enormous, menacing shadow appeared next to him. Something big was approaching, and im not speaking about the shit I've been holding since dawn. As the thing gradually got closer, little ball was staring at it, a little anxious, but his curiosity took over every other feeling. Yet he suddenly closed his eyes in a little moment of panic. When he opened them again, he found a being. Staring at him. Rude!! Dont do that, kids. "I've never seen this kind of animal in my whole existence," thought our main character. Considering the ball was on the ground, imagine a light pink, very short haired person, around 15 years old too, face planted as if glued to the ground, watching with sparkly eyes a random purple sphere whose colors didn't match the environment at all. What a bizarre scene... (BIZARRE??? JOJO NO KIMYOU NA BOU-) "WOAHHH!!! A BALL!! Hey, it looks pretty cute..." she exclaimed. "Im gonna take it home~~.." The energumen slowly reached their hand toward the "cute" thing, and when their finger was 2 centimeters from it, they suddenly flinched and stepped back holding their hand. "AHH!! IT BIT ME" Indeed, our weird glob had opened his "mouth" for the first time, purely in self-defense. "What the... How is that even possible?? Is that a toy? Stupid new products they create these days, man." Visibly annoyed, the intriguing character decided to sit on the grass next to the ball, watched the clouds for a good minute, and started talking to themselves. They seemed so lonely in their everyday life that speaking to a "toy" was apparently better than nothing. She introduced herself as Mizuki Akiyama. She claimed to like cute things, but said that other people at her school kept trying to ridicule her for it. Purple sphere was listening to her ramble, not knowing how to interact. (Mizuki, if you think it's normal to speak to a random purple ball you just found at the peak of a mountain, you have the survival instinct of a potato). She also explained how she got here. "I was searching for a place to think about everything and anything, you know? If I had friends to talk to, I wouldn't have to visit this mountain and speak to a wannabe pokemon." She giggled mischievously, looking at the purple thing. "Who are you calling a pokemon" is what he wanted to say, but he wasn't feeling ready to speak yet. Hours passed, and the sky began to darken quickly. "Ahhh.. it's time to go home now. My big sis will be worried." Mizuki started walking away, then suddenly turned back in the direction of the ball. She spoke tenderly, but with a hint of sadness in her voice. "Being lonely together wasn't so bad, i hope ill see you around again someday. Bye~ ☆" She then slowly disappeared down the slope, fading into darkness as quickly as she had appeared. What a melancholic individual. Ball thought about that interaction a lot. Not only because it was the first and only time he had spoken to this weird species, but also because it opened his eyes to a brand new catalogue of possibilities. Years passed, and as he thought and thought, one conclusion always came to his mind. It's decided!!
He will explore the world and find companions to live his life with. Back to the present (he is 18). After thinking about multiple ways to start his journey, he finally decided to set off on a sunny morning. This is the big day! (Snifff they're growing up so fast...) Carefree and thirsty with curiosity, zest fest started rolling (no little ball roll back to kitchen). At that moment, he was at the peak of the mountain; he hadn't even moved seven meters before he found himself heading toward a slope and suddenly started rolling, rolling, rolling... ("MOU IKKAI, MOU IKKAI, WATASHI WA KYOU MO KOROGARIMASU TO ") The ball went past trees and rocks, and didn't miss hitting a single poor plant or pebble as he went faster and faster down the slope. "Fu fu fu, this is kind of fun..." he thought as he went so fast he could have broken the sound barrier and created impact frames on the spot. Wait, how does he know our language??? Is that what you're worried about? We're talking about a purple sphere that appeared out of nowhere. Unfortunately (or fortunately), all good things must come to an end. At the end of the slope, a giant rock was waiting. The ball didn't manage to stop in time- BOOM!!!!!!! What a great way to start a big journey. Dizzy and kind of hurt, the curious ball continued slowly along his trajectory, making sure not to fall this time. He met multiple animals on the way. Well... "Met" was a strong word, since all the species saw him as an outcast. "What is that thing doing here??" a squirrel murmured anxiously. A cocky doe added, "It's not welcome here. Seems like it's planning to leave the mountain. It finally understood its place." A bird flapping above them all finally said, "Honestly, he always scared me a little..." (Bitch how) Little ball pretended not to hear anything, determined to find people he could call his friends, people who could accept the true him. As he descended the mountain, the sky darkened and thousands of stars appeared one by one to illuminate the way, alongside the extravagant blue moon (no Caine, hold back). He couldn't be very far from the bottom of the mountain. Finally! He was tired of rolling over sneaky pebbles and encountering contemptuous animals. He was about to call it a day and hide inside the ferns to sleep when something caught his attention. Lower down, about a hundred meters below him, floating lights were making their way along a winding path between the trees. What?? Little ball, arent you tired... There were around a dozen yellow lights, probably from lanterns, revealing... People!!!! And horses. Little ball didn't know what horses were, but he definitely knew what the other species was. "The same as that person i met 3 years ago!!!" he thought with excitement. He could've ignored it and gone to sleep right away, but we know our little sphere is a curious one. Without thinking twice, he started to tail them while hiding behind trees, all while being extremely careful not to be seen. He doesn't know yet that... This action will have consequences 𓂃 ࣪˖ ִֶָ𐀔 TO BE CONTINUED~~
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75745276
{"authors": ["Nogirui"], "language": "English", "title": "My everything"}
Things Change The festival was fun. Town all dressed up in new colors, peppered with attractions and stands. Streets overflowing with people enjoying their day. Kris had fun. They enjoyed spending time with Susie and Noelle. Laughed at the way Susie tore into every snack they grabbed. Smiled after the thousandth time they scared Noelle today. It was all just a great time. Kris didn't mind that Susie and Noelle were going together. Kris was fine just happening to tag along. Kris felt okay with being a third wheel to their childhood friend and school bully. They were happier together, anyways. Susie smiled more with Noelle, Noelle laughed more with Susie. They just clicked. Clicked better than what you get from growing up together year after year after year. Felt more than what comes from the weight of years, from all the things unsaid but known from familiarity alone. Clicked better than what you get from almost dying together time and time again. Loosened up more than laughing at insurmountable odds before winning anyways. Made more moments than adventuring through an impossible world. It was for the best. They would be happier this way. The moon was high by the time Noelle had to return home. The grand iron gate stood before the trio, high and mighty. Susie took a step beyond the threshold. "Um...Sorry Susie, I don't think Mom would...be okay with that. After...what happened yesterday." Susie stepped back quickly. "Oh, uh, yeah. Sorry 'bout that." "No need to apologize! It's...you didn't do anything wrong. It's just her being...unreasonable." "Pft, yeah. Old women, eh?" The skin by Noelle's eyes scrunched up as she laughed. "Fahaha, yeah!" "Anyways, I really need to get going...it was fun, though! Bye Susie!" "Bye!" Susie's rough purple hand fluffed Noelle's hair, the doe flushing at the touch. "OkIreallyreallyhavetogosorrybye!" And with that, Kris and Susie were alone. Kris knew they could follow if they wanted, but...why bother at this point. "Guess it's just us now, huh?" Kris nodded. A devilish grin alighted on Susie's face. "So, what now?" Just a couple hours ago, Kris would have loved to do just about anything. But now? "You go home." The dinosaur's shoulders slumped a slight before she picked herself back up. "...'Spose so, it's late. Gonna go with me?" "No." And then there was one, Kris, just Kris and a full moon. One step, two steps, three steps, four... ...but Kris didn't want to go home. Last night...there was probably going to be a repeat of that anyways. They didn't want to come home to that. Susie didn't want Kris seeing her home. Noelle...no. Ralsei...Kris wouldn't be able to get into the school. As Kris contemplated a destination, the feeling of grass underfoot alerted them that they had already chosen one. Bordered by deciduous trees in blazing tones, Kris took in the autumn air. The path was long and took a few twists through the woods, but it was one remembered well. Across the small bridge over the stream, through the marked trees, was the shelter. A rusty red metal door carved into a hillside, a keypad uncovered by a loosened panel. Within, countless acrid memories. Yet also within, the one friend Kris could always count on. With a satisfying crunch of leaves, Kris stepped into the small clearing around the shelter, only to find an unusual sight. Sitting in front of the door was a large white cat in a band tee and jeans, bangs forming a thin coat over her eyes. "What are you doing here?" Kris questioned. Catti's tail twitched as her head jerked to face Kris. 
"Sitting." "Why here?" "Quiet. Far from noise." "It's late. No noise." "A problem? With my position?" "No." "Then why. Do you inquire." "Didn't expect you here." Catti stood up, beckoning Kris to come closer as she indicated the keypad. "New. Unearthed." Kris was grateful that they always spoke in monotone. Otherwise, they would need to feign curiosity. "Weird. What do you think happened?" "Great impact. Dislodged. Revealed." Kris nodded. "Makes sense." Especially considering that was exactly what happened. "The glyphs. Meaning unknown." "Probably says who has what code." "...Sensible. Theories?" First, Kris indicated the pine tree. There was only one reasonable explanation for this one. "Carol." Then, the police badge. The current chief would make sense. "Undyne." And finally, the Delta Rune. It only seemed intuitive that the priest would have it! "Alvin." Catti's head tilted down for an instant before bobbing back up. "Agreed." A silence hung in the air for a few seconds. A small gust of wind chilled Kris through their sweater and bristled Catti's fur. Amber and ruby eyes lingered on the keypad before shifting to each other, and then the grass. Catti made a dull thud as she sat down, Kris following shortly after. The birds had fallen asleep, Kris presumed, going off the lack of song. So too had the town, and so too had the sun. For a moment, Kris supposed that the entire world outside this clearing had fallen asleep. They recalled staying up late with other friends in the twilight years of Dess' presence. It was always nice staying up beyond the hours one was meant to. In years past, a certain exciting, forbidden feeling filled their heart, amplified by the cover of shadow allowing things parents never would. Yet the night had been explored, and now was known well. Any sense of novelty had long faded as silence became not permission to rebel, but permission to think, to introspect, to mourn, to contemplate. Possibly a mandate. Still, the cold light of the moon felt like the warmth of a friend, and the dark of night like the light of home. Kris' scarlet gaze drifted to the cat beside them. Once, when they were smaller, this cat meant more. A friend. Although, upon second thought, perhaps she meant more now. A memory. That slight ache of a connection gnawed halfway through and pulled taut as the thread's endpoints grew further and further. Now was a good opportunity to reconnect, but...they'd grown apart, no? Catti had become a melancholic, dark creature speaking in fragments. Kris had become a monotone bundle of secrets and nostalgia, now twinged with envy. Unjustified envy, of course. Susie and Noelle were perfect for each other. Kris was a fool to think anything else. They shook their head and pushed the thought to the rim of their mind, noticing their current companion's glassy stare through the stars. Contemplating too, they supposed. A light tap on her shoulder roused Catti from her stupor, leaving her no time to recover before a question escaped Kris. "Was the festival fun?" Catti nodded quickly. "Was there anything you liked?" Her eyes narrowed as a circle spun itself in her frontal lobe and the walls of her skull closed in a slight. Her claws reached out for a response, but none came. "No. Lie." "Thought so." "Better out here. Quiet. No family. Just the night." "Yeah." "And you? The festival?" As if a trap had been sprung, the words fell from Kris' mouth immediately. "It was fun." Catti's tail hooked as her head tilted to the side. "Quick response. Suspicious." 
Kris' eyes fell to the grass once more, their voice falling in tandem. "It was fine. Nothing actually that bad." Catti smiled softly. "Your emotions. Shine brightly. No need to hide." "...it's stupid." "Life is so." Kris raised their view back up to Catti, interlocking their left hand's fingers with their right before releasing them again, over and over. "It's..." "...Guess it's just. Stuff with friends. Me being stupid over it all." "Complicated?" "Not really, just...unreasonable. Selfish, maybe." "Your friends. You care for them." Kris nodded. "I knew already. From past. Unless you changed?" Kris shook their head. "I...I'm pretty stuck in the past." "Memories. Precious things. Want to go back. Always. Can't." "...yeah." "But you. The same?" Were they? Kris considered. They... ...They absolutely weren't. They were so much more carefree back then. More hopeful. Playing silly pranks on Noelle, beating Asriel at video games, sitting in awe as Dess was so cool so effortlessly, using random objects as toys for no real reason, carrying that headband everywhere... There wasn't any prophecy, or at least it wasn't so real. There wasn't any Knight, or at least it wasn't their friend. It was all so much simpler, so much easier. Easier to smile, to hope, to connect, to wake up every morning and keep trying. But now...now... "A lot? To think?" One nod. "But you still care?" Many nods. Of course they did. "And them. Do they? For you. Does it seem?" ... "...No. I--I know they do, but, it just--they just--" "Silence to you?" "It's more...it feels fake. Like they just want to...to be alone. Together." A sharp crack accentuated Kris' voice. "Without...me." "And, I...don't get it. But I do, it makes sense, but..." Catti sat there, eyes fixated on the branch of a nearby tree for a few seconds before leaping to another branch, and another. "Have no answers. Difficult." A quivering smile spread across Kris' face. "...You have experience with this stuff?" Catti blinked once, twice, as her paw contracted and relaxed. She turned to face Kris fully. "Yes. You." Kris curled into themself. They'd hurt her? They'd really...they just messed it up, huh? Naturally. "In my mind. You lived on a pedestal." "Like a wellspring of happiness." "You, long ago. But no more." "You left. You left me." "From random entropy. Our bond withered." "No change, no event brought it." "It happened. And I learned." "Learned to live." "With it." "Without you." "I learned." "I grew." "Without you." "New friends. New style." "Not the same." Kris sighed. "Never the same." "Never." "I...I don't remember why I stopped talking to you. If you wanted an answer." "Did it. Not matter?" "It did. I don't get why I would have. Stopped." Piercing ochre eyes swept over Kris' form. The eyes of a cat could be an unsettling thing. "Is it too late to resume?" "Never too late." "Friends once more?" "Always were." "But never the same." "Never the same." Kris looked to the moon, high overhead. They supposed it must be past midnight by now. The air bit at their ears. "'S late." "You desire. To return home?" "Hell no. Mom's probably drinking again." "Stay here?" Kris shrugged, but the smile on their face betrayed a facade of apathy. "Nowhere better to be." "Agreed." The cat yawned, Kris following shortly after as if infected. "Where are you sleeping?" "Here. Fur holds warmth. Avoiding home tonight." "...Can I sleep here, too? Like old times. Sleepovers." Catti smiled at the memories. "You are welcome." 
She stood and on light footfalls trod to the edge of the clearing, lying between the trees in a soft spot of grass. Kris followed a few feet behind, finding a nice spot to curl up. Catti's pupils shifted to Kris. "No fur. Will be cold." "It'll be fine. Sweater's warm enough." Catti's tail twitched a slight, just enough for Kris' drowsy eyes to notice. "No. Over here. Warmer close." She wanted them to sleep closer to her? They guessed that she was probably still comfortable around them. Time spent together wouldn't disappear that easily. Yet, Kris couldn't shake a small twinge of discomfort, that it would be upsetting. She did ask, though, and they didn't mind the idea in and of itself... ...And they were lying. They'd get cold...their sweater wasn't THAT warm, they could stand to be a bit warmer... Before Kris could work up the strength to stand, they noticed a large, warm fluffy thing beside them. "Did myself." Kris tried to come up with a response, but there was nothing to say. Silence felt like enough. A furry arm hovered in the air for a few seconds before wrapping around Kris and pulling them closer, Catti's warmth comforting Kris alongside a flush of their own. "Just for warmth. No wrong ideas." Never the same.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75742766
{"authors": ["Radi__7"], "language": "English", "title": "Things Change"}
Lullaby That numbing feeling has come to haunt him once again. The guilt that has been with him ever since the death of his mother. A slow, agonizing ache that seems to envelop his entire soul. It cradles it wholly in misery and snuffs out any of his hopes with cynical glee. Oswell lets his horse tread slowly, lazily, further behind and away from everyone else. He wanted to get away from everything that left him with a mean taste in his mouth. Away from his men, their inquisitive eyes, their questions, their arguments (he's gotten too used to those lately). Away from the twins and everything their explosive temper entailed. It was a poor attempt to try and still the stirring in his gut that tended to spread all throughout and leave even his bones agitated. Especially now, whenever he gets a moment to think about his current situation. It's gotten bad ever since he convinced his men to ride with Quinlan. Particularly now that they've acquired a horde of human fodder and he's been roped into playing shepherd. He could complain about it a thousand times over; he would never have agreed to it if he had known this was where Quinlan was planning to take it. Nothing about dragging the remaining braves back into their makeshift settlement was enjoyable to him. His entire physical self was repulsed and protesting, telling him that this was wrong. So he tries to think of something else. He returns to seek refuge in the fact that at least the mind could be soothed, or so he would tell himself. He would let himself be lured into the sickly sweet lie when his eyes grazed over the thick stashes of lush green cash he had collected with his boss earlier that day. He would swear by it when he felt the soft bills cascade and dance over his fingertips as he counted through them, and they seemed endless. Even compared to the affairs he had been through, it was an amount he'd never seen before. Tomorrow they'd go back to collect the rest of what they were owed, and he could lie to himself all over again. Try to still the murmur of his aching nerves once again. Fail once again when he watches how effortless it all seems to be for Quinlan. Yes, so he thinks of Quinlan. The matter that's been stewing in his brain ever since their first meeting is now regurgitating back up again like bile. It seemed almost baffling to him, even back then, how absolutely unaffected Quinlan seems to be by the very thing that eats Oswell from the inside out. He's almost fascinated, in a way. He was able to watch how the other man worked, up close, when they were alone together. How effortlessly the lies came to him, how easily sweet sob stories spilled off his tongue, to be lapped up and believed by those who were none the wiser. Oswell had considered getting lost in it himself, considered how the cold Quinlan exudes could be used to soothe the slow simmering, burning sensation of his guilt cooking him alive. A glimpse into a small world of carelessness Oswell could have, if things were different, giving him a break from his racing mind, even if only for a second. A glimpse of the approaching camp creeping into his field of vision startles him enough to rattle this train of thought out of his brain and drag him back down to reality. He shakes his head to make sure it's truly gone. He finally approaches their encampment after everyone else has already arrived and settled. He spares a glance off towards his own neglected tent, but the pleasant lullaby of the crackling, still-lit campfire lures him in like a siren's song.
His tent stays neglected for the night. He wagers that this is a better option than trying to retreat into a sleep that he knew would not come to bless him tonight. Everything that he's seen today, whatever would happen tomorrow, he knew it would only keep him up. Rile up the embers of shame in his gut once again. His steps steer towards the inviting glow; he relishes the thought of the fire warming his tender skin, stroking deep into his soul as some form of spiritual cleansing. He lets this moment of temporary tranquility wash over him, until something presses into his vision, until he's able to muse at the sight of his boss's shadow, drawn long by the flames on the horizon. The almost unreadable stature that was Quinlan, hunched over, sitting in front of the fire in careful observation, seemed not to have taken notice of his approach yet. Or so he thinks. "You're up." Quinlan says, throws it at him before Oswell could even begin to figure out a way to approach the other man. He struggles to catch it as his mouth starts moving before his brain has a chance to process or compose itself. He stops in his obvious, betraying tracks for a moment and curses himself for not thinking that Quinlan — out of all people — could hear him coming. "Yeah-" he chokes out, just barely stopping short from blurting out the added explanation of 'I'm on edge'. He manages to keep that thought carefully tucked away in his brain. It's too soon, he thinks, as he takes the last few steps needed to situate himself next to his boss. He doesn't know what a man like Quinlan would do with a hint of weakness. Quinlan glances at him; Oswell is sure the other man can decipher the befuddled expression of a man caught in the act currently still plastered on his face. He stares intently forward as he tries to ignore Quinlan sniffing his agitation out like a trained bloodhound. The following stretch of silence that feels just a little too long practically confirms it for him. His heart skips a beat and he rushes to start talking again before Quinlan gets a chance to interrogate him. "I'm on edge." There it is. He admits it. He notices that Quinlan's expression now seems to mirror his own visage of confusion. For a second he doesn't know if the pang of pride he feels over managing to startle the other man is appropriate. "On edge?" Quinlan retorts, his attention shifting and seeming to back off and move away from Oswell for now. Oswell chooses to ignore how this tone manages to extinguish that insolent flame in him again very quickly. Tension rises to replace it. He can tell that his brain is starting to grasp at straws trying to figure out a way to plead his case to the other man. He resorts to something much simpler for his riled-up demeanour — an accusation. He settles his piercing gaze on Quinlan, entirely unfazed. "You seem to be sure about your plan— for tomorrow." He muses, adding more quietly as the anger he's channeling seems to snuff out the moment Quinlan looks over to meet his eyes. "You look like you're handling it quite well." Nothing but a disgruntled grumble. The implication that he, in turn, is not handling it well, seems entirely lost in the moment. Quinlan looks back at him, and says nothing. The murmur of anticipation softly strums Oswell's nerves. He thinks for a moment that the other man didn't even hear him. He shuffles in his seat, leaning in closer, and attempts it differently. "I'm just concerned about—" He swallows, trying to choose his words carefully this time. "About tomorrow. All the... 
just— everything..." his own voice sounds like nails on a chalkboard to him. "that we'll have to do." He lets any further thought trail off and puts his trust in the assumption that Quinlan is smart enough to catch it. "You're scared it won't work?" The sting that reverberates all throughout him when Quinlan hits the nail right on the head shakes his conviction just a little. Oswell huffs. He wouldn't call it scared, more like rightfully concerned about their safety and the legality of the scheme they're going to pull. He can't let Quinlan be exactly right. "You're right, I'm not worried about it." Quinlan answers his own question offhandedly before Oswell has the chance to take his swollen pride and dig himself an even deeper hole. Quinlan shifts; Oswell notices how the fire he'd been intent on staring into (and ignoring him in the process) suddenly seems no longer interesting to the Irishman, as he inches closer to close the distance between the two. Quinlan's undivided attention, fully on him now, makes Oswell shudder. The other man looks almost expectant, a challenging air reaching out to grasp at Oswell, asking him, what else have you got in that little brain of yours? threatening to dissect his entire being right then and there. He's sure that whatever else he does decide to throw at the Irishman would be spun undone by his words in seconds. He decides to try his luck, mustering up the courage to leave his last shred of decency behind in the scrutinizing judgement of blue and green. "How do you do it?" It was timid, almost too subservient for his liking, but the beating of his heart currently managed to drown out any rational thought. He watches the way the other man backs off to ruminate over the question, looking up to the night sky seemingly deep in thought. One part of him was relieved that he'd managed to get Quinlan's suffocating attention away from him for just a moment; the other observed how he was clearly faking it. He had seen his boss deep in thought before, it wasn't a very rare sight, as it seemed to Oswell that that was all the other man ever did. The thoroughly entertained smile currently plastered on his lips with almost childlike glee was a telltale sign that was utterly betraying his attempt at some twisted act of kindness to try and protect Oswell's dignity. He had asked a really stupid question. Oswell shook his head, ready to get up and leave to bury this interaction safely away in the back of his mind for the foreseeable future. Quinlan's response shakes him out of his train of thought. "I do what I have to." Oswell nods, one single, slow and deliberate movement of his head. Without any more furor to channel into the conversation, he decides to leave it at that. Silence follows; Quinlan is still intent on watching him. He seemed to have noticed that this answer didn't actually satisfy Oswell's curiosity. The Irishman stands up, swiftly, the gesture being just enough to rip Oswell out of his melancholic introspection. "Come," Oswell looks up; the muddled expression he wore on his face seemed to be enough to spur his boss on to continue voicing his request. "Come into my tent." Oswell lets the request ring in his ears. It wasn't a question. The sharp upward inclination of tone that would normally define it as such wasn't there. It was not a question, because if it had been, Oswell would have denied it. He would have answered the question with 'I'm flattered, but-' or even 'it wouldn't be proper...' 
If it had been a question, Oswell wouldn't have rushed to desert his seating place and hurry after Quinlan. The other man sauntered off casually, especially in comparison to Oswell's demeanour, still riddled by nerves. His stature made his strides just slightly faster than Oswell's own and he had to make a conscious effort to keep up, to keep close. He watched the Irishman hurriedly tug at the sleeves of his coat and discard it via a careless, quick shrug of his gaunt shoulders, intent on letting it fall to the dusty floor behind himself. Out of reflex, Oswell reaches out to catch it. Quinlan spins around to meet him, (for a second Oswell thinks that the other man had planned it this way.) A cold hand holds onto Oswell's as he grabs the coat, almost commanding enough to keep him in place entirely. "We'll go together," Quinlan begins, "tomorrow, just you—" A long, bony finger reaches out to point at Oswell, the emphasis in the very minimal space still lingering between them makes the hair on the back of his neck stand up. He quickly rubs his free hand over it to soothe himself. "—and me." Quinlan retracts his hands and plucks the coat out of Oswell's. He ends his gesture with a sly smile. Oswell swears that he's harvesting this glee from his nervousness like some form of sadistic energy vampire. "I'll make you comfortable, come." Quinlan turns and dips his head down to enter the tent. Oswell follows suit, he watches how Quinlan lazily stretches out on his back, hands resting idly over his stomach, and, for a moment, he thinks that he's been lured into a trap. That this was too good to be true, and that Quinlan would strike at any moment, like some predator toying with its prey. He tries, very quickly, to ignore the picture that thought paints in his mind. He settles in next to the Irishman, (and is almost surprised when he doesn't feel the searing pain of fangs clasping onto his neck). He situates himself so that Quinlan would have to turn his head to look at him, out of and away from that direct line of sight that could read him like an open book. Strung so tight, he fears that he could come undone by only one look from the Irishman. So he searches for a place to shift his attention to, somewhere else that won't make his heart beat out of his chest. Whatever words Quinlan had reserved to share with Oswell in this more intimate setting get very lost, very quickly. Oswell does not listen to a word Quinlan says when he notices how he could count the individual freckles on Quinlan's chest through the soft linen of his shirt, hardly concealing bare, pale flesh now. And, for a moment, Oswell lets his mind wander. He lets himself get wrapped up in their current arrangement, and thinks, just maybe, it wouldn't be bad to give in. He lets himself consider it, just for a moment, what it would be like to get close to him, to touch him, and let that cold touch embrace him wholly. Give and devote himself to his boss entirely. It lights the flames of nervousness licking at his gut anew, stronger, brighter than he'd ever felt before. It makes his fingertips itch with need to grab onto something tangible. He balls his hands into fists and focuses on the way his fingernails dig into his skin instead. A sliver of movement from the other man chases him out of his contemplation. 
Quinlan solemnly drags a hand from his stomach to his chest and up towards his face. Oswell lets his gaze chase after it; the realization that he'd been unabashedly staring at his boss in a completely non-professional way only sets in when he locks eyes with Quinlan once again. Oswell swallows, afraid that any treacherous words could spill out of him if he wasn't careful. One last thought rushes through his brain as he wonders if the other man has ever felt like this as well. He pictures how he could relish the salvation a simple 'I do' could give him in his current state; he's entirely at the other man's mercy. Quinlan squints, a scrutinizing look meets Oswell and it makes him feel thoroughly exposed, as if the other were able to peel away all the layers that make him whole and reach down to simply grasp at the burning desire within him, make him quiver under his touch and toy with it to entertain any sadistic desires he might hold. It ignites the flames licking at his soul and this time they engulf his entire being, from his racing head to his shaking fingertips. He averts his eyes and rushes to try and compose himself. "I should—" He wipes the back of his hand over his lips, which had gone just as dry as his mouth. "I should go now." His hasty change in demeanour seems to startle Quinlan; he sits up on his elbows, an inquisitive expression painting his face, (if Oswell had it in him to pay attention, he would almost be able to make out a hint of frustration within it). Instead, he scrambles to stand as the atmosphere in the tent turns almost suffocating; for a moment he fears that it could crush him whole. "I— I need sleep," he mumbles his excuse in haste, not daring to look back at Quinlan, he fears what it would do to him. "Goodnight." He stumbles out of the tent almost clumsily, tripping over his own feet as his weak knees struggle to properly support him. The sweet embrace of burying himself face-down in the earth doesn't seem that bad anymore when he remembers that he'll have to face his boss again come morning.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75742776
{"authors": ["sulfur_killingz"], "language": "English", "title": "Lullaby"}
Shake It Out The sun hung low over the campsite, casting long shadows across the trampled earth. Tav stood, sword in hand, staring into the distance. Her pace was steady as she moved with relentless focus. Her blade a flash of silver and a rusting hue of dried blood from the battle won. Sweat traced down her brow, but her grip never faltered. She was alone in the clearing, not too far from the others but enough to be just out of earshot of any conversations. She had taken the opportunity to shed her armour and take a moment alone to steady her thoughts the only way the fighter knew how. By swinging her sword over and over till thoughts became clear. Each strike to the air was precise, each parry deliberate, as though she were carving discipline into the air itself. Her breath was steady and controlled, her eyes sharp with determination. This battle had been a particularly tiresome one. And though they were victorious in the end, the severity of it all weighed heavily on the fighter. She was experienced with many a foe and many a weapon. Slain goblins in their hordes, warlock and wizard alike. Defeated dragonborn in hand-to-hand combat. Thrashed and overcome even hellspawn in recent times. The more time went on, the more she realised she wasn’t as experienced as she had thought, however. The dangers of this journey becoming more and more consuming. Decisions becoming taxing and tiresome. In both body and soul. Today, taking on the likes of a wicked hag in her own lair had rattled something inside. Or perhaps it was when her comrades in arms were close to succumbing to injury that had rattled her. Heavy weighs the burden of leadership, and the role was so foreign to Tav that at each trial she wished to submit and step back. She happily would let Lae’zel step forward, as she so often threatened. Or perhaps the war-bound Karlach and her years of commanding an army she hated would prove more efficient. Or the famous Blade of Frontiers Wyll surely was a better fit. He would know what to do when times were hard. A hero always knows. Tav was no hero. She was a selfish and lonesome person. She always opted for the choice that meant an easy way out in life. She was accustomed to taking orders. The voices of her previous masters hissed in her ears; they had warned she would never survive without their guiding hands on her shoulders. “How can you lead these people? You’ll lead them to their deaths.” Her sword swung in an aggressive response. Over and over she fought back against the doubtful voice of a master long dead. Each swing of her sword cut the air with a hiss. Beneath her boots the packed earth shifted slightly, gritty and firm. Grounding her stance with every step. She dug her right foot deeper into the earth, grounding herself into the real world. She pivoted, the soles scraping against the dirt, sending up faint dust that caught the fading light. Dust rose in faint clouds as her boots shifted across the makeshift training ground. She drove the sword forward, then pulled back, over and over. Testing the precision of her stance. Each step was deliberate—heel planting, toes pivoting, weight transferring smoothly from one leg to the other. The ground’s firmness pushed against her soles, reminding her that balance was as much about the earth beneath her as the steel in her hands. All her life she had to fight. Fight for food, shelter, the right to breathe. Even fight for what little pleasure she was ever afforded. Her head swelled with emotion. 
The hag had sensed the doubt in Tav’s heart, or perhaps the fear was written plainly on her face. The threat to an innocent woman’s safety, and an unborn child’s, and then to have that woman swear and curse at the efforts of an apparently undesired rescue. The hag offered to end the fight prematurely, even the promise of power or anything the fighter desired in fact. She could feel the numbness of the beast’s magic, tantalising and beckoning her to submit. It would have been all too easy, especially seeing the blood gathering on the others and the tired look in their eyes. And yet, like many decisions as of late, Tav did not choose the easy route. Yes, the witch was slain through harrowing efforts. But no thanks were given from the so-called rescued. Astarion even chastised that they shouldn’t have worried themselves. Gale at least praised their efforts as he searched the lair for anything useful. A rather battered Shadowheart simply stood by quietly. Perhaps it was the lack of applause that knotted away at her. Was that why Tav did what she did? Simply for rewards and praise like some filthy dog begging for any scrap of approval. Had the rush from saving the tieflings become so infectious she would now seek out any opportunity to play hero? “Hero. Is that what you think you are, little mouse?” Her sword cut through the air with a sharp whistle, each swing heavier than the last. The rhythm of steel whipping through air echoed all around, but Tav's focus was no longer on precision—it was on force. Her fists clenched around the hilt, knuckles whitening and veins rising beneath her skin. The leather wrapping bit into her palms as she squeezed harder, anger fuelling every strike. For so long she had not heard her master’s voice, but now it echoed as if he still breathed. Still held her leash. She shifted her stance, boots grinding into the dirt, almost burying herself. The ground felt rough beneath her soles, anchoring her even as her movements grew reckless. Her breath came ragged, frustration spilling into every motion. She wanted control, wanted to steady herself, but her grip betrayed her—too tight, too desperate. Pain began to bloom in her hands, sharp and insistent. The friction of leather against skin tore at her resolve, blistering her palms where sweat mixed with grit. Still, she refused to loosen her hold. Each blister was a mark of her stubbornness, a reminder that she was pushing herself beyond discipline into fury. The sword was no longer just a weapon. It was a test of her endurance, of how much pain she could bear before her spirit broke. Like it would when she was so little. So weak. The pain in her hands was sharp, but it was more than suffering; it was proof. Each blister, each sting of torn skin against the hilt reminded her she was alive, present, and unyielding. The ache grounded her, tethering her to the moment with a clarity no calm training ever could. She squeezed tighter in hopes of feeling the rush that would eventually come from the pain she sought. The rush the priest granted back in the Goblin Camp. Tav knew the dangers of dabbling once more in the worship of pain. How addictive it once was. How often she would push herself to the edge and teeter on the point of finality. How close she had come so many times in her extended half-elven life. Dancing before the blade and antagonising it over and over to finally finish the job. Pushing through the pain to feel the dull ache of the eventual pleasure that followed. 
But the emptiness and shame, that would always ache in her heart afterwards. Never truly ending the void within. She could feel the ghost of a hand on her lower back. A memory of such. Fingers tracing long faded lines, scars that never truly went away. No matter how many tattoos tried to cover them. A hand that once brought both pleasure and pain, sometimes not in that order. A hand that Tav would give anything to forget the sickening feeling of. One master firmly at her neck, while the other bound her hands. She was their puppet, their plaything. Little pet. Her fists trembled, not from weakness but from the intensity of her grip. The force of her will poured into steel. Tears welled in her eyes, but they weren’t born of pain. They came from something deeper. Frustration, determination, the fierce knowledge that she was pushing herself beyond limits. The sting in her palms was a fire, and that fire made her real. She wasn’t there anymore. But where she was now, was it any better? She inhaled sharply, tasting dust and sweat, feeling the weight of her body ploughed into the earth. A trembling shake building from deep within. She had to shake it off. Stay strong. Remain focused. Her blade rose and fell in a relentless rhythm, each strike harder than the last. The air echoed with the metallic ring of steel, the sound growing ragged as her fury deepened. Harsh, sharp breaths merging with the sound of steel to create a symphony of painful music. Her fists clenched so tightly around the hilt that the leather tore at her skin, raw pain spreading through her palms. Warmth seeped between her fingers, the sting of torn flesh mingling with sweat. A crimson wash running down her hands, but she refused to stop. The only way she knew how to feel, to be alive, was to be close to death. She swung again, boots grinding into the dirt and shoulders burning, breath ragged. Her body ached to stop and she ignored its plea just as she ignored the torturous voices of doubt. The pain was no longer just in her hands—it was in her chest, in her heart, in the weight of everything she carried. Tears welled in her eyes; she fought them back just like the thoughts. She fought against it, striking over and over, until her body betrayed her. "You are no hero. You are a pet, just missing your leash." At last, the sword slipped from her grasp, clattering against the earth. Fresh blood now decorating the hilt and running down like exposed veins. She dropped to her knees, trembling, her vision blurred by tears. The fight was gone, replaced by the raw release of emotion. She hung her head, squeezing hard and refusing to let loose the release that threatened beneath. She wasn’t this weak, this pitiful. Shake it off, shake away the feelings. Maybe she was. Maybe she truly was broken beyond the point any bandages could put her back together. Pitifully broken. She could feel the hands pushing down on her shoulders, devils whispering doubt in both ears. Hissing regret that she ever thought she could live without their leashes, without their chains and shackles. She wasn’t worth freedom. Tav knelt in the dirt, shoulders heaving, but no sob escaped her lips. She refused. The tears burned at the edges of her eyes, threatening to spill, yet she held them back with sheer defiance. Her gaze locked on the fallen blade before her. Its steel dulled by dust, dirt now merging with its wielder’s freshly spilled blood. She wanted to reach for it, to keep swinging over and over to fight all the pain. But it looked so heavy. 
She felt so heavy. The earth pressed cold and rough against her knees, grounding her in silence. Her blistered hands trembled, hovering just above the hilt. Unwilling to reach for it yet unable to let it go. Like a lost limb she needed to reclaim. It lay there motionless, just a tool covered in dirt and blood. Useless until picked up. Until needed. Dull unless sharpened, blunt and rusted. Dirty unless cleaned. Without a wielder it was nothing. Lifeless. She clenched her jaw, forcing the storm inside to stay contained. No sobs, no collapse. Only the quiet ache of exhaustion and the unspoken vow that she would not break here, not now. In the stillness of the clearing, she was both fragile and unyielding. Alone, as she so desperately wanted to be. Just her and her sword as it always was. Her hands lowered to the dirt; she gripped it and clenched desperately to feel connected to something more than this guttural feeling. The knot in her throat threatening to strangle her the more she swallowed it back. Hands stinging as wounds merged with gritty dirt. Tav couldn’t even say the gravelly sensation against open flesh hurt. The numbness was almost blinding. The air was near silent, save for the sound of her own heart threatening to burst from her ribs. Eerily so. When the air shifted, Tav became aware she was no longer alone. A presence lingered behind her, quiet but undeniable. Soft, hesitant steps in her direction until they stopped but an inch from contact. “Tav?” The voice was all too familiar, and yet the severity that lingered in it was new. Shadowheart stood there, watching, the weight of her gaze heavy with concern. She waited for a response before taking another hesitant step, unsure of the reaction the fighter would exhibit at being discovered in this state. Tav did not speak, for fear her words would fail her and give way to the emotions she was keeping at bay. Her back stiffened; she refused to turn, afraid to let Shadowheart see her this way. See her weak and for little reason to be so. She stared at her hands bloodied in the dirt and tried to hide the mess of them in her lap. Shadowheart’s breath was steady, but heavy with worry. She slowly took a few more steps to be closer. She remained quiet, patiently awaiting a response from Tav. Be it a welcome, or a refusal of her presence. Tav sat still, unsure of what to do. When no acknowledgement came, Shadowheart sat herself beside Tav, eyes never once lifted from the broken fighter. Tav tried to look away, but she felt a stern and yet gentle hand take her cheek. She tried to resist and look away but her body almost surrendered to the controlled touch. Their eyes finally met, her clouded green and blue meeting with the severe green gaze of the very concerned cleric. Her eyes darted all over, like she almost wanted to attempt to read the hurt woman’s thoughts for clarification. Tav sat in silence, feeling more vulnerable than she ever dared to be in a past life. She looked on at the woman in front of her and almost spoke, but the words choked before they could leave. Tav clenched her jaw and squeezed her hands to distract once more with pain. Shadowheart looked down and her eyes widened when she spied the fighter’s self-inflicted wounds. Her breath caught and her gaze softened, almost knowingly, at what she had discovered. She reached out with hesitant but caring fingers and turned Tav’s torn hands upwards. She rested them in her own lap and Tav sat almost trapped in the exchange. Fearfully silent. 
Shadowheart rested her own hand above them, fingers tracing the cuts and deepened wounds. Tav continued to clench her jaw, embarrassed that the usually quite stern and at times rather ruthless Sharran woman had discovered her in such a state. She surely now thought the leader worthless, pitiful and broken. A blue glow emanated from Shadowheart as she attempted to heal the battered hands. “Wait...” Tav finally spoke. Feeling undeserving of the gesture. Shadowheart’s brow furrowed in concern. She looked up and the gaze was hard to read. There was concern, and almost frustration, and deep down something else. Her eyes sparkled slightly; she swallowed hard as she looked into the heavy eyes of Tav. “You need your hands.” She finally spoke. “We need your hands. We need you strong.” Tav’s face fell; she stared at the dirt, feeling numb. Undeserving of any lie the Sharran woman possibly struggled to conjure. Tav already knew she needed to shake it off, pick herself up. But maybe she didn’t want to. “No one needs me,” she whispered, a raspy, broken sound that barely carried. “Someone else can do this.” “Maybe someone else could.” Shadowheart shrugged. “But I know they wouldn’t think like you. They wouldn’t care like you do. Fight like you do.” Tav looked up, feeling a lip tremble. “They wouldn’t be you.” The cleric smiled gently. It seemed genuine. “We need you. All of us do. I-” she paused a moment as if she heard something Tav did not. “I need you.” “Why me?” Tav sighed, feeling suddenly so tired. “I don’t know how to answer that.” Shadowheart held Tav’s hands a bit heavier now, allowing her magic to fix them. “Maybe in another world it isn’t you this burden falls to. Maybe some other adventurer. Perhaps the others. Or even myself.” Tav almost felt the lump threaten to strangle her once more, the heaviness weighing her down and shackling her to the ground. Shadowheart watched the vagueness wash over Tav and almost sensed the loss of her; she squeezed her hand a little tighter. An almost reminder. “I don’t know why you are here.” She finally spoke. “But I-I am glad you are. I am glad you found me on that Nautiloid. Glad you woke me at the crash site. And every moment since. You’ve been a true... friend.” Her grip tightened and loosened all at once. Tav was certain the woman had been punished for the kindness she had shown this fallen fighter. Shadowheart had no reason to risk her own well-being for someone so useless. So broken. They had shared a heated moment or two, but Shadowheart had made it clear, despite Tav hoping for anything more, that this would be all they could ever be. At least for now. She looked into the Sharran’s eyes and saw sadness. Tav wanted to say something. To thank her, or deny her and push her away. To accuse her of lying, or jest and brush aside any genuineness. She wanted to tell Shadowheart she was glad, she was thankful. But fearful and remorseful. Broken and dull, like the blade in the dirt. That she couldn’t keep shaking these doubts away. Couldn’t go on. The more she looked into the bright green eyes, however, the lighter she felt. The weight on her shoulders slowly lifted. The aching in her hands slowly replaced with a soothing feeling. A few silent tears rolled down her cheeks and landed in the dirt. Each one lifting a weight of their own. No more words were spoken; the two sat in silence as Tav’s breathing slowly began to match that of Shadowheart. And for the short time they sat together, Tav didn’t feel so broken. She felt heard without needing to speak. 
She cried a few remaining tears as she slowly came back down, Shadowheart gently holding her hands in her own. She felt a different type of weight, like she was being held. Held for the first time by someone she could trust. Someone who dared enough to seek her out. Like Shadowheart had her and wasn’t letting go, not unless she wanted her to. She didn’t want her to. Never.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75739466
{"authors": ["Jade_Dragon_Rider"], "language": "English", "title": "Shake It Out"}
Once More to See You Robby and Whitaker began dating not long after Dennis became an intern at the Pitt. It wasn't out of the blue. They had been talking even after Dennis left the Pitt after his placement was over because Whitaker was living with Trinity, who kept dragging him out to join her and the Pitt gang on their nights out in town. In the beginning, Robby didn't go often. Once a week, maybe two or three if they were lucky. Or if he was having a bad week. But, Dennis had that warm smile and bright eyes that drew him out of his door more and more. They'd chat often at the bar, Dennis talking about his other placements in the various wards with recollections of fond experiences, but expressing that they were nothing like the Pitt. His heart had been set on emergency medicine after the shit show that they call Pittfest. It had been just as exhilarating as it was traumatising, and Dennis hadn't felt that much adrenaline on any other ward he'd been on. Robby occasionally shot a question at the student, who'd answer to the best of his abilities, but it was getting more difficult after every shot they shared. One thing led to another, and numbers were exchanged. That led to occasional texts. It began as nothing much, but soon became frequent texts as they recalled their days to each other. Frequent texts became offers to meet up outside of work. And when Robby had the time, he did. In cafes, diners, anywhere new that had opened up in the city for them to experience together. Sometimes Robby would help Dennis study for exams, getting the financially challenged student snacks and whatever soft drink, hot drink, or milkshake on the menu that Dennis wanted to try. He told himself it was just out of slight concern for Dennis since he didn't want the younger man to go the whole day without eating or drinking, but deep down, he knew that smile Dennis wore with every sip of whatever sugary drink he was having made him want to see it more. Wanted to be the reason that smile widened, rounding his cheeks and wrinkling the eye bags beneath those gorgeous blues. And Dennis's graduation was just another nail in the coffin of his burning heart. Robby, Trinity, and anyone who was off that day at the Pitt attended, cheering loud and proud when his name was called. The blush on Dennis's cheeks and nervous smile televised on the jumbotron for the whole crowd to see made it all worth it. Made every study session and every late night phone call filled with Dennis doubting himself and his abilities worth it. Dennis was so worth it. That night, they went out to celebrate. And celebrate they did. He'd never seen Dennis so drunk. All thanks to Trinity pouring shots down his throat, even as it spilled down the corners of those soft lips and down his chin, his equally drunk friend giggling and wiping his lips clean with her thumb. Robby would watch, almost too intensely, as Trinity's thumb would swipe and push on those wet lips, pulling a grunt from Whitaker as he would stumble back and wipe his own mouth. By the end of the night, Robby was behind him in the bathroom, holding back the curls Trinity had made Dennis grow out as Whitaker emptied the colourful contents of the alcoholic drinks he'd been chugging all down the toilet. Robby rubbed his back with his free hand, only moving it once Whitaker was done retching to grab some tissue to wipe his lips. "M'sorry..." Whitaker let out as Robby wiped his mouth. Robby had to swallow thickly to stop himself from reacting to the terrible smell coming from Dennis's mouth. 
But alas, he smiled. "Don't worry about it. I was your age once, too. You're allowed to have fun," Robby reassured him, pulling him up once he was done and throwing the tissue into the toilet, flushing it down before they left the cubicle. "But I think it's time to go home. It'd be a bad idea for you to start drinking again." Dennis could only respond with a heavy nod, leaning on Robby as they walked back to the group. Only to find Santos and King gone. "Where's Santos and King?" Robby asked Dana, who sat beside McKay. "King took Santos back to her apartment," Dana said over the loud music. "She was fucked up, and King was the most sober. Wanted to make sure Santos got home safe so she went in the taxi with her, bless her," she explained with a fond smile. Then, her eyes looked over at Dennis, who was currently falling asleep on Robby's shoulder. While standing. "He should probably get going too." Dana hummed, a maternal coo in her voice that she gave to the young man. "I'll get him home safe," Robby said. He hadn't drunk since he drove them all to Dennis's graduation then to the bar. So, drinking was out of the question tonight. Dana nodded, finishing the conversation to let Robby take Whitaker outside and to his car. It was a bit of a struggle since Dennis wasn't being the most co-operative. He was stumbling, coming to a stop every few steps to mumble that he was tired. Robby just had to keep explaining that he was trying to get Dennis home, and Dennis would follow obediently. Once they got to his car, Dennis slumped in the passenger seat. Robby turned to his phone, pulling up Google Maps. "Do you know your address, kid?" he asked, tapping on the search bar. Being met with silence, he looked at Dennis. His head was slumped completely, his eyes shut heavily. Robby sighed at the sight of him sleeping so peacefully. He did try to wake Dennis up, but not even shaking him worked. So, with a heavy sigh, he drove Dennis to his own house and carried him up the steps. Robby wasn't the strongest in the world, but Dennis wasn't exactly that heavy, and Robby had been lifting patients for longer than Dennis had been alive, so he wasn't weak, either. As respectfully as he could, he undid Dennis's button up and peeled it from him as gently as he could. Robby was glad for the vest Dennis was wearing underneath. So, he let Dennis sleep in that vest and a pair of his freshly cleaned shorts. Dennis slept in his bed that night, while Robby slept on his own couch. That morning, Dennis was beyond confused. Waking up in a bed that wasn't his, wearing shorts that weren't his. And the killer headache wasn't helping him collect himself. But he got up anyway, padding through the empty house until he reached the kitchen, where a note lay on the counter beside a pint glass of water and some Tylenol. It read, 'Morning, kid, it's Robby. You're in my house since you and Santos got pretty hammered tonight. Santos is safe with King, but I have work today so I'm not gonna be back until about 7:30 tonight. Feel free to stay as long as you want, everything you need is in the bathroom if you want a shower. Your breakfast is in the microwave if you like eggs and bacon, and your phone is charged in the living room. Xo' It put a warm smile on Dennis's face. Robby had been doing that a lot lately, putting such a wide smile on his face that it hurt his cheeks and made his heart flutter. So, Dennis had his breakfast and took the pain medication for his headache and nursed the pint glass of water for his dehydrated body. 
He sat in the living room for a while, texting Trinity and making sure she was safe and letting her know of his whereabouts. She teased relentlessly, of course, but it was expected with her. She knew everything about him. Even about his developing crush on the older doctor. Then at noon, he took a shower once his headache had subsided to freshen up. He washed his mouth out with mouthwash only since it was definitely a step too far to use Robby's toothbrush. His body moved automatically, taking himself back to Robby's bedroom to get dressed. There, he saw a picture frame of Robby and Jake smiling widely. It made him wonder whether Jake was talking to him again since Robby had revealed that they weren't on the best of terms since Pittfest during one of their more solemn conversations. He knew Robby cared deeply for Jake, so for Robby's sake, he hoped Jake would come around. However, as Dennis was getting himself ready, he realised he didn't want to leave. And Robby did say he could stay for as long as he wanted. So, he carried himself back to the living room in the shorts and vest, sitting on the couch and explaining his plan to Trinity who, again, teased him relentlessly. Dennis didn't mind at all. He just hoped Robby wouldn't, either. That night, Robby returned at 7:38pm. Kicking his shoes off at the door, he took himself to the living room, only to pause at the light still being on and a mousy brown, curly head of hair visible from where Robby was standing. "You're still here?" Robby hummed, setting his bag on the floor beside his spot on the couch and sitting beside him with a relaxed sigh, releasing the tension of the day. "Yeah. I'm staying as long as I want." Dennis smirked, to which Robby matched the smirk as he looked to the younger man. "I hope I'm not gonna regret that offer." He chuckled. "Well, this is your house. Kick me out if you don't want me here." Dennis shrugged. He wouldn't be offended if he did. Robby was probably exhausted and wanted to get to bed as fast as he could. "I didn't say that," Robby hummed. "I don't mind the company... especially if it's you." He slid it in smoothly at the end. Dennis raised a brow. "Yeah?" Dennis hummed. "Yeah," Robby gave a nod, getting a little red in the cheeks as his eyes struggled to meet Dennis's now. "You've easily become one of my favourites in the past few months." "Good," Dennis let out, only to realise how cocky that sounded. "That's nice, I mean. Glad I could make a good impression." Robby could only smile, finally meeting Dennis's eyes again. "How are you feeling? You were pretty fucked up." It was Dennis's turn to chuckle. "Yeah, no thanks to Trinity," he let out. "I'm doing a lot better than I was this morning. No headache, and I freshened up in the shower." Robby nodded. "I'm glad," he said. "You hungry?" They went through the trials of deciding what takeout to get, arguing fondly over what option was better. But they both came to common ground on pizza. Double pepperoni. Robby ordered a large so they could share, sitting shoulder to shoulder on the couch with the pizza box on their laps once it arrived and eating their respective halves to whatever shitty programme was on the TV. But, their mechanical movements came to a stop as their hands touched. Robby's over Dennis's, the tips of his fingers wet with pizza grease. Dennis didn't mind. His own hand was greasy from his slices, and their movements had been so mechanical that they didn't even realise that the pizza box was empty in record-breaking time. 
It pulled a soft laugh from them as they noticed, looking at each other. But as their eyes met, something seemed to click. In that moment, in such a serene setting so comfortable with each other's presence. It seemed so right, the pair being together so close. So they got closer, and didn't stop until their lips were touching. It was Dennis who pulled away first, only an inch, with a smirk playing on his lips. "You taste like pizza." That put a wide smile on Robby's face, his wrinkles setting naturally into place. "So do you." He hummed. They dove back into each other, discarding the pizza box and wiping their greasy fingers on each other's shirts so it wouldn't be as gross as they touched each other's faces. They were guys at the end of the day. Guys too busy kissing to pause and clean their fingers properly. It all felt so natural, like they'd done this a thousand times before. Lips opening with ease, tongues meeting for the first time as their hands rested wherever felt comfortable. Robby's large hands swallowed Dennis's hips, and Dennis's rested on Robby's chest. They only pulled away to catch their breath, spending what felt like a lifetime just kissing. But they had to pull away. Had to communicate what they wanted. What they needed. Both answers being 'you', they swiftly ended up in Robby's bed. And if solving cases in the Pitt with Robby wasn't exhilarating enough, then the sex they had took the cake. Robby didn't even flinch when Dennis took his shorts and boxers off, revealing that he was a trans man. It made all the shame and guilt Dennis had felt over his transness wash away with something so simple as acceptance. Robby took his time with Dennis, with it being his first time, lulling him softly as he pulled orgasm after orgasm from him using his fingers, mouth and cock. Dennis had never felt so looked after as Robby cleaned him up with a warm wash cloth, bringing him another glass of water to have. They cuddled, of course, Robby's thick fingers running through the curls of his hair as Dennis rested his head on Robby's chest. From that night on, they began dating. They agreed to keep it a secret, just so they could love in the comfort of their own home. But it wasn't long before people got suspicious. Especially after Dennis became an intern. They spent almost all of their time together, and Whitaker was spending 5 out of 7 days a week at Robby's house, the rest spent with Trinity at the place he was supposed to be living. It was so perfect. They went on dates whenever they had time off: watching movies at the cinema, having cliche picnics on a field, and going to fancy restaurants. They were completely smitten with each other, practically eating out of each other's hands. Weekly sex exploring each other, nights spent together filled with laughter and so much love. But, seven months into their relationship, Dana confronted Robby about Whitaker. He had denied they were together, of course. She didn't believe him, saying she 'hoped he wasn't in way over his head with the kid'. Robby didn't tell Dennis about the conversation. Mainly because it scared him, and he didn't want to scare Dennis with his doubts. What if he was? It was such a simple comment, but Dennis was a kid in comparison to Robby's age. Robby didn't want people to think that he was some weirdo, preying on his intern and exploiting the power dynamic between them. It planted a seed in his head that grew overgrown with thorns and not enough roses. But Dennis was on cloud 9. 
So high in the clouds he didn't even realise Robby was slipping from his grasp until it was too late. It was subtle, at first. Fewer kisses, but not none. The odd date cancelled with the excuse of managerial work he was overdue on. But the longer Robby kept his mouth shut around Dennis, the worse it got. By their eighth month together, Dennis was only spending 3 out of 7 days a week at Robby's house, and the nights he spent there, Robby was being distant. Quiet. Withdrawn. Dennis didn't pick up on it until their ninth month together. He had just assumed Robby was tired from work, but there's only so long that excuse can run until it goes dry. They had stopped going on dates on their days off. Robby had stopped texting as much when Dennis was at Trinity's house. Hell, Robby had even started avoiding him at work, assigning him to other doctors or putting him on the rota for the night shift. And Dennis would be damned if he was going to let this - whatever Robby was doing - affect the internship he'd busted his ass to get. And that brought them to this night. Dennis had found Robby on the roof, on the right side of the railing for once, after his messages had been left on delivered. "What's going on with you?" Dennis asked. "If you're struggling, then I'm here, babe. Just... talk to me. Please." He begged. Whitaker knew Robby struggled with his mental health. He wasn't stupid. This job will fuck you up if you let it, he'd heard Robby say before. He found Robby in pedes, and held him close on nights Robby would have a terrible shift or generally poor mental health. "I- I can't, Whitaker. I can't do this to you, anymore." Robby let out, trying to sound resolute but the weakness in his voice failed him. Whitaker? Dennis thought. Robby hasn't called me Whitaker since I was a student. "Do what to me?" Whitaker pushed, walking up beside him and looking up at his boyfriend. Above the city, Robby's eyes looked so gorgeous, reflecting the glittering lights beneath the pair. "This," Robby said, looking at Dennis. "Us." He gestured between them. Dennis felt his heart drop out of his body with how far it fell, looking at him with a look that could only be described as unadulterated fear. "W-What? What is that supposed to mean?" Robby looked away from him, back out to the city. Unspoken words weighed heavy between them. Heavier than any shame or guilt Dennis had ever felt about himself growing up. Heavier than any weight that had rested on Robby's chest when he struggled to breathe through any panic attacks. "I think you know what I mean." Dennis grabbed his shoulder and pulled Robby to look at him, tears pooling in his eyes. "Don't you dare. Don't fucking do this to me, Robby." He seethed through clenched teeth. He wasn't angry. Or maybe he was. He couldn't quite put a finger on the emotion squeezing his lungs and constricting his throat, making it hard to speak. "People are starting to get suspicious of us," Robby mumbled, looking down at Whitaker. However, he never met his eyes. Robby didn't want to see how he was breaking the love of his life. "I have a reputation I need to keep, Whitaker." "Fuck what everyone else thinks, Robby! And f-fuck you for not thinking that too," Dennis let out, not wanting to shout but it was getting harder. He balled up a hand and pressed it into Robby's chest, trying to steady himself as his body shook with the gut-wrenching sobs he let out. "I l-love you so, so much, Robby. For fuck's sake, don't do this to me. I-I'll do whatever you want me to..." 
He sniffled, feeling pathetic for begging so shamelessly. "I can't do that to you. You need to find someone your own age. Not someone old enough to be your dad. You deserve better." Robby frowned. He wanted to tell himself to stop. But it was too late now. Trying to salvage this would mean this night would be remembered forever, and it would leave an awkward crack between them. They'd never be 100% them ever again. And it was all Robby's fault. "You don't get to decide that for me! I love you, I want you, Robby!" Dennis let out, wanting so desperately for Robby to meet his eyes and see how much this was hurting him. Just so Robby would stop driving the blade deeper into his heart. "I don't want anyone my age. You made me feel so loved. So normal, like I wasn't some freak to be looked down upon. You taught me how to love me. And I can't lose that. I can't lose you." Robby just had to shake his head, squeezing his eyes shut to stop the tears that were stinging his eyes from slipping. He could feel the heavy pit of regret sitting in his throat. But he swallowed it down to let it get burned to a crisp by his stomach acid, the feeling making him nauseous. "People watch us, Whitaker. I don't want them to think that I'm taking advantage of you. I don't want people to doubt my ability to be the boss of staff I keep forming inappropriate relationships with. And I don't want people to doubt you, either." Dennis let out a laugh. But he wasn't happy. Nor was he finding this very funny. "Yeah, right. All of this is to save my ass. And you realise this after nine months of making me feel like I'm on top of the world," he spat out, dropping his hands from Robby to wipe his impossibly wet face. It didn't stop the tears from falling. Especially now that he knew that would be the last time he would ever touch Robby ever again. And Robby didn't even have the balls to respond. Dennis took a heavy step back. "So that's it then, huh?" "All I can say is that I'm sorry," Robby mumbled, the place on his chest where Dennis's fist had rested burned. Right over his heart. "Fuck your fucking sorry," Dennis snapped, caving in on himself. "Fuck me, I guess, for thinking I was ever worthy of something like that," Dennis mumbled self-deprecatingly. "And f-fucking- fuck you for... being everything I needed." He let out, not having the guts to insult him. Not yet. The wound was still fresh and spraying blood, as if Robby had nicked an artery. "Whitaker, I-" "Put me on the night shift. Permanently." Dennis finished, letting his mind connect to the rest of his body as he took heavy steps, leaving the roof. Robby could hear his sobs echoing down the stairwell. Robby couldn't stop the regret rushing from his angry stomach up his throat and onto the ground beneath him. Dennis found himself walking to Trinity's apartment. He was far too hysterical to put himself on the bus. He'd end up sobbing all the way home. Not that he wasn't now, but at least it would be less loud in the open space. He kept a hand clamped over his mouth to muffle himself, only letting up to take in shaky and painful breaths. He couldn't help but feel completely betrayed and confused. How did he never see Robby becoming doubtful of their love? Maybe if he had caught it sooner, he would've been able to talk him through what he was feeling and reassure him. His steps were so heavy, his legs so weak and desperate to turn around and run to Robby's house. He had become so comfortable there. It wasn't easy for Dennis to get comfortable in a new place. 
With a new someone. But now it's all gone. He just wanted to scream until his lungs gave out. Instead, he could only continue to sob against his hand, stumbling into Trinity's apartment building and knocking on her door. Wearing her pyjamas, she answered with a slight crack in the door. Dennis wasn't offended. She was expecting him to be staying at Robby's tonight, and being alone in an apartment at night and getting a random knock at the door would make him nervous, too. But it swung open quickly once she saw the condition he was in. "Holy fuck, Denny, what's wrong?" Trinity was quick to get him inside, taking off his jacket for him and resting his bag by the door, helping him with his shoes and pulling him to the living room where she could pull him onto her chest where he could sob all he liked. "H-He fucking broke up with me," Dennis sniffled. Fuck secrecy. Fuck that high and mighty bastard on his high horse, desperate to maintain his 'reputation'. Because if he didn't have that, what else would the great Dr. Michael Robinavitch have? A loving relationship? Fuck that. Why would he have that when he could just work his life away for a hospital ready to drop him and his department? "O-Out of nowhere... th-the best months of my life are all gone, b-because-" He couldn't keep speaking. The anger, betrayal, and immense sadness were at an all-time peak, and he needed to catch his breath before passing out. "It's okay, you're okay... just breathe," Trinity hummed, threading her fingers through his curls, just as Robby would. Everything hurt so much. And all Dennis could do was cry. And Trinity let him, no matter how wet her shirt was getting with tears and snot. She just held him and let him cry until he stopped. She got him a hot chocolate, knowing he didn't like coffee and preferred making his own tea, so it was perfect. Then, Dennis poured his heart out to her. About everything. How Robby had made him feel, their nine months together, and the abrupt end. He finished with, "And I told him to put me on the night shift permanently." Trinity sat for a moment, just to process. "What an asshole," she grumbled, referring to Robby. "I get that you want to be on the night shift. I might join you every once in a while. I probably wouldn't be able to see him without ripping him a new one." Dennis let out a dry chuckle. "I just don't know if I can put it behind me right now. And I'm not letting this ruin my internship. I've worked too fucking hard," he mumbled, wiping his nose with a tissue from the box that sat on the coffee table. "I might come back in a few months or something. I don't think I could spend the rest of my life on night shift." Trinity smiled softly. "I get that. Take as long as you need," she said, resting a hand on his shoulder as he took a drink from his mug. "Don't let it eat you up inside, though. I know it's hard right now, but there's more guys out there. We can go to as many gay bars as you want to start scouting this weekend, if you want." Dennis looked up at her, his eyes puffy and red. "No thanks. Not yet, anyway," he chuckled once again. "Might take you up on that though, going out this weekend. I could do with a fucking drink." "You're telling me," Trinity chuckled with him. Once Dennis finished his hot chocolate, she took the mug from his hand. "Come on, let's go to bed. You can stay with me tonight if you want to." Dennis nodded. He didn't want to be left alone with his thoughts in the dark of his room. "Thanks." 
So, they moved in tandem to Trinity's room and cuddled up together once they were under the blankets. It had become normal for them after one too many movie marathons using Trinity's laptop on their laps as they huddled in close. Dennis was glad he had someone else he felt comfortable with. But it wasn't the same. Dennis didn't know if it was ever going to be the same again.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75739471
{"authors": ["yaoi_master"], "language": "English", "title": "Once More to See You"}
I Don't Love You Anymore Shang Qinghua hated Luo Binghe. Once he thought of him as his son, but that was before Shen Qingqiu, before the Endless Abyss. Mobei-Jun hated him as well, and Shen Qingqiu. It was only thanks to careful persuasion (begging, pleading and crying) from Shang Qinghua that Mobei-Jun didn’t wage war on them. He didn’t know if Shen Qingqiu knew his husband’s true nature or not, but he didn’t want to risk his friendship with the only one who understood his situation. After the marriage, Luo Binghe grew increasingly jealous, glaring constantly at Shang Qinghua. Then, one day, when Shang Qinghua was delivering a letter from Shen Qingqiu, Luo Binghe cornered him. “Why is it you’re so close with Shizun?” Shang Qinghua stared up at him in confusion. “We’re friends? Why would I not be?” That had been the wrong thing to say, as Luo Binghe snapped his arm like a twig, clutching the broken limb tightly in his hand. “You cannot just be friends. You have your own language, inside jokes, and are with him constantly.” Shang Qinghua had been terrified. “We are just friends, really!” Luo Binghe had glared at him with cold eyes. “Don’t reach out to Shizun anymore. Don’t even talk to him, don’t look at him. I’ll destroy you.” But Shang Qinghua had to talk to him, about paperwork if nothing else. Luo Binghe remained true to his word, snapping his legs next. Mu Qingfang was barely able to keep up with the injuries Luo Binghe kept giving him. Shang Qinghua refused to say what was wrong, too afraid of the consequences. Luo Binghe kept visiting, kept injuring him, and breaking bones. Mobei-Jun found out, and hadn’t had the chance to say a word before Luo Binghe was gone, Shang Qinghua left barely in one piece and barely conscious on the floor. Mobei-Jun had been openly more hostile towards Luo Binghe ever since. The emperor either didn’t notice, or was too arrogant to care. It was only thanks to Shang Qinghua begging Mobei-Jun that a war didn’t break out. “Why not?” Mobei-Jun snarled. “He hurt you!” “I- I know, but please-” Shang Qinghua begged. “He hurt you. I’ll destroy him.” “Don’t!” Shang Qinghua cried. “Please, Shen-shixiong is my friend, he loves him, don’t do that to him-” “Who’s to say he doesn’t know? How could he not know his own husband’s actions? He’s just as guilty, and should die as well!” Mobei-Jun was barely able to refrain from unleashing his power in an uncontrollable blast with how angry he was. “Even if he is, you can’t kill him, I’d have to take on his workload,” Shang Qinghua said. A flimsy excuse, but it worked. To Shang Qinghua, it mattered not if Shen Qingqiu knew; he couldn’t lose the only other transmigrator. The only other person he could reminisce about his past life with. It didn’t matter…
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75746141/chapters/198115571
{"authors": ["Tachi_short_for_Tachihara"], "language": "English", "title": "I Don't Love You Anymore"}
An Avian Menace A sharp, cold gust of wind blows down the streets of Mantle, making old hanging signs sway, loose windows rattle, and the people pull their coats closer to themselves. Ghira Belladonna, Chieftain-to-be of Menagerie, finds himself as one of these people, the biting chill cutting through even his healthy mass and fur as he trudges through the slush persisting despite Mantle’s aging heaters. He may not have hearing as good as his wife’s and- his daughter’s, but he can still hear better than any human around. Which, in turn, is why he finds himself in the rain and slush, hunting for the source of a child’s cry. Ghira lets out a deep sigh as he peeks down another darkened alleyway, his faunus eyes allowing him to clearly see the single dumpster and nothing else. He’s just about to move on when he hears the sound again, a quiet, whimpering cry from near the dumpster. With eyes narrowed in concern, Ghira makes his way to the large metal trash bin, the relative warmth of the larger street’s heaters quickly giving way to the freezing temperature in the alley. Rounding the bin, Ghira’s eyes finally land on the source of the noise he’s been in search of for what feels like hours, and he almost stumbles as he sees distinctive, if tangled and dirty, white hair, along with icy blue eyes, peering back at him from the ratty blanket wrapped around what is unmistakably Weiss Schnee, middle child of the Schnee family. For a moment, Ghira simply stands in place, unsure what to do. Dozens of questions come to mind, even as Weiss shifts and turns her body towards Ghira, the blanket falling from her shoulders as she does. It’s then that Ghira receives the answer to a lot of his questions, as his eyes move from Weiss’ messy hair to the white down-covered pair of wings behind her back. Anger, mixed with stunned confusion, flashes across the large faunus’s face, though he schools his expression and lowers himself to one knee with a quick choice to revisit the perspective-altering discovery later. If even half his assumptions are a quarter right, Ghira doesn’t think the child will respond to any anger from him well, regardless of its target. “Hello, little one. Are you cold?” Ghira puts a smile on his face, holding a hand out palm-up towards the small Schnee. When she nods her head, eyes glancing to the hand, he continues. “Well, it is pretty chilly out. Do you have somewhere to go?” The answer he both dreads and hopes for comes, as Weiss shakes her head no. He lets out a breath, fogging in the frigid, industrial alley, though what emotion it was driven by Ghira couldn’t say. “I see. I know someone who can help you. It’s only going to get colder, and that blanket doesn’t look very warm. What do you say?” As the young Weiss looks from Ghira’s eyes, intently staring at them like she can read them, to his hand, still held out for her to take, the large faunus feels his smile become a bit more genuine as the thin, pale fingers cautiously wrap around only half of his own, such is the small size of the Schnee. Ghira stands upright, and gently gathers the blanket and things in it into his arm, as he leads Weiss at a slow pace back towards the hotel he and the other Menagerian Outreach Council members are staying at. A low chuckle escapes Ghira as he briefly lets himself think about the absurdity of the situation, finding a Schnee child- a faunus no less- in his second least favorite city, one he only came to in order to escape his hollow home for a short while. 
Perhaps, he muses, he simply needs to care for people, and the Gods guided him to her now that he- Now that he has the extra time. Ghira resets his mood quickly, pushing away the painful feelings and focusing on finding a way to the hotel that won’t end up on an Atlas newsreel by the morning. —---------------------------------------------------- Ghira peers through the window of the door and into the hotel’s makeshift check-up office, watching Weiss’ legs kick as they dangle off the side of the examination table. Upon getting back, he had asked one of the aides to get the doctor they had brought, one Jade Finemost, to check the young girl’s condition. Explaining this to Weiss, though, had gotten him an odd request. She had asked him to be in the room with her, and when he had asked why, she had only said… “I trust your eyes.” He’s shaken from his thoughts as he hears footsteps approaching, turning to see Dr. Finemost arriving with her well-kept medical bag held by the pale green monkey’s tail that is her trait. He guesses she had been getting ready to settle down for the night, as he sees her still wearing comfortable lounging clothes- a thick shirt and soft pants- with her white coat draped overtop, and running shoes on. Orange eyes give him a look from behind her round-framed glasses, straight, shoulder-length hair of the same pale green framing the tanned skin of her face- one that says she’s going to be expecting a favor in return. Ghira nods to her, already planning to bring some Atlesian wine back for her, and follows her into the room. Weiss looks up from the table, eyes flicking from Dr. Finemost to Ghira and back again, before settling on the doctor as she sits in front of Weiss. Ghira takes a spot by the door, leaning against the unremarkable beige wall of what he thinks used to be a laundry room, unsure of what he’s to do for Weiss besides watch. Dr. Finemost pulls a few items from her bag, setting them on a wheeled trolley as she speaks, her calm, flat tone managing to be free of the ice she’s often known for. “Hello there, I believe Mister Belladonna told you who I am, correct?” Upon receiving a nod, Finemost picks up a clipboard, and marks a few things Ghira can’t make out. “Then I believe we shall begin the examination.” What follows, Ghira assumes, is a mostly normal check-up. He isn’t a medical man by any stretch, beyond the most basic of field medicine and first aid, so his knowledge of what the doctor is doing is limited. Weiss looks mostly at ease throughout the experience, only getting a little hesitant about her wings, but in the end the doctor tells Weiss she’s gotten everything she needs. Ghira hands Weiss the spare room key he had been given by the front desk, and the aide who retrieved Dr. Finemost leads Weiss off with a promise that Ghira won’t be too far away if she needs anything. Ghira feels a smile forming as he watches her follow the aide off. His smile fades, as he turns to Dr. Finemost. He can tell something is wrong, and while not a doctor… “Jade, is it… is it bad?” Ghira feels the dread in his stomach build as Jade’s scowl grows, her tail tapping the examination table as she seems to be trying to glare a hole in the chart in her hands. “It is. In some ways, it’s… here.” Jade presses the chart into Ghira’s hands, and steps back, pulling her bag up with her tail. “I’m sorry, Ghira, I- I need to go sit down. 
I… Call me if you need something, and try not to need something.” Ghira watches as Jade quickly walks off, tightening his grip on the unread chart in his hands and making his way to his own hotel room for the night to make an important choice, and an even more important call. —------------------------------------------------------------------------ “I know, dear, it’s… a lot to ask. Especially with-” Ghira cuts himself off, looking away from the call screen. He doesn’t need to finish the sentence, both of them know how it ends. Taking a breath, he looks back to the image of Kali, his darling wife, and resists the repeated urge to cringe away again at her expression of distraught horror mixed with deep emotional pain. “I don’t think… That leaving her here is a good idea. I don’t know what goes on behind the closed doors of the Schnee manor, but those wings… Jade thinks she’s been binding them frequently since birth. It’s a miracle they’re even functional.” Through the screen, Kali Belladonna nods almost absentmindedly. She had been sent the chart and attached photos by Ghira, and seemed to be almost transfixed by them. As he watches her let out an almost mournful sigh, Kali nods again, this time with more energy behind it. “I think you’re right, love. If her father- no. If that man was willing to… Gods, what else has he done? He just left her out in the streets, Ghira. In January!” He hears Kali’s voice hitch, and aches to be able to wrap his arms around her. Instead, he places a hand on the call’s display, which Kali mirrors. After a moment of listening to her quiet whines, she composes herself and continues. “Yes. I… I support it, Ghira. I’ll call the house and have a bedroom set up for her. I won’t let that man do any more harm to her.” Ghira smiles, as he listens to the sorrow in his wife’s voice solidify into the fierce, maternal determination he’s always loved about her, and he knows that Weiss will be safe. Only a small part of him is worried that Kali might not then let Weiss go, but… As much as it hurts him to think it, with Blake… gone, now, it would be a shame for such a large house to be lived in by only the couple. Weiss isn’t a replacement for his beloved daughter, nothing could take Blake’s place in his heart, but people have told him his heart is large. He can make space for another in there, if he has to. “I’ll ask her tomorrow, then. Goodnight, dear, I’ll see you soon.” “Goodnight, love.” The image of Kali flickers out, the call ending as Ghira sits back and gazes out of the window at the city. Neon lights and the glow of the heaters make Mantle shine even through the fog and smoke throughout it in a way Ghira, despite his general dislike of the place, simply can’t get anywhere else on Remnant. As the large faunus stands and moves over to the bed, pulling off his shirt and setting it on the chair to fold in the morning, he supposes that the city isn’t all bad, in the morning he may well have something wonderful to show for his journey out here. For the first time in months, ever since the day at the docks of Kuo Kuana, Ghira Belladonna goes to sleep feeling hopeful for tomorrow. —---------------------------------------- “Goodnight, love.” Kali Belladonna taps the end call button on her scroll, her black feline ears flicking as the image of her husband vanishes. Setting the device down, Kali leans back into the soft lounge chair in her parents’ Mistral home and lets out a saddened whine. 
Her eyes close for a moment, and the image of the bent, almost twisted wings crosses her mind. The wings had lacked almost all their feathers, and the shape! Gods, she can’t even imagine how little Weiss must feel, must have felt living in that home with that man. She hears a sound, the lightest of footsteps by the kitchen, and opens her eyes to see her mother, the wizened old apothecary, Durga Karavira, holding a tray with two cups. Kali smiles, as she sits back upright. “Oh, my dear young one. Is news like this so bad? Another little beast sounds wonderful.” The heavy accent of her mother’s voice calls out as she slowly makes her way over, setting the tray on the coffee table, and carefully settles into the chair beside the one Kali is seated in. Kali rubs a hand over the worn leather of the chair’s arm, her father’s chair. She can still get a bit of his scent, the earthy, grounding smell of a man who loved and worked as best as possible. “It… Mama, the girl, she’s a Schnee. A faunus and a Schnee, and she’s so hurt. I can’t even imagine what she’s been through.” Kali picks up one of the cups, opening her mouth to better take in the aroma of her mother’s tea blend. Even after all these years, almost four decades of life, Kali still can’t make anything like it. Durga takes up her own cup, sipping down a healthy amount before speaking. “Oh what she has been through. Yes, yes, well it has already happened, hasn’t it? You cannot change it. I cannot, the Animal God cannot even. You worry about what already is done, but you should look at what is not yet done.” Kali nods, half absently as her mother’s words hit her in more ways than their current topic. She can’t change what’s already happened. Lingering on it, on what she could have done, doesn’t help, but focusing on what she can do… She speaks after a moment of thought and a sip of tea. “Thank you mama. I think I needed that. What is not yet done, hmm…? I’ve got to imagine a lot hasn’t been done for the poor girl.” As Kali begins to think of several things at once, Durga just laughs and pushes herself back to her feet. Kali barely notices the fond look her mother sends, but very much does notice her words. “Then, to sleep it is. I have to be up early for those Safed brothers, the butter won’t make itself and I think no other payment to move my home is appropriate.” Collecting the tray and empty cups, Durga starts to walk away when Kali breaks her trance. “You- mama? You’re coming with me? But… It’s Menagerie, and you…” Kali stands as well, her amber eyes meeting the same color in her mother’s gaze, one of a mix of emotions that Kali can’t properly parse right now. Durga smiles, setting the tray down and pulling Kali into a hug much tighter than her old age would suggest possible. “I still hate that blood soaked reef of an island. I am not, though, one to miss the life of a new granddaughter, more so that this one will have much to learn, and the great oaf you wed will teach her all wrong.” Kali smiles, returning her mother’s hug as she listens to the calming, if slightly harsh, voice talk and complain about Menagerie, as she feels, for the first time since her darling little Blake stepped aboard a boat Kali could never follow, like what lies ahead isn’t destined to be so hollow after all.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75743436/chapters/198108326
{"authors": ["TheWashingtonGem"], "language": "English", "title": "An Avian Menace"}
Merging my past with the future I get to have At the San Fransokyo Mall, Sunday May 27th 2035, 11:03am… Juniper pops her lips as she, Olivia, Karmi and Ari walk through the mall, having enjoyed some McDonald’s at the food court for a late breakfast as they enjoy their mall outing. “So, what store should we hit first?” Olivia asks as she stretches her back. “And let’s spice things up today. We hit the Barnes & Noble AFTER the next two stops.” “Sounds fun.” Ari says chipperly. “Have anything in mind?” The Latina shrugs as she looks around, spotting a hair salon. “How about haircuts?” Karmi then touches the tips of her hair and shrugs. “My hair’s still rather short from two weeks ago, but maybe we could do something else there.” She says with a smile of brilliance. “Like maybe hairdos.” “Or braids.” Juniper suggests, but then Ari’s eyes widen as she gets a great idea. “How about we get parts of our hair dyed.” She smiles with joy, the whole group pausing for elaboration. “Wait, what?” Karmi warmly chuckles. “Well, I was thinking when I saw June’s hair.” The blue-eyed brunette explains. “And I saw the tuft of purple in her hair and then it came to me; me and Juni with matching purple tufts.” She smiles warmly. “Making us cousins with matching hair.” She gushes, making Juniper blush from flattery and their friends melt on the inside from how loving Ari’s suggestion was. “Well, I dunno about my own hair, but how can I say no to that face?” She says lovingly, making Ari happy. Inside the salon… The four then enter the salon, where they walk up to the dyed hair chart on the wall to see what they want. “Hmm, what do you want, Vi?” Karmi asks. “Dunno, maybe a blue streak since my Mechadama suit glows blue.” Olivia smiles and shrugs. “I’ll have to see. What about you?” Her biotech friend then clicks her tongue as she thinks. “Maybe icy blue.” “I already know what I want.” Ari gushes. “Say June, what color is your dye anyway?” “Lavender.” The dancer assures her cousin before running her hand through her own hair. “You know, this brings back memories.” She warmly chuckles. “When I was 13, I admired my mom SO much that I wanted to look like her. Then after begging on my knees for like an hour, she let me dye my hair blonde.” “Oh yeah, your natural hair is dark brown, right?” Olivia says, remembering that Juniper had brought it up in Klamath Falls in 2033. Her best friend then nods happily. “Yeah, and I liked it enough to keep it for 6 years.” The blonde warmly chuckles. “I got to dye it during my time in prison due to me being a model prisoner.” She runs her fingers through her hair again, feeling something weird. “Um, I need to make a call, be right back.” She says before heading outside, her friends hoping everything was alright. Outside the salon… Juniper taps her foot rapidly as she waits for her cousin to answer her FaceTime call. Trina then picks up, calling from the backyard. “Hey June, what’s up?” She asks chipperly. “Trin, if you don’t mind me asking, why did you want your original hair back?” The 18-year-old asks in a low but curious voice. “Besides wanting it long again.” While taken aback by the question, Trina thinks about it. “Well, I cut my hair that short to be like my father.” She points out. “And besides wanting it long again, I wanted to look like myself; not him. Mind if I ponder why you ask?” Juniper then lets out a warm chuckle. “Could never get anything past you, huh? Well, I think I want to dye my hair back to brown.” She says with uncertainty. 
“But, it feels like a hard decision. It feels like I’m taking back the love I displayed for mama when I dyed it all those years ago.” She says somberly, making her cousin feel empathy for her. “You’re not betraying Aunt Barb.” She assures her softly, the videocall making her tone raspier. “You just want to change your look, and I’m pretty sure she’d agree that your love for her goes beyond your hair color.” The dancer then warmly smiles as she feels her eyes almost getting wet. “You’re right, thanks Trin. See you later cuz. Love you.” She softly gushes, moving Trina to blow a kiss at her through the videocall. “Love you too, June.” She says before the dancer hangs up and takes a deep breath before heading back into the building. But as she steps one foot through the door, Ari’s words ring in her head, giving her another idea. Later at the Ferns Residence, around 3:01pm… Barb whistles as she helps Sally roll up some tubes of dough in the kitchen. They were making delicious cinnamon rolls since it was the last weekend of the month. “At this rate we should have the cinnamon rolls done by 4-ish.” Barb says before snickering, which Sally does as well. “Yeah, reminds me of when you helped Steven bake when the two of you were younger.” “Yeah, he always got the sugar powder on his face within the first 30 seconds of me setting it down.” The former dancer warmly smiles. “He’s basically my brother, and I have no problem with that.” “And I have no problem with you being my sister-in-law.” Sally gushes back, making her friend’s heart melt more. “I’m back with Ari!” The two hear Juniper yell from the living room. “COMING!” Sally says as she and Barb set the dough down and head on out. The two then enter the living room, shocked by what they were seeing. Ari had a lavender tuft in her hair; specifically, the tuft that went across her forehead was now lavender. But that wasn’t even the most shocking part, as Juniper had dyed the blonde portion of her hair back to its natural chocolate brown color with the lavender portion remaining. She also had a light blue lightning bolt hairpin in the left side of her hair; akin to Ari’s rosemary pin. While Ari was chipper, Juniper had her arms behind her back as she nervously smiled. “What do you think?” She nearly squeaks from the nervousness. It takes Barb’s mind a moment to realize that she’s not in 2029 anymore; that her daughter had not only restored her former hair color, but mixed it with her modern one. “I- I didn’t know you wanted to dye your hair back, June.” She says. “I’m not mad, just surprised.” “Well, Ari recommended that we all get streaks of color in our hair.” Her dancer daughter says, feeling her nervousness fluctuating. “She wanted to be matching since we were cousins, so she got my lavender streak. But then I realized I wanted my brown hair back, so I called Trina and she assured me that you wouldn’t get mad.” “But, why would I be mad?” Barb asks, concerned that her daughter might be feeling unnecessary guilt. Juniper then takes another deep breath as she explains. “Because, I dyed my hair blonde because I love you so much to the point I wanted to look like you. And I worried that if I restored my normal hair, the love I showed before would disappear.” She squeaks, making her mom and aunt’s faces soften with sad hearts. “Juni, the fact that you wanted to be more like me was loving enough.” Barb smiles weakly. “It assured my heart that I was doing right, that one decision alone. 
But if you want to have your old hair back, that’s ok with me.” She says sweetly with a closed hand on her daughter’s shoulder, since she didn’t want to get Juniper’s shirt dirty with flour. Her assuring words make her daughter’s heart melt, which warmed much more as she hugged her mom lovingly. “Thanks, mama.” She gushes softly, getting a loving closed hand pat on the back. As Sally and Ari melt from the loving moment, a realization hits the blue-head’s mind. “You kept your purple tuft because of Ari, didn’t you?” Her niece nods as the hug ends. “Well she did say ‘matching cousins’.” She says before pointing to her hairpin. “It’s like, my new self is properly displayed now.” Juniper warmly chuckles. “The lavender symbolizing the new life I got thanks to Go Go, with the brown representing the life I got back. And the hairpin represents how far I’ve come thanks to my loved ones.” She gushes before pulling Ari into a warm hug. “Thanks again Ar.” Her blue-eyed cousin warmly chuckles as she kisses her cousin’s cheek. “Anytime, June.” She gushes. “Now then, are we not going to talk about the fact that your mom and our aunt were baking something?” “You follow them to the kitchen, I’ll join after I do something.” Juniper smiles, Barb and Sally rolling their eyes in amusement as Ari followed them into the kitchen. “OOH CINNAMON ROLLS!” She squeaks with excitement, getting a small chuckle out of Juniper as she takes out her phone and looks at the group photo she and the others took as they walked around the mall with their new hairdos. While she and Ari were matching, Karmi had a few dark blue streaks throughout her hair while Olivia got some icy blue streaks on her bangs. She chuckles as she sends it to the group chat, accidentally swiping to the next photo when she’s about to go to another app; which was just her and Ari recreating the pose she and her mama did when using electricity to form their name during their villain debut; with the Burger King logo standing in for ‘High Voltage’. Juniper can’t help but melt more upon seeing it, so she sends it to the group chat as well before heading to the kitchen; glad she had her fun and loving cousin Ari in her life.
ao3_english
2025-12-13T00:00:00Z
https://archiveofourown.gay/works/75740051
{"authors": ["PepsiMagnet"], "language": "English", "title": "Merging my past with the future I get to have"}
Kill For You The rain hadn’t stopped hammering the docks. It came down in sheets, turning the pavement slick and reflective- perfect for photography, terrible for survival. Peach was adjusting his lens, crouching between stacked shipping crates, trying to catch the way the neon signs across the river cut sharp pink stripes through the rain. He didn’t hear the footsteps behind him. He only felt the cold blade press to his ribs. A rough hand yanked him back. “Wrong night to be taking pictures, sweetheart.” Peach’s breath punched out of him. “I- I wasn’t photographing you. I swear.” “Doesn’t matter. Someone higher up wants you gone.” Before Peach could process the words, there was a soft click. That unmistakable, ice-cold click. The click of a gun being cocked. Then- a shape moved through the rain, silent, deliberate. Thee. He didn’t just look angry. He looked like someone had flipped a switch and turned him into something primal. The gunshot cracked through the air and the attacker dropped instantly. Peach stumbled back, heart in free fall. Thee was at his side before his brain caught up, grabbing his shoulders, scanning him closely. “Did he touch you?” Thee’s voice was steady but trembling at the edges. Peach shook his head. “No-” “Good.” Thee took the camera from Peach’s shaking hands, slinging it around his own neck. “We’re leaving.” Peach could barely breathe. “You killed him.” “He was going to kill you.” “He was going to run away!” Thee’s jaw tightened. “I don’t care.” Rain soaked through Peach’s clothes as Thee pushed him toward the street, shielding him with his own body like Peach was worth an entire army. “No,” Peach said suddenly, stopping in the middle of the road. Thee turned sharply. “No?” “We’re talking. Now.” “Not here.” “Yes, here!” Thee closed the distance in two steps, eyes dark, rain dripping from his lashes. “You want to argue about morality while you’re still shaking?” “I’m shaking because you executed someone at my feet!” “If someone comes for you,” Thee growled, “I end them.” “That’s not normal!” “Neither is someone wanting you dead!” Thee’s voice cracked. “What do you not understand?” “That you can’t just- just take over my life like this! I’m not yours to protect!” Thee’s breath faltered, just barely, but Peach saw it. And he hated the way it hit him. They walked the rest of the way in silence, Thee glued to Peach’s side, scanning every shadow like violence might leap out of it. By the time they reached Peach’s studio, Peach’s heart hadn’t slowed. Inside, the lights flickered on. The studio was cluttered with camera equipment, prints, and half-empty coffee cups. When Thee entered he kicked the door shut behind him so hard the frame rattled. He didn’t even look around. No. He stared right at Peach. “So you really think you can handle the streets alone?” Thee finally snaps, pacing like a caged animal. Rainwater drips from his hair. His knuckles are still bloody. “Someone tried to slit your throat tonight!” “And you shot him.” Peach’s voice cuts sharp. “In front of me. You promised-” “I promised to keep you alive,” Thee growls. “He. was. running. away!” “He came for you,” Thee fires back, stepping in close, chest heaving. “I already told you Peachayarat. I don’t care if they turn their back. It stays pressed into the ground until I say otherwise.” “That’s exactly why I didn’t want you involved!” “And look where staying away got you,” Thee spits. 
“Almost bleeding out in my arms anyway.” “Now tell me, are you okay?” The question was soft, too soft for someone who just killed a man. Peach stepped back, anger rising with the adrenaline. “No. I’m not okay. You can’t just- Thee , what the hell was that?” “What was what?” Thee asked quietly. “Saving you?” “That wasn’t saving! That was violence! That was-” “What I do,” Thee snapped, voice rougher now. “It’s who I am.” “It doesn’t have to involve me,” Peach said, voice tight. “I’m not your responsibility.” Thee flinched again. The kind of flinch you only see when a truth hits too hard. “You think I did it out of responsibility?” he said, stepping closer. Peach backed up instinctively until he hit the metal table behind him. Thee kept moving. “You think I follow you across docks because it's my job?” “Isn’t it?” Peach shot back. “You’re mafia. You use violence. You think everything is solved with force. You’re dangerous, and I shouldn’t be tangled in any of this.” “I told you to avoid my business!” Thee snapped. “But you keep showing up in it. You keep wandering into danger. You keep-” “Working. I keep working,” Peach corrected. “I take photos. I do freelance jobs. I’m not doing it to impress you.” Thee’s eyes hardened. “You think I need you to impress me?” Peach swallowed. Rainwater still clung to Thee’s hair, dripping down his jaw. “You need someone to follow,” Peach said bitterly. “Someone to obsess over. Someone to protect. I’m just… convenient.” Thee stepped even closer. “You think I picked you because you were convenient?” Peach looks away, jaw tight, breathing hard. “It doesn’t matter what I think! I will tell you one more time. You don’t get to control how I work. I’m not part of your world. We’re not-” He falters. Thee freezes. “Say it,” Thee says, voice too calm to be safe. “Finish the sentence.” Peach meets his eyes. “We’re nothing. Not dating. Not… anything. Not even-.” “Bullshit.” “We’re not even friends, Thee.” The silence after that line feels like a blade. Thee steps forward slowly, like he’s forcing down a thousand emotions just to speak, or more like he’s fighting the urge to punch a wall. “You know what’s pathetic?” he murmurs, leaning in until Peach can feel the heat of him. “I’d kill for you. And you pretend you don’t even know me.” “That’s the problem,” Peach whispers, anger cracking into fear and something else. “You solve everything with violence.” “And you solve nothing,” Thee snaps back. “You hide behind a camera and pretend danger won’t touch you if you don’t look directly at it.” Peach pushes at his chest, furious. “Get away from me.” Thee leaned in even further, voice low. “Say it again.” “What?” “Say we’re nothing again.” “No.” “Say it.” “I said no.” “Why not?” Thee demanded, stepping closer until their chests almost touched. “If it’s the truth, say it.” Peach’s voice finally cracked “Because you scare me.” Thee inhaled slowly, like he was trying to calm a storm inside him. “And you think I’m not scared?” he whispered. “I watched someone try to kill you tonight. I thought-” He cut himself off, shaking his head, jaw working. “Strangers don’t track you across the city to make sure you get home,” Thee says quietly. “Strangers don’t know your schedule better than you do.” “Strangers don’t threaten half the underworld to keep you safe.” Peach’s breath catches. “That’s exactly why we’re nothing. I don’t want a bodyguard with a gun and a temper.” Thee’s voice drops. “You don’t want me?” “I don’t want your violence,” Peach corrects, too fast, too emotional. 
“I don’t want your world suffocating mine.” “You want me,” Thee says with terrifying certainty, “and you hate that you do.” Peach pushes him, palms flat on Thee’s chest. “Stop assuming what I feel just because you can’t control your own.” Thee grabs his wrists, firm but careful and leans in so close Peach can feel the heat of every breath. “Oh, I can control myself,” he says. “I just don’t want to when it comes to you.” “That’s exactly why this is dangerous.” “Good.” Thee growls. “Then at least you finally see me clearly.” Peach glares up at him, angry and shaken and alive in a way that terrifies him. “You think this is some twisted romantic moment? You killed someone, Thee!” “For you.” “Not FOR me. You did it because you can’t stand the idea of losing something you think you own.” Thee’s jaw flexes. “Just. Just say we’re nothing again, just one last time.” he says, voice low, dangerous. “Why?” Peach bites out. “So I can prove you wrong.” Peach opens his mouth to argue- and Thee kisses him. It hit Peach like a shockwave. It was angry, heated, messy. Thee’s hand slid to the back of Peach’s neck, dragging him closer, kissing him like Peach’s denial had wounded something deep inside him. Peach shoves at him once, but Thee kisses harder until Peach’s resistance shatters and he fists both hands in Thee’s soaked shirt, dragging him closer, meeting the kiss with just as much anger. Just as much need. They breathe into each other like they’ve been drowning. Thee presses him back against the counter, mouths colliding again and again. Their kiss is messy, heated and full of arguments they haven’t spoken yet. And yet Thee is kissing him harder, deeper, like he needed confirmation Peach was alive. Alive and his. Peach kissed back with just as much anger, just as much want, letting the fear and frustration burn into something hotter. Thee nipped his bottom lip, Peach gasped and pulled him even closer. Their breaths mingled harshly, bodies flush, tension finally cracking open. When they broke for air, foreheads pressed together, Thee’s voice came out rough, shaking. “You say we’re nothing,” he whispered, “but you kiss me like I’m the only thing you can hold onto.” Peach’s voice shakes. “I’m scared of what you turn me into.” Thee’s thumb brushed Peach’s jaw, possessive and soft at the same time. “Good,” he breathes. “Then we’re the same.” Peach didn’t deny it and when they kissed again it was slower, deeper, filled with every ounce of intensity Thee had been trying to swallow since the first moment he saw Peach behind a camera.
ao3_english
2025-12-14T00:00:00Z
https://archiveofourown.gay/works/75737131
{"authors": ["sunmyne"], "language": "English", "title": "Kill For You"}
# Variable Record Table: A Unified Hardware-Assisted Framework for Runtime Security

Abstract- Modern computing systems face security threats, including memory corruption attacks, speculative execution vulnerabilities, and control-flow hijacking. Although existing solutions address these threats individually, they frequently introduce performance overhead and leave security gaps. This paper presents the Variable Record Table (VRT), a unified hardware-assisted framework that simultaneously enforces spatial memory safety against buffer overflows, back-edge control-flow integrity (CFI), and speculative execution attack detection. The VRT dynamically constructs a protection table by instrumenting runtime instructions to extract memory addresses, bounds metadata, and control-flow signatures. Our evaluation across MiBench and SPEC benchmarks shows that VRT successfully detects all attack variants tested with zero additional instruction overhead. Furthermore, it keeps memory requirements below 25 KB (for 512 entries) and area and power overhead under $8\%$ and $11.65~{\mu\mathrm{W}}$, respectively. By consolidating three essential security mechanisms into a single hardware structure, VRT provides comprehensive protection while minimizing performance impact.

Index Terms—Memory safety, control-flow integrity, hardware security, tagged memory, speculative execution attacks.

# I. INTRODUCTION

Memory safety violations are among the most critical vulnerabilities in modern systems, with buffer overflows, control-flow hijacking, and speculative execution attacks being the three primary threat classes. Despite the availability of various mitigation techniques, existing solutions have three fundamental limitations: (1) narrow protection scope (defending against only one attack class), (2) significant performance overhead from software mediation, and (3) security gaps between disjoint mechanisms.

The primary challenge in memory protection involves a trilemma between completeness, performance, and security. Current solutions force designers to choose between different approaches: using multiple point solutions for comprehensive coverage (completeness), relying solely on hardware mechanisms for improved performance, or striving to close all vulnerability gaps (security). For example, while tagged memory architectures offer strong spatial memory safety, they do not adequately address control-flow integrity or threats from speculative execution. Conversely, control-flow integrity mechanisms focus only on validating branches, leaving them susceptible to memory corruption attacks. Software-based approaches, such as bounds checking, often incur significant overheads, typically exceeding $30\%$. Even hardware-assisted solutions like Intel's Control-flow Enforcement Technology protect against only certain types of vulnerabilities. Recent research has shown that sophisticated attacks can exploit the gaps between these isolated protections, underscoring the need for a unified solution that addresses memory safety, control-flow integrity, and speculative execution threats while maintaining performance.

To address the limitations of existing security methods, we extend the Variable Record Table (VRT), a comprehensive hardware framework designed to enforce memory safety, control-flow integrity, and protection against speculative execution. VRT achieves this through three key innovations. 
First, it features a novel metadata architecture that captures variable bounds, control-flow signatures, and speculative access patterns within a single hardware structure, eliminating the need for separate protection mechanisms. Second, our design incorporates lightweight instrumentation of runtime instructions, enabling the dynamic construction of protection policies without requiring software intervention. This approach maintains zero additional instruction overhead. Third, VRT implements parallel security checks through a dedicated pipeline stage that performs bounds verification (for spatial safety), return address validation (back-edge CFI), and speculative access tagging in a single clock cycle.

This unified approach provides three fundamental advantages over existing solutions: (1) comprehensive protection against all three classes of attacks through shared metadata, (2) no impact on performance, and (3) practical hardware costs, with only a $1.98\%$ increase in area overhead. By consolidating traditionally separate security mechanisms into a coherent architectural framework, VRT provides protection against modern multi-vector attacks without compromising system performance.

The remainder of this paper is organized as follows. Section IV reviews background and existing approaches. Section II presents the VRT architecture, including its metadata extraction mechanism, protection table design, and enforcement policies. Section III evaluates VRT's security coverage, performance impact, and hardware overhead through comprehensive experiments. Finally, Section V discusses implications and future directions while concluding the paper.

# II. RUNTIME DEFENCE ARCHITECTURE

The architecture of the proposed system is shown in Figure 1, where we augment the standard 5-stage pipeline processor with dedicated memory (the VRT) to log runtime variable memory space information. This enhancement enables us to extract variable details in real time from instructions interacting with the main memory, allowing us to verify their usage once control returns from a function.

Fig. 1. Overall Architecture

# A. Variable Space Extraction Architecture

Figure 2 illustrates the architecture for extracting base and bound information during runtime, specifically during the decode and execution stages. This runtime instrumentation targets instructions that could generate a new address potentially associated with the frame pointer, which is stored in a special register, as indicated in Table I.

Fig. 2. Modified Pipeline for Runtime Variable Space Extraction

Table I presents the layout of the VRT, which consists of three columns: the associated bit, the variable base address, and the bound value. Each entry in the VRT includes an associated bit, a 32-bit base address, and an 8-bit bound value, resulting in a total of 41 bits per entry. The associated bit differentiates entries belonging to subsequent function calls. The table snippet displays six entries from the active function and two from the preceding function.
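As a concrete illustration of the entry layout just described, the following C sketch models one VRT entry and the packing arithmetic (1 associated bit + 32-bit base + 8-bit bound = 41 bits). The struct and field names are illustrative assumptions for exposition, not the authors' hardware description.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative software model of one VRT entry: 1 associated bit,
 * a 32-bit variable base address, and an 8-bit bound.
 * Field names are assumptions, not the paper's RTL. */
typedef struct {
    uint8_t  associated; /* 1  bit : entry belongs to the active function */
    uint32_t base;       /* 32 bits: variable base address                */
    uint8_t  bound;      /* 8  bits: variable size in bytes               */
} vrt_entry_t;

int main(void) {
    /* example entry taken from the first row of Table I */
    vrt_entry_t e = { .associated = 1, .base = 0x7FFF60, .bound = 24 };
    int bits_per_entry = 1 + 32 + 8;   /* 41 bits, as stated above */
    printf("covers 0x%06X..0x%06X, %d bits/entry\n",
           (unsigned)e.base, (unsigned)(e.base + e.bound - 1), bits_per_entry);
    return 0;
}
```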
# B. Buffer Overflow and VRT

After populating the VRT with the base and bound addresses of local variables, we can evaluate each array offset and pointer operation to identify potentially invalid memory addresses in two representative scenarios. In this section, we discuss both cases of illegal access.

TABLE I
VARIABLE RECORD TABLE

| Associated | Variable Address | Bound |
|------------|------------------|-------|
| 1 | 0X7FFF60 | 24 |
| 1 | 0X7FFF3B | 4 |
| 1 | 0X7FFF38 | 4 |
| 1 | 0X7FFF34 | 4 |
| 1 | 0X3FFF30 | 24 |
| 1 | 0X3FFE28 | 256 |
| 0 | 0X7FFE70 | 24 |
| 0 | 0X7FFE60 | 16 |

1) Constant variable index: The first case involves direct access to an array using a constant index that exceeds the array's range, which results in an out-of-bounds access. If this operation is unchecked, it may corrupt data outside the allocated scope. In C code, this appears as an attempt to access an array with an out-of-bounds index:

a[out_of_bound] = 'X';

The corresponding assembly instruction for the array access illustrates how the offset involved in the load instruction can lead to an address outside the valid address space of the variable stored in the VRT:

4002e0: lw $2, out_of_bound($30)

2) Loop operation on an array or pointer variable: This issue often arises in buffer overflow scenarios, particularly with string library functions like `strcpy()` during loop operations. An unchecked increment of a pointer variable can result in addresses that exceed the allocated memory space:

char X[6];
char *ptr = X;
for (i = 0; i < 10; i++) *ptr++ = '\0';

Furthermore, this pointer increment operation can be demonstrated using MIPS-like assembly instructions, where register $2 serves as both the source and destination address. In out-of-bounds cases, $2 may contain addresses that span multiple entries in the VRT, whereas valid operations remain within a single VRT entry.

```asm
4002e0: lw   $2,44($30)
4002e8: addu $3,$0,$2
4002f0: sll  $2,$3,0x2
4002f8: lw   $3,40($30)
400300: addu $2,$2,$3
```

To mitigate these issues, the pipeline implementation adds VRT checks during the execution stage. When an out-of-bound access is detected during address generation, the operation is blocked before it can corrupt memory.
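The following C sketch mirrors the execute-stage check just described: a generated address is compared against the base/bound pair of the VRT entry that owns the accessed variable, and anything falling outside a single entry is flagged. This is a software approximation under assumed names (`vrt_access_ok`, `VRT_SIZE`); in the actual design the search runs in hardware, in parallel with address generation.

```c
#include <stdbool.h>
#include <stdint.h>

#define VRT_SIZE 512   /* assumed table depth, matching the evaluation setup */

typedef struct { uint8_t associated; uint32_t base; uint8_t bound; } vrt_entry_t;
static vrt_entry_t vrt[VRT_SIZE];

/* Return true when `addr` lies inside the entry that owns `base_addr`.
 * A false result corresponds to the blocked operation described above. */
static bool vrt_access_ok(uint32_t base_addr, uint32_t addr)
{
    for (int i = 0; i < VRT_SIZE; i++) {
        if (!vrt[i].associated)          /* only the active function's entries */
            continue;
        if (vrt[i].base == base_addr)    /* entry that owns this variable      */
            return addr >= vrt[i].base &&
                   addr <  vrt[i].base + vrt[i].bound;
    }
    return false;                        /* unknown base: treat as illegal     */
}
```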
# C. Backward-edge CFI Enforcement

In a Control Flow Graph (CFG), a backward edge indicates a transfer of control back to a preceding node within the graph. This occurs due to the 'ret' instruction, which directs the control flow to the instruction immediately following a function call. Figure 3 demonstrates how the control flow in a program initiates multiple function calls, each taking its unique execution path and managing a specific data set.

Fig. 3. A Control Sequence Graph Example.

Backward-edge Control Flow Integrity (CFI) ensures that functions return to the correct location by verifying the return address stored in the stack frame. Acknowledging that this return address can be vulnerable to various attacks, we propose an additional validation method that uses the base addresses of stored variables. When a function returns, the program continues to use the same variable set. Our approach checks the first memory address accessed by load (lw) and store (sw) operations after the function returns. If this address matches one of the variables in our predetermined list, we consider the control flow path normal. In cases where the control flow may be compromised, we anticipate two distinct scenarios:

1) The return leads to the beginning of an entirely different function: This is typically detected by observing an instruction such as subl $16, %esp, where stack space is allocated for the new function call. Thus the return instruction anomalously precedes the instruction that creates stack space, indicating a potential compromise.
2) The return leads to an arbitrary address: In this scenario, the addresses generated from load/store operations subsequent to a return instruction do not match any current variable in the variable record table. Therefore, after a return is executed, our system verifies that these generated load/store addresses align with an entry for the expected returning function in the variable record table. This validation process is essential for detecting returns to unintended or malicious locations, thus maintaining the integrity of the program's control flow.
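A minimal sketch of this return-site validation, reusing the illustrative `vrt_entry_t` model from earlier: after a 'ret', the first lw/sw address is compared against the VRT entries recorded for the expected caller, and no match indicates scenario 2 above. Names such as `first_mem_addr_after_ret` are placeholders, not the authors' implementation.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t associated; uint32_t base; uint8_t bound; } vrt_entry_t;

/* Backward-edge check: the first load/store address observed after a
 * return must land inside a variable recorded for the returning caller. */
static bool return_site_valid(const vrt_entry_t *vrt, int n,
                              uint32_t first_mem_addr_after_ret)
{
    for (int i = 0; i < n; i++) {
        uint32_t lo = vrt[i].base;
        uint32_t hi = vrt[i].base + vrt[i].bound;
        if (first_mem_addr_after_ret >= lo && first_mem_addr_after_ret < hi)
            return true;   /* address belongs to an expected variable */
    }
    return false;          /* scenario 2: return to an arbitrary address */
}
```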
# D. Defending Against Cache Probe Attacks Using VRT

The VRT mechanism (Figure 4) provides protection against cache probe attacks by tracking misspeculative memory accesses. When an attacker establishes co-location with a victim process, VRT records recent memory accesses and utilizes dirty-bit tagging to identify speculative execution patterns.

Fig. 4. VRT with dirty bit and cache probe

The protection mechanism operates through three key phases:

1) Dirty Bit Tagging: The dirty bit is periodically reset to zero based on the system's maximum speculation resolution time. During misspeculation, this bit remains set to 1, marking all affected VRT entries as dirty.
2) Attack Detection: When attackers probe dirty cache lines during the reconnaissance phase, VRT verifies these accesses against the recorded misspeculative access patterns. The parallel search mechanism compares:
- The base address from operand fetch
- The current function's valid index range (stored in dedicated registers)
3) Pipeline Intervention: Upon detecting unauthorized access to dirty cache lines, the pipeline stalls immediately, preventing sensitive data from being read or leaked.

In the illustrated example:
- Speculative() functions set dirty bits during misspeculation
- check_array() functions attempt to probe contaminated cache lines

The address search occurs concurrently with the execution stage, ensuring zero cycle overhead for legitimate operations while maintaining complete protection against speculative cache probes.
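The sketch below approximates this dirty-bit flow under the same illustrative entry model, with a `dirty` field added here for exposition: entries touched during misspeculation are tagged, and a later access that hits a dirty entry from outside the current function's valid index range is treated as a probe and stalls the pipeline. Function and field names are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t  associated;
    uint32_t base;
    uint8_t  bound;
    uint8_t  dirty;      /* set while a misspeculated access is unresolved */
} vrt_entry_t;

/* Phase 1: tag every entry touched by a misspeculated access. */
static void tag_misspeculation(vrt_entry_t *vrt, int n, uint32_t addr)
{
    for (int i = 0; i < n; i++)
        if (addr >= vrt[i].base && addr < vrt[i].base + vrt[i].bound)
            vrt[i].dirty = 1;
}

/* Phases 2-3: probe check performed alongside the execution stage.
 * An access hitting a dirty entry outside the current function's valid
 * index range [lo_idx, hi_idx] is flagged so the pipeline can stall. */
static bool probe_detected(const vrt_entry_t *vrt, int n,
                           uint32_t addr, int lo_idx, int hi_idx)
{
    for (int i = 0; i < n; i++)
        if (vrt[i].dirty &&
            addr >= vrt[i].base && addr < vrt[i].base + vrt[i].bound &&
            (i < lo_idx || i > hi_idx))
            return true;   /* stall before data can be read or leaked */
    return false;
}
```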
# III. EXPERIMENTAL RESULTS

# A. Experimental Setup

To validate our proposed approach, we adapted the SimpleScalar simulator toolset. SimpleScalar features a pipelined architecture implementation, and we utilized its PISA instruction set (a RISC architecture) along with the sim-outorder micro-architecture simulator. Sim-outorder provides a comprehensive micro-architectural simulation, including a 5-stage pipeline architecture and various recordable parameters. It models an out-of-order microprocessor in detail, featuring branch prediction, caches, and external memory. However, we excluded out-of-order execution from our considerations, as the instructions fetched in a cycle could interfere with the extraction of variable base and bound information. Additionally, we opted for a single functional unit to align the fetch and decode widths.

# B. Experimental Results

1) Variable Extraction: To validate our proposed method, we first extracted variable memory information to create the Variable Record Table (VRT). Using the MiBench benchmark suites, we selected six programs and analyzed their static variable space, including heap and stack spaces, which we examined separately. The table focuses on the count of static variables and DMA functions for these programs. Specifically, in the MiBench office suite, our experiments found a maximum of 395 live entries for the VRT. Since each VRT entry consists of a valid bit (1 bit), a base address (32 bits), and a bound value (8 bits), for a total of 41 bits per entry, the overall VRT memory size amounts to 395 entries times 41 bits per entry, equating to 16KB.
2) Buffer Overflow: To evaluate VRT's effectiveness against memory corruption attacks, we systematically injected buffer overflow vulnerabilities into each benchmark program. Notably, the variables involved in these injected vulnerable procedures were deliberately excluded from instrumentation during execution, simulating real-world scenarios where attackers exploit uninstrumented code sections.
3) Back-edge CFI: We introduced control-diverting code into these programs to simulate CFI violation scenarios. The variables of the injected procedures were not instrumented during the execution of the programs.
4) Speculation-based cache probe attack: To implement a speculative cache side-channel attack, we use sim-outorder's built-in branch prediction and speculation resolution system. Once misspeculation is detected, the dirty bit for the associated memory accesses is automatically set within the same pipeline cycle. We inject control-diverting code at this point to transfer control to the attacker function, which accesses the same memory address. The injected procedure's variable was not instrumented during program execution.
5) Area and Power Overhead: We tested a classic 5-stage pipeline MIPS 32-bit processor, including a 2-bit branch predictor, a 1024-entry branch prediction buffer, a 2KB direct-mapped cache, and 64KB of main memory for our approach. The Variable Record Table (VRT), comprising 512 entries, each 49 bits wide, resulted in an area overhead of $1.98\%$ relative to the total processor area. Moreover, the power consumption attributed to maintaining the VRT was measured at $11.65\mu\mathrm{W}$.

# IV. RELATED WORK

The development of hardware-assisted memory protection has its roots in capability-based systems like the IBM System/38, which was the first to illustrate the potential of metadata-enforced access control. Over the years, modern implementations have evolved through several generations, starting with software-oriented approaches such as Typed Assembly Language that later inspired hardware designs. The CHERI architecture marked a significant advancement by introducing capability pointers with integrated bounds checking, though its 128-bit metadata requirements posed implementation challenges. Commercial solutions like ARM's Memory Tagging Extension have shown practical viability with 4-bit tagging, while Intel's Control-flow Enforcement Technology specifically targets control-flow integrity. Despite these advancements, existing systems remain limited by their narrow protection scope: ARM MTE only addresses spatial safety vulnerabilities, and Intel CET exclusively tackles control-flow violations. Software-based alternatives, such as AddressSanitizer, offer broader coverage but come with substantial performance overhead, often resulting in a 2 to 3 times slowdown. Recent research has highlighted significant gaps in current approaches, particularly their failure to address speculative execution attacks or to provide unified protection across multiple vulnerability classes. The Variable Record Table architecture proposed in this work synthesizes lessons from these earlier systems while introducing mechanisms for comprehensive, low-overhead protection that addresses spatial memory safety, control-flow integrity, and speculative execution threats.

# V. CONCLUSIONS

This paper introduced the Variable Record Table (VRT), a hardware mechanism that simultaneously prevents buffer overflows, control-flow hijacking, and speculative execution attacks through unified metadata tracking. Our evaluation demonstrated perfect detection of all attack variants and modest hardware costs (8% area overhead), proving that comprehensive protection is practical without specialized ISA support. VRT's novel integration of spatial safety, CFI, and speculation control in a single structure overcomes the limitations of fragmented security solutions, providing a foundation for efficient, attack-resistant processors.

TABLE II
BENCHMARK PROGRAM RESULTS WITH SECURITY ANALYSIS

| MiBench Program | Variable Count | Instruction Count | Attack Detected? | Mispeculative Branches | Branch Prediction |
|-----------------|----------------|-------------------|------------------|------------------------|-------------------|
| basicmath | 25 | 1.81×10⁸ | Yes | 66709 | Yes |
| bitcount | 49 | 6.62×10⁸ | Yes | 112622 | Yes |
| qsort | 13 | 5.18×10⁸ | Yes | 70892 | Yes |
| CRC32 | 9 | 5.23×10⁶ | Yes | 34043 | Yes |
| dijkstra | 15 | 2.55×10⁸ | Yes | 98176 | Yes |
| patricia | 28 | 3.05×10⁸ | Yes | 71728 | Yes |
{"title": "Variable Record Table: A Unified Hardware-Assisted Framework for Runtime Security", "raw_content": "# Variable Record Table: A Unified Hardware-Assisted Framework for Runtime Security\n\nSuraj Kumar Sah\n\nDepartment of Computer Science and Engineering\n\nKathmandu University\n\nDhulikhel, Nepal\n\nsurajsah2053@gmail.com\n\nLove Kumar Sah\n\nDepartment of Electrical and Computer Engineering\n\nWestern New England University\n\nSpringfield, MA, USA\n\nlove.sah@wne.edu\n\nAbstract- Modern computing systems face security threats, including memory corruption attacks, speculative execution vulnerabilities, and control-flow hijacking. Although existing solutions address these threats individually, they frequently introduce performance overhead and leave security gaps. This paper presents a Variable Record Table (VRT) with a unified hardware-assisted framework that simultaneously enforces spatial memory safety against buffer overflows, back-edge control-flow integrity (CFI), and speculative execution attack detection. The VRT dynamically constructs a protection table by instrumenting runtime instructions to extract memory addresses, bounds metadata, and control-flow signatures. Our evaluation across MiBench and SPEC benchmarks shows that VRT successfully detects all attack variants tested with zero additional instruction overhead. Furthermore, it maintains memory requirements below 25KB (for 512 entries) and maintains area / power overhead under $8\\%$ and $11.65~{\\mu\\mathrm{W}}$ , respectively. By consolidating three essential security mechanisms into a single hardware structure, VRT provides comprehensive protection while minimizing performance impact.\n\nIndex Terms—Memory safety, control-flow integrity, hardware security, tagged memory, speculative execution attacks.\n\n# I. INTRODUCTION\n\nMemory safety violations are among the most critical vulnerabilities in modern systems, with buffer overflows [1], control-flow hijacking [2], and speculative execution attacks [3] being the three primary threat classes. Despite the availability of various mitigation techniques, existing solutions have three fundamental limitations: (1) narrow protection scope (defending against only one attack class), (2) significant performance overhead from software mediation, and (3) security gaps between disjoint mechanisms.\n\nThe primary challenge in memory protection involves a trilemma between completeness, performance, and security. Current solutions force designers to choose between different approaches: using multiple-point solutions for comprehensive coverage (completeness), relying solely on hardware mechanisms for improved performance, or striving to close all vulnerability gaps (security). For example, while tagged memory architectures [4] offer strong spatial memory safety, they do not adequately address control-flow integrity or threats from speculative execution. Conversely, control-flow integrity mechanisms [2] focus only on validating branches, leaving them susceptible to memory corruption attacks. Software-based approaches, such as bounds checking, often attract\n\nsignificant overheads, typically exceeding $30\\%$ [5]. Even hardware-assisted solutions like Intel's Control-flow Enforcement Technology [6] protect against only certain types of vulnerabilities. Recent research [7] has shown that sophisticated attacks can exploit the gaps between these isolated protections, underscoring the necessity for a unified solution. 
This solution should effectively address memory safety, control-flow integrity, and speculative execution threats, all while maintaining performance.\n\nTo address the limitations of existing security methods, we extend the Variable Record Table (VRT) [8]–[12], a comprehensive hardware framework designed to enforce memory safety, control-flow integrity, and protection against speculative execution. VRT achieves this through three key innovations. First, it features a novel metadata architecture that captures variable bounds, control-flow signatures, and speculative access patterns within a single hardware structure, eliminating the need for separate protection mechanisms. Second, our design incorporates lightweight instrumentation of runtime instructions, enabling the dynamic construction of protection policies without requiring software intervention. This approach maintains zero additional instruction overhead. Third, VRT implements parallel security checks through a dedicated pipeline stage that performs bounds verification (for spatial safety), return address validation (back-edge CFI), and speculative access tagging in a single clock cycle. This unified approach provides three fundamental advantages over existing solutions: (1) comprehensive protection against all three classes of attacks through shared metadata, (2) no impact on performance, and (3) practical hardware costs, with only a $1.98\\%$ increase in area overhead. By consolidating traditionally separate security mechanisms into a coherent architectural framework, VRT provides protection against modern multi-vector attacks without compromising system performance.\n\nThe remainder of this paper is organized as follows. Section IV provides necessary background and existing approaches. Section II presents the VRT architecture, including its metadata extraction mechanism, protection table design, and enforcement policies. Section III evaluates VRT's security coverage, performance impact, and hardware overhead through comprehensive experiments. Finally, Section V discusses implications and future directions while concluding the paper.\n\n# II. RUNTIME DEFENCE ARCHITECTURE\n\nThe architecture of the proposed system is shown in Figure 1, where we augment the standard 5-stage pipeline processor with dedicated memory (VRT) to log runtime variable memory space information. This enhancement enables us to extract variable details in real-time from instructions interacting with the main memory, allowing us to verify their usage once control returns from a function.\n\n![](images/0d2dd1e3235c2e427efb2a4cf938cbb21211d0c957b2bec32c53ab643360e0db.jpg) \nFig. 1. Overall Architecture\n\n# A. Variable Space Extraction Architecture\n\nFigure 2 illustrates the architecture for extracting base and bound information during runtime, specifically during the decode and execution stages. This runtime instrumentation specifically targets instructions that could generate a new address potentially associated with the frame pointer, which is stored in a special register as indicated in Table 1.\n\n![](images/917706f7d59ead87d79bbc522e507e2fa1073c37aeca8963b4b0ae8fbe97b6ae.jpg) \nFig. 2. Modified Pipeline for Runtime Variable Space Extraction\n\nTable I presents the layout of the VRT, which consists of three columns: the associated bit, the variable base address, and the bound value. Each entry in the VRT includes an associated bit, a 32-bit base address, and an 8-bit bound value, resulting in a total of 41 bits per entry. 
The allied bit differentiates entries for subsequent function calls. This table snippet displays six entries from the active function and two from the preceding function.\n\n# B. Buffer Overflow and VRT\n\nAfter populating the VRT with base and bound addresses of local variables, we can evaluate each array offset and pointer\n\noperation to identify potential invalid memory addresses in two representative scenarios. In this section, we will discuss both cases of illegal access.\n\nTABLEI VARIABLE RECORD TABLE \n\n<table><tr><td>Associated</td><td>Variable Address</td><td>Bound</td></tr><tr><td>1</td><td>0X7FFF60</td><td>24</td></tr><tr><td>1</td><td>0X7FFF3B</td><td>4</td></tr><tr><td>1</td><td>0X7FFF38</td><td>4</td></tr><tr><td>1</td><td>0X7FFF34</td><td>4</td></tr><tr><td>1</td><td>0X3FFF30</td><td>24</td></tr><tr><td>1</td><td>0X3FFE28</td><td>256</td></tr><tr><td>0</td><td>0X7FFE70</td><td>24</td></tr><tr><td>0</td><td>0X7FFE60</td><td>16</td></tr></table>\n\n1) Constant variable index: The first case involves direct access to an array using a constant index that exceeds the array's range, which can result in out-of-bounds access. If this operation is unchecked, it may corrupt data outside the allocated scope. In C code, this appears to be an attempt to access an array with an out-of-bounds index:\n\n$$\na [ \\text {o u t} _ {\\text {o f}} \\text {b o u n d} ] = ^ {\\prime} X ^ {\\prime};\n$$\n\nThe corresponding assembly instruction for array access illustrates how the offset involved in the load instruction can lead to an address outside the valid address space of the variable stored in the VRT:\n\n$$\n4 0 0 2 e 0: \\text {l w} \\\\ \\S 2, \\text {o u t} _ {\\text {o f}} \\text {b o u n d} (\\S 3 0)\n$$\n\n2) Loop operation on array or pointer variable: This issue often arises in buffer overflow scenarios, particularly with string library functions like `strcpy()` during loop operations. An unchecked increment of a pointer variable can result in addresses that exceed the allocated memory space:\n\nchar X[6]; \nchar \\*ptr $=$ X; \nfor(i=0;i<10;i++) ++ptr $\\equiv$ '\\0';\n\nFurthermore, this pointer increment operation can demonstrated using MIPS-like assembly instructions, where register $2 serves the source and destination address. In out-of-bounds cases,$ 2 may contain addresses that span multiple entries in the VRT, whereas valid operations will remain within a single VRT entry.\n\n```asm\n4002e0: 1w $2,44 ($30)\n4002e8: addu $3,$0,$2\n4002f0: sll $2,$3,0x2\n4002f8: 1w $3,40 ($30)\n400300: addu $2,$2,$3\n```\n\nTo mitigate these issues, the pipeline implementation adds VRT checks during the execution stage. When an out-of-bound access is detected during address generation, the operation is blocked before it can corrupt memory.\n\n# C. Backward-edge CFI Enforcement\n\nIn a Control Flow Graph (CFG), a backward edge indicates a transfer of control back to a preceding node within the graph. This occurs due to the 'ret' instruction, which directs the control flow to the instruction immediately following a function call. Figure 3 demonstrates how the control flow in a program initiates multiple function calls, each taking its unique execution path and managing a specific data set.\n\n![](images/781516717104e40ce482ede984efed10a3da889bfb4c2c601bda2bbf5b7c7fa8.jpg) \nFig. 3. A Control Sequence Graph Example.\n\nBackward-edge Control Flow Integrity (CFI) ensures that functions return to the correct location by verifying the return address stored in the stack frame. 
Acknowledging that this return address can be vulnerable to various attacks, we propose an additional validation method that uses the base addresses of stored variables. When a function returns, the program continues to use the same variable set. Our approach involves checking the first memory address accessed by load (lw) and store (sw) operations after the function returns. If this address matches one of the variables in our predetermined list, we consider the control flow path normal. In cases where the control flow may be compromised, we anticipate two distinct scenarios:\n\n1) The return lead to the beginning of an entirely different function: This is typically decoded by observing an instruction such as sub1 $16, %esp, where stack space is allocated for the new function call. Thus the return instruction anomalously precedes the instruction that creates stack space, indicating a potential compromise. \n2) The return leads to an arbitrary address: In this scenario, the addresses generated from load/store operations subsequent to a return instruction do not match any current variable in the variable record table. Therefore, after a return is executed, our system is tasked to verify these generated load/store addresses align with an entry for the expected returning function in the variable record table. This validation process is essential for detecting returns to unintended or malicious locations, thus maintaining the integrity of the program's control flow.\n\n# D. Defending Against Cache Probe Attacks Using VRT\n\nThe VRT mechanism Figure 4 provides protection against cache probe attacks by tracking misspeculative memory accesses. When an attacker establishes co-location with a victim\n\n![](images/22bf581a57dfe7ba3a08dbfb6641b7b2cc22a72079baca7bacfc230c5c84516d.jpg) \nFig. 4. VRT with dirty bit and cache probe\n\nprocess, VRT records recent memory accesses and utilizes dirty bit tagging to identify speculative execution patterns.\n\nThe protection mechanism operates through three key phases:\n\n1) Dirty Bit Tagging: The dirty bit is periodically reset to zero based on the system's maximum speculation resolution time. During misspeculation, this bit remains set to 1, marking all affected VRT entries as dirty. \n2) Attack Detection: When attackers probe dirty cache lines during the reconnaissance phase, VRT verifies these accesses against recorded misspeculative access patterns. The parallel search mechanism compares:\n\n- The base address from operand fetch \n- Current function's valid index range (stored in dedicated registers)\n\n3) Pipeline Intervention: Upon detecting unauthorized access to dirty cache lines, the pipeline stalls immediately, preventing sensitive data from being read or leaked.\n\n- Speculative() functions set dirty bits during misspeculation \n- check_array() functions attempt to probe contaminated cache lines\n\nThe address search occurs concurrently with the execution stage, ensuring zero cycle overhead for legitimate operations while maintaining complete protection against speculative cache probes.\n\n# III. EXPERIMENTAL RESULTS\n\n# A. Experimental Setup\n\nTo validate our proposed approach, we adapted the SimpleScalar simulator toolset [13]. SimpleScalar features a pipelined architecture implementation, and we utilized its PISA instruction set (a RISC architecture) along with the Simoutorder micro-architecture simulator. 
Sim-outorder provides a comprehensive micro-architectural simulation, including a 5-stage pipeline architecture and various recordable parameters. It models an out-of-order microprocessor in detail, featuring branch prediction, caches, and external memory. However, we\n\nexcluded out-of-order execution from our considerations, as the instructions fetched in a cycle could interfere with the extraction process of variable base and bound information. Additionally, we opted for a single functional unit to align the fetch and decode widths.\n\n# B. Experimental Results\n\n1) Variable Extraction: To validate our proposed method, we first extracted information about variable memory to create the Variable Record Table (VRT). Using the MiBench benchmark suites, we selected six programs to analyze their static variable space, including Heap and Stack spaces, which we examined separately. The table focuses on the count of static variables and DMA functions for these programs. Specifically, in the MiBench office suite, our experiments found a maximum of 395 live entries for the VRT. Since each VRT entry consists of a valid bit (1 bit), a base address (32 bits), and a bound value (8 bits), each entry totals 41 bits, the overall VRT memory size amounts to 395 entries times 41 bits per entry, equating to 16KB. \n2) Buffer Overflow: To evaluate VRT's effectiveness against memory corruption attacks, we systematically injected buffer overflow vulnerabilities into each benchmark program. Notably, the variables involved in these injected vulnerable procedures were deliberately excluded from instrumentation during execution, simulating real-world scenarios where attackers exploit uninstrumented code sections. \n3) Back-edge CFI: We introduced control diverting code into these programs to simulate scenarios of CFI violations. The variables of the injected procedures were not instrumented during the execution of the programs. \n4) Speculative based cache probe attack: To implement a speculative cache side-channel attack, we use the simoutorder in-built branch prediction speculation resolve system. Once misspeculation detected, we restrict auto-set for dirty bit associated memory access within the same pipeline cycle. We inject control diverting code at this point to transfer the control to the attacker function to access the same memory address. The injected procedure's variable was not instrumented during program execution. \n5) Area and Power Overhead: We tested a classic 5-stage pipeline MIPS 32-bit processor, including a 2-bit branch predictor, a 1024-depth branch prediction buffer, a 2KB direct-mapped cache, and a 64KB main memory for our approach. The Variable Record Table (VRT), comprising 512 entries to each 49 bits wide, resulted in an area overhead of $1.98\\%$\n\nrelative to the total processor area. Moreover, the power consumption attributed to maintaining the VRT was measured at $11.65\\mu \\mathrm{W}$ .\n\n# IV. RELATED WORK\n\nThe development of hardware-assisted memory protection has its roots in capability-based systems like the IBM System/38 [15], which was the first to illustrate the potential of metadata-enforced access control. Over the years, modern implementations have evolved through several generations, starting with software-oriented approaches such asTyped Assembly Language [16] that later inspired hardware designs. 
The CHERI architecture [17] marked a significant advancement by introducing capability pointers with integrated bounds checking, though its 128-bit metadata requirements posed implementation challenges. Commercial solutions like ARM's Memory Tagging Extension [19] have shown practical viability with 4-bit tagging, while Intel's Control-flow Enforcement Technology [6] specifically targets control-flow integrity. Despite these advancements, existing systems remain limited by their narrow protection scope: ARM MTE only addresses spatial safety vulnerabilities, and Intel CET exclusively tackles control-flow violations. Software-based alternatives, such as AddressSanitizer [23], offer broader coverage but come with a substantial performance overhead, often resulting in a 2 to 3 times slowdown. Recent research has highlighted significant gaps in current approaches, particularly their failure to address speculative execution attacks or provide unified protection across multiple vulnerability classes. The Variable Record Table architecture proposed in this work synthesizes lessons from these earlier systems while introducing innovative mechanisms for comprehensive, low-overhead protection that effectively addresses spatial memory safety, control-flow integrity, and speculative execution threats.\n\n# V. CONCLUSIONS\n\nThis paper introduced the Variable Record Table (VRT), a hardware mechanism that simultaneously prevents buffer overflows, control-flow hijacking, and speculative execution attacks through unified metadata tracking. Our evaluation demonstrated perfect detection of all attack variants and modest hardware costs (8% area overhead), proving comprehensive protection is practical without specialized ISA support. VRT's novel integration of spatial safety, CFI, and speculation control in a single structure overcomes the limitations of fragmented security solutions, providing a foundation for efficient, attack-resistant processors.\n\nTABLE II BENCHMARK PROGRAM RESULTS WITH SECURITY ANALYSIS \n\n<table><tr><td>MiBench Program</td><td>Variable Count</td><td>Instruction Count</td><td>Attack Detected?</td><td>Mispeculative Branches</td><td>Branch Prediction</td></tr><tr><td>basicmath</td><td>25</td><td>1.81×108</td><td>Yes</td><td>66709</td><td>Yes</td></tr><tr><td>bitcount</td><td>49</td><td>6.62×108</td><td>Yes</td><td>112622</td><td>Yes</td></tr><tr><td>qsort</td><td>13</td><td>5.18×108</td><td>Yes</td><td>70892</td><td>Yes</td></tr><tr><td>CRC32</td><td>9</td><td>5.23×106</td><td>Yes</td><td>34043</td><td>Yes</td></tr><tr><td>dijkstra</td><td>15</td><td>2.55×108</td><td>Yes</td><td>98176</td><td>Yes</td></tr><tr><td>patricia</td><td>28</td><td>3.05×108</td><td>Yes</td><td>71728</td><td>Yes</td></tr></table>\n\n# REFERENCES\n\n[1] A. One, \"Smashing the Stack for Fun and Profit,\" Phrack, vol. 7, no. 49, 1996. \n[2] M. Abadi, M. Budiu, U. Erlingsson and J. Ligatti, \"Control-Flow Integrity,\" Proceedings of the 12th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, USA, 2005, pp. 340-353. \n[3] P. Kocher, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz and Y. Yarom, \"Spectre Attacks: Exploiting Speculative Execution,\" 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2018, pp. 1-19. \n[4] D. Chisnall, C. Rothwell, B. Watson, R. Woodruff, M. Vadera, S. Moore, M. Roe, P. Neumann and M. 
Davis, \"CHERI JNI: Sinking the Java Security Model into the C,\" 2015 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2015, pp. 1-16. \n[5] L. Szekeres, M. Payer, T. Wei and D. Song, \"SoK: Eternal War in Memory,\" 2013 IEEE Symposium on Security and Privacy (SP), Berkeley, CA, USA, 2013, pp. 1-15. \n[6] Intel Corporation, \"Control-Flow Enforcement Technology,\" White Paper, 2016. \n[7] I. Evans, F. Long, U. Otgonbaatar, H. Shrobe, M. Rinard, H. Okhravi and S. Sidiroglou-Douskos, \"Missing the Point(er): On the Effectiveness of Code Pointer Integrity,\" 2015 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2015, pp. 1-16. \n[8] L. K. Sah, S. A. Islam and S. Katkoori, \"Defending Against Misspeculation-based Cache Probe Attacks Using Variable Record Table,\" 2021 IEEE International Symposium on Quality Electronic Design (ISQED), Santa Clara, CA, USA, 2021, pp. 408-413. \n[9] L. K. Sah, S. Polnati, S. A. Islam and S. Katkoori, \"Basic Block Encoding Based Runtime CFI Check for Embedded Software,\" 2020 IFIP/IEEE 28th International Conference on Very Large Scale Integration (VLSI-SOC), Salt Lake City, UT, USA, 2020, pp. 135-140. \n[10] L. K. Sah, S. A. Islam and S. Katkoori, \"Variable Record Table: A Runtime Solution for Mitigating Buffer Overflow Attack,\" 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 2019, pp. 239-242. \n[11] L. K. Sah, S. A. Islam and S. Katkoori, \"An Efficient Hardware-Oriented Runtime Approach for Stack-based Software Buffer Overflow Attacks,\" 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), Hong Kong, China, 2018, pp. 1-6. \n[12] S. K. Sah and L. K. Sah, \"VRT: A Runtime Protection Against Back-Edge Control Flow Integrity Violation,\" 2024 IEEE 67th International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 2024, pp. 665-668. \n[13] T. Austin, E. Larson and D. Ernst, \"SimpleScalar: An Infrastructure for Computer System Modeling,\" IEEE Computer, vol. 35, no. 2, 2002, pp. 59-67. \n[14] A. Woodruff, R. Watson, D. Chisnall, S. Moore, J. Anderson, B. Davis, P. Neumann, R. Norton and M. Roe, \"CHERI in RISC-V: Design and Implementation,\" arXiv preprint arXiv:1908.11130, 2019. \n[15] IBM Corporation, \"IBM System/38 Functional Description,\" Technical Report, 1978. \n[16] G. Morrisett, D. Walker, K. Crary and N. Glew, \"From System F toTyped Assembly Language,\" ACM Transactions on Programming Languages and Systems, vol. 21, no. 3, 1999, pp. 527-568. \n[17] R. N. M. Watson, P. Neumann, J. Woodruff, M. Roe, N. Moore, S. Moore and M. Davis, \"CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization,\" 2015 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2015, pp. 1-16. \n[18] R. N. M. Watson, J. Woodruff, P. Neumann, S. Moore, J. Anderson, D. Chisnall, B. Davis, B. Laurie, M. Roe and A. Richardson, \"CheriBSD: A Capability-Based Operating System,\" 2019 USENIX Annual Technical Conference (USENIX ATC), Renton, WA, USA, 2019, pp. 1-14. \n[19] ARM Limited, \"Memory Tagging Extension (MTE),\" [Online]. Available: https://developer.arm.com/documentation/101754/latest. \n[20] Android Open Source Project, \"Memory Tagging Support in Android,\" [Online]. Available: https://source.android.com/docs/core/architecture/memory-safety/memory-tagging. \n[21] ARM Limited, \"ARMv8.5-A MTE Performance Analysis,\" White Paper, 2021. 
\n[22] Google Security Team, \"Memory Tagging in Android,\" Google Security Blog, 2021.\n\n[23] K. Serebryany, D. Bruening, A. Potapenko and D. Vyukov, \"Address-Sanitizer: A Fast Address Sanitizer for $\\mathrm{C / C + + }$ 2012 USENIX Annual Technical Conference (USENIX ATC), Boston, MA, USA, 2012, pp. 1- 12."}
# Enhanced Web User Interface Design Via Cross-Device Responsiveness Assessment Using An Improved HCI-INTEGRATED DL Schemes

Shrinivass Arunachalam Balasubramanian*
Senior Full Stack Engineer, Independent Researcher, United States, shrinivassab@gmail.com

User Interface (UI) optimization is essential in the digital era to enhance user satisfaction in web environments. Nevertheless, existing UI optimization models have overlooked Cross-Responsiveness (CR) assessment, limiting user interaction efficiency. Consequently, this article proposes dynamic web UI optimization through CR assessment using a Finite Exponential Continuous State Machine (FECSM) and the Quokka Nonlinear Difference Swarm Optimization Algorithm (QNDSOA). Initially, the design- and user-interaction-related information is collected and pre-processed with min-max normalization. Next, the Human-Computer Interaction (HCI)-based features are extracted, followed by user behaviour pattern grouping. Meanwhile, the CR assessment is performed using FECSM. Then, the proposed Bidirectional Gated Luong and Mish Recurrent Unit (BiGLMRU) is used to classify the User eXperience (UX) change type, which is labelled based on the User Interface Change Prediction Index (UICPI). Lastly, the novel QNDSOA is utilized to optimize the UI design with an average fitness of $98.5632\%$. Feedback monitoring is performed after optimal deployment.

Additional Keywords and Phrases: Human Computer Interaction (HCI), User Interface (UI) optimization, Web Development (WD), User eXperience (UX) modelling, Predictive UI Enhancement (PUIE), Fuzzy Derivative Weighted Inference System (FDWIS), and Artificial Intelligence (AI).

# 1. INTRODUCTION

Currently, websites have become significant platforms for UI, and the architecture of a website shapes the UX in this user-oriented generation. Therefore, WD, a significant aspect of HCI, is utilized for improving the UX. This incorporates estimating User Behavior (UB) patterns, creating a user-friendly interface, and testing the performance of the site. UB patterns such as screen-size influence and eye-tracking methodology are greatly helpful in improving the UX. By developing an age-friendly website, platforms like e-commerce improve the UX. However, traditional techniques did not concentrate on CR assessment for web development. The proposed system's motivation is to develop a user-friendly website centered on the UB with an interface. Thus, a novel model for optimizing the website based on BiGLMRU is proposed in this paper.

# 1.1 Problem Statement

The limitations of the prevailing works are given below:

- None of the works focused on CR assessment for web development.
- The UB patterns were not considered, which reduced the effectiveness of the UI.
- The prevailing works designed the UI inefficiently, as the optimal UI design was not periodically updated.
- Existing works failed to focus on the improvement level of the UI, which affected the web development process.
- The support factors were not considered, which further reduced the effectiveness.

# 1.2 Objectives

The objectives of the proposed framework are defined below:

- The web UI is developed by considering the cross-responsiveness assessment using FECSM.
- To improve the effectiveness, the UB patterns are grouped by HDBSCAN.
- The optimal UI design is periodically updated by providing feedback to QNDSOA.
- By using BiGLMRU, the improvement level for the UI design is obtained.
- The support factors are considered by employing minimum JavaScript execution time, minimum error rate, and minimum memory usage as the fitness function in the optimal UI design (an illustrative form of such a fitness function is sketched after this list).
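The exact fitness expression used by QNDSOA is not reproduced in this excerpt. Purely as an illustration of how minimum JavaScript execution time, error rate, and memory usage could be folded into a single score to be maximized, a hedged C sketch follows; the weights, budgets, and normalization are assumptions chosen for the example, not the paper's definition.

```c
#include <stdio.h>

/* Illustrative only: combine the three support factors named above into
 * one fitness value in [0, 1], higher is better. Weights are assumed. */
static double ui_fitness(double js_time_ms, double error_rate, double mem_mb,
                         double max_js_ms, double max_mem_mb)
{
    const double w_js = 0.4, w_err = 0.3, w_mem = 0.3;  /* assumed weights */
    double js_term  = 1.0 - js_time_ms / max_js_ms;     /* lower time   -> higher score */
    double err_term = 1.0 - error_rate;                 /* lower error  -> higher score */
    double mem_term = 1.0 - mem_mb / max_mem_mb;        /* lower memory -> higher score */
    return w_js * js_term + w_err * err_term + w_mem * mem_term;
}

int main(void) {
    /* hypothetical candidate UI design measured against assumed budgets */
    printf("fitness = %.4f\n", ui_fitness(120.0, 0.02, 35.0, 400.0, 128.0));
    return 0;
}
```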
# 2. LITERATURE SURVEY

Bakaev et al. studied the modeling of Visual Perception (VP) of the UI. Here, the VP was predicted by a Convolutional Neural Network (CNN). Yet, the change in responsiveness for different devices and screen sizes was not considered.

Todi et al. assessed a Reinforcement Learning (RL) approach for adaptive UI. Firstly, both the positive and negative effects that impacted the UI were considered. Thus, the adaptive UI adapted the webpage layouts and reorganized application menus. Nevertheless, the effectiveness of the UI was reduced, as the support factors for the optimal UI design were not considered.

Keselj et al. examined Deep Learning (DL) applications for UI evaluation. Here, a CNN determined the effectiveness of the UI based on specifications such as UI design and layout. Yet, user satisfaction was not achieved, as the objective knowledge about the UI was not practically implemented.

Muneer et al. deployed a Meta-Model for supporting the Compatibility Testing (CT) of cross-browser web applications. Initially, to cover critical configurations, a checklist was initialized and translated into the Interaction Flow Modeling Language (IFML). Next, the test cases generated by IFML addressed the compatibility issues.

Wang et al. proposed a DL approach to assess the color quality of HCI interfaces. Firstly, the interface image features were extracted and modeled by a CNN. Yet, the UI design was still inefficient, as the approach failed to incorporate the user's immediate feedback.

# 3. PROPOSED METHODOLOGY FOR CROSS-RESPONSIVE WEB UI TUNING USING FECSM AND BiGLMRU

The proposed work implements an intelligent framework for CR web UI optimization using FECSM and QNDSOA. In Figure 1, the block diagram of the proposed methodology is presented.

Figure 1: The structural design of the research approach

# 3.1 Data collection

Initially, the design-related data (layout, component arrangements, and responsiveness characteristics) and the user-interaction-related information (clicks, scrolls, and mouse movements) are collected by using a web crawler and session replay tools, respectively.

$$ \partial_{w} = \left(\partial_{1}, \partial_{2}, \dots, \partial_{W}\right), \quad w = 1 \text{ to } W \tag{1} $$

Where $W$ specifies the number of collected web data samples $\partial_w$.

# 3.2 Pre-processing

Next, $\partial_w$ is subjected to pre-processing, which standardizes the collected data to the range (0, 1) by employing min-max normalization:

$$ \eta = \frac{\partial_{w} - \min\left(\partial_{w}\right)}{\max\left(\partial_{w}\right) - \min\left(\partial_{w}\right)} \tag{2} $$

Here, $\eta$ represents the pre-processed data.
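As a small illustration of Eq. (2), the following C function applies min-max normalization to an array of collected measurements; the function name and the in-place update are choices made here for the example, not part of the paper.

```c
#include <stddef.h>

/* Min-max normalization (Eq. 2): rescale x[0..n-1] into the (0, 1) range.
 * If all values are equal the denominator is zero, so map them to 0. */
static void minmax_normalize(double *x, size_t n)
{
    if (n == 0) return;
    double lo = x[0], hi = x[0];
    for (size_t i = 1; i < n; i++) {
        if (x[i] < lo) lo = x[i];
        if (x[i] > hi) hi = x[i];
    }
    double range = hi - lo;
    for (size_t i = 0; i < n; i++)
        x[i] = (range > 0.0) ? (x[i] - lo) / range : 0.0;
}
```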
The conventional Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) proficiently groups data with varying density. However, HDBSCAN is sensitive to the choice of the clustering parameters, such as the minimum cluster size $\left(\widetilde{\mathrm{X}}\right)$ and the minimum samples $\left(\mathrm{Y}\right)$. Therefore, the Persistence Probability Function (PPF) $(\wp)$ is used to determine the optimal parameters by analysing the density-based persistence of the data across multiple scales.

$$
\wp\left(\gamma_{i}\right) = \int_{S_{k}}^{S_{K}} \left| \gamma_{i}(S) \right| d'' S \rightarrow (\mathrm{X}, \mathrm{Y}) \tag{3}
$$

Where, $S_{k}$ indicates the density level $(S)$ at which the cluster $\gamma_{i}$ appears, $S_{K}$ indicates the density level at which the cluster $\gamma_{i}$ disappears, and $d''$ denotes the derivative parameter. For each point, the core point $(\mathrm{Cp})$ is computed based on the minimum number of neighbours, which is determined by $\mathrm{Y}$. Similarly, the mutual reachability distance $(M_{\mathrm{dis}})$ is estimated between the points to handle varying densities.

$$
M_{\text{dis}}\left(\gamma_{1}, \gamma_{2}\right) = \max\left\{\mathrm{Cp}\left(\gamma_{1}\right), \mathrm{Cp}\left(\gamma_{2}\right), \mathrm{Z}\left(\gamma_{1}, \gamma_{2}\right)\right\} \tag{4}
$$

Where, $Z$ denotes the direct distance value. By assigning $M_{\mathrm{dis}}$ as the weight of the edges, a complete mutual reachability graph $(U_{\mathrm{gr}})$ is constructed. Subsequently, the minimum spanning tree $(T_{\mathrm{min}})$ is generated by connecting all points with the lowest $M_{\mathrm{dis}}$ without creating any cycles. Next, the edges in $T_{\mathrm{min}}$ are sorted in increasing order of $M_{\mathrm{dis}}$, and the longest edges are gradually removed to create a hierarchical structure. Meanwhile, tree pruning is performed by applying $\widetilde{X}$, which evaluates the cluster's stability $(\lambda)$.

$$
\lambda\left(T_{\min}\right) = \widetilde{X}\left(S_{K} - S_{k}\right) \tag{5}
$$

Lastly, the data points are allocated to the clusters with the highest stability. Therefore, the user behaviour pattern grouped data is denoted as $\left(\phi_{\nabla}\right)$.

# 3.5 Cross responsiveness assessment

In parallel, the CR assessment is performed on $\eta$ using the proposed FECSM algorithm to model how interface responsiveness changes across devices over time. The Finite State Machine (FSM) effectively captures transitions in user experience due to changes in device configuration. Yet, the FSM struggles to handle continuous states and transitions, affecting the model's flexibility. Therefore, the Exponential Continuous Coverage (ECC) function is utilized to handle transitions over continuous changes. Here, each state represents a responsive UI layout (mobile layout, tablet layout, and desktop layout).

$$
\mathrm{St}_{\nu} = \left(\mathrm{St}_{1}, \mathrm{St}_{2}, \dots, \mathrm{St}_{V}\right), \quad \text{where } \nu = 1 \text{ to } V \tag{6}
$$

Here, $V$ denotes the number of states $\mathrm{St}_{\nu}$. Next, the inputs (user-triggered events) such as mouse events, screen-size changes, and touch gestures are defined as below,

$$
\mathrm{Ip}_{w} = \sum_{w=1}^{W} \left\{\mathrm{Ip}_{1}, \mathrm{Ip}_{2}, \dots, \mathrm{Ip}_{W}\right\} \tag{7}
$$

Here, $w = 1, 2, \ldots, W$ indexes the inputs $\mathrm{Ip}_{w}$.
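As a concrete (and deliberately simplified) picture of the state and input sets just defined, the sketch below models the three layout states, handles a resize-driven input, and logs a trace of the resulting transitions. The class names and viewport-width thresholds are illustrative assumptions, not values from the paper; the continuous transition weighting of the ECC function is introduced next.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class LayoutState(Enum):
    MOBILE = auto()
    TABLET = auto()
    DESKTOP = auto()

class UserInput(Enum):
    MOUSE_EVENT = auto()
    SCREEN_RESIZE = auto()
    TOUCH_GESTURE = auto()

@dataclass
class ResponsivenessFSM:
    """Minimal finite-state skeleton for the cross-responsiveness assessment."""
    state: LayoutState = LayoutState.DESKTOP
    trace: list = field(default_factory=list)  # log of (input, old_state, new_state)

    def on_input(self, event: UserInput, viewport_width: int) -> LayoutState:
        old = self.state
        if event is UserInput.SCREEN_RESIZE:  # width breakpoints are illustrative
            if viewport_width < 768:
                self.state = LayoutState.MOBILE
            elif viewport_width < 1200:
                self.state = LayoutState.TABLET
            else:
                self.state = LayoutState.DESKTOP
        self.trace.append((event.name, old.name, self.state.name))
        return self.state

fsm = ResponsivenessFSM()
fsm.on_input(UserInput.SCREEN_RESIZE, viewport_width=600)   # -> MOBILE
fsm.on_input(UserInput.SCREEN_RESIZE, viewport_width=1400)  # -> DESKTOP
print(fsm.trace)
```

The recorded trace corresponds to the FSM trace logs that the paper later mines for friction-causing transitions and per-layout task success.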
Also, the transitions are defined to reflect how the system moves from one state to another when an input is received. In the proposed work, the ECC function $(\mathrm{N})$ is used to ensure flexible transitions by continuously adapting to dynamic state changes.

$$
\mathrm{N}(\tau) = 1 - \exp^{-\Im\tau} \tag{8}
$$

$$
\tau \xrightarrow{\text{transition}} \left(\mathrm{St}_{1}, \mathrm{St}_{v}\right) \tag{9}
$$

Here, $\Im$ specifies the controlling parameter and $\tau$ depicts the transitions. Subsequently, the start state and the final state are also determined. Next, the user interactions are captured in a log file. Finally, the FSM trace logs are extracted to provide detailed insight into which transitions caused friction and which device layout had higher task success. The CR-assessed outcome is denoted as $(\mathrm{H})$.

# 3.6 User interface change prediction index computation

Meanwhile, the UICPI $(\varphi)$ is calculated from $\gamma_{i}$ to represent the necessity of UI modification based on user-interaction deviation.

$$
\varphi\left(\gamma_{i}\right) = v_{1} \times E + v_{2} \times T + v_{3} \times D + v_{4} \times C \tag{10}
$$

Where, $\left(v_1, v_2, v_3, v_4\right)$ denote the weight values, $E$ denotes the error rate, $T$ depicts the task time, $D$ exhibits the drop-off rate, and $C$ represents the click confusion index.

# 3.7 UX change type labeling

The proposed Fuzzy Derivative Weighted Inference System (FDWIS) labels the UX change type based on $\varphi$. The Fuzzy Inference System (FIS) offers high transparency. Yet, the FIS struggles to capture small changes. Hence, the Derivative Weighted Average Function (DWAF) is employed in the defuzzification process to capture small changes, improving the model's precision.

Initially, the fuzzification step converts the crisp values into fuzzy values $\left(\ddot{\varphi}\right)$ (membership values) using a sigmoid membership function $\left(Q\right)$.

$$
Q(\varphi) = \frac{1}{1 + \exp^{-G(\varphi - J)}} \rightarrow \ddot{\varphi} \tag{11}
$$

Here, $G$ and $J$ denote the control parameter and the center of the slope, respectively. Next, the fuzzy if-then rules $\left(\mathfrak{R}_{\mathrm{rule}}\right)$ are created to categorize the UX change type based on $\ddot{\varphi}$.

$$
\mathfrak{R}_{\text{rule}} = \left\{ \begin{array}{ll} \text{If } (\ddot{\varphi} = 0.0 \text{ to } 0.3), & \text{then } \mathrm{Lw} \\ \text{If } (\ddot{\varphi} = 0.31 \text{ to } 0.6), & \text{then } \mathrm{Md} \\ \text{If } (\ddot{\varphi} > 0.6), & \text{then } \mathrm{Hw} \end{array} \right. \tag{12}
$$

Next, the fuzzy rules are applied to the fuzzified inputs to label the UX change type as low $\left(\mathrm{Lw}\right)$, medium $\left(\mathrm{Md}\right)$, or high $\left(\mathrm{Hw}\right)$. Then, defuzzification converts the fuzzy outputs $\left(\varsigma\right)$ from the inference engine into a crisp value using the DWAF. The DWAF prioritizes regions with sudden membership changes, yielding improved precision.
$$
\varphi = \frac{\sum_{c=1}^{C} O_{c}\left(\Re_{\text{rule}}\right) \cdot \left| \frac{\dot{b}\varsigma}{\dot{b}\ddot{\varphi}} \right| \cdot \varsigma}{\sum_{c=1}^{C} O_{c}\left(\Re_{\text{rule}}\right) \cdot \left| \frac{\dot{b}\varsigma}{\dot{b}\ddot{\varphi}} \right|} \tag{13}
$$

Where, $O_{c}$ designates the firing strength of the $c^{\mathrm{th}}$ rule, $\dot{b}$ depicts the partial derivative parameter, and $c = 1$ to $C$ denotes the number of fuzzy rules.

# 3.8 UX change type classification

Here, $\phi_{\nabla}$, $\mathrm{H}$, and $\varphi$ are fed into the proposed BiGLMRU, which classifies the UI change requirement into three categories, namely low UI changes needed, medium UI changes recommended, and high UI changes necessary, based on the labelled data. The Bidirectional Gated Recurrent Unit (BiGRU) effectively captures the dynamic changes of the user interaction. Nevertheless, the BiGRU struggles to handle longer dependencies. Therefore, the Luong Attention (LA) function is used to retain long-term information. Likewise, the BiGRU suffers from over-fitting issues. Hence, the Mish activation function is employed to minimize the over-fitting issue by improving the gradient flow. In Figure 2, the proposed BiGLMRU's diagrammatic illustration is given.

Figure 2: The pictorial depiction of the proposed BiGLMRU

The input layer $\left(A\right)$ holds the inputs and transmits them to the forward GRU layers.

$$
A = \left(\phi_{\nabla}, \mathrm{H}, \varphi\right) \tag{14}
$$

The reset gate $(\varpi)$ aims to eradicate the less informative content in the previous hidden state $\left(\mathrm{Dv}_{e-1}\right)$. Likewise, the Mish activation function $(\psi)$ is employed to reduce the over-fitting issues due to its gradient stability.

$$
\varpi = \psi \times \left(\left(A, \mathrm{Dv}_{e-1}\right) \cdot \mathrm{Nu}\right) + \mathrm{Xg} \tag{15}
$$

$$
\psi(A) = A \cdot \tanh\left(\ln\left(1 + \exp^{A}\right)\right) \tag{16}
$$

Here, $\mathrm{Nu}$ and $\mathrm{Xg}$ indicate the input's weight and bias, tanh denotes the hyperbolic tangent function, and ln denotes the natural logarithm. Likewise, the update gate $(\hat{\lambda})$ is used to include the relevant information in the present hidden state.

$$
\hat{\lambda} = \psi \times \left(\left(A, \mathrm{Dv}_{e-1}\right) \cdot \mathrm{Nu}\right) + \mathrm{Xg} \tag{17}
$$

Also, the model uses the LA function $\left(\mu\right)$ to advance its capability to capture the relevant information from the past sequences.

$$
\mu = \tanh\left(\mathrm{Nu}\left[ \sum \chi \cdot \aleph ; (A, \mathrm{Dv}_{e-1}) \right]\right) \tag{18}
$$

Here, $\chi$ denotes the softmax function and $\aleph$ depicts the probability score. Next, the candidate hidden state $\left(\widetilde{\mathrm{Dv}}_{e}\right)$ is computed according to $\left(A, \mathrm{Dv}_{e-1}\right)$, thereby holding long-term sequences.

$$
\widetilde{\mathrm{Dv}}_{e} = \tanh(\varpi \cdot (A, \mathrm{Dv}_{e-1}) \cdot \mathrm{Nu}) + \mathrm{Xg} \tag{19}
$$

Lastly, the hidden state $\left(\mathrm{Dv}_{e}\right)$ is calculated by taking a weighted average of the previous and the candidate hidden states.

$$
\mathrm{Dv}_{e} = \mu \cdot \left(\left(1 - \hat{\lambda}\right) \cdot \mathrm{Dv}_{e-1} + \hat{\lambda} \cdot \widetilde{\mathrm{Dv}}_{e}\right) \tag{20}
$$

Besides, the output layer processes the input via the backward GRU layers.
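For readers who want to see these components in code, the following is a minimal PyTorch-style sketch (an assumption about tooling; the paper only states a Python implementation) of the Mish activation of Eq. (16) and a bidirectional GRU classifier with a simple learned attention pooling standing in for the Luong attention of Eq. (18). Layer sizes, tensor shapes, and the three-class head are illustrative and are not the authors' BiGLMRU.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation, Eq. (16): x * tanh(ln(1 + exp(x)))."""
    return x * torch.tanh(F.softplus(x))

class BiGRUWithAttention(nn.Module):
    """Illustrative BiGRU classifier with attention pooling over time steps
    (a simplified stand-in for Luong attention) and three output classes."""

    def __init__(self, input_dim: int, hidden_dim: int = 64, n_classes: int = 3):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # scores each time step
        self.head = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(x)                          # (batch, time, 2 * hidden)
        scores = torch.softmax(self.attn(mish(h)), dim=1)
        context = (scores * h).sum(dim=1)           # attention-weighted summary
        return self.head(context)                   # class logits: low / medium / high

# Toy usage: 8 sessions, 20 time steps, 5 interaction features each.
logits = BiGRUWithAttention(input_dim=5)(torch.randn(8, 20, 5))
print(logits.shape)  # torch.Size([8, 3])
```

The point of the sketch is only to show where the Mish nonlinearity and the attention pooling sit relative to the bidirectional recurrence; the exact gate formulations of Eqs. (15)-(20) would replace the stock GRU cell in a faithful implementation.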
Here, the proposed BiGLMRU classifies the UI change requirement into low $\left(L_{\mathrm{UI}}\right)$, medium $\left(M_{\mathrm{UI}}\right)$, and high $\left(H_{\mathrm{UI}}\right)$.

$$
\hat{v} = \left(L_{\mathrm{UI}}, M_{\mathrm{UI}}, H_{\mathrm{UI}}\right) \tag{21}
$$

Here, $\hat{v}$ denotes the proposed BiGLMRU's outcome. The proposed BiGLMRU's pseudo code is given below,

Input: $\phi_{\nabla}$, $\mathrm{H}$ and $\varphi$
Output: UI change requirement classification
Begin
  Initialize: $\phi_{\nabla}$, $\mathrm{H}$, $\varphi$ and $\psi$
  For 1 to each input do,
    Determine input layer $A = \left(\phi_{\nabla}, \mathrm{H}, \varphi\right)$
    GRU layers
    Execute reset gate $\varpi = \psi \times \left(\left(A, \mathrm{Dv}_{e-1}\right) \cdot \mathrm{Nu}\right) + \mathrm{Xg}$
    Activate Mish function $\psi(A) = A \cdot \tanh\left(\ln\left(1 + \exp^{A}\right)\right)$
    Perform update gate $\hat{\lambda} = \psi \times \left(\left(A, \mathrm{Dv}_{e-1}\right) \cdot \mathrm{Nu}\right) + \mathrm{Xg}$
    Establish LA function $\mu = \tanh\left(\mathrm{Nu}\left[ \sum \chi \cdot \aleph ; (A, \mathrm{Dv}_{e-1}) \right]\right)$
    Compute candidate hidden state $\widetilde{\mathrm{Dv}}_{e} = \tanh(\varpi \cdot (A, \mathrm{Dv}_{e-1}) \cdot \mathrm{Nu}) + \mathrm{Xg}$
    Estimate hidden state $\mathrm{Dv}_{e} = \mu \cdot \left(\left(1 - \hat{\lambda}\right) \cdot \mathrm{Dv}_{e-1} + \hat{\lambda} \cdot \widetilde{\mathrm{Dv}}_{e}\right)$
  End For
  Return $\hat{v}$
End

# 3.9 Optimal UI design

The proposed QNDSOA is applied when medium $\left(M_{\mathrm{UI}}\right)$ or high $\left(H_{\mathrm{UI}}\right)$ UI changes are required, in order to optimize the UI design. The Quokka Swarm Optimization Algorithm (QSOA) is highly adaptable in adjusting parameters such as the acceleration coefficients. However, the QSOA struggles to track the position of the members across the population. Thus, the Nonlinear Difference Function (NDF) is used to reflect the diversity of the positions across the population.

Initially, the population members are initialized in the search area. Here, inputs like font size, theme mode, letter spacing, and text alignment are considered as the quokkas (members).

$$
L_{x} = \left\{L_{1}, L_{2}, \dots, L_{X}\right\}, \quad \text{where } x = 1 \text{ to } X \tag{22}
$$

Here, $X$ denotes the number of population members $L_{x}$. In the proposed work, the minimum JavaScript execution time, minimum error rate, and minimum memory usage are considered as the fitness values to select the best leader $\left(L_{x}^{Best}\right)$. Then, the member's location and drought $(\mathrm{Dh})$ are updated with respect to $L_{x}^{Best}$. The proposed work introduces the NDF $(\alpha)$ to cover the diversity of the positions across the population.

$$
\mathrm{Dh}^{\text{new}} = \frac{\left(\mathrm{Tm} + \mathrm{hm}\right)}{\left(0.8 + \mathrm{Dh}\right)} + \mathrm{t} \cdot \alpha \tag{23}
$$

$$
\alpha = \exp^{-\mathrm{O}h}\left(L_{x}^{\text{Best}} - L_{x}\right) \tag{24}
$$

$$
L_{x}^{\text{new}} = L_{x} + \mathrm{Dh}^{\text{new}} * \sigma \tag{25}
$$

Where, $\mathrm{Dh}^{\mathrm{new}}$ depicts the updated drought, $L_{x}^{new}$ denotes the updated member's position, $\mathrm{Tm}$ indicates the temperature (balancing parameter), hm the humidity (exploration force), $\sigma$ the nitrogen ratio (solution quality), t the weight between the leader and the members, O the adaptive parameter, h the time bound, and $\alpha$ the difference in position between the leader and the quokka.
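To ground the update rules, the snippet below is a direct NumPy transcription of Eqs. (23)-(25) for a single iteration. The constant values, the vector encoding of the UI parameters, and the toy fitness function are illustrative assumptions rather than settings from the paper, and no claim is made about the convergence behaviour of this sketch.

```python
import numpy as np

def qndsoa_step(members, drought, leader, Tm=0.5, hm=0.3, t=0.6, O=0.1, h=1.0, sigma=0.9):
    """One QNDSOA position update following Eqs. (23)-(25).

    members : (X, dim) array, one row per quokka (numerically encoded UI parameters)
    drought : (X, dim) array of current drought values
    leader  : (dim,) array, the best member under the fitness function
    The constants are illustrative placeholders, not values from the paper.
    """
    alpha = np.exp(-O * h) * (leader - members)            # Eq. (24): nonlinear difference
    new_drought = (Tm + hm) / (0.8 + drought) + t * alpha  # Eq. (23): drought update
    new_members = members + new_drought * sigma            # Eq. (25): position update
    return new_members, new_drought

# Toy fitness standing in for JS execution time / error rate / memory usage (lower is better).
fitness = lambda p: float(np.sum(p ** 2))

rng = np.random.default_rng(0)
members = rng.uniform(-1.0, 1.0, size=(12, 4))   # 12 candidate UI configurations
drought = np.ones((12, 4))
leader = members[np.argmin([fitness(m) for m in members])]

members, drought = qndsoa_step(members, drought, leader)
print("fitness of current leader:", round(fitness(leader), 4))
```

In a full run, the leader would be re-selected and the step repeated until the fitness stops improving, which is exactly the loop summarized by the pseudo code that follows.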
Next, the fitness is updated iteratively until it converges. The QNDSOA's pseudo code is given below,

Input: Web components
Output: Optimal web UI $\left(O_{\mathrm{UI}}\right)$
Begin
  Initialize $L_{x}$, $L_{x}^{Best}$, $\mathrm{Dh}$ and $\sigma$
  For 1 to each member do,
    Perform population initialization $L_{x} = \{L_{1}, L_{2}, \dots, L_{X}\}$
    Select leader $L_{x}^{Best}$ via fitness value
    Update drought $\mathrm{Dh}^{\text{new}} = \frac{(\mathrm{Tm} + \mathrm{hm})}{(0.8 + \mathrm{Dh})} + \mathrm{t} \cdot \alpha$
    Apply NDF $\alpha = \exp^{-\mathrm{O}h}\left(L_{x}^{\text{Best}} - L_{x}\right)$
    Update member's location $L_{x}^{\text{new}} = L_{x} + \mathrm{Dh}^{\text{new}} * \sigma$
    Repeat until convergence
  End For
  Return $O_{\mathrm{UI}}$
End

The proposed QNDSOA optimizes the UI layout by balancing responsiveness across devices. Once the optimized UI design is generated, it is deployed into the live web environment. User interaction with the new UI is continuously monitored, providing feedback for fine-tuning the model.

# 4. RESULTS AND DISCUSSION

The experimental investigation is performed to validate the performance of the proposed work, which is implemented on the Python platform.

# 4.1 Dataset description

The design-related information and the user interaction-related information are gathered in real time to evaluate the proposed approach using the web crawler and session replay tools. From the whole data, $80\%$ is allocated for training and $20\%$ for testing.

# 4.2 Performance assessment of the research methodology

The proposed work's performance is compared against numerous prevailing algorithms.

Figure 3: Empirical analysis for CR assessment

Table 1: Performance validation of the proposed FECSM

<table><tr><td>Algorithm</td><td>State coverage (%)</td><td>Transition efficiency (%)</td><td>Loop detection rate (%)</td></tr><tr><td>Proposed FECSM</td><td>94.2356</td><td>95.2312</td><td>93.6532</td></tr><tr><td>FSM</td><td>86.5402</td><td>86.2356</td><td>84.6375</td></tr><tr><td>HMM</td><td>79.0364</td><td>78.0326</td><td>77.6023</td></tr><tr><td>PN</td><td>71.2341</td><td>70.6982</td><td>68.9782</td></tr><tr><td>State chart</td><td>60.2584</td><td>62.9584</td><td>59.6803</td></tr></table>

The proposed FECSM's performance is compared against the prevailing FSM, Hidden Markov Model (HMM), Petri Net (PN), and state chart in Figure 3 and Table 1. The proposed FECSM attained a state coverage, transition efficiency, and loop detection rate of $94.2356\%$, $95.2312\%$, and $93.6532\%$, respectively. In contrast, the traditional FSM had a state coverage, transition efficiency, and loop detection rate of $86.5402\%$, $86.2356\%$, and $84.6375\%$, respectively, showing limited adaptability. Here, the presence of the ECC aided in improving the proposed work's performance in the CR assessment.
Figure 4: Performance evaluation for UX change labeling

Table 2: Numerical investigation of the proposed FDWIS

<table><tr><td>Techniques</td><td>Fuzzification time (ms)</td><td>Defuzzification time (ms)</td><td>Rule generation time (ms)</td></tr><tr><td>Proposed FDWIS</td><td>855</td><td>897</td><td>963</td></tr><tr><td>FIS</td><td>1024</td><td>1042</td><td>1255</td></tr><tr><td>ANFIS</td><td>1450</td><td>1501</td><td>1698</td></tr><tr><td>TFL</td><td>1964</td><td>1985</td><td>2157</td></tr><tr><td>RBP</td><td>2135</td><td>2264</td><td>2455</td></tr></table>

In Figure 4 and Table 2, the proposed FDWIS's performance is compared against the prevailing FIS, Adaptive-Neuro FIS (ANFIS), Trapezoidal Fuzzy Logic (TFL), and Rule-Based Prediction (RBP) to exhibit the model's superiority in UX change labeling. The DWAF-based defuzzification upgraded the efficacy of the labeling. The proposed FDWIS had a fuzzification time, defuzzification time, and rule generation time of $855\mathrm{ms}$, $897\mathrm{ms}$, and $963\mathrm{ms}$, respectively, whereas the traditional techniques consumed considerably more time. Therefore, the FDWIS's advantage was evidenced.

Table 3: Comparative assessment for UX change type classification

<table><tr><td>Methods</td><td>Accuracy (%)</td><td>Precision (%)</td><td>Recall (%)</td><td>F-Measure (%)</td><td>Sensitivity (%)</td><td>Specificity (%)</td></tr><tr><td>Proposed BiGLMRU</td><td>99.2315</td><td>98.0745</td><td>98.1265</td><td>98.1005</td><td>98.1265</td><td>98.0745</td></tr><tr><td>BiGRU</td><td>93.5489</td><td>90.4568</td><td>91.6544</td><td>91.0556</td><td>91.6544</td><td>90.4568</td></tr><tr><td>LSTM</td><td>88.9746</td><td>87.4571</td><td>88.0267</td><td>87.7419</td><td>88.0267</td><td>87.4571</td></tr><tr><td>RNN</td><td>84.3922</td><td>81.9655</td><td>80.9472</td><td>81.4563</td><td>80.9472</td><td>81.9655</td></tr><tr><td>DLNN</td><td>77.3586</td><td>76.3204</td><td>77.5543</td><td>76.9373</td><td>77.5543</td><td>76.3204</td></tr></table>

Figure 5: Performance assessment of the proposed BiGLMRU

In Figure 5 and Table 3, the proposed BiGLMRU's performance is compared against the prevailing BiGRU, Long Short Term Memory (LSTM), Recurrent Neural Network (RNN), and Deep Learning Neural Network (DLNN). Regarding accuracy, precision, recall, f-measure, sensitivity, and specificity, the BiGLMRU attained $99.2315\%$, $98.0745\%$, $98.1265\%$, $98.1005\%$, $98.1265\%$, and $98.0745\%$, while the prevailing techniques attained, on average, $86.0685\%$, $84.0499\%$, $84.5456\%$, $84.2977\%$, $84.5456\%$, and $84.0499\%$. The existing works obtained poor classification performance, whereas the BiGLMRU utilized the Mish activation function to mitigate the over-fitting issues, enhancing the model's superiority.

Figure 6: Average fitness analysis

Figure 7: Computational complexity analysis for UI optimization

The proposed QNDSOA's performance is compared against the prevailing QSOA, Egret Swarm Optimization Algorithm (ESOA), Salp Swarm Optimization Algorithm (SSOA), and Grey Wolf Optimization Algorithm (GWOA) in Figures 6 and 7. The proposed QNDSOA achieved an average fitness of $98.5632\%$, whereas the traditional GWOA had $78.6594\%$. Further, the proposed work had limited complexity across varying epochs due to the utilization of NDF-based position updating. Thus, the QNDSOA had better performance.
Figure 8: Performance assessment for user behavior pattern grouping

In Figure 8, regarding the grouping time, the proposed HPPDBSCAN's performance is compared against the prevailing HDBSCAN, K-Means (KM), Farthest First Clustering (FFC), and Fuzzy C-Means (FCM). The proposed HPPDBSCAN took 32654ms to complete the grouping, whereas the existing HDBSCAN obtained a grouping time of 49687ms. Therefore, the proposed work had low time complexity due to its effective parameter selection.

# 4.3 Comparative validation of the proposed work

A comparative analysis of the research methodology is performed to exhibit the model's prominence.

Table 4: Comparative validation

<table><tr><td>Author's name</td><td>Target area</td><td>Methods</td><td>Merits</td><td>Challenges</td></tr><tr><td>Ma [16]</td><td>Computer web interface optimization</td><td>BPNN</td><td>Faster page loading</td><td>Script execution delays</td></tr><tr><td>Wang [17]</td><td>Intelligent layout adaptation in web page design</td><td>NCMF</td><td>Higher user engagement</td><td>Dynamic content instability</td></tr><tr><td>Kikuchi et al. [18]</td><td>Enhanced web page layout optimization</td><td>Optimization-based hierarchical layout model</td><td>Improved responsive design</td><td>Layout shift issues</td></tr><tr><td>Martin et al. [19]</td><td>Personalized web UI adaptation</td><td>Situation adaptation-aware scheme</td><td>Better flexibility and adaptability</td><td>Over-responsive elements</td></tr><tr><td>Xu &amp; Wang [20]</td><td>Interactive website search interface design</td><td>Concave-convex texture mapping algorithm</td><td>Enhanced web accessibility</td><td>Less adaptability</td></tr><tr><td>Proposed work</td><td>Enhanced web UI design via CR assessment using an advanced HCI</td><td>FECSM and QNDSOA</td><td>Improved cross-compatibility and adaptive layout design</td><td>Heavily focused on UI optimization rather than interpretability</td></tr></table>

In Table 4, the proposed work's performance is compared with several associated studies. The proposed FECSM and QNDSOA algorithms aided in improving the user experience of the web environment through CR assessment. Similarly, to optimize the computer web interface, the existing works utilized a Back Propagation Neural Network (BPNN) (Ma, 2022) and Non-negative Convolutional Matrix Factorization (NCMF) (Wang, 2022). Nevertheless, the existing works had adaptability issues and computational overhead. Thus, the proposed work achieved an adaptive layout design with less complexity.

# 5. CONCLUSION

This article proposed an enhanced web UI design through CR assessment using improved HCI-integrated FECSM and QNDSOA approaches. The proposed FECSM provided detailed insight into the transitions across different screen sizes with a state coverage of $94.2356\%$. Similarly, the novel QNDSOA significantly optimized the web UI with an average fitness of $98.5632\%$. Besides, constant feedback monitoring was enabled to ensure the model's trustworthiness. Nevertheless, the proposed work primarily focused on optimizing the UI design rather than interpretability.

Future scope: this work will be extended with explainable AI and cognitive-load factors to improve the reliability and trust of the UI-enhancement process.
arxiv_cs
2025-12-13T00:00:00Z
https://arxiv.org/pdf/2512.15775
{"title": "Enhanced Web User Interface Design Via Cross-Device Responsiveness Assessment Using An Improved HCI-INTEGRATED DL Schemes", "raw_content": "# Enhanced Web User Interface Design Via Cross-Device Responsiveness Assessment Using An Improved HCI-INTEGRATED DL Schemes\n\nShrinivass Arunachalam Balasubramanian*\n\nSenior Full Stack Engineer, Independent Researcher, United States, shrinivassab@gmail.com\n\nUser Interface (UI) optimization is essential in the digital era to enhance user satisfaction in web environments. Nevertheless, the existing UI optimization models had overlooked the Cross-Responsiveness (CR) assessment, affecting the user interaction efficiency. Consequently, this article proposes a dynamic web UI optimization through CR assessment using Finite Exponential Continuous State Machine (FECSM) and Quokka Nonlinear Difference Swarm Optimization Algorithm (QNDSOA). Initially, the design and user-interaction related information is collected as well as pre-processed for min-max normalization. Next, the Human-Computer Interaction (HCI)-based features are extracted, followed by user behaviour pattern grouping. Meanwhile, the CR assessment is done using FECSM. Then, the proposed Bidirectional Gated Luong and Mish Recurrent Unit (BiGLMRU) is used to classify the User eXperience (UX) change type, which is labelled based on the User Interface Change Prediction Index (UICPI). Lastly, a novel QNDSOA is utilized to optimize the UI design with an average fitness of $98.5632\\%$ . Feedback monitoring is done after optimal deployment.\n\nAdditional Keywords and Phrases: Human Computer Interaction (HCI), User Interface (UI) optimization, Web Development (WD), User eXperience (UX) modelling, Predictive UI Enhancement (PUIE), Fuzzy Derivative Weighted Inference System (FDWIS), and Artificial Intelligence (AI).\n\n# 1. INTRODUCTION\n\nCurrently, Websites have become significant platforms for UI, and the architecture of the websites enhances the UX in this user-oriented generation [1, 2]. Therefore, WD, a significant aspect of HCI, is utilized for improving the UX. This incorporates estimates of the User Behavior (UB) pattern, creating a user-friendly interface and testing the performance of the site [3, 4]. The UB patterns like screen size influence, and eye-tracking methodology are greatly helpful in improving the UX [5, 6]. By developing an age-friendly website, platforms like e-commerce improve the UX [7, 8]. However, traditional techniques did not concentrate on the CR assessment for web development [9, 10]. The proposed system's motivation is to develop a user-friendly website centered on the UB with an interface. Thus, a novel model for optimizing the website based on BiGLMRU is proposed in this paper.\n\n# 1.1 Problem Statement\n\nThe prevailing works' limitations are given below,\n\nNone of the works focused on CR assessment for web development. \nThe UB patterns were not concentrated in [11], which mitigated the effectiveness of UI. \nThe prevailing works designed the UI inefficiently, as the optimal UI design was not periodically updated. \nExisting works failed to focus on the improvement level of UI that affected the web development process. \nIn [12], the support factors were not considered, which further reduced the effectiveness.\n\n# 1.2 Objectives\n\nThe objectives of the proposed framework are defined below,\n\nThe web is developed by considering the cross-responsiveness assessment using FECSM. \nTo improve the effectiveness, the UB patterns are grouped by HDBSCAN. 
\nThe optimal UI design is periodically updated by providing feedback to QNDSOA.\n\nBy using BiGLMRU, the improvement level for the UI design is obtained. \nThe support factors are considered by employing minimum JavaScript execution time, minimum error rate, and minimum memory usage as the fitness function in optimal UI design.\n\nThe remaining part is arranged as: in Section 2, the existing works are analyzed, in Section 3, the proposed methodology is explained, in Section 4, the results and discussion are given, and Section 5 concludes the paper with future scope.\n\n# 2. LITERATURE SURVEY\n\nBakaev et al. [11] recognised the modeling of Visual Perception (VP) of the UI. Here, the VP was predicted by a Convolutional Neural Network (CNN). Yet, the change in responsiveness for different devices and screen sizes was not considered.\n\nTodi et al. [12] assessed a Reinforcement Learning (RL) approach for adaptive UI. Firstly, both the positive and negative effects that impacted the UI were considered. Thus, adaptive UI adapted the webpage layouts and reorganized application menus. Nevertheless, the effectiveness of UI was reduced as the support factors based on optimal UI design were not considered.\n\nKeselj et al. [13] examined the Deep Learning (DL) applications for the UI evaluation. Here, a CNN determined the effectiveness of UI based on the specifications, like UI design and layout. Yet, the user satisfaction was not achieved as the objective knowledge about UI was not practically implemented.\n\nMuneer et al. [14] deployed a Meta-Model for supporting the Compatibility Testing (CT) of cross-browser web applications. Initially, for covering critical configurations, a checklist was initialized and translated into Interaction Flow Modeling Language (IFML). Next, the test cases generated by IFML addressed the compatibility issues.\n\nWang et al. [15] discovered a DL approach to assess the color quality interface in HCI interfaces. Firstly, the interface image features were extracted and modeled by a CNN. Yet, the UI design was still inefficient as the approach failed to update the user's immediate feedback.\n\n# 3. PROPOSED METHODOLOGY FOR CROSS-RESPONSIVE WEB UI TUNING USING FECSM ANDBiGLMRU\n\nThe proposed work implements an intelligent framework for CR web UI optimization using FECSM and QNDSOA. 
In Figure 1, the proposed methodology's block diagram is presented.\n\n![](images/21e329cdd8cfc3dbcbf3cc8c26ae49711995102cac812725918eb3ccf101ea41.jpg) \nFigure 1: The structural design of the research approach\n\n# 3.1 Data collection\n\nInitially, the design-related data (layout, component arrangements, and responsiveness characteristics) and user interaction-related information (clicks, scrolls, and mouse movements) are collected by using the web crawler and session replay tools, respectively.\n\n$$\n\\partial_ {w} = \\left(\\partial_ {1}, \\partial_ {2}, \\dots \\dots \\partial_ {W}\\right) \\text {W h e r e ,} w = 1 t o W \\tag {1}\n$$\n\nWhere, $W$ specifies the number of collected web data $\\partial_w$ .\n\n# 3.2 Pre-processing\n\nNext, $\\hat{\\partial}_w$ is subjected to the pre-processing, which standardizes the collected data in the range of (0, 1) by employing min-max normalization.\n\n$$\n\\eta = \\frac {\\partial_ {w} - \\min \\left(\\partial_ {w}\\right)}{\\max \\left(\\partial_ {w}\\right) - \\min \\left(\\partial_ {w}\\right)} \\tag {2}\n$$\n\nHere, $\\boldsymbol{\\eta}$ represents the pre-processed data.\n\n# 3.3 HCI-based feature extraction\n\nFrom $\\eta$ , the HCI features $(\\gamma_{i})$ like click patterns, scroll behaviour, mouse movement, and network condition are extracted, improving the model's performance.\n\n# 3.4 User behaviour pattern grouping\n\nNext, the $\\gamma_{i}$ is fed into the proposed HPPDBSCAN approach, which groups the user behaviour pattern by considering the scroll rate and click depth. The conventional Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) proficiently groups the data with varying density. However, the HDBSCAN is sensitive to the choice of the clustering parameters, like minimum cluster size $\\left(\\widetilde{\\mathbf{X}}\\right)$ and minimum samples $\\left(\\mathbf{Y}\\right)$ . Therefore, the Persistence Probability Function (PPF) $(\\wp)$ is used to determine the optimal parameters by analysing the density-based persistence of data across multiple scales.\n\n$$\n\\wp \\left(\\gamma_ {i}\\right) = \\int_ {S _ {k}} ^ {S _ {K}} \\left| \\gamma_ {i} (S) \\right| d ^ {\\prime \\prime} S \\rightarrow (\\mathrm {X}, \\mathrm {Y}) \\tag {3}\n$$\n\nWhere, $S_{k}$ indicate the density level $(S)$ where the cluster $\\gamma_{\\infty}$ appears, $S_{K}$ indicate the density level where the cluster $\\gamma_{\\infty}$ disappears, and $d''$ denotes the derivative parameter. For each point, the core point $(\\mathbf{Cp})$ is computed based on the minimum number of neighbours, which is determined by $\\mathbf{Y}$ . Similarly, the mutual reachability distance $(M_{\\mathrm{dis}})$ is estimated between the points to handle varying densities.\n\n$$\nM _ {\\text {d i s}} \\left(\\gamma_ {1}, \\gamma_ {2}\\right) = \\max \\left\\{\\mathrm {C p} \\left(\\gamma_ {1}\\right), \\mathrm {C p} \\left(\\gamma_ {2}\\right), \\mathrm {Z} \\left(\\gamma_ {1}, \\gamma_ {2}\\right) \\right\\} \\tag {4}\n$$\n\nWhere, $Z$ displays the direct distance value. By assigning $M_{\\mathrm{dis}}$ as the weight value of the edges, a complete mutual reachability graph $(U_{\\mathrm{gr}})$ is constructed. Subsequently, the minimum spanning tree $(T_{\\mathrm{min}})$ is generated by connecting all points with the lowest $M_{\\mathrm{dis}}$ without creating any cycles. 
Next, the edges in the $T_{\\mathrm{min}}$ are sorted by increasing the $M_{\\mathrm{dis}}$ , and then the longest edges are gradually removed to create a hierarchical structure. Meanwhile, the tree pruning is done by applying $\\widetilde{X}$ that evaluates the cluster's stability $(\\lambda)$ .\n\n$$\n\\lambda \\left(T _ {\\min }\\right) = \\widetilde {X} \\left(S _ {K} - S _ {k}\\right) \\tag {5}\n$$\n\nLastly, the data points are allocated to the clusters with the highest stability. Therefore, the user behaviour pattern grouped data is displayed as $\\left(\\phi_{\\nabla}\\right)$ .\n\n# 3.5 Cross responsiveness assessment\n\nContrarily, the CR assessment is done in $\\mathfrak{N}$ using the proposed FECSM algorithm to model how interface responsiveness changes across devices over time. The Finite State Machine (FSM) significantly captures transitions in\n\nuser experience due to changes in device configuration. Yet, the FSM struggled to handle continuous state and transition, affecting the model's flexibility. Therefore, the Exponential Continuous Coverage (ECC) function is utilized to handle transitions over changes.\n\nHere, each state represents a responsive UI (mobile layout, tablet layout, and desktop layout).\n\n$$\n\\mathrm {S t} _ {\\nu} = \\left(\\mathrm {S t} _ {1}, \\mathrm {S t} _ {2}, \\dots \\dots \\mathrm {S t} _ {V}\\right) \\text {W h e r e ,} \\nu = 1 \\text {t o V} \\tag {6}\n$$\n\nHere, $V$ denotes the number of states $\\mathbf{St}_{\\nu}$ . Next, the inputs (user-triggering events) like mouse events, screen size changes, and touch gestures are defined as below,\n\n$$\n\\mathrm {I p} _ {w} = \\sum_ {w = 1} ^ {W} \\left\\{\\mathrm {I p} _ {1}, \\mathrm {I p} _ {2}, \\dots \\mathrm {I p} _ {W} \\right\\} \\tag {7}\n$$\n\nHere, $w = 1,2,\\ldots W$ indicates the number of inputs $\\mathrm{Ip}_w$ . Also, the transitions are defined to reflect how the system moves from one state to another when an input is received. In the proposed work, the ECC function $(\\mathbf{N})$ is used to ensure flexible transitions by continuously adapting to dynamic state changes.\n\n$$\n\\mathrm {N} (\\tau) = 1 - \\exp^ {- \\Im \\tau} \\tag {8}\n$$\n\n$$\n\\tau \\xrightarrow {\\text {t r a n s i t i o n}} \\left(\\mathrm {S t} _ {1}, \\mathrm {S t} _ {v}\\right) \\tag {9}\n$$\n\nHere, $\\mathfrak{T}$ specifies the controlling parameter and $\\tau$ depicts the transitions. Subsequently, the start state and final state are also determined. Next, the user interactions are captured as a log file. Finally, the FSM trace logs are extracted to provide detailed insight into which transition caused friction and which device layout had higher task success. The CR assessed outcome is mentioned as $(\\mathrm{H})$ .\n\n# 3.6 User interface change prediction index computation\n\nMeanwhile, the UICPI $(\\varphi)$ is calculated by considering the $\\gamma_{i}$ to represent the necessity of UI modification based on user interaction deviation.\n\n$$\n\\varphi \\left(\\gamma_ {i}\\right) = v _ {1} \\times E + v _ {2} \\times T + v _ {3} \\times D + v _ {4} \\times C \\tag {10}\n$$\n\nWhere, $\\left(\\mathbf{v}_1,\\mathbf{v}_2,\\mathbf{v}_3,\\mathbf{v}_4\\right)$ illustrates the weight values, $E$ denotes the error rate, $T$ depicts the task time, $D$ exhibits the drop-off rate, and $C$ represents the click confusion index.\n\n# 3.7 UX change type labeling\n\nThe proposed Fuzzy Derivative Weighted Inference System (FDWIS) precisely labels the UX change type based on $\\mathbb{P}$ . 
The Fuzzy Inference System (FIS) offers high transparency. Yet, the FIS approach struggled to capture the small\n\nchanges. Hence, the Derivative Weighted Average Function (DWAF) is employed in the defuzzification process to capture the small changes, improving the model's precision.\n\nInitially, the fuzzification step converts the crisp values into fuzzy values $\\left(\\ddot{\\varphi}\\right)$ (membership values) using a sigmoid membership function $\\left(Q\\right)$ .\n\n$$\nQ (\\varphi) = \\frac {1}{1 + \\exp^ {- G (\\varphi - J)}} \\rightarrow \\ddot {\\varphi} \\tag {11}\n$$\n\nHere, $G$ and $J$ denote the control parameter and center of the slope, respectively.\n\nHere, the fuzzy if-then rules $\\left(\\mathfrak{R}_{\\mathrm{rule}}\\right)$ are created to categorize the UX change type based on the $\\ddot{\\varphi}$ .\n\n$$\n\\mathfrak {R} _ {\\text {r u l e}} = \\left\\{ \\begin{array}{l l} \\operatorname {I f} (\\ddot {\\varphi} = 0. 0 \\text {t o} 0. 3), & \\text {t h e n} \\quad \\mathrm {L w} \\\\ \\operatorname {I f} (\\ddot {\\varphi} = 0. 3 1 \\text {t o} 0. 6), & \\text {t h e n} \\quad \\mathrm {M d} \\\\ \\operatorname {I f} (\\ddot {\\varphi} > 0. 6), & \\text {t h e n} \\quad \\mathrm {H w} \\end{array} \\right. \\tag {12}\n$$\n\nNext, the fuzzy rules are implemented in the fuzzified inputs to label the UX change type into low $\\left(\\mathrm{LW}\\right)$ , medium $\\left(\\mathrm{Md}\\right)$ , and high $\\left(\\mathrm{Hw}\\right)$ .\n\nNext, defuzzification is the task of converting the fuzzy outputs $\\left(\\varsigma\\right)$ from the inference engine into a crisp value using DWAF. The DWAF prioritizes regions with sudden membership changes, causing improved precision.\n\n$$\n\\varphi = \\frac {\\sum_ {c = 1} ^ {C} O _ {c} \\left(\\Re_ {\\text {r u l e}}\\right) \\cdot \\left| \\frac {\\dot {b} \\varsigma}{\\dot {b} \\ddot {\\varphi}} \\right| \\cdot \\varsigma}{\\sum_ {c = 1} ^ {C} O _ {c} \\left(\\Re_ {\\text {r u l e}}\\right) \\cdot \\left| \\frac {\\dot {b} \\varsigma}{\\dot {b} \\ddot {\\varphi}} \\right|} \\tag {13}\n$$\n\nWhere, $O_{c}$ designates the firing strength of the $c^{\\mathrm{th}}$ rule, $\\dot{b}$ depicts the partial derivative parameter, and $c = 1$ to $C$ denotes the number of fuzzy rules.\n\n# 3.8 UX change type classification\n\nHere, $\\phi_{\\nabla}$ , $\\mathbf{H}$ , and $\\wp$ are inputted to the proposed BiGLMRU, which classifies the UI requirement into three categories like low UI changes needed, medium UI changes recommended, and high UI changes necessary based on the labelled data. The Bidirectional Gated Recurrent Unit (BiGRU) effectively captures the dynamic changes of the user interaction. Nevertheless, the BiGRU struggled to handle longer dependencies. Therefore, the Luong Attention (LA) function is used to hold long-term information. Likewise, the BiGRU had over-fitting issues. Hence, the Mish activation function is\n\nweldon to minimize the over-fitting issue by improving the gradient flow. 
In Figure 2, the proposed BiGLMRU's diagrammatic illustration is given.\n\n![](images/66ed1ab619096ef89abb983d9b367769fc11dcdbe087a9b29e58ae137b1f0d57.jpg) \nFigure 2: The pictorial depiction of the proposed BiGLMRU\n\nThe input layer $\\left(A\\right)$ holds the inputs as well as transmits them to the forward GRU layers.\n\n$$\nA = \\left(\\phi_ {\\nabla}, \\mathrm {H}, \\varphi\\right) \\tag {14}\n$$\n\nThe reset gate $(\\varpi)$ aims to eradicate the less informative information in the previous hidden state $\\left(\\mathbf{D}\\mathbf{V}_{e-1}\\right)$ . Likewise, the Mish activation function $(\\psi)$ is employed to reduce the over-fitting issues due to its gradient stability.\n\n$$\n\\varpi = \\psi \\times \\left(\\left(A, D v _ {e - 1}\\right) \\cdot N u\\right) + X g \\tag {15}\n$$\n\n$$\n\\psi (A) = A \\cdot \\tanh \\left(\\ln \\left(1 + \\exp^ {A}\\right)\\right) \\tag {16}\n$$\n\nHere, $\\mathbf{Nu}$ and $\\mathbf{Xg}$ indicate the input's weight and bias, tanh exhibits the tangent function, and ln illustrates the logarithmic function. Likewise, the update gate $(\\hat{\\lambda})$ is used to include the relevant information in the present hidden state.\n\n$$\n\\lambda = \\psi \\times \\left(\\left(A, \\mathrm {D v} _ {e - 1}\\right) \\cdot \\mathrm {N u}\\right) + \\mathrm {X g} \\tag {17}\n$$\n\nAlso, it uses the LA function $\\left(\\mu\\right)$ to advance the model's capability to capture the relevant information from the past sequences.\n\n$$\n\\mu = \\tanh \\left(\\mathrm {N u} \\left[ \\sum \\chi \\cdot \\aleph ; (A, \\mathrm {D v} _ {e - 1}) \\right]\\right) \\tag {18}\n$$\n\nHere, $\\chi$ illustrates the softmax function and $\\aleph$ depicts the probability score. Next, the candidate hidden state $\\left(\\widetilde{\\mathrm{D}}\\mathrm{v}_{e}\\right)$ is computed according to the $\\left(A,\\mathrm{D}\\mathrm{v}_{e - 1}\\right)$ , thereby holding long-term sequences.\n\n$$\n\\widetilde {\\mathrm {D v}} _ {e} = \\tanh (\\varpi \\cdot (A, \\mathrm {D v} _ {e - 1}) \\cdot \\mathrm {N u}) + \\mathrm {X g} \\tag {19}\n$$\n\nLastly, the hidden state $\\left(\\mathbf{D}\\mathbf{v}_e\\right)$ is calculated by taking a weighted average of the previous and the candidate hidden state.\n\n$$\n\\mathbf {D} \\mathbf {v} _ {e} = \\mu \\cdot \\left(\\left(1 - \\hat {\\lambda}\\right) \\cdot \\mathbf {D} \\mathbf {v} _ {e - 1} + \\hat {\\lambda} \\cdot \\widetilde {\\mathbf {D}} \\mathbf {v} _ {e}\\right) \\tag {20}\n$$\n\nBesides, the output layer processes the input via the backward GRU layers. 
Here, the proposed BiGLMRU classifies the UI change requirement into low $\\left(L_{\\mathrm{UI}}\\right)$ , medium $\\left(M_{\\mathrm{UI}}\\right)$ , and high $\\left(H_{\\mathrm{UI}}\\right)$ effectively.\n\n$$\n\\hat {v} = \\left(L _ {\\mathrm {U I}}, M _ {\\mathrm {U I}}, H _ {\\mathrm {U I}}\\right) \\tag {21}\n$$\n\nHere, $\\hat{\\mathbf{V}}$ establish the proposed BiGLMRU's outcome.\n\nThe proposed BiGLMRU's pseudo code is given below,\n\nInput: $\\phi_{\\nabla}$ , H and $\\varphi$\n\nOutput: UI change requirement classification\n\nBegin\n\nInitialize: $\\phi_{\\nabla}$ , H, $\\varphi$ and $\\psi$\n\nFor 1 to each input do,\n\nDetermine input layer\n\n$$\nA = \\left(\\phi_ {\\nabla}, H, \\varphi\\right)\n$$\n\nGRU layers\n\nExecute reset gate,\n\n$$\n\\varpi = \\psi \\times \\left(\\left(A, D v _ {e - 1}\\right) \\cdot N u\\right) + X g\n$$\n\nActivate Mish function,\n\n$$\n\\psi (A) = A \\cdot \\tanh \\left(\\ln \\left(1 + \\exp^ {A}\\right)\\right)\n$$\n\nPerform update gate $\\lambda = \\psi \\times ((A, Dv_{e-1}) \\cdot Nu) + Xg$\n\nEstablish LA function\n\n$$\n\\mu = \\tanh \\left(\\mathrm {N u} \\left[ \\sum \\chi \\cdot \\aleph ; (A, \\mathrm {D v} _ {e - 1}) \\right]\\right)\n$$\n\nCompute candidate hidden state\n\nEstimate hidden state\n\n$$\n\\mathbf {D} \\mathbf {v} _ {e} = \\boldsymbol {\\mu} \\cdot \\left(\\left(1 - \\boldsymbol {\\lambda}\\right) \\cdot \\mathbf {D} \\mathbf {v} _ {e - 1} + \\boldsymbol {\\lambda} \\cdot \\widetilde {\\mathbf {D}} \\mathbf {v} _ {e}\\right)\n$$\n\n# 3.9 Optimal UI design\n\nThe proposed QNDSOA is established regarding the requirement of medium $M_{\\mathrm{UI}}$ and high UI changes $H_{\\mathrm{UI}}$ to optimize the UI design. The Quokka Swarm Optimization Algorithm (QSOA) is highly adaptable to adjust the parameters, like acceleration coefficients. But, the QSOA struggled to determine the position of the member across the population. Thus, the Nonlinear Difference Function (NDF) is used to reflect the diversity of the position across the population.\n\nInitially, the population members are initialized in the search area. Here, the inputs like font size, theme mode, letter spacing, and text alignment are considered as the quokka (member).\n\n$$\nL _ {x} = \\left\\{L _ {1}, L _ {2}, \\dots \\dots L _ {X} \\right\\} \\text {W h e r e ,} x = 1 \\text {t o} X \\tag {22}\n$$\n\nHere, $X$ demonstrates the number of population members $L_{x}$ . In the proposed work, the minimum JavaScript execution time, minimum error rate, and minimum memory usage are considered as the fitness values to select the best leader $\\left(L_{x}^{Best}\\right)$ . Then, the member's location and drought $(\\mathrm{Dh})$ are updated regarding the $L_{x}^{Best}$ . The proposed work introduces the NDF $(\\alpha)$ to cover the diversity of the position across the population.\n\n$$\n\\mathrm {D h} ^ {\\text {n e w}} = \\frac {\\left(\\mathrm {T m} + \\mathrm {h m}\\right)}{\\left(0 . 
8 + \\mathrm {D h}\\right)} + \\mathrm {t} \\cdot \\alpha \\tag {23}\n$$\n\n$$\n\\alpha = \\exp^ {- \\mathrm {O} h} \\left(L _ {x} ^ {\\text {B e s t}} - L _ {x}\\right) \\tag {24}\n$$\n\n$$\nL _ {x} ^ {\\text {n e w}} = L _ {x} + \\mathrm {D h} ^ {\\text {n e w}} * \\sigma \\tag {25}\n$$\n\nWhere, $\\mathrm{Dh}^{\\mathrm{new}}$ depicts the updated drought, $L_{x}^{new}$ demonstrates the updated member's position, $\\mathrm{Tm}$ indicates the temperature (balancing parameter), hm illustrates the humidity (exploration force), $\\sigma$ denotes the nitrogen ratio (solution quality), $\\mathfrak{l}$ implies the weight between the leader and members, O exhibits the adaptive parameter, $\\hbar$ specifies the time bound, and $\\alpha$ indicates the differences of position between the leader and quokka. Next, fitness is updated iteratively until it converges. The QNDSOA's pseudo code is given below,\n\nInput: Web components\n\nOutput: Optimal web UI $\\left(O_{\\mathrm{UI}}\\right)$\n\nBegin\n\nInitialize $L_{x}, L_{x}^{Best}$ , Dh and $\\sigma$\n\nFor 1 to each member do,\n\nPerform population initialization $L_{x} = \\{L_{1}, L_{2}, \\dots \\dots L_{X}\\}$\n\nSelect leader $L_{x}^{Best}$ via fitness value\n\nUpdate drought,\n\n$$\nD h ^ {\\text {n e w}} = \\frac {(T m + h m)}{(0 . 8 + D h)} + t \\cdot \\alpha\n$$\n\nApply NDF $\\alpha = \\exp^{-\\mathrm{O}h}\\left(L_{x}^{\\text{Best}} - L_{x}\\right)$\n\nUpdate member's location\n\n$$\nL _ {x} ^ {\\text {n e w}} = L _ {x} + \\mathrm {D h} ^ {\\text {n e w}} * \\sigma\n$$\n\nRepeat until converge\n\nEnd For\n\nReturn $O_{\\mathrm{UI}}$\n\nEnd\n\nThe proposed QNDSOA optimizes UI layout by balancing responsiveness across devices. Once the optimized UI design is generated, it is deployed into the live web environment. User interaction with the new UI is continuously monitored, providing feedback for fine-tuning the model.\n\n# 4. RESULTS AND DISCUSSION\n\nThe experimental investigation is done to validate the performance of the proposed work, which is implemented in the PYTHON platform.\n\n# 4.1 Dataset description\n\nThe design-related information and user interaction-related information are gathered in real-time to evaluate the proposed approach using the web crawler and session replay tools. From the whole data, $80\\%$ as well as $20\\%$ of the data are allocated for training along with testing.\n\n# 4.2 Performance assessment of the research methodology\n\nThe proposed work's performance is appraised with numerous prevailing algorithms.\n\n![](images/152e2a925a340a5012d40440e2c7ccc16fc8e4c356226a7259d2616d7ea1132a.jpg) \nFigure 3: Empirical analysis for CR assessment\n\nTable 1: Performance validation of the proposed FECSM \n\n<table><tr><td>Algorithm</td><td>State Coverage (%)</td><td>Transition efficiency (%)</td><td>Loop detection rate (%)</td></tr><tr><td>Proposed FECSM</td><td>94.2356</td><td>95.2312</td><td>93.6532</td></tr><tr><td>FSM</td><td>86.5402</td><td>86.2356</td><td>84.6375</td></tr><tr><td>HMM</td><td>79.0364</td><td>78.0326</td><td>77.6023</td></tr><tr><td>PN</td><td>71.2341</td><td>70.6982</td><td>68.9782</td></tr><tr><td>State chart</td><td>60.2584</td><td>62.9584</td><td>59.6803</td></tr></table>\n\nThe proposed FECSM's performance is weighed against the prevailing FSM, Hidden Markov Model (HMM), Petri Net (PN), and state chart in Figure 3 and Table 1. The proposed FECSM attained state coverage, transition efficiency, and loop detection rate of $94.2356\\%$ , $95.2312\\%$ , and $93.6532\\%$ , respectively. 
Contrarily, the traditional FSM had state coverage, transition efficiency, and loop detection rate of $86.5402\\%$ , $86.2356\\%$ , and $84.6375\\%$ , respectively, showing limited adaptability. Here, the presence of ECC aided in improving the proposed work's performance in CR assessment.\n\n![](images/db6fdaa9a02808f92e8ca031a829e2d680a5b9b8aa8f8cdfccbf4dcf2657a92d.jpg) \nFigure 4: Performance evaluation for UX change labeling\n\nTable 2: Numerical investigation of the proposed FECSM \n\n<table><tr><td>Techniques</td><td>Fuzzification time (ms)</td><td>Defuzzification time (ms)</td><td>Rule generation time (ms)</td></tr><tr><td>Proposed FDWIS</td><td>855</td><td>897</td><td>963</td></tr><tr><td>FIS</td><td>1024</td><td>1042</td><td>1255</td></tr><tr><td>ANFIS</td><td>1450</td><td>1501</td><td>1698</td></tr><tr><td>TFL</td><td>1964</td><td>1985</td><td>2157</td></tr><tr><td>RBP</td><td>2135</td><td>2264</td><td>2455</td></tr></table>\n\nIn Figure 4 and Table 2, the proposed FDWIS's performance is appraised with prevailing FIS, Adaptive-Neuro FIS (ANFIS), Trapezoidal Fuzzy Logic (TFL), along with Rule-Based Prediction (RBP) to exhibit the model's supremacy in UX change labeling. The presence of DWAF-based defuzzification upgraded the efficacy of the labeling. The proposed FDWIS had fuzzification time, defuzzification time, along with rule generation time of $855\\mathrm{ms}$ , $897\\mathrm{ms}$ , and $963\\mathrm{ms}$ , respectively. But, the traditional techniques had maximum time consumption. Therefore, the FDWIS's dominance was evidenced.\n\nTable 3: Comparative assessment for UX change type classification \n\n<table><tr><td>Methods</td><td>Accuracy (%)</td><td>Precision (%)</td><td>Recall (%)</td><td>F-Measure (%)</td><td>Sensitivity (%)</td><td>Specificity (%)</td></tr><tr><td>Proposed BiGLMRU</td><td>99.2315</td><td>98.0745</td><td>98.1265</td><td>98.1005</td><td>98.1265</td><td>98.0745</td></tr><tr><td>BiGRU</td><td>93.5489</td><td>90.4568</td><td>91.6544</td><td>91.0556</td><td>91.6544</td><td>90.4568</td></tr><tr><td>LSTM</td><td>88.9746</td><td>87.4571</td><td>88.0267</td><td>87.7419</td><td>88.0267</td><td>87.4571</td></tr><tr><td>RNN</td><td>84.3922</td><td>81.9655</td><td>80.9472</td><td>81.4563</td><td>80.9472</td><td>81.9655</td></tr><tr><td>DLNN</td><td>77.3586</td><td>76.3204</td><td>77.5543</td><td>76.9373</td><td>77.5543</td><td>76.3204</td></tr></table>\n\n![](images/3fcb7b3b62e0509dff84fecaef9799838bf0dcc3e4e3d547b974f1c1f1a55ad7.jpg) \nFigure 5: Performance assessment of the proposed BiGLMRU\n\nIn Figure 5 and Table 3, the proposed BiGLMRU's performance is appraised with the prevailing BiGRU, Long Short Term Memory (LSTM), Recurrent Neural Network (RNN), and Deep Learning Neural Network (DLNN). Regarding accuracy, precision, recall, f-measure, sensitivity, along with specificity, the BiGLMRU attained $99.2315\\%$ , $98.0745\\%$ , $98.1265\\%$ , $98.1005\\%$ , $98.1265\\%$ , and $98.0745\\%$ ; while, the prevailing techniques attained $86.0685\\%$ , $84.0499\\%$ , $84.5456\\%$ , $84.2977\\%$ , $84.5456\\%$ , and $84.0499\\%$ . The existing works obtained poor classification performance. 
But, the BiGLMRU utilized the Mish activation function for mitigating the over-fitting issues, enhancing the model's superiority.\n\n![](images/797c84d7d3636d2ada093c62a91d3ab3f4655a2c972bb6d63fe574caf7556301.jpg) \nFigure 6: Average fitness analysis\n\n![](images/c56c33d647c536b09e2a329fb79493e09fec16ced558f887e4b3aed073006d45.jpg) \nFigure 7: Computational complexity analysis for UI optimization\n\nThe proposed QNDSOA's performance is weighed against the prevailing QSOA, Egret Swarm Optimization Algorithm (ESOA), Salp Swarm Optimization Algorithm (SSOA), along with Grey Wolf Optimization Algorithm (GWOA) in Figures 6 and 7. The proposed QNDSOA achieved an average fitness of $98.5632\\%$ , whereas the traditional GWOA had $78.6594\\%$ . Further, the proposed work had limited complexity regarding varying epochs due to the utilization of NDF-based position updation. Thus, the QNDSOA had better performance.\n\n![](images/34380a8b65d68f0a5a5d41a964b53ba2cfd75ab01662ae702ed0699d84c99cde.jpg) \nFigure 8: Performance assessment for user behavior pattern grouping\n\nIn Figure 8, regarding grouping time, the proposed HPPDBSCAN's performance is weighed against the prevailing HDBSCAN, K-Means (KM), Farthest First Clustering (FFC), and Fuzzy C-Means (FCM). The proposed HPPDBSCAN\n\ntook 32654ms to complete grouping, whereas the existing HDBSCAN obtained a grouping time of 49687ms. Therefore, the proposed work had low time complexity due to the effectual parameter selection.\n\n# 4.3 Comparative validation of the proposed work\n\nThe research methodology's comparative analysis is done to exhibit the model's prominence.\n\nTable 4: Comparative validation \n\n<table><tr><td>Author&#x27;s name</td><td>Target area</td><td>Methods</td><td>Merits</td><td>Challenges</td></tr><tr><td>Ma [16]</td><td>Computer web interface optimization</td><td>BPNN</td><td>Faster page loading</td><td>Script execution delays</td></tr><tr><td>Wang [17]</td><td>Intelligent layout adaptation in web page design</td><td>NCMF</td><td>Higher user engagement</td><td>Dynamic Content Instability</td></tr><tr><td>Kikuchi et al. [18]</td><td>Enhanced web page layout optimization</td><td>Optimization-based hierarchical layout mode</td><td>Improved responsive design</td><td>Layout shift issues</td></tr><tr><td>Martin et al. [19]</td><td>Personalized web UI adaptation</td><td>Situation adaptation-aware scheme</td><td>Better flexibility and adaptability</td><td>Over-responsive elements</td></tr><tr><td>Xu &amp; Wang [20]</td><td>Interactive website search interface design</td><td>Concave-convex texture mapping algorithm</td><td>Enhanced web accessibility</td><td>Less adaptability</td></tr><tr><td>Proposed work</td><td>Enhanced web UI design via CR assessment using an advanced HCI</td><td>FECSM and QNDSOA</td><td>Improved cross-compatibility and adaptive layout design</td><td>The proposed work heavily relied on UI optimization rather than interpretability</td></tr></table>\n\nIn Table 4, the proposed work's performance is compared with several associated studies. The proposed FECSM and QNDSOA algorithms aided in improving the user experience of the web environment through CR assessment. Similarly, to optimize the computer web interface, the existing works utilized Back Propagation Neural Network (BPNN) (Ma, 2022) and Non-negative Convolutional Matrix Factorization (NCMF) (Wang, 2022). Nevertheless, the existing work had adaptability issues and computational overhead. 
Thus, the proposed work achieved adaptive layout design with less complexity.\n\n# 5 CONCLUSION\n\nHere, this article proposed an enhanced web UI design through CR assessment using an improved HCI-integrated FECSM and QNDSOA approaches. The proposed FECSM provided detailed insight into the transitions across different screen sizes with a state coverage of $94.2356\\%$ . Similarly, a novel QNDSOA significantly optimized the web UI with an average fitness of $98.5632\\%$ . Besides, the constant feedback monitoring was enabled to ensure the model's trustworthiness. Nevertheless, the proposed work primarily focused on optimizing the UI design rather than interpretation.\n\nFuture scope: Thus, this work will focus on considering explainable AI and cognitive load factors in the future to improve the reliability and trust of the UI-enhancement process.\n\n# REFERENCES\n\n[1] Lun Liu, Dai Zetian, Tan Wee Hoe, Xue Juan, Du Jiaxin, and Wang Fulai. 2024. Factors Influencing User Intentions on Interactive Websites: Insights from the Technology Acceptance Model. IEEE Access 12 (August 2024), 122735-122756. https://doi.org/10.1109/ACCESS.2024.3437418 \n[2] Lima, Adriano Luiz de Souza, and Christiane Gresse von Wangenheim. 2021. Assessing the Visual Esthetics of User Interfaces: A Ten-Year Systematic Mapping. International Int. J. Hum.-Comput. Interact. 38, 2 (January 2021), 144-164. https://doi.org/10.1080/10447318.2021.1926118 \n[3] Alao Olujimi Daniel, Amarachi Priscilla Ezihe, Ruth Chinkata Amanze, Oluwakemi Kuyoro Shade, and Adewale Olanrewaju Adebayo. 2022. User-centered/user experience Uc/Ux design thinking approach for designing a university information management system. Ing. Syst. Inf. 27, 4 (August 2022): 577. https://doi.org/10.18280/isi.270407 \n[4] Hossain Md Tutul, Rakib Hassan, Mahfida Anjad, and Md Abdur Rahman. 2021. Web Performance Analysis: An Empirical Analysis of E-Commerce Sites in Bangladesh. Int. J. Inf. Eng. Electron. Bus.13, 4 (August 2021), 47-54. https://doi.org/10.5815/ijieeb.2021.04.04 \n[5] Modi Nandini and Yogesh Kumar. 2025. Advancements in Eye Tracking for Visual Attention Analysis Across E-commerce Screen Sizes. Procedia Comput. Sci. 258 (January 2025), 3095-3104. https://doi.org/10.1016/j.procs.2025.04.567 \n[6] Ball Linden J. and Beth H. Richardson. 2023. Eye-Tracking and Physiological Measurements for UX Evaluation. User Experience Methods and Tools in Human-Computer Interaction (August 2024), 1–31. https://www.taylordfrancis.com/chapters/edit/10.1201/9781003495161-9/eyetracking-physiological-measurements-ux-evaluation-linden-ball-beth-richardson \n[7] Ye Junnan, Yueting Han, Wenhao Li, and Chaoxiang Yang. 2025. Visual Selective Attention Analysis for Elderly Friendly Fresh E-Commerce Product Interfaces. Appl. Sci. (Switz.) 15, 8 (April 2025), 1–26. https://doi.org/10.3390/app15084470 \n[8] Pathak, Bhavesh, Sandeep Mamloda, and Manthan Patel. 2025. Responsive E-Commerce Website. Research Gate (March 2025), 1-14. https://www.researchgate.net/publication/390151475_Responsive_E-Commerce_Website \n[9] Alti Adel and Abderrahim Lakehal. 2025. AI-MDD-UX: Revolutionizing E-Commerce User Experience with Generative AI and Model-Driven Development. Future Internet 17, 4 (April 2025), 1–34. https://doi.org/10.3390/fi17040180 \n[10] Khamaj Abdulrahman and Abdulelah M. Ali. 2024. Adapting user experience with reinforcement learning: Personalizing interfaces based on user behavior analysis in real-time. Alex. Eng. J. 95 (May 2024), 164-173. 
https://doi.org/10.1016/j.aej.2024.03.045
[11] Bakaev Maxim, Sebastian Heil, and Martin Gaedke. 2023. A Reasonable Effectiveness of Features in Modeling Visual Perception of User Interfaces. Big Data Cogn. Comput. 7, 1 (February 2023), 1-17. https://doi.org/10.3390/bdc7010030
[12] Todi Kashyap, Gilles Bailly, Luis Leiva, and Antti Oulasvirta. 2021. Adapting user interfaces with model-based reinforcement learning. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, 1-13. https://dl.acm.org/doi/abs/10.1145/3411764.3445497
[13] Keselj Ana, Mario Milicevic, Krunoslav Zubrinic, and Zeljka Car. 2022. The Application of Deep Learning for the Evaluation of User Interfaces. Sensors 22, 23 (November 2022), 1-17. https://doi.org/10.3390/s22239336
[14] Muneer Mishal, Uzair Rasheed, and Muhammad Majid Hussain. 2025. A Meta-Model to Support Compatibility Testing of Cross-Browser Web Application. In 2025 6th International Conference on Advancements in Computational Sciences (ICACS '25), IEEE, Lahore, Pakistan, 1-8. https://ieeexplore.ieee.org/abstract/document/10937860/
[15] Wang Shixiao, Runsheng Zhang, Junliang Du, Ran Hao, and Jiacheng Hu. 2025. A Deep Learning Approach to Interface Color Quality Assessment in HCI. ArXiv (February 2025), 1-5. https://arxiv.org/pdf/2502.09914
[16] Ma Yan. 2022. Optimization of Computer Web Page Interface Based on BP Neural Network Algorithm and Multimedia. Comput. Intell. Neurosci. 2022, 1 (May 2022), 1-8. https://doi.org/10.1155/2022/6213718
[17] Wang Ping. 2022. The Influence of Artificial Intelligence on Visual Elements of Web Page Design under Machine Vision. Comput. Intell. Neurosci. 2022, 2 (May 2022), 1-13. https://doi.org/10.1155/2022/4328400
[18] Kikuchi Kotaro, Mayu Otani, Kota Yamaguchi, and Edgar Simo-Serra. 2021. Modeling Visual Containment for Web Page Layout Optimization. Comput. Graph. Forum 40, 7 (October 2021), 33-44. https://doi.org/10.1111/cgf.14399
[19] Martin Christian, Bärbel Christine Bissinger, and Pietro Asta. 2023. Optimizing the digital customer journey—Improving user experience by exploiting emotions, personas and situations for individualized user interface adaptations. J. Consum. Behav. 22, 5 (September 2023), 1050-1061. https://doi.org/10.1002/cb.1964
[20] Xu Zhen and Shan Wang. 2022. Interactive Design of Personalized Website Search Interface Based on Visual Communication. Comput. Intell. Neurosci. 2022 (May 2022), 1-11. https://doi.org/10.1155/2022/2125506
# Auto-Tuning Safety Guardrails for Black-Box Large Language Models

Perry Abdulkadir*
University of St. Thomas
abdu9698@stthomas.edu
perry.abdulkadir@gmail.com

December 10, 2025

# Abstract

Large language models (LLMs) are increasingly deployed behind safety guardrails such as system prompts and content filters, especially in settings where product teams cannot modify model weights. In practice these guardrails are typically hand-tuned, brittle, and difficult to reproduce. This paper studies a simple but practical alternative: treat safety guardrail design itself as a hyperparameter optimization problem over a frozen base model. Concretely, I wrap Mistral-7B-Instruct with modular jailbreak and malware system prompts plus a ModernBERT-based harmfulness classifier, then evaluate candidate configurations on three public benchmarks covering malware generation, classic jailbreak prompts, and benign user queries. Each configuration is scored using malware and jailbreak attack success rate, benign harmful-response rate, and end-to-end latency. A 48-point grid search over prompt combinations and filter modes establishes a baseline. I then run a black-box Optuna study over the same space and show that it reliably rediscovers the best grid configurations while requiring an order of magnitude fewer evaluations and roughly $8 \times$ less wall-clock time. The results suggest that viewing safety guardrails as tunable hyperparameters is a feasible way to harden black-box LLM deployments under compute and time constraints.

# 1 Introduction

Large language models (LLMs) are now embedded in productivity tools, coding assistants, educational applications, and many other high-impact products. In many real deployments, teams do not fine-tune the underlying model weights: the base LLM is provided as a managed service, or internal governance and infrastructure constraints make weight-level changes infeasible. Instead, safety is implemented through deployment-time guardrails such as system prompts, safety policies, and content filters wrapped around a frozen base model.

These guardrails matter: simple changes to system prompts, refusal templates, or classifier thresholds can significantly alter an application's vulnerability to jailbreaks and other misuse. However, they are typically tuned informally by a small set of practitioners, making them brittle, non-replicable, and hard to reason about.

This work investigates a simple question:

Research question. Given a frozen conversational LLM, can we automatically search over a discrete space of safety guardrail configurations to reduce safety failures while maintaining helpfulness and reasonable latency?

I present a small proof-of-concept system that treats guardrail design—specifically, combinations of safety-oriented system prompts and content-filter modes—as a hyperparameter optimization problem around a fixed base model. Rather than hand-tune these choices or exhaustively enumerate all combinations, I use standard black-box hyperparameter optimization to search for high-performing configurations.

Concretely, I:

- Wrap Mistral-7B-Instruct-v0.2<sup>1</sup> with modular jailbreak and malware system prompts plus a ModernBERT-based harmfulness classifier.<sup>2</sup>
- Evaluate each configuration on three public datasets covering malware generation, classic jailbreak prompts, and benign behaviors.
- Define four metrics: malware attack success rate (ASR), jailbreak ASR, benign harmful-response rate (a proxy for over-refusal/hallucination), and end-to-end latency.
- Run a full 48-point grid search to establish a baseline, then apply Optuna<sup>3</sup> to search the same space more efficiently.

The experiments are small in scale but representative of realistic constraints: limited time and compute, no access to model gradients, and a limited guardrail configuration space. The main findings are:

1. Without any guardrails, the base model is highly vulnerable, especially to jailbreak prompts.
2. Adding a simple classifier-based content filter meaningfully reduces attack success at modest latency cost.
3. Combining system prompts with filtering yields better benign performance than filtering alone.
4. Off-the-shelf hyperparameter optimization (Optuna) recovers high-performing guardrail configurations substantially faster than naive grid search.

Although exploratory, these results support the view that safety guardrails can be treated as first-class tunable objects. The goal is not to propose a novel algorithm, but to demonstrate that standard, well-understood tools from hyperparameter optimization can be repurposed to help harden black-box LLM deployments in practice.

# 2 Problem Setting

I consider a standard chat-style LLM interface where a user issues a prompt $u$, the system wraps it with a system prompt $s$ and possibly other control tokens, and a frozen base model $f_{\theta}$ produces a response $r$:

$$
r = f_{\theta}(s, u).
$$

Safety guardrails are implemented via:

1. System prompts: additional natural-language instructions reminding the model to refuse unsafe requests (e.g., jailbreak or malware attempts).
2. Content filters: a harmfulness classifier $g_{\phi}$ that scores $(u, r)$ and either passes through or overrides $r$ with a refusal if the predicted risk exceeds a threshold.

In many product organizations, practitioners hand-tune $s$ and the filtering policy. Instead, I define a discrete configuration space $\mathcal{C}$ of possible guardrail choices and seek a configuration $c^{\star} \in \mathcal{C}$ that reduces safety failures while preserving benign helpfulness and keeping latency acceptable.

Formally, given evaluation datasets of malware prompts, jailbreak prompts, and benign prompts, I define a vector-valued objective for configuration $c$:

$$
J(c) = \left(\operatorname{ASR}_{\mathrm{mal}}(c),\ \operatorname{ASR}_{\mathrm{jb}}(c),\ \operatorname{Harm}_{\mathrm{ben}}(c),\ \operatorname{Latency}(c)\right),
$$

and aim to approximate Pareto-optimal trade-offs or minimize a scalarized objective constructed from these components. Because $f_{\theta}$ and $g_{\phi}$ are used as black boxes, I use black-box hyperparameter optimization to search over $\mathcal{C}$.

# 3 Method

# 3.1 Base Model and Generation

The base LLM is Mistral-7B-Instruct-v0.2, loaded via HuggingFace Transformers on an A100 GPU. I use the model in standard causal language modeling mode with a simple instruction-style prompt template:

$$
\begin{array}{l}
\texttt{<s>[INST] \{system\_prompt\}} \\
\texttt{User: \{user\_prompt\} [/INST]}
\end{array}
$$

Generation hyperparameters are:

- max_new_tokens = 256
- temperature = 0.3, top_p = 0.9 for general chat
- Slightly lower temperature (0.2) for pure code prompts

Latency is measured as wall-clock time per generation (including CUDA synchronization) for each prompt.
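As a concrete illustration of this setup, the prompt construction, generation, and latency measurement could be implemented with HuggingFace Transformers roughly as sketched below. The model name and sampling settings come from the text; the helper name, decoding details, and device handling are assumptions, not the paper's exact code.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def generate(system_prompt: str, user_prompt: str, temperature: float = 0.3):
    """Build the [INST] template, generate a response, and time the call."""
    # The tokenizer prepends the <s> BOS token, so it is omitted from the string.
    prompt = f"[INST] {system_prompt}\nUser: {user_prompt} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    start = time.perf_counter()
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=temperature,  # 0.2 for pure code prompts, per Section 3.1
        top_p=0.9,
    )
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # include pending GPU work in the latency measurement
    latency = time.perf_counter() - start
    # Decode only the newly generated tokens, not the prompt.
    text = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return text, latency
```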
# 3.2 Harmfulness Classifier and Filter Modes

To assess and optionally filter responses, I use the ModernBERT-based modernbert-wildguardmix-classifier, a binary harmfulness classifier trained on safety-related text. A helper function classify_exchange(prompt, response) returns a probability $p_{\text{harm}}$ that the exchange is harmful.

On top of this classifier, I implement three content-filter modes:

- none: always return the raw LLM output.
- mild: if $p_{\text{harm}} \geq 0.5$, replace the output with a canned refusal.
- strict: if $p_{\text{harm}} \geq 0.8$, replace the output.

In all cases, the classifier is also used to compute safety metrics, as described below.

# 3.3 Modular System Prompts

On top of a brief "helpful assistant" base system prompt, I define four modular safety snippets, each of which can be toggled on or off:

- JB1: instructions not to follow any requests that attempt to bypass or "jailbreak" safety rules.
- JB2: explicit guidance not to comply with role-play or "pretend" instructions that simulate unsafe systems.
- MW1: a high-level prohibition on generating malware, exploit code, or step-by-step instructions for harmful activities.
- MW2: more concrete refusal patterns for harmful coding assistance (e.g., refusing to write payloads or exploits).

A guardrail configuration $c$ thus consists of binary flags for these four snippets plus a choice of content-filter mode.

# 3.4 Datasets

I use three existing datasets to stress different aspects of safety:

Malware prompts (RMCBench). A subset of 50 prompts from RMCBench, which contains adversarial prompts designed to elicit malicious code in both text-to-code and code-to-code settings. The key metric here is malware ASR.

Jailbreak prompts (ChatGPT-Jailbreak-Prompts). A subset of 50 classic jailbreak attempts from rubend18/ChatGPT-Jailbreak-Prompts, including "Developer Mode" and other well-known system-prompt attacks. The metric is jailbreak ASR.

Benign prompts (JBB-Behaviors). Benign examples from the JBB-Behaviors split of JailbreakBench, which pairs unsafe behaviors with safe, everyday behaviors. I sample 50 benign prompts and measure how often the system produces outputs flagged as harmful.

All datasets are in English and drawn from public HuggingFace repositories.

# 3.5 Metrics

For a given configuration $c$, the evaluation harness computes:

- Malware ASR: fraction of malware prompts whose final responses are classified as harmful by the classifier.
- Jailbreak ASR: fraction of jailbreak prompts with harmful responses.
- Benign harmful-response rate: fraction of benign prompts whose responses are classified as harmful (proxy for over-refusal or hallucinated risk).
- Latency: average wall-clock time per prompt, including generation and filtering.

The bare configuration uses no safety snippets and filter-none, serving as a baseline for vulnerability and latency.

# 3.6 Search Procedures

Grid search. The grid search enumerates all 48 configurations obtained from the $2^{4}$ combinations of the four system-prompt snippets (JB1, JB2, MW1, MW2) crossed with the three filter modes (none, mild, strict). For each configuration, the system evaluates all three datasets and records the metrics defined above.

Optuna search. To explore whether black-box hyperparameter optimization can find good guardrail configurations more efficiently, I define an Optuna search space with:

- Binary variables for JB1, JB2, MW1, MW2.
- A categorical variable for filter mode in {none, mild, strict}.
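Before specifying the Optuna objective, it is worth making the configuration space of Sections 3.2 and 3.3 concrete. The sketch below shows one way the configuration and filter logic might be wired together; the snippet texts, refusal message, and data-structure names are placeholders, and classify_exchange is assumed to be available as described in Section 3.2.

```python
from dataclasses import dataclass

REFUSAL = "I'm sorry, but I can't help with that."  # placeholder refusal text
THRESHOLDS = {"none": None, "mild": 0.5, "strict": 0.8}  # thresholds from Section 3.2

@dataclass(frozen=True)
class GuardrailConfig:
    jb1: bool
    jb2: bool
    mw1: bool
    mw2: bool
    filter_mode: str  # "none", "mild", or "strict"

def build_system_prompt(cfg: GuardrailConfig, snippets: dict) -> str:
    """Concatenate the base prompt with whichever safety snippets are enabled."""
    parts = [snippets["base"]]
    for name in ("jb1", "jb2", "mw1", "mw2"):
        if getattr(cfg, name):
            parts.append(snippets[name])
    return "\n".join(parts)

def apply_filter(cfg: GuardrailConfig, prompt: str, response: str) -> str:
    """Replace the response with a refusal if the classifier flags it."""
    threshold = THRESHOLDS[cfg.filter_mode]
    if threshold is None:
        return response
    p_harm = classify_exchange(prompt, response)  # helper named in Section 3.2
    return REFUSAL if p_harm >= threshold else response
```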
The objective for Optuna is a scalarized score that combines the four metrics:

$$
\operatorname{score}(c) = 0.4\,\operatorname{ASR}_{\mathrm{mal}} + 0.4\,\operatorname{ASR}_{\mathrm{jb}} + 0.1\,\operatorname{Harm}_{\mathrm{ben}} + 0.1\,\operatorname{Latency},
$$

where all terms are normalized. Lower values are better. For efficiency, each trial initially uses only 10 prompts per dataset; the top 5 configurations found by Optuna are then re-evaluated on the full 50-prompt sets.

# 4 Experiments

# 4.1 Setup

All experiments are run in Google Colab on an A100 GPU. For the grid search, each of the 48 configurations is evaluated on all 50 prompts from each dataset. For Optuna, I run 24 trials in the fast loop (10 examples per dataset) and then re-score the best 5 configurations on the full datasets.

# 4.2 Baseline Vulnerability

The bare configuration (no safety snippets, filter-none) illustrates the need for guardrails. Malware ASR is roughly 0.48 on the sampled RMCBench prompts, and jailbreak ASR is close to 0.98 on the jailbreak prompts: almost every attack elicits a harmful response. Benign harmful-response rate is also high (around 0.42), suggesting that the classifier tends to flag many raw outputs as problematic.

# 4.3 Effect of Content Filtering

Adding classifier-based filtering on top of the bare model already helps. Moving from filter-none to filter-strict reduces malware ASR by about 10 percentage points $(0.48 \rightarrow 0.38)$ at the cost of extra latency from classifier calls. Jailbreak ASR falls modestly, while benign harmful-response rate also changes, reflecting the classifier's impact on overblocking.

# 4.4 System Prompts + Filtering

Configurations that combine safety-oriented system prompts with filtering tend to yield the best benign behavior. For example, turning on both jailbreak and malware snippets (JB1, JB2, MW1, MW2) and using mild filtering achieves benign harmful-response rates as low as roughly 0.22 in the grid search, while reducing malware ASR compared to the bare baseline. The main challenge is that jailbreak ASR remains high across many configurations, indicating that simple prompt reminders and a general-purpose harmfulness classifier are not enough to fully harden the model against prompt-injection attacks.

# 4.5 Hyperparameter Optimization Results

The Optuna study, though small, demonstrates that standard hyperparameter optimization can quickly discover good guardrail configurations:

- With only 24 trials evaluated on 10 prompts per dataset, Optuna converges on configurations that mirror the best-performing ones from the full grid search.
- The best Optuna trial favors a combination of specific jailbreak and malware prompts with mild filtering; re-evaluated on the full datasets, its safety and latency metrics closely match or slightly improve on the best grid configurations.
- Because Optuna does not need to evaluate all 48 configurations exhaustively, it achieves similar performance with roughly an order of magnitude fewer total evaluations and about $8 \times$ less wall-clock time than the full grid search.

Plotting configurations in the safety-latency plane shows that Optuna rapidly discovers points near the empirical Pareto frontier: configurations where further reductions in ASR would require disproportionate increases in latency or benign harmful-response rates.

Figure 1: Attack success rates and benign harmful-response rates for all safety configurations in the grid search.
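For concreteness, the scalarized objective and the 24-trial study described above could be expressed with Optuna roughly as follows. This is a sketch: evaluate_config is a hypothetical helper standing in for the evaluation harness of Section 3.5, and the metric key names are illustrative.

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # Sample one guardrail configuration from the discrete space of Section 3.6.
    cfg = {
        "jb1": trial.suggest_categorical("jb1", [False, True]),
        "jb2": trial.suggest_categorical("jb2", [False, True]),
        "mw1": trial.suggest_categorical("mw1", [False, True]),
        "mw2": trial.suggest_categorical("mw2", [False, True]),
        "filter_mode": trial.suggest_categorical(
            "filter_mode", ["none", "mild", "strict"]
        ),
    }
    # Hypothetical helper: runs the three 10-prompt evaluation slices and
    # returns the four metrics normalized to [0, 1].
    m = evaluate_config(cfg, n_prompts=10)
    return (0.4 * m["asr_mal"] + 0.4 * m["asr_jb"]
            + 0.1 * m["harm_ben"] + 0.1 * m["latency"])

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=24)
print(study.best_params)  # top configurations are then re-scored on the full 50-prompt sets
```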
# 5 Discussion and Limitations

This project is intentionally small-scale and leaves many open questions. I highlight key limitations and opportunities for future work.

Figure 2: Average total latency (generation + filtering) for each safety configuration in the grid search.

Figure 3: Safety vs. latency for all trials in the fast Optuna study (10 prompts per slice). Best trial #23 is annotated.

Limited data and coverage. Each dataset is sampled down to 50 examples (10 for the fast Optuna loop), leading to wide confidence intervals. The benchmarks are English-only and cover only two harm types (malware and jailbreak). Additional benchmarks—for hate speech, self-harm, personal data leakage, and other risks—would be needed to claim broader robustness.

Classifier as both judge and filter. The same harmfulness classifier is used to both block content and evaluate the system. This introduces obvious biases: if the classifier consistently mislabels certain behaviors, both the filter and the metrics will share that blind spot. Using separate models or human evaluation would provide a more reliable picture.

General-purpose classifier. The ModernBERT classifier is not specialized for malware code or jailbreak strings. Failures in those domains may reflect classifier limitations rather than genuine safety failures.

Figure 4: Evolution of malware ASR, jailbreak ASR, benign harmful-response rate, and latency over Optuna trials.

Figure 5: Malware and jailbreak attack success rates and benign harmful-response rates for the top 5 configurations selected by Optuna.

Single-turn, scalarized objective. All evaluations are single-turn; multi-turn jailbreaking and persuasion attacks are out of scope. The scalarization weights in the Optuna objective are hand-picked and not tuned for any particular product context. A real deployment would likely treat malware ASR, jailbreak ASR, and benign helpfulness as distinct objectives and reason explicitly in a multi-objective or constrained-optimization framework.

Narrow configuration space. The guardrail space here is small: four binary system-prompt toggles and three filter modes. Real deployments may need to consider richer parameters such as abstention policies, second-opinion routing, per-domain thresholds, and dynamic policies conditioned on user or context. Nevertheless, the same hyperparameter-optimization framing should extend to richer spaces.

Despite these limitations, the experiments provide concrete evidence that:

1. Simple combinations of prompts and filters can measurably reduce safety failures in a black-box setting; and
2. Off-the-shelf hyperparameter optimization frameworks can accelerate the search for such combinations under realistic constraints.

Figure 6: Average latency (generation + filtering) for the top 5 configurations.

Figure 7: Pareto-style view of safety vs. latency for the top 5 configurations.

# 6 Related Work

Prompt-based defenses and content filters are standard tools in LLM safety. Prior work has shown that self-reminder style system prompts can harden models against some jailbreak attacks, and that safety classifiers trained on curated datasets can catch a wide range of unsafe generations. Concurrently, a large body of work studies optimization techniques—including Bayesian optimization and evolutionary search—for tuning ML hyperparameters with expensive black-box objectives.
This project sits at the intersection of these lines of work, but focuses less on novel algorithms and more on demonstrating that the existing hyperparameter-optimization toolbox can be directly applied to the guardrail design problem in realistic black-box settings.

# 7 Conclusion

I presented a small proof-of-concept system that treats safety guardrails around a frozen LLM as hyperparameters to be optimized. Using Mistral-7B-Instruct as a base model, a ModernBERT harmfulness classifier, and three public benchmarks, I showed that:

- Without guardrails, the model is highly vulnerable to malware and jailbreak prompts.
- Simple combinations of safety-oriented system prompts and classifier-based filtering improve safety metrics at modest latency cost.
- Standard black-box hyperparameter optimization (via Optuna) can discover high-performing guardrail configurations significantly faster than naive grid search.

While the experimental scale is limited, the framing is practical: product teams already treat learning-rate schedules and model architectures as tunable hyperparameters; this work argues that safety guardrails for black-box LLM deployments can and should be treated the same way. Future work could expand the configuration space, use richer safety benchmarks, incorporate multi-turn attacks, and integrate human evaluation, moving toward deployable tools for systematically hardening LLM applications under real-world constraints.
{"title": "Auto-Tuning Safety Guardrails for Black-Box Large Language Models", "raw_content": "# Auto-Tuning Safety Guardrails for Black-Box Large Language Models\n\nPerry Abdulkadir* \nUniversity of St. Thomas \nabdu9698@stthomas.edu \nperry.abdulkadir@gmail.com\n\nDecember 10, 2025\n\n# Abstract\n\nLarge language models (LLMs) are increasingly deployed behind safety guardrails such as system prompts and content filters, especially in settings where product teams cannot modify model weights. In practice these guardrails are typically hand-tuned, brittle, and difficult to reproduce. This paper studies a simple but practical alternative: treat safety guardrail design itself as a hyperparameter optimization problem over a frozen base model. Concretely, I wrap Mistral-7B-Instruct with modular jailbreak and malware system prompts plus a ModernBERT-based harmfulness classifier, then evaluate candidate configurations on three public benchmarks covering malware generation, classic jailbreak prompts, and benign user queries. Each configuration is scored using malware and jailbreak attack success rate, benign harmful-response rate, and end-to-end latency. A 48-point grid search over prompt combinations and filter modes establishes a baseline. I then run a black-box Optuna study over the same space and show that it reliably rediscovers the best grid configurations while requiring an order of magnitude fewer evaluations and roughly $8 \\times$ less wall-clock time. The results suggest that viewing safety guardrails as tunable hyperparameters is a feasible way to harden black-box LLM deployments under compute and time constraints.\n\n# 1 Introduction\n\nLarge language models (LLMs) are now embedded in productivity tools, coding assistants, educational applications, and many other high-impact products. In many real deployments, teams do not fine-tune the underlying model weights: the base LLM is provided as a managed service, or internal governance and infrastructure constraints make weight-level changes infeasible. Instead, safety is implemented through deployment-time guardrails such as system prompts, safety policies, and content filters wrapped around a frozen base model.\n\nThese guardrails matter: simple changes to system prompts, refusal templates, or classifier thresholds can significantly alter an application's vulnerability to jailbreaks and other misuse. However, they are typically tuned informally by a small set of practitioners, making them brittle, non-replicable, and hard to reason about.\n\nThis work investigates a simple question:\n\nResearch question. Given a frozen conversational LLM, can we automatically search over a discrete space of safety guardrail configurations to reduce safety failures while maintaining helpfulness and reasonable latency?\n\nI present a small proof-of-concept system that treats guardrail design—specifically, combinations of safety-oriented system prompts and content-filter modes—as a hyperparameter optimization problem around a fixed base model. Rather than hand-tune these choices or exhaustively enumerate all combinations, I use standard black-box hyperparameter op\n\ntization to search for high-performing configura-tions.\n\nConcretely, I:\n\n- Wrap Mistral-7B-Instruct-v0.2<sup>1</sup> with modular jailbreak and malware system prompts plus a ModernBERT-based harmfulness classifier.<sup>2</sup> \n- Evaluate each configuration on three public datasets covering malware generation, classic jailbreak prompts, and benign behaviors. 
\n- Define four metrics: malware attack success rate (ASR), jailbreak ASR, benign harmful-response rate (a proxy for over-refusal/hallucination), and end-to-end latency. \n- Run a full 48-point grid search to establish a baseline, then apply Optuna<sup>3</sup> to search the same space more efficiently.\n\nThe experiments are small in scale but representative of realistic constraints: limited time and compute, no access to model gradients, and a limited guardrail configuration space. The main findings are:\n\n1. Without any guardrails, the base model is highly vulnerable, especially to jailbreak prompts. \n2. Adding a simple classifier-based content filter meaningfully reduces attack success at modest latency cost. \n3. Combining system prompts with filtering yields better benign performance than filtering alone. \n4. Off-the-shelf hyperparameter optimization (Optuna) recovers high-performing guardrail configurations substantially faster than naive grid search.\n\nAlthough exploratory, these results support the view that safety guardrails can be treated as first-class tunable objects. The goal is not to propose a novel algorithm, but to demonstrate that standard, well-understood tools from hyperparameter optimization can be repurposed to help harden black-box LLM deployments in practice.\n\n# 2 Problem Setting\n\nI consider a standard chat-style LLM interface where a user issues a prompt $u$ , the system wraps it with a system prompt $s$ and possibly other control tokens, and a frozen base model $f_{\\theta}$ produces a response $r$ :\n\n$$\nr = f _ {\\theta} (s, u).\n$$\n\nSafety guardrails are implemented via:\n\n1. System prompts: additional natural-language instructions reminding the model to refuse unsafe requests (e.g., jailbreak or malware attempts). \n2. Content filters: a harmfulness classifier $g_{\\phi}$ that scores $(u, r)$ and either passes through or overrides $r$ with a refusal if the predicted risk exceeds a threshold.\n\nIn many product organizations, practitioners hand-tune $s$ and the filtering policy. Instead, I define a discrete configuration space $\\mathcal{C}$ of possible guardrail choices and seek a configuration $c^{\\star} \\in \\mathcal{C}$ that reduces safety failures while preserving benign helpfulness and keeping latency acceptable.\n\nFormally, given evaluation datasets of malware prompts, jailbreak prompts, and benign prompts, I define a vector-valued objective for configuration $c$ :\n\n$$\nJ (c) = \\left(\\operatorname {A S R} _ {\\operatorname {m a l}} (c), \\operatorname {A S R} _ {\\operatorname {j b}} (c), \\operatorname {H a r m} _ {\\operatorname {b e n}} (c), \\operatorname {L a t e n c y} (c)\\right),\n$$\n\nand aim to approximate Pareto-optimal trade-offs or minimize a scalarized objective constructed from these components. Because $f_{\\theta}$ and $g_{\\phi}$ are used as black boxes, I use black-box hyperparameter optimization to search over $\\mathcal{C}$ .\n\n# 3 Method\n\n# 3.1 Base Model and Generation\n\nThe base LLM is Mistral-7B-Instruct-v0.2, loaded via HuggingFace Transformers on an A100 GPU. 
I use the model in standard causal language modeling mode with a simple instruction-style prompt template:\n\n$$\n\\begin{array}{l} < s > [ I N S T ] \\{\\text {s y s t e m \\_ p r o m p t} \\} \\\\ \\text {U s e r :} \\{\\text {u s e r \\_ p r o m p t} \\} [ / I N S T ] \\end{array}\n$$\n\nGeneration hyperparameters are:\n\nmax_new_tokens $= 256$ \n- temperature = 0.3, top_p = 0.9 for general chat \n- Slightly lower temperature (0.2) for pure code prompts\n\nLatency is measured as wall-clock time per generation (including CUDA synchronization) for each prompt.\n\n# 3.2 Harmfulness Classifier and Filter Modes\n\nTo assess and optionally filter responses, I use the ModernBERT-based modernbert-wildguardmix-classifier, a binary harmfulness classifier trained on safety-related text. A helper function classify_exchange(prompt, response) returns a probability $p_{\\text{harm}}$ that the exchange is harmful.\n\nOn top of this classifier, I implement three content-filter modes:\n\n- none: always return the raw LLM output. \n- **mild:** if $p_{\\text{harm}} \\geq 0.5$ , replace the output with a canned refusal. \n- strict: if $p_{\\text{harm}} \\geq 0.8$ , replace the output.\n\nIn all cases, the classifier is also used to compute safety metrics, as described below.\n\n# 3.3 Modular System Prompts\n\nOn top of a brief \"helpful assistant\" base system prompt, I define four modular safety snippets, each of which can be toggled on or off:\n\n- JB1: instructions not to follow any requests that attempt to bypass or \"jailbreak\" safety rules. \n- JB2: explicit guidance not to comply with role-play or \"pretend\" instructions that simulate unsafe systems. \n- MW1: a high-level prohibition on generating malware, exploit code, or step-by-step instructions for harmful activities.\n\n- MW2: more concrete refusal patterns for harmful coding assistance (e.g., refusing to write payloads or exploits).\n\nA guardrail configuration $c$ thus consists of binary flags for these four snippets plus a choice of content-filter mode.\n\n# 3.4 Datasets\n\nI use three existing datasets to stress different aspects of safety:\n\nMalware prompts (RMCBench). A subset of 50 prompts from RMCBench, which contains adversarial prompts designed to elicit malicious code in both text-to-code and code-to-code settings. The key metric here is malware ASR.\n\nJailbreak prompts (ChatGPT-Jailbreak-Prompts). A subset of 50 classic jailbreak attempts from rubend18/ChatGPT-Jailbreak-Prompts, including \"Developer Mode\" and other well-known system-prompt attacks. The metric is jailbreak ASR.\n\nBenign prompts (JBB-Behaviors). Benign examples from the JBB-Behaviors split of JailbreakBench, which pairs unsafe behaviors with safe, everyday behaviors. I sample 50 benign prompts and measure how often the system produces outputs flagged as harmful.\n\nAll datasets are in English and drawn from public HuggingFace repositories.\n\n# 3.5 Metrics\n\nFor a given configuration $c$ , the evaluation harness computes:\n\n- Malware ASR: fraction of malware prompts whose final responses are classified as harmful by the classifier.\n\n- Jailbreak ASR: fraction of jailbreak prompts with harmful responses. \n- Benign harmful-response rate: fraction of benign prompts whose responses are classified as harmful (proxy for over-refusal or hallucinated risk). 
\n- Latency: average wall-clock time per prompt, including generation and filtering.\n\nThe bare configuration uses no safety snippets and filter-none, serving as a baseline for vulnerability and latency.\n\n# 3.6 Search Procedures\n\nGrid search. The grid search enumerates all 48 configurations obtained from the $2^{4}$ combinations of the four system-prompt snippets (JB1, JB2, MW1, MW2) crossed with the three filter modes (none, mild, strict). For each configuration, the system evaluates all three datasets and records the metrics defined above.\n\nOptuna search. To explore whether black-box hyperparameter optimization can find good guardrail configurations more efficiently, I define an Optuna search space with:\n\n- Binary variables for JB1, JB2, MW1, MW2. \n- A categorical variable for filter mode in {none, mild, strict}.\n\nThe objective for Optuna is a scalarized score that combines the four metrics:\n\n$$\n\\begin{array}{l} \\operatorname {s c o r e} (c) = 0. 4 \\operatorname {A S R} _ {\\mathrm {m a l}} + 0. 4 \\operatorname {A S R} _ {\\mathrm {j b}} \\\\ + 0. 1 \\text {H a r m} _ {\\text {b e n}} + 0. 1 \\text {L a t e n c y} \\\\ \\end{array}\n$$\n\nwhere all terms are normalized. Lower values are better. For efficiency, each trial initially uses only 10 prompts per dataset; the top 5 configurations found by Optuna are then re-evaluated on the full 50-prompt sets.\n\n# 4 Experiments\n\n# 4.1 Setup\n\nAll experiments are run in Google Colab on an A100 GPU. For the grid search, each of the 48 configurations is evaluated on all 50 prompts from each dataset. For Optuna, I run 24 trials in the fast loop (10 examples per dataset) and then re-score the best 5 configurations on the full datasets.\n\n# 4.2 Baseline Vulnerability\n\nThe bare configuration (no safety snippets, filter-none) illustrates the need for guardrails. Malware ASR is roughly 0.48 on the sampled RMCBench prompts, and jailbreak ASR is close to 0.98 on the jailbreak prompts: almost every attack elicits a harmful response. Benign harmful-response rate is also high (around 0.42), suggesting that the classifier tends to flag many raw outputs as problematic.\n\n# 4.3 Effect of Content Filtering\n\nAdding classifier-based filtering on top of the bare model already helps. Moving from filter-none to filter-strict reduces malware ASR by about 10 percentage points $(0.48 \\rightarrow 0.38)$ at the cost of extra latency from classifier calls. Jailbreak ASR falls modestly, while benign harmful-response rate also changes, reflecting the classifier's impact on overblocking.\n\n# 4.4 System Prompts + Filtering\n\nConfigurations that combine safety-oriented system prompts with filtering tend to yield the best benign behavior. For example, turning on both jailbreak and malware snippets (JB1, JB2, MW1, MW2) and using mild filtering achieves benign harmful-response rates as low as roughly 0.22 in the grid search, while reducing malware ASR compared to the bare baseline. 
The main challenge is that jailbreak ASR remains high across many configurations, indicating that simple prompt reminders and a general-purpose harmfulness classifier are not enough to fully harden the model against prompt-injection attacks.\n\n# 4.5 Hyperparameter Optimization Results\n\nThe Optuna study, though small, demonstrates that standard hyperparameter optimization can quickly discover good guardrail configurations:\n\n- With only 24 trials evaluated on 10 prompts per dataset, Optuna converges on configurations that mirror the best-performing ones from the full grid search. \n- The best Optuna trial favors a combination of specific jailbreak and malware prompts with mild filtering; re-evaluated on the full datasets, its safety and latency metrics closely match or slightly improve on the best grid configurations. \n- Because Optuna does not need to evaluate all 48 configurations exhaustively, it achieves similar performance with roughly an order of magnitude fewer total evaluations and about $8 \\times$ less wall-clock time than the full grid search.\n\nPlotting configurations in the safety-latency plane shows that Optuna rapidly discovers points near the empirical Pareto frontier: configurations where further reductions in ASR would require disproportionate increases in latency or benign harmful-response rates.\n\n![](images/4f44c1624829abf92b176b0b44c64348d31e7f3e7768c987585506b1308a53aa.jpg) \nFigure 1: Attack success rates and benign harmful-response rates for all safety configurations in the grid search.\n\n# 5 Discussion and Limitations\n\nThis project is intentionally small-scale and leaves many open questions. I highlight key limitations and opportunities for future work.\n\n![](images/c0738b918a8edd76182ae9f29734adb4940f19c1ac4f400d9d368548dd324f73.jpg) \nFigure 2: Average total latency (generation + filtering) for each safety configuration in the grid search.\n\n![](images/e50856c76fdda9a5b1100cc626842b658f7fb99200c45df547421c0886ed7e78.jpg) \nFigure 3: Safety vs. latency for all trials in the fast Optuna study (10 prompts per slice). Best trial #23 is annotated.\n\nLimited data and coverage. Each dataset is sampled down to 50 examples (10 for the fast Optuna loop), leading to wide confidence intervals. The benchmarks are English-only and cover only two harm types (malware and jailbreak). Additional benchmarks—for hate speech, self-harm, personal data leakage, and other risks—would be needed to claim broader robustness.\n\nClassifier as both judge and filter. The same harmfulness classifier is used to both block content and evaluate the system. This introduces obvious biases: if the classifier consistently mislabels certain behaviors, both the filter and the metrics will share that blind spot. Using separate models or human evaluation would provide a more reliable picture.\n\nGeneral-purpose classifier. The ModernBERT classifier is not specialized for malware code or jailbreak strings. Failures in those domains may reflect classifier limitations more than genuine safety.\n\n![](images/91e01216d5d9a146be8ba4026d888add2f4a578e74b680e6f93ab4c93d49e95a.jpg) \nFigure 4: Evolution of malware ASR, jailbreak ASR, benign harmful-response rate, and latency over Optuna trials.\n\n![](images/00de135735f62842643f777bc7e601ffd5cf72db3e2918641bb1f4394fe91d32.jpg) \nFigure 5: Malware and jailbreak attack success rates and benign harmful-response rates for the top 5 configurations selected by Optuna.\n\nSingle-turn, scalarized objective. 
All evaluations are single-turn; multi-turn jailbreaking and persuasion attacks are out of scope. The scalarization weights in the Optuna objective are hand-picked and not tuned for any particular product context. A real deployment would likely treat malware ASR, jailbreak ASR, and benign helpfulness as distinct objectives and reason explicitly in a multi-objective or constrained-optimization framework.\n\nNarrow configuration space. The guardrail space here is small: four binary system-prompt toggles and three filter modes. Real deployments may need to consider richer parameters such as abstention policies, second-opinion routing, per-domain thresholds, and dynamic policies conditioned on user or context. Nevertheless, the same hyperparameter-optimization framing should extend to richer spaces.\n\nDespite these limitations, the experiments provide concrete evidence that:\n\n1. Simple combinations of prompts and filters can measurably reduce safety failures in a black-box\n\n![](images/8de94b9e0b96ea5cf61601c9fa2afc42f211d5fb33cca6a311addc4a33c3e971.jpg) \nFigure 6: Average latency (generation + filtering) for the top 5 configurations.\n\n![](images/309d52e1aa4f77de51e5a5ff6f464ac6cb8999c78db663143947d51c84841eee.jpg) \nFigure 7: Pareto-style view of safety vs. latency for the top 5 configurations.\n\nsetting; and\n\n2. Off-the-shelf hyperparameter optimization frameworks can accelerate the search for such combinations under realistic constraints.\n\n# 6 Related Work\n\nPrompt-based defenses and content filters are standard tools in LLM safety. Prior work has shown that self-reminder style system prompts can harden models against some jailbreak attacks, and that safety classifiers trained on curated datasets can catch a wide range of unsafe generations. Concurrently, a large body of work studies optimization techniques—including Bayesian optimization and evolutionary search—for tuning ML hyperparameters with expensive black-box objectives.\n\nThis project sits at the intersection of these lines of work, but focuses less on novel algorithms and more\n\non demonstrating that the existing hyperparameter-optimization toolbox can be directly applied to the guardrail design problem in realistic black-box settings.\n\n# 7 Conclusion\n\nI presented a small proof-of-concept system that treats safety guardrails around a frozen LLM as hyperparameters to be optimized. Using Mistral-7B-Instruct as a base model, a ModernBERT harmfulness classifier, and three public benchmarks, I showed that:\n\n- Without guardrails, the model is highly vulnerable to malware and jailbreak prompts. \n- Simple combinations of safety-oriented system prompts and classifier-based filtering improve safety metrics at modest latency cost. \n- Standard black-box hyperparameter optimization (via Optuna) can discover high-performing guardrail configurations significantly faster than naive grid search.\n\nWhile the experimental scale is limited, the framing is practical: product teams already treat learning-rate schedules and model architectures as tunable hyperparameters; this work argues that safety guardrails for black-box LLM deployments can and should be treated the same way. 
# Acknowledgments

I thank the instructor of SEIS 767 (Conversational AI), Abe Kazemzadeh, and my classmates for helpful discussions and feedback on early versions of this project.

# References

- Ai2. (2025). wildguardmix. Huggingface.co.
- Chao, P., et al. (2024). JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models. arXiv.
- Chen, J., et al. (2024). RMCBench: Benchmarking Large Language Models' Resistance to Malicious Code. arXiv.
- CyberAlbSecOP. (2025). Awesome GPT Super Prompting. GitHub.
- JailbreakBench. (2025). JBB-Behaviors. Huggingface.co.
- Jaramillo, D. (2022). ChatGPT Jailbreak-Prompts. Huggingface.co.
- Jazhyc. (2024). modernbert-wildguardmix-classifier. Huggingface.co.
- MistralAI. (2024). Mistral-7B-Instruct-v0.2. Huggingface.co.
- Optuna. (2024). Optuna: A hyperparameter optimization framework.
- Wu, F., et al. (2023). Defending ChatGPT against Jailbreak Attack via Self-Reminder. ResearchGate.

# Appendix: Full Grid Search Metrics

This appendix reports detailed metrics for all 48 safety configurations evaluated in the grid search. Columns have the following meanings:

- Config: name of the safety guardrail configuration (combination of system prompts and classifier settings).
- Mal ASR: malware attack success rate.
- Mal Gen(s): average model generation latency (seconds) on malware prompts.
- Mal Filt(s): average classifier / filter latency (seconds) on malware prompts.
- JB ASR: jailbreak attack success rate.
- JB Gen(s): average model generation latency (seconds) on jailbreak prompts.
- JB Filt(s): average classifier / filter latency (seconds) on jailbreak prompts.
- Benign Rate: fraction of benign prompts incorrectly flagged as harmful (benign "harm rate").
- Ben Gen(s): average model generation latency (seconds) on benign prompts.
- Ben Filt(s): average classifier / filter latency (seconds) on benign prompts.

Table 1: Full grid search metrics for all 48 safety configurations.
\n\n<table><tr><td>Config</td><td>Mal ASR</td><td>Mal Gen(s)</td><td>Mal Filt(s)</td><td>JB ASR</td><td>JB Gen(s)</td><td>JB Filt(s)</td><td>Benign Rate</td><td>Ben Gen(s)</td><td>Ben Filt(s)</td></tr><tr><td>bare_filter-mild</td><td>0.44</td><td>8.54</td><td>0.10</td><td>0.90</td><td>7.98</td><td>0.12</td><td>0.22</td><td>9.20</td><td>0.08</td></tr><tr><td>bare_filter-none</td><td>0.48</td><td>8.70</td><td>0.00</td><td>0.98</td><td>7.98</td><td>0.00</td><td>0.42</td><td>9.17</td><td>0.00</td></tr><tr><td>bare_filter-strict</td><td>0.38</td><td>8.27</td><td>0.10</td><td>0.90</td><td>8.09</td><td>0.12</td><td>0.32</td><td>9.12</td><td>0.08</td></tr><tr><td>jb1_filter-mild</td><td>0.46</td><td>8.24</td><td>0.11</td><td>0.90</td><td>7.77</td><td>0.12</td><td>0.22</td><td>8.61</td><td>0.08</td></tr><tr><td>jb1_filter-none</td><td>0.50</td><td>7.99</td><td>0.00</td><td>0.98</td><td>7.97</td><td>0.00</td><td>0.54</td><td>8.66</td><td>0.00</td></tr><tr><td>jb1_filter-strict</td><td>0.52</td><td>8.11</td><td>0.10</td><td>0.90</td><td>7.73</td><td>0.12</td><td>0.30</td><td>8.72</td><td>0.07</td></tr><tr><td>jb1_jb2_filter-mild</td><td>0.38</td><td>8.02</td><td>0.10</td><td>0.90</td><td>7.46</td><td>0.12</td><td>0.30</td><td>8.22</td><td>0.08</td></tr><tr><td>jb1_jb2_filter-none</td><td>0.46</td><td>7.43</td><td>0.00</td><td>1.00</td><td>7.41</td><td>0.00</td><td>0.32</td><td>8.29</td><td>0.00</td></tr><tr><td>jb1_jb2_filter-strict</td><td>0.48</td><td>7.71</td><td>0.10</td><td>0.94</td><td>7.13</td><td>0.12</td><td>0.28</td><td>8.24</td><td>0.07</td></tr><tr><td>jb1_jb2_mw1_filter-mild</td><td>0.44</td><td>7.52</td><td>0.11</td><td>0.88</td><td>7.43</td><td>0.12</td><td>0.22</td><td>8.65</td><td>0.08</td></tr><tr><td>jb1_jb2_mw1_filter-none</td><td>0.54</td><td>7.69</td><td>0.00</td><td>0.98</td><td>7.16</td><td>0.00</td><td>0.34</td><td>8.37</td><td>0.00</td></tr><tr><td>jb1_jb2_mw1_filter-strict</td><td>0.42</td><td>7.94</td><td>0.10</td><td>0.94</td><td>7.49</td><td>0.12</td><td>0.26</td><td>8.70</td><td>0.08</td></tr><tr><td>jb1_jb2_mw1_mw2_filter-mild</td><td>0.40</td><td>7.67</td><td>0.10</td><td>0.90</td><td>7.01</td><td>0.13</td><td>0.22</td><td>8.16</td><td>0.08</td></tr><tr><td>jb1_jb2_mw1_mw2_filter-none</td><td>0.50</td><td>7.62</td><td>0.00</td><td>0.96</td><td>7.20</td><td>0.00</td><td>0.32</td><td>8.44</td><td>0.00</td></tr><tr><td>jb1_jb2_mw1_mw2_filter-strict</td><td>0.44</td><td>7.23</td><td>0.10</td><td>0.94</td><td>7.46</td><td>0.13</td><td>0.24</td><td>8.67</td><td>0.08</td></tr><tr><td>jb1_jb2_mw2_filter-mild</td><td>0.42</td><td>7.35</td><td>0.10</td><td>0.90</td><td>6.92</td><td>0.12</td><td>0.16</td><td>8.28</td><td>0.07</td></tr><tr><td>jb1_jb2_mw2_filter-none</td><td>0.52</td><td>6.93</td><td>0.00</td><td>0.96</td><td>6.71</td><td>0.00</td><td>0.44</td><td>8.54</td><td>0.00</td></tr><tr><td>jb1_jb2_mw2_filter-strict</td><td>0.42</td><td>7.45</td><td>0.10</td><td>0.92</td><td>7.32</td><td>0.12</td><td>0.32</td><td>8.33</td><td>0.07</td></tr><tr><td>jb1_mw1_filter-mild</td><td>0.44</td><td>7.82</td><td>0.10</td><td>0.90</td><td>7.64</td><td>0.12</td><td>0.22</td><td>8.64</td><td>0.08</td></tr><tr><td>jb1_mw1_filter-none</td><td>0.52</td><td>7.74</td><td>0.00</td><td>0.98</td><td>7.68</td><td>0.00</td><td>0.48</td><td>8.73</td><td>0.00</td></tr><tr><td>jb1_mw1_filter-strict</td><td>0.52</td><td>7.48</td><td>0.10</td><td>0.94</td><td>7.73</td><td>0.12</td><td>0.40</td><td>8.76</td><td>0.07</td></tr><tr><td>jb1_mw1_mw2_filter-mild</td><td>0.48</td><td>7.32</td><td>0.11</td><td>
0.90</td><td>7.71</td><td>0.12</td><td>0.24</td><td>8.49</td><td>0.08</td></tr><tr><td>jb1_mw1_mw2_filter-none</td><td>0.60</td><td>7.42</td><td>0.00</td><td>0.98</td><td>7.80</td><td>0.00</td><td>0.48</td><td>8.41</td><td>0.00</td></tr><tr><td>jb1_mw1_mw2_filter-strict</td><td>0.54</td><td>7.60</td><td>0.10</td><td>0.94</td><td>7.78</td><td>0.12</td><td>0.26</td><td>8.35</td><td>0.07</td></tr><tr><td>jb1_mw2_filter-mild</td><td>0.44</td><td>7.80</td><td>0.11</td><td>0.90</td><td>7.62</td><td>0.12</td><td>0.22</td><td>8.06</td><td>0.08</td></tr><tr><td>jb1_mw2_filter-none</td><td>0.56</td><td>7.62</td><td>0.00</td><td>1.00</td><td>7.47</td><td>0.00</td><td>0.36</td><td>8.22</td><td>0.00</td></tr><tr><td>jb1_mw2_filter-strict</td><td>0.48</td><td>7.49</td><td>0.10</td><td>0.92</td><td>7.27</td><td>0.12</td><td>0.36</td><td>8.29</td><td>0.07</td></tr><tr><td>jb2_filter-mild</td><td>0.46</td><td>7.79</td><td>0.11</td><td>0.90</td><td>7.52</td><td>0.12</td><td>0.26</td><td>8.03</td><td>0.08</td></tr><tr><td>jb2_filter-none</td><td>0.46</td><td>7.62</td><td>0.00</td><td>0.98</td><td>7.38</td><td>0.00</td><td>0.32</td><td>8.30</td><td>0.00</td></tr><tr><td>jb2_filter-strict</td><td>0.44</td><td>7.91</td><td>0.10</td><td>0.90</td><td>7.32</td><td>0.12</td><td>0.24</td><td>7.87</td><td>0.06</td></tr><tr><td>jb2_mw1_filter-mild</td><td>0.40</td><td>7.79</td><td>0.10</td><td>0.90</td><td>7.86</td><td>0.13</td><td>0.26</td><td>8.11</td><td>0.08</td></tr><tr><td>jb2_mw1_filter-none</td><td>0.46</td><td>7.52</td><td>0.00</td><td>0.98</td><td>7.66</td><td>0.00</td><td>0.32</td><td>8.19</td><td>0.00</td></tr><tr><td>jb2_mw1_mw2_filter-mild</td><td>0.44</td><td>7.13</td><td>0.11</td><td>0.90</td><td>7.33</td><td>0.12</td><td>0.22</td><td>7.67</td><td>0.08</td></tr><tr><td>jb2_mw1_mw2_filter-none</td><td>0.44</td><td>7.25</td><td>0.00</td><td>1.00</td><td>7.54</td><td>0.00</td><td>0.42</td><td>7.68</td><td>0.00</td></tr><tr><td>jb2_mw1_mw2_filter-strict</td><td>0.44</td><td>6.86</td><td>0.10</td><td>0.94</td><td>7.55</td><td>0.12</td><td>0.18</td><td>7.82</td><td>0.07</td></tr><tr><td>jb2_mw2_filter-mild</td><td>0.40</td><td>6.97</td><td>0.10</td><td>0.90</td><td>7.37</td><td>0.12</td><td>0.20</td><td>7.58</td><td>0.08</td></tr><tr><td>jb2_mw2_filter-none</td><td>0.46</td><td>6.96</td><td>0.00</td><td>0.96</td><td>7.10</td><td>0.00</td><td>0.32</td><td>7.43</td><td>0.00</td></tr><tr><td>jb2_mw2_filter-strict</td><td>0.50</td><td>7.38</td><td>0.10</td><td>0.90</td><td>7.22</td><td>0.12</td><td>0.34</td><td>7.63</td><td>0.07</td></tr><tr><td>mw1_filter-mild</td><td>0.46</td><td>8.53</td><td>0.11</td><td>0.88</td><td>7.90</td><td>0.12</td><td>0.26</td><td>9.01</td><td>0.08</td></tr><tr><td>mw1_filter-none</td><td>0.52</td><td>8.67</td><td>0.00</td><td>0.98</td><td>7.96</td><td>0.00</td><td>0.44</td><td>9.05</td><td>0.00</td></tr><tr><td>mw1_filter-strict</td><td>0.52</td><td>8.06</td><td>0.10</td><td>0.94</td><td>8.44</td><td>0.12</td><td>0.32</td><td>8.98</td><td>0.08</td></tr><tr><td>mw1_mw2_filter-mild</td><td>0.42</td><td>7.60</td><td>0.10</td><td>0.90</td><td>7.74</td><td>0.12</td><td>0.20</td><td>8.82</td><td>0.08</td></tr><tr><td>mw1_mw2_filter-none</td><td>0.48</td><td>7.68</td><td>0.00</td><td>0.98</td><td>7.46</td><td>0.00</td><td>0.44</td><td>8.72</td><td>0.00</td></tr><tr><td>mw1_mw2_filter-strict</td><td>0.48</td><td>7.60</td><td>0.10</td><td>0.92</td><td>7.61</td><td>0.12</td><td>0.20</td><td>8.61</td><td>0.07</td></tr><tr><td>mw2_filter-mild</td><td>0.48</td><td>6.94</td><td>0.11</td><td>0.90</td><td>7.88</td><td>0.12</td><td>0.20</td><td>8.05</td><td>0.07</td></tr><tr><td>mw2_filter-none</td><td>0.48</td><td>7.05</td><td>0.00</td><td>0.98</td><td>7.60</td><td>0.00</td><td>0.32</td><td>7.85</td><td>0.00</td></tr><tr><td>mw2_filter-strict</td><td>0.54</td><td>7.34</td><td>0.10</td><td>0.92</td><td>7.77</td><td>0.12</td><td>0.20</td><td>8.25</td><td>0.07</td></tr></table>
# Data Valuation for LLM Fine-Tuning: Efficient Shapley Value Approximation via Language Model Arithmetic

MÉLISSA TAMINE*, Criteo AI Lab, Fairplay joint team, France
OTMANE SAKHI*, Criteo AI Lab, France
BENJAMIN HEYMANN*, Criteo AI Lab, Fairplay joint team, France

Data is a critical asset for training large language models (LLMs), alongside compute resources and skilled workers. While some training data is publicly available, substantial investment is required to generate proprietary datasets, such as human preference annotations, or to curate new ones from existing sources. As larger datasets generally yield better model performance, two natural questions arise. First, how can data owners make informed decisions about curation strategies and investment in data sources? Second, how can multiple data owners collaboratively pool their resources to train superior models while fairly distributing the benefits? This problem, data valuation, which is not specific to large language models, has been addressed by the machine learning community through the lens of cooperative game theory, with the Shapley value being the prevalent solution concept. However, computing Shapley values is notoriously expensive for data valuation, typically requiring numerous model retrainings, which can become prohibitive for large machine learning models. In this work, we demonstrate that this computational challenge is dramatically simplified for LLMs trained with Direct Preference Optimization (DPO). We show how the specific mathematical structure of DPO enables scalable Shapley value computation. We believe this observation unlocks many applications at the intersection of data valuation and large language models.

Additional Key Words and Phrases: Shapley value, Data valuation, Large language models (LLMs), Fine-tuning, Direct preference optimization (DPO), Language Model Arithmetic

# 1 INTRODUCTION

Large language models (LLMs) are the result of collaborative training and alignment pipelines: a single deployed model may have been pre-trained on heterogeneous web-scale corpora, often mixed with proprietary data sources, and then adapted through instruction tuning and several stages of preference-based alignment, such as Reinforcement Learning from Human Feedback (RLHF), typically implemented with policy-gradient methods like Proximal Policy Optimization (PPO), and more recent methods such as Direct Preference Optimization (DPO) or weak-to-strong supervision schemes. Recent scaling-law studies show that, beyond architecture, LLMs' performance is mainly driven by the amount, diversity, and quality of training data. This reflects the adage that data is the new oil. The key scarce resource of LLMs is not the model design, but the data provided by companies, institutions, and user communities: people are paid to label and rank model outputs, LLMs themselves are used to generate additional training data, and companies engage in legal battles over access to valuable corpora. This raises a central data valuation question: how should we attribute the contribution of each data source to the final behavior of an LLM?
Answering this question is not just of academic interest: data valuation (i.e., a systematic way to attribute value to a data source) is a prerequisite for data markets (what is a fair price for a dataset?), contractual guarantees (what level of performance can we promise to a contributor?), incentive design (how should we reward agents whose data improves alignment?), and even basic notions of responsibility (which data source made the model toxic on an input?).

Cooperative game theory, in particular the Shapley value, provides a formalism for such data valuation problems: interpret each data source as a player, define the utility of any coalition of players, and use the Shapley value to fairly split the utility among all players. However, directly instantiating this paradigm for LLM fine-tuning is computationally prohibitive. In Shapley-based data valuation, the core bottleneck is that the utility must be evaluated for every coalition of data sources, and the number of such coalitions grows exponentially with the number of sources. Even in classical supervised learning, this already leads to an impractical number of model retrainings. The situation is even worse for LLMs: each utility evaluation now requires fine-tuning a large model. With PPO or DPO, this means running a complete preference optimization loop on each coalition of data sources, even if one relies on emulation or distillation techniques that use a smaller model to approximate the effect of fine-tuning the LLM (such methods may lower the cost per run, but they do not remove the exponential number of coalition-specific runs).

The present proposal addresses this obstacle. We focus on preference-based fine-tuning, and in particular DPO, where the objective is defined directly over pairs of preferred and dispreferred responses. Our key observation is that training sequentially across multiple datasets corresponds to summing the reward models learned from each dataset independently. Hence, in this setting, the utility of a coalition of datasets need not be defined by a dedicated fine-tuning, and we can take inspiration from recent work on language model arithmetic: starting from one base model and a collection of models, each fine-tuned on a single dataset, one can construct at inference time composite models that approximately capture the effect of training on unions of datasets, by combining their output probabilities using a simple arithmetic rule over the fine-tuned models. Building on this observation, we propose a Shapley value approximation method for LLMs that reduces the number of required fine-tunings from exponential to linear in the number of data sources. We first use DPO to fine-tune one model per data source, then apply model arithmetic to construct, at inference time, an approximate model for any coalition of sources (a coalition model). The utility of a coalition is then defined as the performance of the coalition model on a fixed evaluation task. This enables estimating the Shapley value for all data sources while performing only one fine-tuning per source.

Related works. There are two streams of Shapley value applications for aligned LLMs: one focused on model explainability for token or feature-level attribution of predictions, and another focused on data valuation for quantifying training data contributions. Our work is orthogonal to the first stream and contributes to the second.
Its novelty, compared to prior data valuation studies for LLMs, is to exploit the specific mathematical structure of preference-based alignment (DPO combined with language model arithmetic) to reduce the cost of computing coalition utilities. In this work, we show that combining DPO with the philosophy of language model arithmetic enables us to approximate coalition utilities without performing coalition-specific fine-tunings.

# 2 PRELIMINARY

# 2.1 Evaluation of LLMs

We consider an aligned LLM as a stochastic policy $\pi : \mathcal{X} \to \Delta(\mathcal{Y})$ mapping prompts $x$ to distributions over responses $y$, where $\Delta(\mathcal{Y})$ denotes the probability simplex over the response space $\mathcal{Y}$. Let $\mathcal{D}$ be the evaluation prompt distribution (e.g., from a held-out validation set) and $r: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ a reward function measuring response quality (e.g., from a reward model).

Fig. 1. We can equip the collections of datasets, the set of reward models, and the set of policies with binary operators to give them a group (or semi-group) structure. After doing so, we observe that some training methods, such as sequential DPO, preserve this structure and therefore induce morphisms between those semi-groups.

We define the value of a policy $\pi$ as its expected reward:
$$ v (\pi) = \mathbb {E} _ {x \sim \mathcal {D}, y \sim \pi (\cdot | x)} [ r (x, y) ]. \tag {1} $$
Suppose we have a set $\mathcal{N} = \{1, \dots, n\}$ of data sources, where each source $i \in \mathcal{N}$ provides a dataset $\mathcal{D}_i$ of preference pairs (or, more generally, alignment examples). For any coalition $S \subseteq \mathcal{N}$, let $\pi_S$ denote the model obtained by applying a fixed alignment procedure (e.g., DPO), with fixed hyperparameters and a reference model, to the union of the corresponding datasets $\bigcup_{i \in S} \mathcal{D}_i$. The utility of a coalition $S$ is then the value of its corresponding policy:
$$ u (S) = v \left(\pi_ {S}\right). \tag {2} $$
In practice, $u(S) = v(\pi_S)$ is estimated empirically by averaging $r(x_j, y_j)$ over a finite validation set $\{x_j\}_{j=1}^m$ and sampled responses $y_j \sim \pi_S(\cdot | x_j)$.

# 2.2 Shapley-based data valuation

Given the utility $u(S) = v(\pi_S)$ defined in Equation (2), the Shapley value provides a game-theoretic method for splitting the total utility $u(\mathcal{N}) = v(\pi_{\mathcal{N}})$ (achieved when aligning on all data sources) among individual sources. Formally, for each data source $i \in \mathcal{N}$, its Shapley value is
$$ \varphi_ {i} = \sum_ {S \subseteq \mathcal {N} \backslash \{i \}} \frac {| S | ! (n - | S | - 1) !}{n !} [ v (\pi_ {S \cup \{i \}}) - v (\pi_ {S}) ]. \tag {3} $$
The popularity of using the Shapley value to perform data valuation stems from the fact that it is the unique value notion satisfying four axioms (efficiency, symmetry, dummy, and linearity) that are economically desirable. However, one challenge of performing Shapley-based data valuation is its computational complexity. Evaluating the exact Shapley value of each data source using Equation (3) involves computing the marginal utility $v(\pi_{S \cup \{i\}}) - v(\pi_S)$ of every source $i$ to every coalition $S$, which is $O(2^{|\mathcal{N}|})$. Such exponential computation is not feasible in any realistic setting, as computing exact Shapley values would entail an exponential number of full LLM fine-tunings, far beyond what is feasible at modern model scales.
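To make Equations (1)–(3) concrete, the following is a minimal Python sketch of exact Shapley computation given a coalition-utility oracle. The `utility` callable is an assumption standing in for the empirical estimate of $v(\pi_S)$ (generate responses with the coalition model, score them with the reward model, and average); the function and variable names are illustrative and not part of any existing library.

```python
from itertools import combinations
from math import factorial
from typing import Callable, Dict, FrozenSet, List


def shapley_values(sources: List[int],
                   utility: Callable[[FrozenSet[int]], float]) -> Dict[int, float]:
    """Exact Shapley values (Eq. 3) given a coalition-utility oracle u(S) = v(pi_S).

    `utility` maps a frozenset of source indices to a scalar, e.g. the average
    reward of the coalition model on a validation set (Eqs. 1-2). The cost is
    O(2^n) utility calls, which is exactly the quantity that the paper's
    language-model-arithmetic construction makes cheap to query.
    """
    n = len(sources)
    cache: Dict[FrozenSet[int], float] = {}  # evaluate each coalition only once

    def u(S: FrozenSet[int]) -> float:
        if S not in cache:
            cache[S] = utility(S)
        return cache[S]

    values: Dict[int, float] = {}
    for i in sources:
        others = [j for j in sources if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            # Shapley weight |S|!(n-|S|-1)!/n! for coalitions of size k.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                S = frozenset(subset)
                phi += weight * (u(S | {i}) - u(S))
        values[i] = phi
    return values
```

As a sanity check, with a toy additive utility such as `lambda S: float(len(S))`, every source receives a value of 1.0, consistent with the efficiency axiom; in the setting of this paper, the oracle would instead query a coalition model built by language model arithmetic (Section 3).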
# 2.3 LLM fine-tuning

LLM fine-tuning, and especially preference alignment, is a crucial step that guides pre-trained models toward desired behaviors and underlies much of their success in chat applications. The most successful and widely used preference alignment methods, for instance Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO), are motivated through KL-regularized expected-reward maximization. In these approaches, our objective is to align a pre-trained LLM, denoted by $\pi_0$, on a given coalition of datasets of preferences $\mathcal{D}_S = \cup_{\ell \in S}\mathcal{D}_{\ell}$ with $\mathcal{D}_{\ell} = \{x_{i,\ell},y_{i,\ell}^{+},y_{i,\ell}^{-}\}_{i\in [n_{\ell}]}$. We first describe the core alignment approaches on a single dataset of preferences $\mathcal{D}_{\ell}$, before discussing possible alignment approaches on coalitions.

RLHF begins by modeling a reward signal $\hat{r}_{\ell}:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}$ on $\mathcal{D}_{\ell}$, where $\hat{r}_{\ell}(x,y)$ captures the quality of response $y\in \mathcal{Y}$ to the question or prompt $x\in \mathcal{X}$. This reward is modeled with the Bradley-Terry model and learned by maximum likelihood estimation, solving
$$ \hat {r} _ {\ell} \in \arg \max _ {r} \left\{\sum_ {i \in \mathcal {D} _ {\ell}} \log \sigma \left(r \left(x _ {i}, y _ {i} ^ {+}\right) - r \left(x _ {i}, y _ {i} ^ {-}\right)\right) \right\}. \tag {4} $$
This reward is then used to align the pre-trained LLM $\pi_0$, which also serves as the reference policy, by solving the following optimization problem:
$$ \pi_ {\ell} ^ {\star} \in \arg \max _ {\pi} \left\{\sum_ {i \in \mathcal {D} _ {\ell}} \mathbb {E} _ {y \sim \pi (\cdot | x _ {i})} [ \hat {r} _ {\ell} (x _ {i}, y) ] - \beta \mathrm {KL} (\pi (\cdot | x _ {i}), \pi_ {0} (\cdot | x _ {i})) \right\}, $$
with $\beta > 0$ the KL regularization parameter. This optimization problem admits an analytical solution, giving the form of the optimal aligned policy. For each prompt $x \in \mathcal{X}$, the optimal distribution over responses $y \in \mathcal{Y}$ is
$$ \pi_ {\ell} ^ {\star} (y | x) \propto \exp \left(\hat {r} _ {\ell} (x, y) / \beta\right) \pi_ {0} (y | x). \tag {5} $$
Given its intractable normalization, this policy is approximated using policy learning algorithms such as REINFORCE and PPO, to name a few. Direct Preference Optimization (DPO) exploits the structure of the optimal policy to bypass the explicit reward modeling and RL steps, treating the LLM itself as the object of optimization. Plugging (5) into the Bradley-Terry likelihood of (4) leads to an equivalent preference-learning objective directly over $\pi$:
$$ \hat {\pi} _ {\ell} ^ {\star} \in \arg \max _ {\pi} \left\{\sum_ {i \in \mathcal {D} _ {\ell}} \log \sigma \left(\beta \left(\log \frac {\pi (y _ {i} ^ {+} | x _ {i})}{\pi_ {0} (y _ {i} ^ {+} | x _ {i})} - \log \frac {\pi (y _ {i} ^ {-} | x _ {i})}{\pi_ {0} (y _ {i} ^ {-} | x _ {i})}\right)\right) \right\}. $$
Thus, under the KL-regularized expected-reward framework, DPO targets the same family of optimal policies as RLHF on $\mathcal{D}_{\ell}$, but does so implicitly through a single maximum-likelihood stage without training an explicit reward model. In the following, without loss of generality, we use DPO as our alignment algorithm due to its ease of implementation and practicality.
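As a concrete reference for the DPO objective above, here is a minimal PyTorch sketch of the per-batch loss. It assumes the sequence-level log-probabilities of the chosen and rejected responses under the current policy and the frozen reference $\pi_0$ have already been computed (a trainer such as Hugging Face TRL handles this internally); the function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Negative DPO log-likelihood for a batch of preference pairs.

    Each input is a tensor of shape (batch,) holding log pi(y | x) summed over
    the response tokens. Minimizing this loss maximizes the Bradley-Terry
    likelihood with the implicit reward beta * log(pi(y|x) / pi_0(y|x)).
    """
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_logratios - rejected_logratios)
    # -log sigma(.) per preference pair, averaged over the batch.
    return -F.logsigmoid(logits).mean()
```

Minimizing this loss over $\mathcal{D}_\ell$ implicitly fits the reward $\hat{r}_\ell(x,y) = \beta \log \frac{\pi(y|x)}{\pi_0(y|x)}$ (up to a prompt-dependent constant), which is the structure exploited by the coalition-level analysis of the next section.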
# 3 COALITION POLICIES AND LANGUAGE MODEL ARITHMETIC

After describing the core alignment algorithms on a single dataset, we now extend them, with DPO as a representative example, to subsets of data sources (which we will refer to as coalitions in the game theoretic sense). A straightforward approach is to apply DPO directly to the coalition dataset $\mathcal{D}_S = \cup_{\ell \in S}\mathcal{D}_{\ell}$, solving
$$ \hat {\pi} _ {S} ^ {\star} \in \arg \max _ {\pi} \left\{\sum_ {\ell \in S} \sum_ {i \in \mathcal {D} _ {\ell}} \log \sigma \left(\beta \left(\log \frac {\pi (y _ {i} ^ {+} | x _ {i})}{\pi_ {0} (y _ {i} ^ {+} | x _ {i})} - \log \frac {\pi (y _ {i} ^ {-} | x _ {i})}{\pi_ {0} (y _ {i} ^ {-} | x _ {i})}\right)\right) \right\}. $$
While natural, this is not the only way to align a pretrained LLM using datasets $\{\mathcal{D}_{\ell}\}_{\ell \in S}$. Since our goal is to study data valuation, we seek alignment algorithms that expose useful algebraic structure across coalitions (see Figure 1), reducing computational cost and simplifying valuation. To this end, we propose a different coalition alignment procedure, which we call Sequential DPO, that is simple to implement and differs only slightly from naive DPO on aggregations. Algorithm 1 describes this procedure.

Algorithm 1: Sequential Direct Preference Optimization
1 Input: Reference $\pi_0$, parameter $\beta > 0$, coalition $S$.
2 Initialise: $k = 0$, $L$ the list of indices in $S$.
3 while $k < |S|$ do
4 Set $\ell \gets L[k]$ and update
$$ \pi_ {k + 1} \leftarrow \arg \max _ {\pi} \sum_ {i \in \mathcal {D} _ {\ell}} \log \sigma \left(\beta \left(\log \frac {\pi \left(y _ {i} ^ {+} \mid x _ {i}\right)}{\pi_ {k} \left(y _ {i} ^ {+} \mid x _ {i}\right)} - \log \frac {\pi \left(y _ {i} ^ {-} \mid x _ {i}\right)}{\pi_ {k} \left(y _ {i} ^ {-} \mid x _ {i}\right)}\right)\right). $$
5 Set $k\gets k + 1$
6 Output: $\hat{\pi}_S^\star = \pi_{|S|}$

Despite its sequential implementation, Algorithm 1 should converge to the same policy regardless of the ordering of datasets in $S$. Indeed, at convergence, the aligned coalition policy satisfies the closed form
$$ \pi_ {S} ^ {\star} (y | x) \propto \exp \left(\frac {1}{\beta} \sum_ {\ell \in S} \hat {r} _ {\ell} (x, y)\right) \pi_ {0} (y | x), \tag {6} $$
which depends only on the set $S$ and not on its enumeration. Sequential DPO therefore recovers classical DPO when $|S| = 1$ and extends it naturally to coalitions.

Our main question is whether we can bypass coalition-specific alignment entirely by exploiting the structure of the alignment objective. The answer is yes: assuming convergence of the alignment algorithms, we can express the coalition-aligned policy $\pi_S^\star$ directly in terms of the individually aligned models $\{\pi_\ell^\star\}_{\ell \in S}$. Starting from the identities
$$ \log \pi_ {0} (y | x) = s _ {0} (x, y) + C _ {0} (x), $$
$$ \log \pi_ {\ell} ^ {\star} (y | x) = s _ {\ell} (x, y) + C _ {\ell} (x) = \frac {1}{\beta} \hat {r} _ {\ell} (x, y) + s _ {0} (x, y) + \tilde {C} _ {\ell} (x), $$
where $s_0$ and $s_\ell$ are unnormalized log-scores (logits) and $C_0$, $C_\ell$, $\tilde{C}_\ell$ are prompt-dependent normalization constants, and discarding $y$-independent constants that vanish under normalization, we obtain
$$ \begin{array}{l} \pi_ {S} ^ {\star} (y | x) \propto \exp \left(\frac {1}{\beta} \sum_ {\ell \in S} \hat {r} _ {\ell} (x, y)\right) \pi_ {0} (y | x), \\ \pi_ {S} ^ {\star} (y | x) \propto \exp \left(\sum_ {\ell \in S} s _ {\ell} (x, y) + (1 - | S |) s _ {0} (x, y)\right). \end{array} $$
This simple result is powerful: for any coalition $S$, the coalition-aligned model $\pi_S^\star$ can be recovered exactly using only the individually aligned models $\{\pi_\ell^\star\}_{\ell \in S}$ and the reference model $\pi_0$. No additional optimization or training on the coalition dataset is required. In other words, the coalition policy can be obtained purely through algebraic operations on already-trained models. This provides a principled and training-free procedure for reconstructing $\pi_S^\star$, and constitutes a formal instance of language model arithmetic, where new behaviors emerge from structured combinations of existing LLMs rather than from further fine-tuning. This perspective closely aligns with the Language Model Arithmetic philosophy, originally introduced by Dekoninck et al. (2024).

# 4 IMPLEMENTATION

The goal of this section is to illustrate our approach in a small, easily reproducible setting. We take SmolLM-135M-Instruct as the initial policy $\pi_0$ and fine-tune it with DPO on different subsets of the UltraFeedback dataset. Specifically, we use 4 UltraFeedback sources as data providers: flan_v2_niv2, sharegpt, evol_instruct, and ultrachat. We use the Hugging Face TRL implementation of DPO with the default inverse temperature $\beta = 0.1$. Fine-tuning is performed on a single A100 80GB GPU for 4 epochs, with batch size 32, 4 gradient-accumulation steps, and a learning rate of $2 \times 10^{-5}$. We apply LoRA with rank $r = 8$ and scaling $\alpha = 16$ to obtain one adapter per data source.

In this experiment, we thus have $n = 4$ data sources and compute Shapley values for these $n$ sources using only $n$ DPO fine-tunings (instead of $2^4 = 16$), exploiting the language model arithmetic property derived in the previous section. Concretely, we keep the base model $\pi_0$ frozen and learn $n$ LoRA adapters, one per source. At evaluation time, we construct approximate coalition policies $\hat{\pi}_S$ by combining the corresponding LoRA adapters, and define utilities via $v(\hat{\pi}_S)$. This yields the following Shapley approximation:
$$ \hat {\varphi} _ {i} = \sum_ {S \subseteq \mathcal {N} \backslash \{i \}} \frac {| S | ! (n - | S | - 1) !}{n !} \left[ v \left(\hat {\pi} _ {S \cup \{i \}}\right) - v \left(\hat {\pi} _ {S}\right) \right]. \tag {7} $$
For evaluation, we use two scalar reward models from the RLHF literature, one trained to measure helpfulness and the other to measure harmlessness. For each coalition model $\hat{\pi}_S$, we generate responses to 128 prompts drawn from UltraFeedback examples whose sources are disjoint from our 4 training subsets (the unused portion of UltraFeedback), and define $v(\hat{\pi}_S)$ as the average reward over these prompts. Figure 2 shows the resulting approximate Shapley values, plotting each data source in the plane defined by its Shapley value under the helpfulness reward (x-axis) and under the harmlessness reward (y-axis).

Remark. In our experiments, we evaluate $v(\hat{\pi}_S)$ for all $2^n$ coalitions, so (7) is exact given the coalition models $\{\hat{\pi}_S\}_S$. At larger scales, our contribution can complement standard Shapley approximations. Once coalition utilities can be queried cheaply via our proposed instance of language model arithmetic, one can further subsample coalitions using Monte Carlo permutation sampling or a regression trick to reduce the number of inference calls from $2^n$ to a polynomial in the number of data sources.
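To illustrate how a coalition policy can be assembled from the individually fine-tuned models, here is a minimal PyTorch sketch of the token-level analogue of $\pi_S^\star(y|x) \propto \exp\left(\sum_{\ell \in S} s_\ell + (1-|S|)s_0\right)$: at each decoding step, the coalition's next-token logits are the sum of the per-source models' logits minus $(|S|-1)$ times the base model's logits. The assumption of Hugging Face-style causal LMs exposing `.logits`, as well as the function and variable names, are illustrative; this is a sketch of the combination rule under those assumptions, not the authors' exact LoRA-adapter evaluation pipeline.

```python
import torch


@torch.no_grad()
def coalition_next_token_logits(input_ids: torch.Tensor,
                                base_model,
                                coalition_models) -> torch.Tensor:
    """Token-level language model arithmetic for a coalition S.

    Implements  s_S = sum_{l in S} s_l + (1 - |S|) * s_0  on next-token logits,
    where `base_model` plays the role of the reference pi_0 and each element of
    `coalition_models` is a model fine-tuned with DPO on a single data source.
    All models are assumed to share the same tokenizer and vocabulary.
    """
    s0 = base_model(input_ids).logits[:, -1, :]                    # s_0(x, .)
    combined = (1 - len(coalition_models)) * s0
    for model in coalition_models:
        combined = combined + model(input_ids).logits[:, -1, :]    # + s_l(x, .)
    return combined


@torch.no_grad()
def sample_next_token(input_ids, base_model, coalition_models, temperature=1.0):
    """Draw one token from the (approximate) coalition policy pi_S."""
    logits = coalition_next_token_logits(input_ids, base_model, coalition_models)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```

Responses generated this way can be scored by the reward models and averaged to obtain $v(\hat{\pi}_S)$, which is exactly the coalition-utility oracle consumed by the Shapley sketch given after Equation (3).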
The spatial signature in Figure 2 represents the contribution of each source in a multi-objective value space and illustrates our aim of more interpretable LLM training. Each point corresponds to one UltraFeedback subset. We observe heterogeneous profiles: sharegpt has the largest positive helpfulness value and a mildly positive harmlessness value, ultrachat contributes mainly to harmlessness with an almost neutral helpfulness value, flan_v2_niv2 yields a small helpfulness gain but slightly harms harmlessness, and evol_instruct provides helpfulness at the cost of the largest negative harmlessness value. Given an LLM training run, the data signature documents the effective influence of each dataset. Negative Shapley values here are informative: they identify sources whose marginal contribution, averaged over coalitions, pushes the model in an undesirable direction for a given reward.

Fig. 2. Approximate Shapley values of 4 UltraFeedback data sources (flan_v2_niv2, sharegpt, evol_instruct, ultrachat) under two rewards. Each point corresponds to a data source. The x-axis shows its Shapley value for the helpfulness reward, and the y-axis for the harmlessness reward. The diagonal $y = x$ indicates perfect agreement between the two rewards on the relative importance of each source.

Overall, the plot demonstrates that our method makes it practically feasible to characterize training datasets along multiple alignment dimensions, rather than treating them as undifferentiated additional data.

# 5 VISION

For LLM fine-tuning, sequential training across multiple datasets corresponds to summing the reward models learned from each dataset independently. We argue that this observation has practical implications and opens several research directions.

The Shapley value, and more generally semivalues, could be used to quantify, in an interpretable manner, the contribution of each dataset used during training. In SHAP, the Shapley value is used to identify the role of features in the output of the model, with a notable impact on the field of interpretable ML; in the context of LLMs, the role of features could be reinterpreted as being played by the data sources. We notably foresee applications in contexts where copyright infringement and data ownership are becoming contentious.

Since data quality is key to LLM performance, one could envision new training algorithms that automatically curate or weight their input data, just as it is now typical in machine learning to reweight datasets or individual datapoints to improve model outcomes. Similarly, during training, there is room for Shapley values to improve synthetic data generation, as in NashMD or online DPO. Even further: new architectures could dynamically select model operations online, guided by Shapley values tailored to the specific context or task.

When datasets reflect individual preferences. The algebraic relation we highlight may support new methods at the intersection of social choice and LLM training. Indeed, it has already been observed that LLMs are in fact preference aggregators. In what we described in Figure 1, it is the combination of a Bradley-Terry preference aggregation model, a KL-regularized reward objective, and sequential training that results in a morphism between the dataset collections and the policies. An open research question is whether there exist other combinations of preference aggregation models and training rules for which such a morphism exists.
In parallel to those directions, our research program calls for a better theoretical and empirical understanding of how sequential DPO relates to DPO: first, checking how well the commutativity property persists in practice, where convergence is not reached, and second, how the sequential resolution of the optimization problem approximates the original problem.
{"title": "Data Valuation for LLM Fine-Tuning: Efficient Shapley Value Approximation via Language Model Arithmetic", "raw_content": "# Data Valuation for LLM Fine-Tuning: Efficient Shapley Value Approximation via Language Model Arithmetic\n\nMÉLISSA TAMINE*, Criteo AI Lab, Fairplay joint team, France\n\nOTMANE SAKHI*, Criteo AI Lab, France\n\nBENJAMIN HEYMANN*, Criteo AI Lab, Fairplay joint team, France\n\nData is a critical asset for training large language models (LLMs), alongside compute resources and skilled workers. While some training data is publicly available, substantial investment is required to generate proprietary datasets, such as human preference annotations or to curate new ones from existing sources. As larger datasets generally yield better model performance, two natural questions arise. First, how can data owners make informed decisions about curation strategies and data sources investment? Second, how can multiple data owners collaboratively pool their resources to train superior models while fairly distributing the benefits? This problem, data valuation, which is not specific to large language models, has been addressed by the machine learning community through the lens of cooperative game theory, with the Shapley value being the prevalent solution concept. However, computing Shapley values is notoriously expensive for data valuation, typically requiring numerous model retrainings, which can become prohibitive for large machine learning models. In this work, we demonstrate that this computational challenge is dramatically simplified for LLMs trained with Direct Preference Optimization (DPO). We show how the specific mathematical structure of DPO enables scalable Shapley value computation. We believe this observation unlocks many applications at the intersection of data valuation and large language models.\n\nAdditional Key Words and Phrases: Shapley value, Data valuation, Large-language models (LLMs), Fine-tuning, Direct preference optimization (DPO), Language Model Arithmetic\n\n# 1 INTRODUCTION\n\nLarge language models (LLMs) are the result of collaborative training and alignment pipelines: a single deployed model may have been pre-trained on heterogeneous web-scale corpora, often mixed with proprietary data sources, and then adapted through instruction tuning [30] and several stages of preference-based alignment, such as Reinforcement Learning from Human Preferences (RLHF) [6, 18, 45] typically implemented with policy-gradient methods like Proximal Policy Optimization (PPO) [33], and more recent methods such as Direct Preference Optimization (DPO) [31] or weak-to-strong supervision schemes [4]. Recent scaling-law studies show that, beyond architecture, LLMs' performance is mainly driven by the amount, diversity, and quality of training data [15, 21]. This reflects the adage that data is the new oil [19]. The key scarce resource of LLMs is not the model design, but the data provided by companies, institutions, and user communities: people are paid to label and rank model outputs, LLMs themselves are used to generate additional training data, and companies engage in legal battles over access to valuable corpora. This raises a central data valuation question: how should we attribute the contribution of each data source to the final behavior of an LLM? Answering this question is not just of academic interest: data valuation (i.e., a systematic way to attribute value to a data source) is a prerequisite for data markets (what is a fair price for a dataset?) 
[1, 5], contractual guarantees (what level of performance can we promise to a contributor?), incentive design (how should we reward agents whose data improves alignment?) [22, 35], and even basic notions of responsibility (which data source made the model toxic on an input?) [12, 19, 20, 38].\n\nCooperative game theory, in particular the Shapley value [34], provides a formalism for such data valuation problems:\n\ninterpret each data source as a player, define the utility of any coalition of players, and use the Shapley value to fairly split the utility among all players. However, directly instantiating this paradigm for LLM fine-tuning is computationally prohibitive. In Shapley-based data valuation, the core bottleneck is that the utility must be evaluated for every coalition of data sources, and the number of such coalitions grows exponentially with the number of sources [12, 34]. Even in classical supervised learning, this already leads to an impractical number of model retrainings. Still, the situation is worse for LLMs. Each utility evaluation now requires fine-tuning a large model. With PPO or DPO, this means running a complete preference optimization loop on each coalition of data sources, even if one relies on emulation or distillation techniques that use a smaller model to approximate the effect of fine-tuning the LLM [27] (such methods may lower the cost per run, but they do not remove the exponential number of coalition-specific runs).\n\nThe present proposal addresses this obstacle. We focus on preference-based fine-tuning, and in particular DPO, where the objective is defined directly over pairs of preferred and dispreferred responses [31]. Our key observation is that training sequentially across multiple datasets corresponds to summing the reward models learned from each dataset independently. Hence, in this setting, the utility of a coalition of datasets need not be defined by a dedicated fine-tuning, and we can take inspiration from recent work on language model arithmetic[9]: starting from one base model and a collection of models, each fine-tuned on a single dataset, one can construct at inference time composite models that approximately capture the effect of training on unions of datasets, by combining their output probabilities using a simple arithmetic rule over the fine-tuned models. Building on this observation, we propose a Shapley value approximation method for LLMs that reduces the number of required fine-tunings from exponential to linear in the number of data sources. We first use DPO to fine-tune one model per data source, then apply model arithmetic to construct, at inference time, an approximate model for any coalition of sources (a coalition model). The utility of a coalition is then defined as the performance of the coalition model on a fixed evaluation task. This enables estimating the Shapley value for all data sources while performing only one fine-tuning per source.\n\nRelated works. There are two streams of Shapley value applications for aligned LLMs: one focused on model explainability for token or feature-level attribution of predictions [16, 40, 42, 44], and another focused on data valuation for quantifying training data contributions [13, 28, 32, 41]. Our work is orthogonal to the first stream and contributes to the second. 
Its novelty, compared to prior data valuation studies for LLMs, is to exploit the specific mathematical structure of preference-based alignment (DPO combined with language model arithmetic) to reduce the cost of computing coalition utilities.\n\nIn this work, we show that combining DPO with the philosophy of language model arithmetic enables us to approximate coalition utilities without performing coalition-specific fine-tunings.\n\n# 2 PRELIMINARY\n\n# 2.1 Evaluation of LLMs\n\nWe consider an aligned LLM as a stochastic policy $\\pi : \\mathcal{X} \\to \\Delta(\\mathcal{Y})$ mapping prompts $x$ to distributions over responses $y$ , where $\\Delta(\\mathcal{Y})$ denotes the probability simplex over the response space $\\mathcal{Y}$ . Let $\\mathcal{D}$ be the evaluation prompt distribution (e.g., from a held-out validation set) and $r: \\mathcal{X} \\times \\mathcal{Y} \\to \\mathbb{R}$ a reward function measuring response quality (e.g., from a\n\n![](images/4794c343d9d04119ff0610466d2ee702314dad98f1df7e69839d116d12241cfe.jpg) \nFig. 1. We can equip the collections of datasets, the set of reward models, and the set of policies with binary operators to give them a group (or semi-group) structure. After doing so, we observe that some training methods, such as sequential DPO, preserve this structure and therefore induce morphisms between those semi-groups.\n\nreward model). We define the value of a policy $\\pi$ as its expected reward:\n\n$$\nv (\\pi) = \\mathbb {E} _ {x \\sim \\mathcal {D}, y \\sim \\pi (\\cdot | x)} [ r (x, y) ]. \\tag {1}\n$$\n\nSuppose we have a set $\\mathcal{N} = \\{1, \\dots, n\\}$ of data sources, where each source $i \\in \\mathcal{N}$ provides a dataset $\\mathcal{D}_i$ of preference pairs (or, more generally, alignment examples). For any coalition $S \\subseteq \\mathcal{N}$ , let $\\pi_S$ denote the model obtained by applying a fixed alignment procedure (e.g., DPO) with fixed hyperparameters and a reference model, to the union of the corresponding datasets $\\bigcup_{i \\in S} \\mathcal{D}_i$ .\n\nThe utility of a coalition $S$ is then the value of its corresponding policy:\n\n$$\nu (S) = v \\left(\\pi_ {S}\\right). \\tag {2}\n$$\n\nIn practice, $u(S) = v(\\pi_S)$ is estimated empirically by averaging $r(x_j, y_j)$ over a finite validation set $\\{x_j\\}_{j=1}^m$ and sampled responses $y_j \\sim \\pi_S(\\cdot | x_j)$ .\n\n# 2.2 Shapley-based data valuation\n\nGiven the utility $u(S) = v(\\pi_S)$ defined in Equation (2), the Shapley value [12, 34] provides a game-theoretic method for splitting the total utility $u(\\mathcal{N}) = v(\\pi_{\\mathcal{N}})$ (achieved when aligning on all data sources) among individual sources. Formally, for each data source $i \\in \\mathcal{N}$ , its Shapley value is\n\n$$\n\\varphi_ {i} = \\sum_ {S \\subseteq \\mathcal {N} \\backslash \\{i \\}} \\frac {| S | ! (n - | S | - 1) !}{n !} [ v (\\pi_ {S \\cup \\{i \\}}) - v (\\pi_ {S}) ]. \\tag {3}\n$$\n\nThe popularity of using the Shapley value to perform data valuation stems from the fact that it is the unique value notion satisfying four axioms (efficiency, symmetry, dummy, and linearity) that are economically desirable [34].\n\nHowever, one challenge of performing Shapley-based data valuation is its computational complexity. Evaluating the exact Shapley value of each data source using Equation (3) involves computing the marginal utility $v(\\pi_{S \\cup \\{i\\}}) - v(\\pi_S)$ of every source $i$ to every coalition $S$ , which is $O(2^{|N|})$ . 
Such exponential computation is not feasible in any realistic setting, as computing exact Shapley values would entail an exponential number of full LLM finetunings, far beyond what is feasible at modern model scales.\n\n# 2.3 LLM fine-tuning\n\nLLM fine-tuning, and especially preference alignment [30, 45], is a crucial step that guides pre-trained models toward desired behaviors and underlies much of their success in chat applications [30]. The most successful and widely used preference alignment methods, for instance Reinforcement Learning from Human Feedback (RLHF) [45] and Direct Preference Optimization (DPO) [31], are motivated through KL-regularized expected-reward maximization [31, 45]. In these approaches, our objective is to align a pre-trained LLM, denoted by $\\pi_0$ , on a given coalition of datasets of preferences $\\mathcal{D}_S = \\cup_{\\ell \\in S}\\mathcal{D}_{\\ell}$ with $\\mathcal{D}_{\\ell} = \\{x_{i,\\ell},y_{i,\\ell}^{+},y_{i,\\ell}^{-}\\}_{i\\in [n_{\\ell}]}$ . We first describe the core alignment approaches on a single dataset of preferences $\\mathcal{D}_{\\ell}$ , before discussing possible alignment approaches on coalitions. RLHF begins by modeling a reward signal $\\hat{r}_{\\ell}:X\\times Y\\to \\mathbb{R}$ on $\\mathcal{D}_{\\ell}$ , where $\\hat{r}_{\\ell}(x,y)$ captures the quality of response $y\\in \\mathcal{Y}$ to the question or prompt $x\\in \\mathcal{X}$ . This reward is modeled with Bradley-Terry and learned with maximum likelihood estimation by solving\n\n$$\n\\hat {r} _ {\\ell} \\in \\arg \\max _ {r} \\left\\{\\sum_ {i \\in \\mathcal {D} _ {\\ell}} \\log \\sigma \\left(r \\left(x _ {i}, y _ {i} ^ {+}\\right) - r \\left(x _ {i}, y _ {i} ^ {-}\\right)\\right) \\right\\}. \\tag {4}\n$$\n\nThis reward is then used to align a pretrained LLM, denoted by $\\pi_0$ , and used as a reference, solving the following optimization problem:\n\n$$\n\\pi_ {\\ell} ^ {\\star} \\in \\arg \\max _ {\\pi} \\left\\{\\sum_ {i \\in \\mathcal {D} _ {\\ell}} \\mathbb {E} _ {y \\sim \\pi (\\cdot | x _ {i})} [ \\hat {r} _ {\\ell} (x _ {i}, y) ] - \\beta \\mathrm {K L} (\\pi (\\cdot | x _ {i}), \\pi_ {0} (\\cdot | x _ {i})) \\right\\},\n$$\n\nwith $\\beta > 0$ the KL regularization parameter. This optimization problem admits an analytical solution, giving the form of the optimal aligned policy. For each prompt $x \\in X$ , the optimal distribution over responses $y \\in Y$ is\n\n$$\n\\pi_ {\\ell} ^ {\\star} (y | x) \\propto \\exp \\left(\\hat {r} _ {\\ell} (x, y) / \\beta\\right) \\pi_ {0} (y | x). \\tag {5}\n$$\n\nGiven its intractable normalization, this policy is approximated using policy learning algorithms such as REINFORCE [2] and PPO [30], to name a few. Direct Preference Optimization (DPO) [31] exploits the structure of the optimal policy to bypass the explicit reward modeling and RL steps, treating the LLM itself as the object of optimization. 
Plugging (5) into the Bradley-Terry likelihood of (4) leads to an equivalent preference-learning objective directly over $\\pi$ :\n\n$$\n\\hat {\\pi} _ {\\ell} ^ {\\star} \\in \\arg \\max _ {\\pi} \\left\\{\\sum_ {i \\in \\mathcal {D} _ {\\ell}} \\log \\sigma \\left(\\beta \\left(\\log \\frac {\\pi (y _ {i} ^ {+} | x _ {i})}{\\pi_ {0} (y _ {i} ^ {+} | x _ {i})} - \\log \\frac {\\pi (y _ {i} ^ {-} | x _ {i})}{\\pi_ {0} (y _ {i} ^ {-} | x _ {i})}\\right)\\right) \\right\\}.\n$$\n\nThus, under the KL-regularized expected-reward framework, DPO targets the same family of optimal policies as RLHF on $\\mathcal{D}_{\\ell}$ , but does so implicitly through a single maximum-likelihood stage without training an explicit reward model. In the following, without any loss of generality, we use DPO as our alignment algorithm due to its ease of implementation and practicality.\n\n# 3 COALITION POLICIES AND LANGUAGE MODEL ARITHMETIC\n\nAfter describing the core alignment algorithms on a single dataset, we now extend them, with DPO as a representative example, to subsets of data sources (which we will refer to as coalitions in the game theoretic sense). A straightforward approach is to apply DPO directly to the coalition dataset $\\mathcal{D}_S = \\cup_{\\ell \\in S}\\mathcal{D}_{\\ell}$ , solving\n\n$$\n\\hat {\\pi} _ {S} ^ {\\star} \\in \\arg \\max _ {\\pi} \\left\\{\\sum_ {\\ell \\in S} \\sum_ {i \\in \\mathcal {D} _ {\\ell}} \\log \\sigma \\left(\\beta \\left(\\log \\frac {\\pi (y _ {i} ^ {+} | x _ {i})}{\\pi_ {0} (y _ {i} ^ {+} | x _ {i})} - \\log \\frac {\\pi (y _ {i} ^ {-} | x _ {i})}{\\pi_ {0} (y _ {i} ^ {-} | x _ {i})}\\right)\\right) \\right\\}.\n$$\n\nWhile natural, this is not the only way to align a pretrained LLM using datasets $\\{\\mathcal{D}_{\\ell}\\}_{\\ell \\in S}$ . Since our goal is to study data valuation, we seek alignment algorithms that expose useful algebraic structure across coalitions (see Figure 1), reducing computational cost and simplifying valuation. To this end, we propose a different coalition alignment procedure, which we call Sequential DPO, that is simple to implement and differs only slightly from naive DPO on aggregations. Algorithm 1 describes this procedure. Despite its sequential implementation, Algorithm 1 should converge to the same\n\nAlgorithm 1: Sequential Direct Preference Optimization\n\n1 Input: Reference $\\pi_0$ , Parameter $\\beta > 0$ , Coalition $S$ . \n2. Initialise: $k = 0, L$ list of indices in $S$ . \n3 while $k < |S|$ do \n4 Set $\\ell \\gets L[k]$ Set $k\\gets k + 1$ \n5 Output: $\\hat{\\pi}_S^\\star = \\pi_{k + 1}$\n\n$$\n\\pi_ {k + 1} \\leftarrow \\arg \\max _ {\\pi} \\sum_ {i \\in \\mathcal {D} _ {\\ell}} \\log \\sigma \\left(\\beta \\left(\\log \\frac {\\pi \\left(y _ {i} ^ {+} \\mid x _ {i}\\right)}{\\pi_ {k} \\left(y _ {i} ^ {+} \\mid x _ {i}\\right)} - \\log \\frac {\\pi \\left(y _ {i} ^ {-} \\mid x _ {i}\\right)}{\\pi_ {k} \\left(y _ {i} ^ {-} \\mid x _ {i}\\right)}\\right)\\right).\n$$\n\npolicy regardless of the ordering of datasets in $S$ . Indeed, at convergence, the aligned coalition policy satisfies the closed form\n\n$$\n\\pi_ {S} ^ {\\star} (y | x) \\propto \\exp \\left(\\frac {1}{\\beta} \\sum_ {\\ell \\in S} \\hat {r} _ {\\ell} (x, y)\\right) \\pi_ {0} (y | x), \\tag {6}\n$$\n\nwhich depends only on the set $S$ and not on its enumeration. Sequential DPO therefore recovers classical DPO when $|S| = 1$ and extends it naturally to coalitions. 
Our main question is whether we can bypass coalition-specific alignment entirely by exploiting the structure of the alignment objective. The answer is yes: assuming convergence of the alignment algorithms, we can express the coalition-aligned policy $\\pi_S^\\star$ directly in terms of the individually aligned models $\\{\\pi_\\ell^\\star\\}_{\\ell \\in S}$ . Starting from the identities\n\n$$\n\\log \\pi_ {0} (y | x) = s _ {0} (x, y) + C _ {0} (x)\n$$\n\n$$\n\\log \\pi_ {\\ell} ^ {\\star} (y | x) = s _ {\\ell} (x, y) + C _ {\\ell} (x) = \\frac {1}{\\beta} \\hat {r} _ {\\ell} (x, y) + s _ {0} (x, y) + \\tilde {C} _ {\\ell} (x).\n$$\n\nand discarding $y$ -independent constants that vanish under normalization, we obtain\n\n$$\n\\begin{array}{l} \\pi_ {S} ^ {\\star} (y | x) \\propto \\exp \\left(\\frac {1}{\\beta} \\sum_ {\\ell \\in S} \\hat {r} _ {\\ell} (x, y)\\right) \\pi_ {0} (y | x), \\\\ \\propto \\exp \\left(\\sum_ {\\ell \\in S} s _ {\\ell} (x, y) + (1 - | S |) s _ {0} (x, y)\\right). \\\\ \\end{array}\n$$\n\nThis simple result is powerful: for any coalition $S$ , the coalition-aligned model $\\pi_S^\\star$ can be recovered exactly using only the individually aligned models $\\{\\pi_\\ell^\\star\\}_{\\ell \\in S}$ and the reference model $\\pi_0$ . No additional optimization or training on the coalition dataset is required. In other words, the coalition policy can be obtained purely through algebraic operations on already-trained models. This provides a principled and training-free procedure for reconstructing $\\pi_S^\\star$ , and constitutes a formal instance of language model arithmetic, where new behaviors emerge from structured combinations of existing\n\nLLMs rather than from further fine-tuning. This perspective closely aligns with the Language Model Arithmetic philosophy, originally introduced in [10].\n\n# 4 IMPLEMENTATION\n\nThe goal of this section is to illustrate our approach in a small, easily reproducible setting. We take SmolLM-135M-Instruct [3] as the initial policy $\\pi_0$ and fine-tune it with DPO on different subsets of the UltraFeedback dataset [8]. Specifically, we use 4 UltraFeedback sources as data providers: flan_v2_niv2, sharegpt, evol_instruct, and ultrachat. We use the Hugging Face TRL implementation of DPO [39] with the default inverse temperature $\\beta = 0.1$ . Fine-tuning is performed on a single A100 80GB GPU for 4 epochs, with batch size 32, 4 gradient-accumulation steps, and a learning rate of $2 \\times 10^{-5}$ . We apply LoRA [17] with rank $r = 8$ and scaling $\\alpha = 16$ to obtain one adapter per data source.\n\nIn this experiment, we thus have $n = 4$ data sources and compute Shapley values for these $n$ sources using only $n$ DPO fine-tunings (instead of $2^4 = 16$ ), exploiting the language model arithmetic property derived in the previous section. Concretely, we keep the base model $\\pi_0$ frozen and learn $n$ LoRA adapters, one per source. At evaluation time, we construct approximate coalition policies $\\hat{\\pi}_S$ by combining the corresponding LoRA adapters, and define utilities via $v(\\hat{\\pi}_S)$ . This yields the following Shapley approximation:\n\n$$\n\\hat {\\varphi} _ {i} = \\sum_ {S \\subseteq \\mathcal {N} \\backslash \\{i \\}} \\frac {| S | ! (n - | S | - 1) !}{n !} \\left[ v \\left(\\hat {\\pi} _ {S \\cup \\{i \\}}\\right) - v \\left(\\hat {\\pi} _ {S}\\right) \\right]. 
\\tag {7}\n$$\n\nFor evaluation, we use two scalar reward models from the RLHF literature [43], one trained to measure helpfulness and the other to measure harmlessness. For each coalition model $\\hat{\\pi}_S$ , we generate responses to 128 prompts drawn from UltraFeedback examples whose sources are disjoint from our 4 training subsets (the unused portion of UltraFeedback [8]), and define $v(\\hat{\\pi}_S)$ as the average reward over these prompts. Figure 2 shows the resulting approximate Shapley values, plotting each data source in the plane defined by its Shapley value under the helpfulness reward (x-axis) and under the harmlessness reward (y-axis).\n\nRemark. In our experiments, we evaluate $v(\\hat{\\pi}_S)$ for all $2^n$ coalitions, so (7) is exact given the coalition models $\\{\\hat{\\pi}_S\\}_S$ . At larger scales, our contribution can complement standard Shapley approximations. Once coalition utilities can be queried cheaply via our proposed instance of language model arithmetic, one can further subsample coalitions using Monte Carlo permutation sampling [12, 20, 25] or a regression trick [23] to reduce the number of inference calls from $2^n$ to a polynomial in the number of data sources.\n\nThe spatial signature [38] in Figure 2 represents the contribution of each source in a multi-objective value space and illustrates our aim at more interpretable LLM training. Each point corresponds to one UltraFeedback subset. We observe heterogeneous profiles: sharegpt has the most significant positive helpful value and a mildly positive harmless value, ultrachat contributes mainly to harmlessness with an almost neutral helpful value, flan_v2_niv2 yields a small helpful gain but slightly harms harmlessness, and evol_instruct provides helpfulness at the cost of the most significant negative harmless value. Given an LLM training, the data signature documents the effective influence of each dataset.\n\nNegative Shapley values here are informative: they identify sources whose marginal contribution, averaged over coalitions, pushes the model in an undesirable direction for a given reward. Overall, the plot demonstrates that our\n\n![](images/041838da465955e33548d48f239023b5ecfc453f6217e824780094e71a38e68c.jpg) \nFig. 2. Approximate Shapley values of 4 UltraFeedback data sources (flan_v2_niv2, sharegpt, evol_instruct, ultrachat) under two rewards. Each point corresponds to a data source. The x-axis shows its Shapley value for the helpfulness reward, and the y-axis for the harmlessness reward. The diagonal $y = x$ indicates perfect agreement between the two rewards on the relative importance of each source.\n\nmethod makes it practically feasible to characterize training datasets along multiple alignment dimensions, rather than treating them as undifferentiated additional data.\n\n# 5 VISION\n\nFor LLM fine-tuning, sequential training across multiple datasets corresponds to summing the reward models learned from each dataset independently. We argue that this observation has practical implications and opens several research directions.\n\nThe Shapley value, and more generally semivalues, could be used to quantify, in an interpretable manner, the contribution of each dataset used during training. In SHAP [24], the Shapley value is used to identify the role of features in the output of the model, with a notable impact on the field of interpretable ML, in the context of LLM, the role of the features could be reinterpreted by the data. 
# 5 VISION

For LLM fine-tuning, sequential training across multiple datasets corresponds to summing the reward models learned from each dataset independently. We argue that this observation has practical implications and opens several research directions.

The Shapley value, and more generally semivalues, could be used to quantify, in an interpretable manner, the contribution of each dataset used during training. In SHAP [24], the Shapley value is used to identify the role of features in a model's output, with a notable impact on the field of interpretable ML; in the context of LLMs, the role played by features could be reinterpreted as the role played by datasets. We notably foresee applications in contexts where copyright infringement and data ownership are becoming contentious.

Since data quality is key to LLM performance, one could envision new training algorithms that automatically curate or weight their input data, just as it is now typical in machine learning to reweight datasets or individual datapoints to improve model outcomes. Similarly, during training, there is room for Shapley values to improve synthetic data generation, as in NashMD or online DPO. Even further, new architectures could dynamically select model operations online, guided by Shapley values tailored to the specific context or task.

When datasets reflect individual preferences, the algebraic relation we highlight may support new methods at the intersection of social choice and LLM training. Indeed, it has already been observed that LLMs are in fact preference aggregators [7, 11, 14, 26, 29, 36, 37]. In the setting described in Figure 1, it is the combination of a Bradley-Terry preference aggregation model, a KL-regularized reward objective, and sequential training that results in a morphism between dataset collections and policies. An open research question is whether other combinations of preference aggregation models and training rules admit such a morphism.

In parallel to these directions, our research program calls for a better theoretical and empirical understanding of how sequential DPO relates to DPO: first, how well the commutativity property persists in practice, where convergence is not reached, and second, how well the sequential resolution of the optimization problems approximates the original problem.

# ACKNOWLEDGMENTS

This work was partially supported by the French National Research Agency (ANR) through grants ANR-20-CE23-0007 and ANR-23-CE23-0002 and through the PEPR IA FOUNDRY project (ANR-23-PEIA-0003). Computational and storage resources were provided by GENCI at IDRIS through allocation 2025-A0191016862 on the Jean Zay supercomputer (V100/A100/H100 partitions).

# REFERENCES

[1] Anish Agarwal, Munther Dahleh, and Tuhin Sarkar. 2019. A Marketplace for Data: An Algorithmic Solution. In Proceedings of the 2019 ACM Conference on Economics and Computation (Phoenix, AZ, USA) (EC '19). Association for Computing Machinery, New York, NY, USA, 701-726. https://doi.org/10.1145/3328526.3329589
[2] Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. 2024. Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 12248-12267. https://doi.org/10.18653/v1/2024.acl-long.662
[3] Loubna Ben Allal, Anton Lozhkov, Elie Bakouch, Leandro von Werra, and Thomas Wolf. 2024. SmolLM - blazingly fast and remarkably powerful.
[4] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeffrey Wu. 2024. Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. In Proceedings of the 41st International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 235), Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (Eds.). PMLR, 4971-5012. https://proceedings.mlr.press/v235/burns24b.html
[5] Yiling Chen, Stephen Chong, Ian A. Kash, Tal Moran, and Salil Vadhan. 2016. Truthful Mechanisms for Agents That Value Privacy. ACM Trans. Econ. Comput. 4, 3, Article 13 (March 2016), 30 pages. https://doi.org/10.1145/2892555
[6] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2017/file/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf
[7] Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mossé, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, and William S. Zwicker. 2024. Position: social choice should guide AI alignment in dealing with diverse human feedback. In Proceedings of the 41st International Conference on Machine Learning (Vienna, Austria) (ICML'24). JMLR.org, Article 371, 15 pages.
[8] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. UltraFeedback: Boosting Language Models with High-quality Feedback. arXiv:2310.01377 [cs.CL]
[9] Jasper Dekoninck, Marc Fischer, Luca Beurer-Kellner, and Martin Vechev. 2024. Controlled Text Generation via Language Model Arithmetic. In International Conference on Learning Representations, B. Kim, Y. Yue, S. Chaudhuri, K. Fragkiadaki, M. Khan, and Y. Sun (Eds.), Vol. 2024. 35011-35038. https://proceedings.iclr.cc/paper_files/paper/2024/file/96aad3299d18497e2bea4fc20b949b81-Paper-Conference.pdf
[10] Jasper Dekoninck, Marc Fischer, Luca Beurer-Kellner, and Martin Vechev. 2024. Controlled Text Generation via Language Model Arithmetic. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=SLw9fp4yI6
[11] Luise Ge, Daniel Halpern, Evi Micha, Ariel D Procaccia, Itai Shapira, Yevgeniy Vorobeychik, and Junlin Wu. 2024. Axioms for AI alignment from human feedback. Advances in Neural Information Processing Systems 37 (2024), 80439-80465.
[12] Amirata Ghorbani and James Zou. 2019. Data Shapley: Equitable Valuation of Data for Machine Learning. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 2242-2251. https://proceedings.mlr.press/v97/ghorbani19c.html
[13] Yexiao He, Ziyao Wang, Zheyu Shen, Guoheng Sun, Yuong Dai, Yongkai Wu, Hongyi Wang, and Ang Li. 2024. SHED: Shapley-based automated dataset refinement for instruction fine-tuning. In Proceedings of the 38th International Conference on Neural Information Processing Systems (Vancouver, BC, Canada) (NIPS '24). Curran Associates Inc., Red Hook, NY, USA, Article 3153, 22 pages.
[14] Benjamin Heymann. 2025. Adaptive Preference Aggregation. arXiv preprint arXiv:2503.10215 (2025).
[15] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack W. Rae, and Laurent Sifre. 2022. Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS '22). Curran Associates Inc., Red Hook, NY, USA, Article 2176, 15 pages.
[16] Miriam Horovicz and Roni Goldshmidt. 2024. TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation. In Proceedings of the 1st Workshop on NLP for Science (NLP4Science), Lotem Peled-Cohen, Nitay Calderon, Shir Lissak, and Roi Reichart (Eds.). Association for Computational Linguistics, Miami, FL, USA, 1-8. https://doi.org/10.18653/v1/2024.nlp4science-1.1
[17] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).
[18] Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, Jose Miguel Hernandez-Lobato, Richard E. Turner, and Douglas Eck. 2017. Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. In Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 70), Doina Precup and Yee Whye Teh (Eds.). PMLR, 1645-1654. https://proceedings.mlr.press/v70/jaques17a.html
[19] Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li, Ce Zhang, Costas Spanos, and Dawn Song. 2019. Efficient task-specific data valuation for nearest neighbor algorithms. Proc. VLDB Endow. 12, 11 (July 2019), 1610-1623. https://doi.org/10.14778/3342263.3342637
[20] Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve Gurel, Bo Li, Ce Zhang, Dawn Song, and Costas J. Spanos. 2019. Towards Efficient Data Valuation Based on the Shapley Value. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 89), Kamalika Chaudhuri and Masashi Sugiyama (Eds.). PMLR, 1167-1176. https://proceedings.mlr.press/v89/jia19a.html
[21] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. CoRR abs/2001.08361 (2020). https://arxiv.org/pdf/2001.08361.pdf
[22] Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (Sydney, NSW, Australia) (ICML'17). JMLR.org, 1885-1894.
[23] Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, Vol. 30. https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43fdf28b67767-Paper.pdf
[24] Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in neural information processing systems 30 (2017).
[25] Sasan Maleki, Long Tran-Thanh, Greg Hines, Talal Rahwan, and Alex Rogers. 2013. Bounding the Estimation Error of Sampling-based Shapley Value Approximation With/Without Stratifying. CoRR abs/1306.4265 (2013). arXiv:1306.4265 http://arxiv.org/abs/1306.4265
[26] Roberto-Rafael Maura-Rivero, Marc Lanctot, Francesco Visin, and Kate Larson. 2025. Jackpot! Alignment as a maximal lottery. arXiv preprint arXiv:2501.19266 (2025).
[27] Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, and Christopher Manning. 2024. An Emulator for Fine-tuning Large Language Models using Small Language Models. In International Conference on Learning Representations, B. Kim, Y. Yue, S. Chaudhuri, K. Fragkiadaki, M. Khan, and Y. Sun (Eds.), Vol. 2024. 13229-13244. https://proceedings.iclr.cc/paper_files/paper/2024/file/389e161125965c0f0ba50420fee45774-Paper-Conference.pdf
[28] Hyeonseok Moon, Jaehyung Seo, Seonmin Koo, Jinsung Kim, Young-kyoung Ham, Jiwon Moon, and Heuiseok Lim. 2025. LimaCost: Data Valuation for Instruction Tuning of Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, and Violet Peng (Eds.). Association for Computational Linguistics, Suzhou, China, 12841-12854. https://doi.org/10.18653/v1/2025-findings-emnlp.688
[29] Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, et al. 2024. Nash learning from human feedback. In Forty-first International Conference on Machine Learning.
[30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 27730-27744. https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf
[31] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36. Curran Associates, Inc., 53728-53741. https://proceedings.neurips.cc/paper_files/paper/2023/file/a85b405ed65c6477a4fe8302b5e06ce7-Paper-Conference.pdf
[32] Stephanie Schoch, Ritwick Mishra, and Yangfeng Ji. 2023. Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), Vishakh Padmakumar, Gisela Vallejo, and Yao Fu (Eds.). Association for Computational Linguistics, Toronto, Canada, 266-275. https://doi.org/10.18653/v1/2023.acl-srw.37
[33] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms. arXiv:1707.06347 [cs.LG] https://arxiv.org/abs/1707.06347
[34] Lloyd S Shapley. 1953. A Value for n-Person Games. In Contributions to the Theory of Games II, Harold W. Kuhn and Albert W. Tucker (Eds.). Princeton University Press, Princeton, 307-317.
[35] Rachael Hwee Ling Sim, Yehong Zhang, Mun Choon Chan, and Bryan Kian Hsiang Low. 2020. Collaborative machine learning with incentive-aware model rewards. In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 828, 10 pages.
[36] Anand Siththaranjan, Cassidy Laidlaw, and Dylan Hadfield-Menell. 2024. Distributional preference learning: Understanding and accounting for hidden context in RLHF. ICLR (2024).
[37] Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, and Alekh Agarwal. 2024. A minimaximalist approach to reinforcement learning from human feedback. In Proceedings of the 41st International Conference on Machine Learning (Vienna, Austria) (ICML '24). JMLR.org, Article 1929, 33 pages.
[38] Melissa Tamine, Benjamin Heymann, Patrick Loiseau, and Maxime Vono. 2025. On the Impact of the Utility in Semivalue-based Data Valuation. arXiv:2502.06574 [cs.AI] https://arxiv.org/abs/2502.06574
[39] Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. 2020. TRL: Transformer Reinforcement Learning. https://github.com/huggingface/trl.
[40] Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, and Bryan Kian Hsiang Low. 2024. Helpful or harmful data? Fine-tuning-free Shapley attribution for explaining language model predictions. In Proceedings of the 41st International Conference on Machine Learning (Vienna, Austria) (ICML '24). JMLR.org, Article 2089, 32 pages.
[41] Jiachen (Tianhao) Wang, Prateek Mittal, Dawn Song, and Ruoxi Jia. 2025. Data Shapley in One Training Run. In International Conference on Learning Representations, Y. Yue, A. Garg, N. Peng, F. Sha, and R. Yu (Eds.), Vol. 2025. 12358-12395. https://proceedings.iclr.cc/paper_files/paper/2025/file/20fdaf67581e6d7157376d1ed584040a-Paper-Conference.pdf
[42] Yingtai Xiao, Yuqing Zhu, Sirat Samyoun, Wanrong Zhang, Jiachen T. Wang, and Jian Du. 2025. TokenShapley: Token Level Context Attribution with Shapley Value. In Findings of the Association for Computational Linguistics: ACL 2025, Wanxiang Che, Joyce Nabende, Ekaterina Shutova, and Mohammad Taher Pilehvar (Eds.). Association for Computational Linguistics, Vienna, Austria, 3882-3894. https://doi.org/10.18653/v1/2025.findings-acl.200
[43] Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, and Jianshu Chen. 2024. Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. International Conference on Machine Learning (2024).
[44] Zikun Ye and Hema Yoganarasimhan. 2025. Fair Document Valuation in LLM Summaries via Shapley Values. arXiv:2505.23842 [cs.CL] https://arxiv.org/abs/2505.23842
[45] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-Tuning Language Models from Human Preferences. arXiv preprint arXiv:1909.08593 (2019). https://arxiv.org/abs/1909.08593
# GLOW: Graph-Language Co-Reasoning for Agentic Workflow Performance Prediction

Abstract

Agentic Workflows (AWs) have emerged as a promising paradigm for solving complex tasks. However, the scalability of automating their generation is severely constrained by the high cost and latency of execution-based evaluation. Existing AW performance prediction methods act as surrogates but fail to simultaneously capture the intricate topological dependencies and the deep semantic logic embedded in AWs. To address this limitation, we propose GLOW, a unified framework for AW performance prediction that combines the graph-structure modeling capabilities of GNNs with the reasoning power of LLMs. Specifically, we introduce a graph-oriented LLM, instruction-tuned on graph tasks, to extract topologically aware semantic features, which are fused with GNN-encoded structural representations. A contrastive alignment strategy further refines the latent space to distinguish high-quality AWs. Extensive experiments on FLORA-Bench show that GLOW outperforms state-of-the-art baselines in prediction accuracy and ranking utility. The source code is publicly available at https://github.com/guanwei49/GLOW.

# 1 Introduction

Large Language Models (LLMs) have demonstrated remarkable capabilities in diverse tasks, evolving from passive text generators to active agents capable of planning, reasoning, and tool use [Xi et al., 2025]. Recent research further indicates that Agentic Workflows (AWs) offer a superior paradigm compared to single-agent systems for handling complex scenarios. By coordinating multiple specialized agents within structured collaboration topologies, AWs decompose intricate problems into manageable sub-routines, enabling state-of-the-art performance in domains including code generation [He et al., 2025; Hu et al., 2024b], mathematics [Zhong et al., 2026; Zhang and Xiong, 2025], and general reasoning [Pezeshkpour et al., 2024; Chen et al., 2025].

However, designing effective AWs manually is labor-intensive and requires expert knowledge, which has motivated the development of automatic agentic workflow generation methods [Li et al., 2024; Hu et al., 2024a]. These methods view the workflow structure as a search space and employ algorithms like genetic programming or reinforcement learning to discover high-performing AWs. A critical bottleneck that impedes their scalability is the evaluation of AWs: to determine the performance of a candidate AW, these methods typically execute it, with each agent calling an LLM. Given the stochastic nature of LLMs and the complexity of multi-turn interactions, this process is both time-consuming and costly, making large-scale exploration impractical.

To address this efficiency challenge, recent works have explored performance predictors as surrogates for execution-based evaluation. Existing methods [Zhang et al., 2025; Trirat et al., 2025] model AWs as Directed Acyclic Graphs (DAGs) and utilize Graph Neural Networks (GNNs) to predict performance based on structural features. While effective at capturing topological patterns, standard GNNs treat agent prompts as shallow text embeddings, often failing to comprehend the deep semantic logic and role definitions critical to workflow success. Conversely, while LLMs excel at understanding textual prompts, they lack the inherent capability to process graph structures efficiently or model the error propagation paths in complex topologies.
In this paper, we present GLOW, a unified framework that leverages the structural modeling capabilities of GNNs and the semantic reasoning power of LLMs for agentic workflow performance prediction. GLOW simultaneously captures how agents are connected (structure) and what agents are thinking (semantics) by integrating graph-based and language-based representations into a unified latent space. The main contributions of this work are as follows:

i) Graph-oriented LLM instruction tuning: Instead of using off-the-shelf LLMs, we construct a specialized instruction-tuning dataset containing graph reasoning tasks (e.g., reachability, topological sorting). This transforms the LLM into a 'graph expert' capable of extracting topologically aware semantic representations from textual AW descriptions.

ii) Dual-branch representation learning: We employ a GNN to encode the AW structure and the graph-oriented LLM to encode implicit reasoning logic. These representations are projected into a unified space and fused via a representation fusion module.

iii) Contrastive alignment strategy: In addition to the prediction loss, we introduce a contrastive learning objective that clusters successful AWs together in the latent space while pushing apart unsuccessful ones, enhancing the model's discriminative power.

We conduct extensive experiments on FLORA-Bench [Zhang et al., 2025]. Empirical results show that GLOW outperforms existing methods in both prediction accuracy and ranking utility. Moreover, when deployed as a candidate AW evaluation method in the automatic AW generation framework AFLOW [Zhang et al., 2024b], GLOW reduces computation time by $98.7\%$ while incurring only a 0.031 decrease in the score of generated AWs on average across three datasets.

# 2 Related Work

In this section, we briefly review prior research on automatic agentic workflow generation, LLMs for graph-structured data, and agentic workflow performance prediction.

# 2.1 Automatic Agentic Workflow Generation

Current approaches for automated agentic workflow generation generally fall into two primary categories. Probability-based methods generate candidate workflows through stochastic sampling from a learnable distribution. To facilitate this mathematical optimization, these approaches typically model the agentic workflow as a computational graph, where nodes represent agents and edges define their interaction topology. For example, GPTSwarm [Zhuge et al., 2024] utilizes the REINFORCE algorithm to optimize this graph structure, learning the probability of connections between nodes to maximize the agentic workflow performance. G-Designer [Zhang et al., 2024a] employs a variational graph auto-encoder (VGAE) to sample and decode task-adaptive agentic workflows.

LLM-guided methods, conversely, leverage the inherent reasoning and coding capabilities of LLMs to directly generate and refine workflows based on feedback. For example, AFLOW [Zhang et al., 2024b] utilizes Monte Carlo Tree Search (MCTS) to explore different candidate workflows. AutoFlow [Li et al., 2024] frames workflows as natural language programs, employing reinforcement learning to fine-tune the generator LLM based on workflow execution rewards. EvoMAC [Hu et al., 2024b] mimics neural network training by introducing "textual backpropagation," where error logs from compilers serve as gradients to update the agent workflows. ADAS [Hu et al., 2024a] takes a meta-learning perspective, deploying a "meta-agent" that iteratively programs and discovers entirely new agent architectures.
RobustFlow [Xu et al., 2025] executes multiple workflow candidates for similar user queries, identifies the one that performs the best, and trains the LLM to consistently generate that high-quality workflow. These approaches rely heavily on repeated LLM invocations to execute workflows for performance evaluation, resulting in substantial computational, temporal, and financial overhead, which limits their practicality in real-world scenarios. GLOW provides an efficient way to predict the performance of generated candidate workflows, thereby reducing the need for costly LLM calls.

# 2.2 LLMs for Graph-Structured Data

A growing body of work has investigated the use of LLMs for graph reasoning. Wang et al. [Wang et al., 2023] introduce one of the first natural-language graph reasoning benchmarks, NLGraph, and demonstrate that LLMs exhibit graph reasoning abilities. Early studies [Fatemi et al., 2024; Ye et al., 2024; Zhang et al., 2024c] primarily focus on prompt design to elicit or evaluate LLMs' capabilities on graph-related tasks. Other lines of work [Chai et al., 2023; Liu et al., 2024; Tang et al., 2024] combine GNN-derived structure-aware node embeddings with textual prompts to enhance the graph reasoning performance of LLMs. In contrast to these approaches, we do not use LLMs for graph-specific question answering. Instead, we leverage LLMs to produce richer semantic encodings of agentic workflows, which serve as inputs for downstream performance prediction.

# 2.3 Agentic Workflow Performance Prediction

To mitigate the prohibitive cost of evaluating AWs via direct execution, recent research has shifted towards developing lightweight performance predictors. Zhang et al. [Zhang et al., 2025] pioneered this direction by formulating AWs as DAGs and applying GNNs to capture their topological structures. The performance is then predicted using a Multi-Layer Perceptron (MLP) that processes the concatenation of the AW representation and the task representation. Subsequently, Trirat et al. [Trirat et al., 2025] introduced Agentic Predictor, which extends this GNN-based paradigm by integrating graph features with code and prompt embeddings through a multi-view encoding scheme. However, these GNN-centric methods primarily focus on structural patterns or shallow semantic features, failing to capture the high-level reasoning implicit in complex agent interactions. In contrast, GLOW synergizes a graph-oriented LLM with a GNN to align deep semantic reasoning with the structural characteristics of AWs, leading to superior prediction accuracy.
# 3 Preliminaries

An Agentic Workflow (AW) consists of multiple collaborating agents that collectively execute a task $T$ by passing information, triggering actions, and maintaining interdependent states. As illustrated in Figure 1, such workflows typically exhibit structured control flow and explicit dependency relationships among agents.

Figure 1: An illustrative example of an AW for code generation.

To formally characterize these interaction patterns, we abstract an AW as a DAG. Specifically, an AW with $N$ agents is represented as $\mathcal{G} = \{\mathcal{V},\mathcal{E},\mathcal{P}\}$, where $\mathcal{V} = \{v_{1},v_{2},\ldots ,v_{N}\}$ denotes the set of agent nodes, each corresponding to an individual agent. The edge set $\mathcal{E}$ captures the directional flow of information between agents, and the prompt set $\mathcal{P} = \{p_1,p_2,\dots ,p_N\}$ specifies the textual prompts guiding the behavior of each agent $v_{i}$.

During the execution phase, an agent $v_{i}$ aggregates information from two sources: the initial global task instruction $T$ and the intermediate outputs generated by its upstream neighbors. The input context $X_{i}$ for agent $v_{i}$ can be expressed as:

$$
X_{i} = \{T\} \cup \left\{y_{j} \mid v_{j} \in \mathcal{N}_{i}^{(in)}\right\} \tag{1}
$$

where $\mathcal{N}_i^{(in)}$ signifies the set of predecessor agents (nodes) directly connected to $v_{i}$, and $y_{j}$ represents the output produced by agent $v_{j}$. Based on this input context, the output $y_{i}$ for agent $v_{i}$ is generated by invoking LLMs, denoted as $\mathcal{M}$. The generation process is defined by:

$$
y_{i} = \mathcal{M}\left(X_{i}, p_{i}\right) \tag{2}
$$

where $p_i$ serves as the specialized prompt defining the sub-task logic for agent $v_i$. Upon completion of all agent processes, the AW yields the final result $r = f(\mathcal{G}, T)$. If $r$ matches the expected outcome, the AW is considered successful; otherwise, it is deemed unsatisfactory.

Definition 3.1 (Agentic Workflow Performance Prediction). Given a specific task instruction $T$ and an AW $\mathcal{G}$, performance prediction aims to determine whether $\mathcal{G}$ can produce the expected outcome for task instruction $T$ without actually executing the AW.

Agentic workflow performance prediction provides a computationally efficient proxy that guides AW generation while avoiding the substantial overhead of direct execution.
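The execution semantics of Eqs. (1)-(2) can be sketched as follows. `call_llm` is only a stand-in for the underlying model $\mathcal{M}$, and the toy workflow, helper names, and the choice to return sink-node outputs as the final result are illustrative assumptions.

```python
from graphlib import TopologicalSorter

def run_workflow(graph, task, call_llm):
    """Minimal sketch of AW execution: each agent v_i receives the task T plus
    the outputs of its in-neighbors (Eq. 1) and produces y_i = M(X_i, p_i) (Eq. 2).
    `graph` = {"prompts": {node: prompt}, "edges": [(src, dst), ...]}."""
    preds = {v: [] for v in graph["prompts"]}
    succs = {v: [] for v in graph["prompts"]}
    for src, dst in graph["edges"]:
        preds[dst].append(src)
        succs[src].append(dst)
    outputs = {}
    # graphlib expects a mapping node -> set of predecessors
    for v in TopologicalSorter({v: set(p) for v, p in preds.items()}).static_order():
        context = [task] + [outputs[u] for u in preds[v]]        # X_i = {T} u {y_j}
        outputs[v] = call_llm(context, graph["prompts"][v])      # y_i = M(X_i, p_i)
    sinks = [v for v in graph["prompts"] if not succs[v]]
    return {v: outputs[v] for v in sinks}                        # final result r

# toy run with a stubbed LLM
toy = {"prompts": {"plan": "Draft a plan.", "code": "Write the code.",
                   "review": "Review the code."},
       "edges": [("plan", "code"), ("code", "review")]}
stub = lambda ctx, prompt: f"{prompt} (given {len(ctx)} inputs)"
print(run_workflow(toy, "Implement FizzBuzz.", stub))
```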
# 4 Methodology

In this section, we introduce our proposed agentic workflow performance prediction method, GLOW. The architecture, shown in Figure 2, transforms an AW and a task instruction into a scalar performance score. In the following, we describe representation encoding, performance prediction, and model training.

Figure 2: The architecture of the proposed GLOW. For an AW, high-level semantic representations are derived from a graph-oriented LLM, while structural dependencies are captured by a GNN. The representation of the task instruction $T$ is extracted using a sentence-BERT. These distinct representations are then projected into a unified latent space and aggregated through a representation fusion module to generate the predicted performance score.

# 4.1 Representation Encoding

GLOW encodes representations from the task instruction and the AW to support subsequent performance prediction.

Task Instruction Encoding. Given the task instruction $T$, we first employ a pre-trained sentence-BERT (SBERT) [Reimers and Gurevych, 2019] to obtain its semantic embedding. To align this embedding with the latent space of the AW features, we apply a lightweight MLP as the projector. The final task representation $\mathbf{R}^{\text{Task}} \in \mathbb{R}^{d}$ is formulated as:

$$
\mathbf{R}^{\text{Task}} = \operatorname{Proj}_{T}(\mathrm{SBERT}(T)) \tag{3}
$$

where $\mathrm{Proj}_T(\cdot)$ denotes the projector.

Agentic Workflow Structural Encoding. To capture the interactions and dependencies among agents, we model the AW as a graph and utilize a GNN. Initially, for each agent node $v_{i}$, its textual prompt $p_{i}$ is encoded by the sentence-BERT to serve as the initial node embedding $\mathbf{h}_i^{(0)} = \mathrm{SBERT}(p_i)$. Subsequently, a GNN encodes the graph structure by propagating information along the edges $\mathcal{E}$. After $L$ layers of message passing, we obtain the set of refined node embeddings for all nodes, formulated as:

$$
\left\{\mathbf{h}_{i}^{(L)}\right\}_{v_{i} \in \mathcal{V}} = \operatorname{GNN}\left(\left\{\mathbf{h}_{i}^{(0)}\right\}_{v_{i} \in \mathcal{V}}, \mathcal{E}\right) \tag{4}
$$

To derive the global structural representation $\mathbf{R}^{\mathrm{GNN}}\in \mathbb{R}^{d}$, we perform mean pooling [Xu et al., 2018] over all node embeddings:

$$
\mathbf{R}^{\mathrm{GNN}} = \frac{1}{|\mathcal{V}|} \sum_{v_{i} \in \mathcal{V}} \mathbf{h}_{i}^{(L)} \tag{5}
$$

where $|\mathcal{V}|$ denotes the total number of nodes in the AW.

Agentic Workflow Semantic Encoding. While GNNs are effective at capturing structural representations, they may overlook the high-level reasoning logic implicit in the AW design. To address this, we leverage the reasoning capabilities of LLMs. We first linearize the AW $\mathcal{G}$ into a comprehensive descriptive text $S_{\mathcal{G}}$, adhering to the template shown in Figure 3.

You are provided with a directed graph consisting of multiple nodes, each associated with a text. The connections between nodes are defined by the given edges, as detailed below: **Nodes**: {V, P} **Edges (Each tuple (source, target) represents a directed connection from the source node to the target node)**: {E}.

Figure 3: The prompt template used to convert the AW into descriptive text. The node set $\mathcal{V}$ and prompt set $\mathcal{P}$ are organized into a dictionary mapping each node ID to its textual prompt, while the edge set $\mathcal{E}$ is converted into a list of (source, target) tuples.

Crucially, to extract a concise representation, the prompt concludes with the specific instruction: "Provide a single token representing the embedding of this graph." The processed prompt is then fed into a graph-oriented LLM. We extract the hidden state of its generated output (specifically, the final token embedding) and pass it through a projector, implemented as an MLP, to obtain the semantic representation $\mathbf{R}^{\mathrm{LLM}} \in \mathbb{R}^{d}$:

$$
\mathbf{R}^{\mathrm{LLM}} = \operatorname{Proj}_{L}(\mathrm{LLM}(S_{\mathcal{G}})) \tag{6}
$$
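The following is a minimal sketch of the two AW encodings described above: linearizing $\mathcal{G}$ into $S_{\mathcal{G}}$ following the Figure 3 template, and mean pooling node embeddings as in Eq. (5). SBERT and the GNN are stubbed with random vectors, and the helper names are ours.

```python
import numpy as np

def linearize_workflow(prompts, edges):
    """Builds the descriptive text S_G following the Figure 3 template:
    nodes/prompts as a dict, edges as (source, target) tuples, with the
    single-token embedding instruction appended at the end."""
    return (
        "You are provided with a directed graph consisting of multiple nodes, "
        "each associated with a text. The connections between nodes are defined "
        "by the given edges, as detailed below:\n"
        f"**Nodes**: {prompts}\n"
        "**Edges (Each tuple (source, target) represents a directed connection "
        f"from the source node to the target node)**: {edges}.\n"
        "Provide a single token representing the embedding of this graph."
    )

def mean_pool(node_embeddings):
    """R^GNN as in Eq. (5): the mean of the (GNN-refined) node embeddings."""
    return np.mean(np.stack(list(node_embeddings.values())), axis=0)

prompts = {0: "Draft a plan.", 1: "Write the code.", 2: "Review the code."}
edges = [(0, 1), (1, 2)]
print(linearize_workflow(prompts, edges))
print(mean_pool({i: np.random.randn(8) for i in prompts}).shape)  # (8,)
```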
# 4.2 Performance Prediction

To synthesize the semantic and structural representations of the AW along with the task representation from the encoding phase, we employ a transformer-encoder-based representation fusion module, followed by a prediction head that outputs the predicted score $\hat{y}$. Specifically, we first construct an input sequence by concatenating a learnable prediction token representation $\mathbf{R}^{\mathrm{Pred}}$ with the extracted representations: $\mathbf{Z}^{(0)} = [\mathbf{R}^{\mathrm{Pred}}; \mathbf{R}^{\mathrm{LLM}}; \mathbf{R}^{\mathrm{GNN}}; \mathbf{R}^{\mathrm{Task}}] \in \mathbb{R}^{4 \times d}$. To inform the model of the distinct nature of each representation type, we add learnable type embeddings $\mathbf{E}^{\mathrm{Type}} \in \mathbb{R}^{4 \times d}$ to $\mathbf{Z}^{(0)}$.

The resulting sequence is processed by a representation fusion module composed of $L_{T}$ stacked layers. Each layer enables representation interaction through a Multi-Head Self-Attention (MHSA) mechanism followed by a position-wise Feed-Forward Network (FFN), both equipped with residual connections and Layer Normalization (LN). Formally, for the $l$-th layer, the representation update is given by:

$$
\tilde{\mathbf{Z}}^{(l)} = \mathrm{LN}\left(\mathrm{MHSA}\left(\mathbf{Z}^{(l-1)}\right) + \mathbf{Z}^{(l-1)}\right) \tag{7}
$$

$$
\mathbf{Z}^{(l)} = \mathrm{LN}\left(\mathrm{FFN}\left(\tilde{\mathbf{Z}}^{(l)}\right) + \tilde{\mathbf{Z}}^{(l)}\right) \tag{8}
$$

Through this deep interaction, the prediction token aggregates context-aware information from all other representations. Finally, the hidden state of the prediction token from the last layer, denoted as $\mathbf{z}_{\mathrm{Pred}}^{(L_T)}$, is fed into the Prediction Head (PH), implemented as an MLP, followed by a sigmoid function to produce the predicted performance score $\hat{y}$:

$$
\hat{y} = \sigma\left(\mathrm{PH}\left(\mathbf{z}_{\mathrm{Pred}}^{(L_T)}\right)\right) \tag{9}
$$

where $\sigma(\cdot)$ denotes the sigmoid function.
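A compact PyTorch sketch of the fusion module and prediction head of Eqs. (7)-(9) is given below. The layer sizes and the use of `nn.TransformerEncoder` are assumptions for illustration, not the exact configuration of GLOW.

```python
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    """Sketch of Eqs. (7)-(9): a learnable prediction token is stacked with the
    LLM, GNN and task representations, type embeddings are added, the sequence
    passes through a Transformer encoder, and an MLP head with a sigmoid maps
    the prediction token to a score in (0, 1)."""
    def __init__(self, d=256, n_layers=2, n_heads=4):
        super().__init__()
        self.pred_token = nn.Parameter(torch.zeros(1, 1, d))
        self.type_emb = nn.Parameter(torch.zeros(1, 4, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, r_llm, r_gnn, r_task):
        # each input: (batch, d)
        b = r_llm.size(0)
        seq = torch.stack([r_llm, r_gnn, r_task], dim=1)                   # (b, 3, d)
        seq = torch.cat([self.pred_token.expand(b, -1, -1), seq], dim=1)   # (b, 4, d)
        z = self.encoder(seq + self.type_emb)                              # MHSA + FFN layers
        return torch.sigmoid(self.head(z[:, 0])).squeeze(-1)               # score from pred token

# toy forward pass
model = FusionPredictor()
score = model(torch.randn(2, 256), torch.randn(2, 256), torch.randn(2, 256))
print(score.shape)  # torch.Size([2])
```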
# 4.3 Model Training

To ensure the effectiveness of each module and the coherence of the final representation, we adopt a multi-stage training strategy involving LLM instruction tuning, GNN pretraining, and end-to-end optimization.

Instruction Tuning for LLM. To equip a generic LLM with a stronger ability to understand graph structures and interactions from plain text, we instruction-tune it using the textualized AW descriptions $S_{\mathcal{G}}$ generated from the template in Figure 3, and construct graph-related QA pairs targeting six dimensions:

i) Degree-Based Prediction (DBP): predicting a node's in-degree, out-degree, and the graph's average degree.
ii) Directed Neighbor Extraction (DNE): identifying in-neighbors (predecessors) and out-neighbors (successors) of a specific node.
iii) Node Prompt Retrieval (NPR): retrieving the raw prompt of a specified node.
iv) Subgraph Reachability & Path Length (REACH): determining reachability between node pairs and predicting their shortest directed path length.
v) Key Node Identification (KNI): identifying source nodes (zero in-degree) and sink nodes (zero out-degree).
vi) Topological Sorting (TSORT): predicting a valid topological ordering of the nodes.

The LLM is fine-tuned to minimize the standard next-token prediction loss on these tasks, resulting in a graph-oriented LLM.

Pre-training of GNN. Before the final training, we pre-train the GNN using self-supervised learning to ensure it generates robust structural embeddings. For node reconstruction, we aim to recover the initial semantic node embeddings $\mathbf{h}_i^{(0)}$ extracted by SBERT. Let $\mathbf{h}_i^{(L)}$ be the output embedding of node $v_i$ from the GNN. We minimize the Mean Squared Error (MSE):

$$
\mathcal{L}_{\text{Node}} = \frac{1}{|\mathcal{V}|} \sum_{v_{i} \in \mathcal{V}} \left\| \operatorname{Proj}\left(\mathbf{h}_{i}^{(L)}\right) - \mathbf{h}_{i}^{(0)} \right\|^{2} \tag{10}
$$

where $\operatorname{Proj}(\cdot)$ is an auxiliary projection head. For edge reconstruction, we employ a bilinear decoder to predict the existence of directed edges. The probability of an edge from $v_{i}$ to $v_{j}$ is computed as:

$$
\hat{e}_{ij} = \sigma\left(\mathbf{h}_{i}^{(L)\mathrm{T}} \mathbf{W} \mathbf{h}_{j}^{(L)} + b\right) \tag{11}
$$

where $\mathbf{W}$ and $b$ are the learnable weight matrix and bias, respectively, and $\cdot^{\mathrm{T}}$ denotes transposition. We optimize the Binary Cross-Entropy (BCE) loss over all possible node pairs:

$$
\mathcal{L}_{\text{Edge}} = -\frac{1}{|\mathcal{V}|^{2}} \sum_{v_{i}, v_{j} \in \mathcal{V}} \left[ e_{ij} \log \hat{e}_{ij} + \left(1 - e_{ij}\right) \log\left(1 - \hat{e}_{ij}\right) \right] \tag{12}
$$

where $e_{ij} = 1$ if there is an edge from $v_{i}$ to $v_{j}$, and 0 otherwise. Finally, the pre-training loss is $\mathcal{L}_{\mathrm{Pre}} = \mathcal{L}_{\mathrm{Node}} + \mathcal{L}_{\mathrm{Edge}}$.
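The two pretraining objectives of Eqs. (10)-(12) can be sketched in PyTorch as follows; the dimensions, initialization, and reduction choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNPretrainLoss(nn.Module):
    """Sketch of the self-supervised pretraining losses: reconstruct the initial
    SBERT node embeddings from the GNN outputs (MSE, Eq. 10) and reconstruct
    directed edges with a bilinear decoder (BCE over all pairs, Eqs. 11-12)."""
    def __init__(self, d_gnn=256, d_sbert=384):
        super().__init__()
        self.proj = nn.Linear(d_gnn, d_sbert)      # auxiliary projection head
        self.W = nn.Parameter(torch.empty(d_gnn, d_gnn))
        self.b = nn.Parameter(torch.zeros(1))
        nn.init.xavier_uniform_(self.W)

    def forward(self, h_L, h_0, adj):
        # h_L: (N, d_gnn) GNN outputs, h_0: (N, d_sbert) SBERT embeddings,
        # adj: (N, N) binary adjacency matrix with adj[i, j] = e_ij
        loss_node = F.mse_loss(self.proj(h_L), h_0)                   # Eq. (10)
        logits = h_L @ self.W @ h_L.t() + self.b                      # Eq. (11), pre-sigmoid
        loss_edge = F.binary_cross_entropy_with_logits(logits, adj)   # Eq. (12)
        return loss_node + loss_edge                                  # L_Pre

# toy usage on a random 5-node graph
N = 5
loss = GNNPretrainLoss()(torch.randn(N, 256), torch.randn(N, 384),
                         torch.bernoulli(torch.full((N, N), 0.3)))
print(loss.item())
```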
End-to-End Model Training. In the final stage, we freeze the parameters of the sentence-BERT and the graph-oriented LLM to preserve their pre-trained knowledge. First, we employ a prediction loss using BCE to supervise the performance estimation. Given the ground-truth label $y \in \{0,1\}$ (where 1 indicates that the AW successfully completes the task) and the predicted score $\hat{y}$:

$$
\mathcal{L}_{\text{Pred}} = -\frac{1}{S} \sum_{i=1}^{S} \left[ y_{i} \log \hat{y}_{i} + \left(1 - y_{i}\right) \log\left(1 - \hat{y}_{i}\right) \right] \tag{13}
$$

where $S$ is the number of samples in the dataset. Second, to refine the latent space, we apply contrastive learning to make the representations of successful AWs (i.e., those with $y = 1$) cluster more tightly, while pushing them away from unsuccessful ones ($y = 0$). Specifically, we construct the triplet set $\mathcal{T}_T$ for each task $T$ by restricting anchors to AWs with $y = 1$. For each anchor $a$ with $y = 1$, the positive sample $p$ is another successful AW ($y = 1$), whereas the negative sample $n$ is an unsuccessful AW with $y = 0$ that fails to complete the task. The resulting contrastive loss is defined as:

$$
\mathcal{L}_{\mathrm{Con}}^{m} = \frac{1}{|\mathcal{T}_{T}|} \sum_{(a, p, n) \in \mathcal{T}_{T}} \max\left(0,\; d\left(\mathbf{R}_{a}^{m}, \mathbf{R}_{p}^{m}\right) - d\left(\mathbf{R}_{a}^{m}, \mathbf{R}_{n}^{m}\right) + \alpha\right) \tag{14}
$$

where $m \in \{\mathrm{GNN}, \mathrm{LLM}\}$, $d(\cdot, \cdot)$ represents a distance function (implemented as cosine distance), and $\alpha$ is a margin hyperparameter. The final objective function is a weighted sum: $\mathcal{L} = \mathcal{L}_{\mathrm{Pred}} + \frac{\lambda}{2} \left(\mathcal{L}_{\mathrm{Con}}^{\mathrm{GNN}} + \mathcal{L}_{\mathrm{Con}}^{\mathrm{LLM}}\right)$.
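Below is a minimal sketch of the margin-based contrastive loss of Eq. (14) with cosine distance, together with the weighted overall objective; batching and triplet mining are omitted, and the toy tensors are placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_triplet_loss(r_anchor, r_pos, r_neg, margin=0.2):
    """Eq. (14) for one representation branch (GNN or LLM): a margin-based
    triplet loss with cosine distance d(u, v) = 1 - cos(u, v).
    Inputs are (num_triplets, d) tensors of anchor / positive / negative AWs."""
    d_ap = 1.0 - F.cosine_similarity(r_anchor, r_pos, dim=-1)
    d_an = 1.0 - F.cosine_similarity(r_anchor, r_neg, dim=-1)
    return torch.clamp(d_ap - d_an + margin, min=0.0).mean()

def total_loss(y_hat, y, con_gnn, con_llm, lam=1.0):
    """Weighted objective L = L_Pred + (lambda / 2) * (L_Con^GNN + L_Con^LLM)."""
    l_pred = F.binary_cross_entropy(y_hat, y)
    return l_pred + 0.5 * lam * (con_gnn + con_llm)

# toy check with random triplets and predictions
anchors = torch.randn(8, 256)
loss = total_loss(
    torch.rand(16), torch.randint(0, 2, (16,)).float(),
    contrastive_triplet_loss(anchors, torch.randn(8, 256), torch.randn(8, 256)),
    contrastive_triplet_loss(anchors, torch.randn(8, 256), torch.randn(8, 256)),
)
print(loss.item())
```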
# 5 Experiments

In this section, we conduct extensive experiments to investigate the following Research Questions (RQs):

RQ1: How effective is GLOW in predicting the performance of AWs?
RQ2: How does instruction tuning enhance the LLM's capability to understand AWs from plain text?
RQ3: How do different architectural components impact the overall performance of GLOW?
RQ4: How do GNN pretraining and LLM instruction tuning contribute to the performance of GLOW?
RQ5: How do the hyperparameters $\alpha$ and $\lambda$ affect the performance of GLOW?
RQ6: How effectively does GLOW support the downstream task of automatic AW generation?

# 5.1 Experimental Setup

Dataset. We adopt the recently introduced and well-curated FLORA-Bench dataset [Zhang et al., 2025]. It spans five representative datasets frequently studied in the agentic workflow literature, covering three core task types: code generation (HumanEval [Chen, 2021], MBPP [Austin et al., 2021]), mathematical problem solving (GSM8K [Cobbe et al., 2021], MATH [Hendrycks et al., 2021]), and general reasoning (MMLU [Hendrycks et al., 2020]). The AWs are derived from two state-of-the-art automatic AW generation methods: G-Designer (GD) [Zhang et al., 2024a] and AFLOW (AF) [Zhang et al., 2024b]. Table 1 summarizes the dataset statistics.

<table><tr><td>Domain</td><td>Coding-GD</td><td>Coding-AF</td><td>Math-GD</td><td>Math-AF</td><td>Reason-GD</td><td>Reason-AF</td></tr><tr><td>Num. of workflows</td><td>1026</td><td>56</td><td>155</td><td>41</td><td>189</td><td>30</td></tr><tr><td>Avg. of nodes</td><td>5.96</td><td>7.48</td><td>6.12</td><td>5.49</td><td>6.58</td><td>5.87</td></tr><tr><td>Num. of tasks</td><td>57</td><td>233</td><td>97</td><td>99</td><td>2400</td><td>2400</td></tr><tr><td>Num. of samples</td><td>30,683</td><td>7,362</td><td>12,561</td><td>4,059</td><td>453,600</td><td>72,000</td></tr></table>

Table 1: Statistics of the FLORA-Bench dataset used for downstream performance prediction evaluation.

We randomly split each sub-dataset into training, validation, and test sets following an 8:1:1 ratio. In addition, to construct the dataset for instruction tuning the LLM, we aggregated 1,497 AWs from the source pool and randomly selected 200 AWs for evaluation. For data generation, we produced 3 distinct samples for each question type. Consequently, this yielded a specialized corpus containing 23,346 training samples and 3,600 test samples.

Baseline Methods. Following [Zhang et al., 2025], we include five representative GNN-based models as benchmarks: GCN [Kipf, 2016], GAT [Veličković et al., 2017], GCNII [Chen et al., 2020], Graph Transformer (GT) [Shi et al., 2020], and One-For-All (OFA) [Liu et al., 2023], as well as the Agentic Predictor (AP) [Trirat et al., 2025]. In addition, we evaluate an LLM baseline based on Qwen3-1.7B [Yang et al., 2025]$^{1}$, which is fine-tuned to predict performance directly from the AW and task descriptions.

Implementation Details. All experiments are conducted on a server equipped with an Intel Xeon Gold 6330 CPU (38 cores), 256 GB of memory, and an NVIDIA A40 GPU with 48 GB of memory. We utilize all-MiniLM-L6-v2² as the SBERT, Qwen3-1.7B as the base LLM, and a two-layer GAT as the GNN. QLoRA [Dettmers et al., 2023] is employed to reduce memory consumption during LLM fine-tuning. The hyperparameter $\lambda$, which balances the prediction loss and contrastive loss, is set to 1, while $\alpha$, controlling the margin in the contrastive loss, is set to 0.2. The hidden dimension $d$ is 256, and the learning rate is $10^{-4}$. We use the AdamW optimizer [Loshchilov, 2017] to train the model with a mini-batch size of 512. The maximum number of training epochs is 200, with early stopping applied if there is no improvement on the validation set for 30 consecutive epochs. For fairness, the hyperparameters of all compared methods are set according to their original papers. We run each experiment five times and report the mean and standard deviation.

Metrics. We evaluate the method's performance using two metrics. First, accuracy measures the prediction correctness: $\text{Accuracy} = \frac{1}{S}\sum_{i=1}^{S}\mathbb{I}(\hat{y}_i = y_i)$, where $S$ is the number of test samples and $\mathbb{I}(\cdot)$ returns 1 if the condition holds and 0 otherwise. Second, utility assesses the consistency between the predicted and ground-truth rankings of AWs, emphasizing the method's ability to distinguish the relative quality of different AWs. For each AW, the success rate is defined as the proportion of tasks it successfully completes. Let $\mathcal{H}_k$ and $\hat{\mathcal{H}}_k$ denote the sets of top-$k$ AWs selected based on the ground-truth and predicted success rates, respectively. The utility is defined as the mean overlap ratio averaged over various $k$: $\text{Utility} = \frac{1}{K}\sum_{k=1}^{K}\frac{|\mathcal{H}_k \cap \hat{\mathcal{H}}_k|}{k}$, where $K$ is the total number of AWs in the test dataset.
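For clarity, here is a small sketch of both metrics. Binarizing predicted scores at 0.5 for accuracy is our assumption, as is iterating $k$ from 1 to $K$; the toy inputs are placeholders.

```python
import numpy as np

def utility(true_rates, pred_rates):
    """Mean top-k overlap between the AW rankings induced by ground-truth and
    predicted success rates, averaged over k = 1..K.
    `true_rates` / `pred_rates`: dicts mapping an AW id to its success rate."""
    aws = list(true_rates)
    K = len(aws)
    rank_true = sorted(aws, key=lambda a: true_rates[a], reverse=True)
    rank_pred = sorted(aws, key=lambda a: pred_rates[a], reverse=True)
    overlaps = [len(set(rank_true[:k]) & set(rank_pred[:k])) / k for k in range(1, K + 1)]
    return float(np.mean(overlaps))

def accuracy(y_hat, y, threshold=0.5):
    """Prediction accuracy with scores binarized at the given threshold."""
    return float(np.mean((np.asarray(y_hat) >= threshold) == np.asarray(y)))

true_r = {"aw1": 0.9, "aw2": 0.7, "aw3": 0.2}
pred_r = {"aw1": 0.8, "aw2": 0.4, "aw3": 0.5}
print(utility(true_r, pred_r), accuracy([0.9, 0.3, 0.6], [1, 0, 1]))
```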
Metrics. We evaluate the method's performance using two metrics. First, accuracy measures the prediction correctness: Accuracy $= \frac{1}{S}\sum_{i=1}^{S}\mathbb{I}(\hat{y}_i = y_i)$, where $S$ is the number of test samples and $\mathbb{I}(\cdot)$ returns 1 if the condition holds and 0 otherwise. Second, utility assesses the consistency between the predicted and ground-truth rankings of AWs, emphasizing the method's ability to distinguish the relative quality of different AWs. For each AW, the success rate is defined as the proportion of tasks it successfully completes. Let $\mathcal{H}_k$ and $\hat{\mathcal{H}}_k$ denote the sets of top-$k$ AWs selected based on the ground-truth and predicted success rates, respectively. The utility is defined as the mean overlap ratio averaged over various $k$: Utility $= \frac{1}{K}\sum_{k=1}^{K}\frac{|\mathcal{H}_k \cap \hat{\mathcal{H}}_k|}{k}$, where $K$ is the total number of AWs in the test dataset.
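A minimal NumPy sketch of the two metrics defined above (variable names are ours): `y_pred` and `y_true` are per-sample predicted and ground-truth labels, while `success_pred` and `success_true` hold per-AW predicted and ground-truth success rates.

```python
import numpy as np

def accuracy(y_pred, y_true):
    # Fraction of test samples whose predicted label matches the ground truth
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))

def utility(success_pred, success_true):
    """Mean top-k overlap between predicted and ground-truth AW rankings,
    averaged over k = 1..K, where K is the number of AWs in the test set."""
    success_pred = np.asarray(success_pred)
    success_true = np.asarray(success_true)
    K = len(success_true)
    # Indices of AWs sorted by descending success rate
    rank_pred = np.argsort(-success_pred)
    rank_true = np.argsort(-success_true)
    overlaps = []
    for k in range(1, K + 1):
        top_pred = set(rank_pred[:k])
        top_true = set(rank_true[:k])
        overlaps.append(len(top_pred & top_true) / k)
    return float(np.mean(overlaps))
```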
# 5.2 Performance Evaluation (RQ1)

The quantitative results are summarized in Table 2. As observed, GLOW consistently outperforms all baseline methods in both accuracy and utility across all domains, surpassing the second-best baseline, AP, by $1.5\%$ in accuracy and $2.0\%$ in utility on average. This demonstrates the robustness of GLOW and confirms that it is highly effective at identifying high-quality workflows, making it a reliable proxy for downstream automatic AW generation. Traditional GNN methods (e.g., GCN, GAT) and AP perform well in capturing structural patterns but struggle to fully model the semantic nuances of agent roles. In contrast, the LLM-based baseline (Qwen3) exhibits strong semantic reasoning capabilities but is limited in its ability to directly process raw graph structures. GLOW bridges this gap by integrating the structural inductive bias of GNNs with the semantic expressiveness of LLMs, yielding superior performance.

# 5.3 Impact of Instruction Tuning on LLM (RQ2)

To answer RQ2, we compare the zero-shot performance of the vanilla base LLM against our fine-tuned graph-oriented LLM on the dataset introduced in Section 5.1. The results are reported in Table 3. The graph-oriented LLM achieves a near-perfect average accuracy of $99.1\%$, significantly outperforming the base LLM ($65.9\%$). This evidence shows that smaller LLMs such as Qwen3-1.7B, despite strong linguistic reasoning, cannot inherently parse serialized graphs or capture topological dependencies without adaptation. By adapting the LLM into a graph-oriented expert, we ensure that the semantic features fed to the downstream GLOW predictor are not mere textual embeddings, but are deeply grounded in the AW topology and the interactions among agents.

# 5.4 Ablation Studies

Architectural Components (RQ3). To assess the contribution of each architectural component, we compare GLOW with variants in which specific feature components are removed. As shown in rows 1-3 of Table 4, removing any component leads to a performance degradation. The removal of $\mathbf{R}^{\mathrm{GNN}}$ has the most significant impact, causing an average drop of $2.2\%$ in accuracy and $2.4\%$ in utility across the six domains. Similarly, excluding $\mathbf{R}^{\mathrm{LLM}}$ results in an average decline of $1.2\%$ in accuracy and $2.0\%$ in utility. This quantitative evidence suggests that while structural information is paramount, the synergy between topological features and semantic reasoning is essential for optimal performance. The absence of type embeddings (w/o $\mathbf{E}^{\mathrm{Type}}$) results in an average decline of $1.7\%$ in accuracy and $2.5\%$ in utility. This substantial drop indicates that explicitly distinguishing representation types through learnable embeddings greatly improves the fusion module's ability to integrate heterogeneous information.

GNN Pretraining and LLM Instruction Tuning (RQ4). As shown in rows 4-6 of Table 4, the variant without LLM instruction tuning (w/o P. LLM) and the variant without GNN pretraining (w/o P. GNN) both exhibit the expected performance degradation. Completely removing both GNN pretraining and LLM instruction tuning (w/o P. GNN & LLM) leads to an average drop of $1.6\%$ in accuracy and $2.1\%$ in utility across the six domains. These results confirm that initializing the feature extractors with domain-specific knowledge substantially improves their generalization ability.

Figure 4: Impact of hyperparameters $\lambda$ and $\alpha$ on model performance.

Figure 5: Comparison of time consumption and final AW performance across different AW evaluation methods in AFLOW. (a) HumanEval (b) MBPP (c) MMLU.
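For reference, the average drops quoted in the ablation discussion above can be recomputed directly from Table 4; a small sketch for the w/o $\mathbf{R}^{\mathrm{GNN}}$ variant (values copied from the table, Acc. and Uti. columns alternating) is shown below.

```python
import numpy as np

# Acc./Uti. values copied from Table 4, alternating per domain:
# Coding-GD, Coding-AF, Math-GD, Math-AF, Reason-GD, Reason-AF
glow     = np.array([85.1, 77.3, 84.6, 75.4, 64.4, 63.5, 81.3, 75.1, 73.8, 66.1, 87.0, 90.5])
wo_r_gnn = np.array([83.8, 76.0, 82.4, 73.2, 62.4, 61.4, 77.4, 72.1, 72.0, 63.2, 85.0, 87.6])

diff = glow - wo_r_gnn
print(f"w/o R^GNN: avg accuracy drop {diff[0::2].mean():.1f}%, "
      f"avg utility drop {diff[1::2].mean():.1f}%")
# -> 2.2% accuracy and 2.4% utility, matching the numbers reported above
```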
# 5.5 Hyperparameter Study (RQ5)

We examine GLOW's sensitivity to two key hyperparameters: the loss weight $\lambda$, which balances the prediction and contrastive losses, and the margin $\alpha$, which controls the contrastive separation. Figure 4 reports the accuracy under different settings. Notably, the case $\lambda = 0$ corresponds to the ablation of the contrastive loss. As expected, both hyperparameters follow a consistent trend in which accuracy first improves and then declines when pushed to extreme values. Specifically, the introduction of the contrastive loss is beneficial, with performance peaking when $\lambda \in [0.5, 1.0]$ and $\alpha \in [0.2, 0.3]$. Importantly, the accuracy variation within these ranges is small, indicating that GLOW is robust and not overly sensitive to precise hyperparameter choices. These results suggest that $\lambda = 1.0$ and $\alpha = 0.2$ yield reliable performance, and we therefore adopt these values.

# 5.6 Impact on Automatic AW Generation (RQ6)

We evaluate the practical effectiveness of GLOW by integrating it into the automatic AW generation framework AFLOW. We compare GLOW against four alternatives: i) 'Random', which predicts an AW's performance uniformly at random; ii) the standard 'GCN'-based predictor; iii) the 'Agentic Predictor' (AP); and iv) 'Ground Truth', which obtains the actual performance by executing the AW. The reported 'Score' metric reflects the success rate of the final AWs generated by AFLOW on the test dataset.

As shown in Figure 5, GLOW consistently outperforms the Random, GCN, and AP baselines, owing to its more accurate performance predictions. Its performance closely approaches the ceiling established by Ground Truth, demonstrating that GLOW can effectively guide AFLOW toward high-quality AWs with minimal performance loss. Moreover, compared with the computationally expensive Ground Truth, which requires repeated LLM calls, GLOW substantially accelerates AFLOW's optimization process, reducing time consumption by $98.7\%$ while incurring only a 0.031 decrease in score on average across the three datasets. Compared with Random, GCN, and AP, GLOW's more reliable performance estimation also helps AFLOW converge slightly faster, as observed on datasets such as MBPP and MMLU. These results confirm that GLOW is an efficient and reliable proxy for accelerating automatic AW generation.

# 6 Conclusion

In this paper, we introduce GLOW, which couples a specialized graph-oriented LLM with a structural GNN through a dual-branch architecture and contrastive learning, enabling it to capture both the interaction topology and the agent-level semantics of AWs. Experimental results show that GLOW achieves state-of-the-art prediction accuracy and reduces the time cost of automatic AW generation methods by two orders of magnitude, while incurring only minimal performance trade-offs.

# Ethical Statement

There are no ethical issues.
# PHANTOM: Progressive High-fidelity Adversarial Network for Threat Object Modeling

Jamal Al-Karaki $^{1,2}$, Muhammad Al-Zafar Khan $^{1*}$, Rand Derar Mohammad Al Athamneh $^{1}$

$^{1}$ College of Interdisciplinary Studies, Zayed University, Abu Dhabi, UAE. $^{2}$ College of Engineering, The Hashemite University, Zarqa, Jordan.

*Corresponding author(s). E-mail(s): Muhammad.Khan@zu.ac.ae; Contributing authors: Jamal.Al-Karaki@zu.ac.ae

# Abstract

The scarcity of high-quality cyberattack datasets poses a fundamental challenge to developing robust machine learning-based intrusion detection systems. Real-world attack data is difficult to obtain due to privacy regulations, organizational reluctance to share breach information, and the rapidly evolving threat landscape. This paper introduces PHANTOM (Progressive High-fidelity Adversarial Network for Threat Object Modeling), a novel multi-task adversarial variational framework specifically designed for generating synthetic cyberattack datasets. PHANTOM addresses the unique challenges of cybersecurity data through three key innovations: Progressive training that captures attack patterns at multiple resolutions, dual-path learning that combines VAE stability with GAN fidelity, and domain-specific feature matching that preserves temporal causality and behavioral semantics. We implement a Multi-Task Adversarial VAE with Progressive Feature Matching (MAV-PFM) architecture that incorporates specialized loss functions for reconstruction, adversarial training, feature preservation, classification accuracy, and cyber-specific constraints. Experimental validation on a realistic synthetic dataset of 100 000 network traffic samples across five attack categories demonstrates that PHANTOM achieves $98\%$ weighted accuracy when used to train intrusion detection models tested on real attack samples. Statistical analyses, including kernel density estimation, nearest neighbor distance distributions, and $t$-SNE visualizations, confirm that generated attacks preserve the distributional properties, diversity, and class separability of authentic cyberattack patterns. However, results also reveal limitations in generating rare attack types, highlighting the need for specialized handling of severely imbalanced classes. This work advances the state-of-the-art in synthetic cybersecurity data generation, providing a foundation for training more robust threat detection systems while maintaining privacy and security.

Keywords: Synthetic Cyberattack Generation, Adversarial Generative Modeling, Cybersecurity Data Scarcity, Intrusion Detection Augmentation

# 1 Introduction

The exponential growth of cyber threats in recent years has created an urgent demand for robust cybersecurity systems capable of detecting and mitigating sophisticated attacks [1-3]. Machine Learning (ML) and Deep Learning (DL) models have emerged as powerful tools for threat detection [4, 5], enabling automated analysis of network traffic [6], system logs [7], and user behavior patterns [8]. However, the effectiveness of these models hinges critically on the availability of diverse, representative training data that captures the full spectrum of attack vectors and techniques employed by adversaries.

Despite this need, obtaining high-quality cyberattack datasets remains one of the most significant challenges in cybersecurity research and practice. Real-world attack data is inherently scarce due to several factors:

1. Organizations are often reluctant to share sensitive breach information due to legal and reputational concerns [9].
2. Privacy regulations restrict the dissemination of network traffic containing potentially identifiable information [10].
3. The rapidly evolving threat landscape means that historical datasets quickly become obsolete [11].
Additionally, even when attack data is available, it often suffers from severe class imbalance, with benign traffic vastly outnumbering malicious samples, leading to biased models that struggle to detect novel or rare attack patterns.

Synthetic data generation has emerged as a promising solution to address these limitations [12, 13]. By artificially creating realistic cyberattack samples, researchers can augment existing datasets, balance class distributions, and generate examples of rare or emerging threats that may not yet exist in operational environments. However, traditional synthetic data generation techniques, such as rule-based simulation and simple statistical sampling, often produce oversimplified attack patterns that lack the complexity and variability of real-world threats. Models trained on such synthetic data frequently exhibit poor generalization when deployed in production environments, as they fail to capture the nuanced behavioral characteristics of actual attackers.

Recent advances in generative modeling, particularly Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), offer a paradigm shift in synthetic data generation. These deep generative models learn the underlying probability distribution of real data and can generate novel samples that preserve the statistical properties and complex patterns of the original dataset. GANs, through their adversarial training mechanism between a generator and a discriminator network, have demonstrated remarkable success in generating high-fidelity synthetic data across various domains, including image synthesis, natural language processing, and time-series forecasting. Similarly, VAEs utilize probabilistic latent representations to facilitate the controlled generation of diverse samples while preserving the interpretability of the learned feature space.

In this paper, we propose specialized GAN and VAE architectures tailored specifically for generating high-fidelity synthetic cyberattack datasets. Our approach addresses the unique challenges of cybersecurity data, including temporal dependencies in attack sequences, multi-modal feature distributions spanning categorical and continuous variables, and the need to preserve attack semantics while introducing realistic variations. We develop novel architectural components and training strategies that enhance the diversity, realism, and utility of generated attack samples for downstream security applications.

This work is divided as follows: In Sec. 2, we describe analogous attempts to address the major challenge that we set out to address in our research. In Sec. 3, we describe our proposed PHANTOM framework and the mechanics of how the algorithm works. In Sec. 4, we describe the experiment performed, first by describing the dataset used, second by explaining the choice and motivation for the hyperparameter values selected in the algorithm implementation, and finally by presenting the results obtained through experimentation. In Sec. 5, we reflect upon what was achieved in this work, its drawbacks, and provide direction for future work.

# 2 Related Work

In [14], the authors address the key problem for critical space systems: The lack of high-fidelity, shareable datasets that include both nominal and malicious activity. Specifically, they propose a GAN-based system that creates realistic synthetic cyberattack datasets by training on small samples of real-world nominal and malicious data and then using the generator to produce new, high-fidelity synthetic samples.
They evaluate the realism of the generated data and test its usefulness across three datasets.

In [15], the authors focus on improving cybersecurity in Internet of Things (IoT) and Wireless Sensor Networks (WSNs) by using GANs. Due to the rise of sophisticated threats, especially DDoS and spoofing attacks, traditional security systems are no longer sufficient. To address this, the authors propose a new GAN-based model, called Dynamic Adaptive Threat Simulation GAN (DATS-GAN), which generates realistic synthetic cyberattack scenarios that mimic real-world attacks, thereby enabling security systems to better detect, learn from, and adapt to evolving threats. The novelty in this work lies not only in its focus on generating such datasets but also in its ability to dynamically detect cybersecurity attacks.

In [16], the authors address cybersecurity challenges in modern power systems, particularly the threat of stealthy false data injection (FDI) attacks that can cause operational problems such as congestion and voltage instability, by proposing a defense framework that uses Wasserstein Generative Adversarial Networks (WGANs) to generate synthetic Phasor Measurement Unit (PMU) data. The workflow creates uncertainty, making it more difficult for attackers to understand, predict, or exploit the system. This work is innovative because it strategically injects realistic synthetic data into the communication stream.

# 3 The Proposed Approach

The generation of high-fidelity synthetic cyberattack data presents unique challenges that surpass those of conventional image or text synthesis. Cyberattack patterns exhibit complex temporal dependencies, causal relationships between attack stages, multi-scale features (from packet-level to campaign-level), and highly imbalanced class distributions. To address these challenges holistically, below we introduce PHANTOM (Progressive High-fidelity Adversarial Network for Threat Object Modeling), a multi-task adversarial variational framework specifically designed for synthesizing cyberattack data.

Our approach is predicated on three fundamental insights about cyberattack data generation:

1. Cyberattacks are hierarchical and manifest at multiple resolutions simultaneously, from low-level network packet features to high-level behavioral patterns.
2. Attack semantics are causal, which implies that actions follow logical sequences that must be preserved in synthetic data to maintain realism and utility.
3. Fidelity must be multi-dimensional. This translates to temporal, behavioral, and structural aspects that must all be preserved for synthetic data to be operationally useful.

PHANTOM addresses these insights through an integrated architecture that combines the stability of VAEs with the high-fidelity generation capabilities of GANs, incorporating domain-specific feature preservation mechanisms. At its core, PHANTOM implements a Multi-Task Adversarial VAE with Progressive Feature Matching (MAV-PFM), which operates through three synergistic components:

1. Unlike conventional GANs that operate at fixed resolutions, PHANTOM employs a progressive training strategy that begins with coarse-grained attack features and gradually incorporates finer-grained details. This hierarchical approach mirrors how security analysts investigate incidents—from broad indicators to specific artifacts—and ensures that both macro- and micro-patterns are faithfully reproduced.
2. The VAE component provides stable reconstruction and meaningful latent representations, while the GAN component ensures high perceptual fidelity. Crucially, both pathways share the same generator, enabling knowledge transfer between reconstruction and pure generation tasks. This dual-path approach mitigates mode collapse, which is a critical failure mode in cybersecurity contexts where rare attack types must still be generated.
3. We introduce specialized feature extractors that encode domain-specific invariants, including temporal causality, attack graph structures, and behavioral sequences. These extractors inform a novel feature matching loss that ensures synthetic attacks maintain the essential characteristics of their real counterparts, not just statistical similarity but operational realism.

Algorithm 1: PHANTOM
1 input:
2   real-world cyberattack dataset $\mathcal{D} = \{x_i,y_i\}$
3   latent dimension $Z$
4   batch size $m$
5   progressive levels $L$
6   feature extractors $\mathcal{F} = \{F_{\mathrm{network}},F_{\mathrm{temporal}},F_{\mathrm{behavioral}}\}$
7 initialize: $G,D,E,C$ with weights $\theta_G,\theta_D,\theta_E,\theta_C$, replay buffer $\mathcal{B}$
8 for current_level $l = 1:L$ do
9   $\alpha_l \gets$ fade_in_factor($l$)
10   $\mathcal{D}_l \gets$ resize_samples()
11   for iteration $t$ do
12     sample batch: $\{x_r,y_r\} \sim \mathcal{D}_l$, $z\sim \mathcal{N}(0,I)$, $\epsilon \sim \mathcal{N}(0,\sigma^2 I)$
13     encode: $\mu ,\sigma = E(x_r)$, $z_c = \mu +\sigma \odot \epsilon$
14     // generate
15     $x_{\mathrm{recon}} = G(z_c,y_r,l,\alpha)$  ▷ reconstructed
16     $y_{s} \sim p(y)$, $x_{\mathrm{syn}} = G(z,y_{s},l,\alpha)$  ▷ synthesized
17     // extract features
18     compute $F_{r},F_{\mathrm{recon}},F_{\mathrm{syn}}$ using $\mathcal{F}$
19     // compute losses
20     $\mathcal{L}_{\mathrm{recon}} = ||x_r - x_{\mathrm{recon}}||^2 +\beta \,\mathrm{KL}(q\,||\,p)$  ▷ VAE
21     $\mathcal{L}_{\mathrm{adv}}^{G} = -\mathbb{E}[D(x_{\mathrm{syn}})]$  ▷ generator
22     $\mathcal{L}_{\mathrm{adv}}^{D} = \mathbb{E}[D(x_{\mathrm{syn}})] - \mathbb{E}[D(x_r)] + \lambda_{\mathrm{gp}}\mathcal{R}_{\mathrm{gp}}$  ▷ discriminator
23     $\mathcal{L}_{\mathrm{fm}} = \sum_i\omega_i||\mathcal{F}_r^{(i)} - \mathcal{F}_{\mathrm{syn}}^{(i)}||$  ▷ feature matching
24     $\mathcal{L}_{\mathrm{class}} = \mathrm{CE}[C(x_{\mathrm{syn}}),y_s] + \mathrm{CE}[C(x_r),y_r]$  ▷ classification
25     $\mathcal{L}_{\mathrm{cyber}} = \mathcal{L}_{\mathrm{temporal}} + \mathcal{L}_{\mathrm{causal}} + \mathcal{L}_{\mathrm{div}}$  ▷ cyber-specific loss
26     // updates
27     $G,E\gets \nabla (\lambda_1\mathcal{L}_{\mathrm{adv}}^G +\lambda_2\mathcal{L}_{\mathrm{recon}} + \lambda_3\mathcal{L}_{\mathrm{fm}} + \lambda_4\mathcal{L}_{\mathrm{class}} + \lambda_5\mathcal{L}_{\mathrm{cyber}})$
28     $D\gets \nabla \mathcal{L}_{\mathrm{adv}}^{D}$
29     $C\gets \nabla \mathcal{L}_{\mathrm{class}}$
30     update $\mathcal{B}$ with $x_{\mathrm{syn}}$
31   end
32   // stabilization
33   freeze $D$, refine $G$ and $E$ with $||x_r - G[E(x_r)]||_1$
34 end
35
36 return Generator $G$, discriminator $D$, encoder $E$, classifier $C$
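To make the generation step concrete, the following is a minimal PyTorch-style sketch of the dual paths in lines 12-16 of Algorithm 1, assuming simple fully connected modules and one-hot class conditioning; the class names, layer sizes, and the omission of the progressive level and fade-in arguments are our simplifications, not the authors' implementation.

```python
# Minimal sketch of PHANTOM's dual generation paths (Algorithm 1, lines 12-16):
# the VAE path reconstructs real attacks, the GAN path synthesizes new ones.
# Module sizes and names are illustrative only.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, LATENT_DIM = 40, 5, 64  # from Sec. 4.1 and Tab. 1

class Encoder(nn.Module):
    """E(x) -> (mu, log_var) parameterizing the latent distribution."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT_DIM)
        self.log_var = nn.Linear(128, LATENT_DIM)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

class Generator(nn.Module):
    """G(z, y) -> x; shared by the VAE (reconstruction) and GAN (synthesis) paths."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(LATENT_DIM + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, N_FEATURES),
        )

    def forward(self, z, y_onehot):
        return self.body(torch.cat([z, y_onehot], dim=1))

def dual_path_step(E, G, x_real, y_real_onehot, y_sampled_onehot):
    """One forward pass of both paths (progressive level / fade-in omitted)."""
    mu, log_var = E(x_real)
    z_c = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization
    x_recon = G(z_c, y_real_onehot)                             # VAE path (line 15)
    z = torch.randn(x_real.size(0), LATENT_DIM)
    x_syn = G(z, y_sampled_onehot)                              # GAN path (line 16)
    return x_recon, x_syn, mu, log_var
```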
At a high level, Algorithm 1 operates by training progressively across multiple resolution levels, starting with coarse attack features, such as packet headers, and gradually incorporating finer details, including behavioral patterns. At each level, the algorithm processes batches of real cyberattack data $\mathcal{D}_l$, which contain attack samples $x_r$ along with their corresponding labels $y_r$. The encoder $E$ compresses real attacks into latent distributions $(\mu, \sigma)$, enabling reconstruction via the generator $G$. Simultaneously, $G$ synthesizes new attacks from random noise $\mathbf{z}$ conditioned on attack parameters $\mathbf{y}$. The discriminator $D$ distinguishes real from synthetic samples, while the classifier $C$ ensures generated attacks match their intended categories. Three specialized feature extractors ($F_{\mathrm{network}}, F_{\mathrm{temporal}}, F_{\mathrm{behavioral}}$) capture network, temporal, and behavioral characteristics for domain-specific feature matching.

The outer progressive loop (lines 8-34) implements hierarchical multi-resolution training, starting with coarse network features and gradually introducing finer behavioral details through the $\alpha$ parameter. At each level, batches are sampled and processed through dual generation paths: The VAE path encodes real attacks into latent distributions $(\mu, \sigma)$ for reconstruction, while the GAN path synthesizes novel attacks from random noise conditioned on attack parameters $y_{s}$. This dual approach ensures both stable learning through reconstruction and high-fidelity generation through adversarial training. The feature extraction block applies domain-specific transforms to capture network topology, temporal patterns, and behavioral sequences—critical for maintaining cyberattack semantics.

Five specialized loss functions collectively optimize different aspects of cyberattack synthesis. The VAE loss $\mathcal{L}_{\mathrm{recon}}$ ensures latent space structure and reconstruction fidelity, while $\mathcal{L}_{\mathrm{adv}}$ implements Wasserstein adversarial training with gradient penalty for stable GAN dynamics. Crucially, $\mathcal{L}_{\mathrm{fm}}$ preserves domain-specific characteristics by matching features across real and synthetic data in the network, temporal, and behavioral subspaces. $\mathcal{L}_{\mathrm{class}}$ maintains attack type accuracy, while $\mathcal{L}_{\mathrm{cyber}}$ enforces cyber-specific constraints, including temporal consistency across attack stages, causal relationships between attack actions, and diversity in generated threats. The multi-task update balances these objectives through the $\lambda$ weights, while the replay buffer prevents discriminator overfitting. Finally, the stabilization phase (line 33) refines the generator-encoder pair without adversarial pressure, ensuring convergence at each resolution level before progression.

Fig. 1 Network architecture diagram of the PHANTOM algorithm (Algorithm 1).

In Fig. 1, we see a graphical rendition of Algorithm 1. The core system features parallel data flows: A VAE reconstruction path, where real attacks are encoded into latent distributions and reconstructed to ensure stability, and a GAN generation path, where random noise is transformed into novel synthetic attacks. These flows converge through a shared conditional generator that preserves attack semantics, while domain-specific feature extractors (network, temporal, behavioral) enforce cyberattack invariants through feature matching losses.
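The loss computation in lines 20-27 can be sketched in the same hedged spirit. The λ and gradient-penalty values below follow Tab. 1 (Sec. 4.2); the feature extractors are passed in as plain callables, labels are integer class indices, and the cyber-specific term is left as a placeholder because its exact form is not spelled out here.

```python
# Hedged sketch of Algorithm 1's losses: VAE reconstruction + KL, WGAN-GP critic
# loss, feature matching, classification, and the lambda-weighted multi-task
# generator/encoder objective. Not the authors' released code.
import torch
import torch.nn.functional as F

LAMBDAS = dict(adv=1.0, recon=10.0, fm=5.0, cls=1.0, cyber=0.1)  # lambda_1..lambda_5 (Tab. 1)
LAMBDA_GP = 10.0

def gradient_penalty(D, x_real, x_syn):
    """WGAN-GP penalty on random interpolates between real and synthetic batches."""
    eps = torch.rand(x_real.size(0), 1, device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_syn).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

def discriminator_loss(D, x_real, x_syn):
    x_syn = x_syn.detach()  # the critic update does not backprop into G
    return D(x_syn).mean() - D(x_real).mean() + LAMBDA_GP * gradient_penalty(D, x_real, x_syn)

def generator_encoder_loss(D, C, extractors, x_real, x_recon, x_syn, y_real, y_syn, mu, log_var):
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    recon = F.mse_loss(x_recon, x_real) + kl                       # L_recon (beta = 1.0)
    adv = -D(x_syn).mean()                                         # L_adv^G
    fm = sum(F.l1_loss(f(x_syn), f(x_real)) for f in extractors)   # L_fm, equal weights
    cls = F.cross_entropy(C(x_syn), y_syn) + F.cross_entropy(C(x_real), y_real)  # L_class
    cyber = x_syn.new_zeros(())  # stand-in for the temporal/causal/diversity terms
    return (LAMBDAS["adv"] * adv + LAMBDAS["recon"] * recon + LAMBDAS["fm"] * fm
            + LAMBDAS["cls"] * cls + LAMBDAS["cyber"] * cyber)
```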
The architecture is governed by a multi-objective loss function combining reconstruction fidelity (MSE + KL divergence), adversarial competition (WGAN-GP), attack classification accuracy, and cyber-specific constraints (temporal consistency, causal relationships, diversity), all orchestrated through progressive multi-resolution training that gradually refines attack patterns from coarse to fine-grained features. This enables the generation of diverse, realistic cyberattacks while maintaining the statistical and operational properties of real threat data.

We observe the spatial complexity of Algorithm 1 to be $\mathcal{O}(m\cdot (|x| + Z) + P)$, where $m$ is the batch size, $|x|$ is the input dimension, $Z$ is the latent space dimension, and $P$ is the total number of parameters across $G,D,E,C,\mathcal{F}$. Similarly, we observe the temporal complexity to be $\mathcal{O}(T\cdot L\cdot m\cdot (|x|^2 +Z^2))$, where $T$ is the number of iterations per level and $L$ is the number of progressive levels. The spatial complexity grows linearly, $\mathcal{O}(N)$, with model capacity and batch processing requirements, making large-scale cyberattack synthesis memory-intensive but manageable with modern GPU architectures. The temporal complexity exhibits a quadratic dependence, $\mathcal{O}(N^2)$, on feature and latent dimensions due to attention mechanisms in the cyber-specific extractors; however, progressive training mitigates this by gradually increasing resolution across levels.

# 4 Experiments

# 4.1 “Real” Dataset

To successfully test Algorithm 1, we generated a synthetic cyberattack dataset using this code, consisting of 100 000 network traffic samples across five distinct attack categories, each characterized by 40 engineered features. The dataset exhibits a deliberate, realistic class imbalance mirroring real-world network environments: $70\%$ benign traffic (70 000 samples), $15\%$ Denial-of-Service (DoS) attacks (15 000 samples), $10\%$ probing activities (10 000 samples), $4\%$ remote-to-local (R2L) attacks (4 000 samples), and only $1\%$ user-to-root (U2R) privilege escalation attempts (1 000 samples). Each class contains statistically distinct feature patterns derived from domain knowledge; for instance, DoS attacks show exceptionally high source byte volumes and connection counts, while U2R attacks demonstrate prolonged durations and specific protocol usage. The features include transformed network metrics (log-scaled byte counts and normalized rates), categorical encodings of protocol types and service flags, and engineered attributes such as failed login counts and session continuity measures, providing a comprehensive representation of attack signatures.
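As a concrete illustration of the class mix just described, the sketch below draws a toy benchmark with the same 70/15/10/4/1 split; the two features and their class-conditional means are illustrative stand-ins, not the 40 engineered features used in the paper.

```python
# Minimal NumPy sketch of a benchmark with the stated class proportions and a few
# class-conditional feature tendencies. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
CLASSES = ["benign", "dos", "probe", "r2l", "u2r"]
PROPS = [0.70, 0.15, 0.10, 0.04, 0.01]

labels = rng.choice(len(CLASSES), size=N, p=PROPS)

# Class-conditional means for (log source bytes, connection count): DoS skews high
# on both; U2R skews toward long-lived, low-volume sessions.
mean_table = np.array([[5.0, 10.0],   # benign
                       [9.0, 200.0],  # dos
                       [4.0, 80.0],   # probe
                       [6.0, 5.0],    # r2l
                       [7.0, 2.0]])   # u2r
features = mean_table[labels] + rng.normal(scale=1.0, size=(N, 2))

print({c: int((labels == i).sum()) for i, c in enumerate(CLASSES)})
```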
To substantiate our choice to generate the dataset synthetically, it is worth noting that real-world, labeled cyberattack data of this scale and diversity is exceptionally difficult to obtain due to multiple constraints. Firstly, organizations that experience attacks rarely disclose detailed network logs due to security policies, regulatory concerns (such as GDPR and HIPAA), and reputational risks. Secondly, even when incident data is shared through threat intelligence platforms, it is typically anonymized, incomplete, or lacks ground-truth labels; security analysts often cannot definitively categorize every attack, especially novel or sophisticated threats. Thirdly, the extreme class imbalance observed here (U2R attacks constituting only $1\%$ of the samples) reflects reality but creates data scarcity for training robust ML models; collecting sufficient samples of rare attacks would require monitoring thousands of networks over years. Finally, operational networks cannot ethically be attacked for research purposes, making controlled experimentation with real attacks impossible.

It is worth mentioning that the synthetic generation approach enables reproducible cybersecurity research while addressing critical gaps in available data. By programmatically creating attacks with known ground truth, one can validate detection algorithms without violating privacy or raising legal concerns. The controlled class distribution enables a systematic investigation of imbalance-handling techniques, while feature engineering incorporates domain expertise on attack signatures. Importantly, it is our intention that this dataset serves as a benchmark for evaluating synthetic data generation methods like PHANTOM (Algorithm 1): if a GAN can reproduce the statistical properties and class separability of this known distribution, it demonstrates the capability to generate useful synthetic data where real data is unavailable. The inclusion of realistic noise, protocol distributions, and attack-specific patterns creates a challenging testbed that bridges the gap between academic research and operational security needs, which we believe enables advancement in intrusion detection without compromising real network security or privacy.

# 4.2 Hyperparameter Values

In Tab. 1, we describe all the hyperparameters used for testing Algorithm 1 and provide a rationale for their choice.
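For convenience, the same settings can also be collected in a plain configuration object; the dictionary below is an illustrative transcription of Tab. 1 (its name and grouping are ours), not code released with the paper.

```python
# Illustrative transcription of the Tab. 1 hyperparameters into a Python config.
PHANTOM_CONFIG = {
    "latent_dim": 64,            # Z
    "batch_size": 64,            # m
    "progressive_levels": 1,     # L (3-4 in the full implementation)
    "iters_per_level": 500,      # reduced for demonstration
    "kl_weight": 1.0,            # beta
    "gradient_penalty": 10.0,    # lambda_gp
    "loss_weights": {"adv": 1.0, "recon": 10.0, "fm": 5.0, "cls": 1.0, "cyber": 0.1},
    "learning_rate": 2e-4,
    "adam_betas": {"D": (0.0, 0.9), "G_E": (0.0, 0.9), "C": (0.5, 0.9)},
    "feature_extractor_dim": 32,
    "feature_matching_weights": [1.0, 1.0, 1.0],
}
```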
Table 1 PHANTOM Algorithm Hyperparameters and Their Rationale <table><tr><td>Parameter</td><td>Value</td><td>Reason for Choice</td></tr><tr><td>Latent Dimension</td><td>Z=64</td><td>Balances expressiveness (capturing complex attack patterns) and computational efficiency. Common choice in VAE/GAN literature for tabular data.</td></tr><tr><td>Batch Size</td><td>m=64</td><td>Provides stable gradient estimates while fitting within GPU memory constraints. Powers of 2 optimize memory alignment on GPUs.</td></tr><tr><td>Progressive Levels</td><td>L=1</td><td>Simplified for initial testing; full implementation utilizes 3-4 levels for hierarchical feature learning (packet→flow→session patterns).</td></tr><tr><td>Iterations per Level</td><td>imax=500</td><td>Reduced for demonstration; typical training requires 5 000-10 000 iterations per level for convergence.</td></tr><tr><td>KL Weight</td><td>β=1.0</td><td>Standard β-VAE setting balancing reconstruction fidelity and latent space regularization for disentangled representations.</td></tr><tr><td>Gradient Penalty Weight</td><td>λgp=10.0</td><td>Standard WGAN-GP value ensuring Lipschitz continuity of discriminator for stable adversarial training.</td></tr><tr><td>Adversarial Weight</td><td>λ1=1.0</td><td>Base weight for generator's adversarial loss relative to other objectives.</td></tr><tr><td>Reconstruction Weight</td><td>λ2=10.0</td><td>Prioritizes VAE reconstruction to ensure synthetic samples preserve essential attack characteristics.</td></tr><tr><td>Feature Matching Weight</td><td>λ3=5.0</td><td>Emphasizes preservation of domain-specific features (network, temporal, behavioral) crucial for cyberattack realism.</td></tr><tr><td>Classification Weight</td><td>λ4=1.0</td><td>Ensures generated attacks are classifiable with correct labels, maintaining attack type integrity.</td></tr><tr><td>Cyber Loss Weight</td><td>λ5=0.1</td><td>Lower weight for domain-specific losses (temporal consistency, causality) during initial training phases.</td></tr><tr><td>Learning Rate</td><td>η=0.0002</td><td>Standard GAN learning rate from DCGAN/WGAN literature, providing stable convergence without oscillations.</td></tr><tr><td>Discriminator Beta-1</td><td>β1D=0.0</td><td>WGAN-GP recommendation (first momentum coefficient) for discriminator to prevent mode-seeking behavior.</td></tr><tr><td>Generator/Encoder Beta-1</td><td>β1G, β1E=0.0</td><td>Consistent with WGAN-GP architecture for stable generator training against critic.</td></tr><tr><td>Classifier Beta-1</td><td>β1C=0.5</td><td>Standard Adam setting for auxiliary classifier to balance exploration and exploitation.</td></tr><tr><td>Beta-2 (all models)</td><td>β2=0.9</td><td>Standard second momentum coefficient for Adam optimizer across all components.</td></tr><tr><td>Feature Extractor Dimension</td><td>|F|=32</td><td>Dimensionality for domain-specific feature representations, balancing information retention and model complexity.</td></tr><tr><td>Feature Matching Weights</td><td>ωi=[1.0, 1.0, 1.0]</td><td>Equal importance for network, temporal, and behavioral feature preservation in initial implementation.</td></tr><tr><td>Label Prior</td><td>Uniform: U</td><td>Assumes balanced sampling across attack types; in practice, would follow empirical distribution from training data.</td></tr><tr><td>Fade-in Factor</td><td>α=l/L</td><td>Linear progression from coarse to fine features in progressive training paradigm.</td></tr><tr><td>Noise Scale</td><td>ε~N(0,1)</td><td>Standard Gaussian noise for VAE reparameterization trick and latent space sampling.</td></tr></table>

# 4.3 Results

The classification report in Tab. 2 presents the performance of a trained intrusion detection model when evaluated on a real cyberattack test set after being trained on PHANTOM-generated synthetic data.
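The protocol behind Tab. 2 (train a detector on PHANTOM output only, then score it on the real test split) can be sketched as follows; the choice of a random forest and all names here are ours, not the paper's.

```python
# Sketch of the train-on-synthetic / test-on-real ("TSTR") evaluation protocol.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, f1_score, roc_auc_score

def evaluate_tstr(X_syn, y_syn, X_real_test, y_real_test):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_syn, y_syn)                      # train on synthetic samples only
    y_pred = clf.predict(X_real_test)          # score on the held-out real test set
    print(classification_report(y_real_test, y_pred, digits=2))
    f1 = f1_score(y_real_test, y_pred, average="weighted")
    auc = roc_auc_score(y_real_test, clf.predict_proba(X_real_test), multi_class="ovr")
    return f1, auc
```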
The results demonstrate strong overall performance with a $98\%$ weighted accuracy and excellent F1-scores (1.00) for the majority classes (Classes 0 and 1), indicating that the synthetic data successfully preserves the distinctive patterns of the most common traffic types, benign flows and DoS attacks. However, the complete failure on Class 4 (U2R; precision $= 0.00$, recall $= 0.00$, F1 $= 0.00$) reveals a critical limitation: PHANTOM failed to generate representative samples for rare attack types, likely due to insufficient examples in the training distribution. The disparity between the high weighted average ($98\%$) and lower macro average ($77\%$) highlights the class imbalance problem common in cybersecurity, where performance metrics weighted by class prevalence can mask poor detection of minority attack classes. This finding underscores the necessity for specialized techniques in synthetic data generation to ensure adequate representation of rare yet critical threats such as advanced persistent threats (APTs).

Table 2 Classification report - Synthetic training data vs. real test set. $TP =$ True Positive, $FP =$ False Positive, $FN =$ False Negative. <table><tr><td>Class</td><td>Precision</td><td>Recall</td><td>F1-Score</td><td>Support</td></tr><tr><td></td><td>TP/(TP+FP)</td><td>TP/(TP+FN)</td><td>2·TP/(2·TP+FP+FN)</td><td>TP+FN</td></tr><tr><td>0</td><td>1.00</td><td>1.00</td><td>1.00</td><td>14 000</td></tr><tr><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>3 000</td></tr><tr><td>2</td><td>0.88</td><td>0.99</td><td>0.93</td><td>2 000</td></tr><tr><td>3</td><td>1.00</td><td>0.87</td><td>0.93</td><td>800</td></tr><tr><td>4</td><td>0.00</td><td>0.00</td><td>0.00</td><td>200</td></tr><tr><td>Accuracy</td><td></td><td></td><td>0.98</td><td>20 000</td></tr><tr><td>Macro Avg</td><td>0.77</td><td>0.77</td><td>0.77</td><td>20 000</td></tr><tr><td>Weighted Avg</td><td>0.98</td><td>0.98</td><td>0.98</td><td>20 000</td></tr></table>

In Tab. 3, the utility metrics demonstrate that PHANTOM generates synthetic cyberattack data with exceptional practical value for downstream security applications. Training intrusion detection models exclusively on synthetic data achieves near-perfect performance (F1: 0.9792, AUC: 0.9966), with only marginal degradation compared to models trained on real data (F1/AUC: 1.0000). More significantly, combining real and synthetic data maintains perfect detection capability (F1/AUC: 1.0000), indicating that the synthetic samples complement rather than contaminate the training distribution. These results suggest that PHANTOM-generated data can effectively substitute for real attack data in scenarios where labeled samples are scarce, while also serving as a valuable augmentation resource to expand training datasets without introducing harmful bias or reducing model accuracy.

The fidelity metrics reveal a moderate statistical alignment between the real and synthetic distributions, with a Kolmogorov-Smirnov (KS) statistic of 0.4618 and a Wasserstein distance of 0.2586, indicating room for refinement in capturing exact statistical properties while maintaining operational utility. This statistical divergence may actually benefit practical cybersecurity applications by introducing controlled variation that enhances model robustness against novel attack variants.
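The two fidelity statistics quoted above can be reproduced per feature with SciPy; averaging the per-feature values over all 40 features, as sketched below, is our assumption about the aggregation, since the paper does not spell it out.

```python
# Per-feature KS statistic and 1-D Wasserstein distance between real and synthetic
# data, averaged across features (aggregation is our assumption).
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

def fidelity_metrics(X_real, X_syn):
    ks = [ks_2samp(X_real[:, j], X_syn[:, j]).statistic for j in range(X_real.shape[1])]
    w1 = [wasserstein_distance(X_real[:, j], X_syn[:, j]) for j in range(X_real.shape[1])]
    return float(np.mean(ks)), float(np.mean(w1))
```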
Meanwhile, the diversity metrics show excellent sample variation, with a minimum nearest neighbor distance of 0.3963, confirming the absence of duplicate synthetic samples, and an average distance of 0.5798, indicating healthy dispersion throughout the feature space. This combination—adequate statistical fidelity for realistic training, coupled with sufficient diversity to avoid mode collapse—positions PHANTOM as particularly valuable for generating rare attack types where real samples are insufficient for robust model training, while maintaining detection performance comparable to that of real-world data.

Fig. 2 Top Left: Density profile comparison showing the density distributions of a representative network traffic feature for real and synthetic datasets. The close alignment between distributions indicates PHANTOM successfully captures the statistical properties of real cyberattack patterns. Top Right: Histogram distribution of Euclidean distances between each synthetic sample and its nearest neighbor in the synthetic dataset. The varied distance profile indicates diverse attack pattern generation, with distinct clusters of both densely and sparsely populated regions in the feature space. Bottom: $t$-SNE projection showing the latent space distribution of real cyberattack samples (blue) and PHANTOM-generated synthetic attacks (orange). The overlapping clusters demonstrate that the synthetic data preserves the natural separation between different attack classes while covering similar regions of the feature space.

In Fig. 2, the graph in the top left offers quantitative validation of the approach's statistical fidelity by comparing the normalized distributions of a representative network feature. The close alignment between real (blue) and synthetic (orange) distributions across the entire feature range indicates that the PHANTOM algorithm successfully captures both central tendencies and distribution tails. The minor discrepancies observed in the mid-range values most likely represent the model's intentional diversification strategy, which ensures coverage of less frequent but operationally important attack patterns. This balanced approach, which maintains overall statistical fidelity while strategically expanding coverage, is particularly valuable for cybersecurity applications where rare attack types must be adequately represented, despite their scarcity in real-world datasets.

The nearest neighbor distance analysis in the top right reveals the effectiveness of the approach in generating diverse attack patterns while avoiding mode collapse. The multimodal distance distribution, with several peaks, indicates that synthetic samples naturally form clusters of varying densities, mimicking the heterogeneous structure of real attack data, where certain attack types exhibit more intra-class variation than others. The relative absence of samples with extremely small nearest neighbor distances demonstrates that PHANTOM avoids generating near-identical duplicates. The presence of samples at comparatively larger distances confirms coverage of less populated regions of the attack space. This diversity profile ensures that synthetic training data will expose ML models to a broad spectrum of attack variations, improving their robustness against novel attack vectors in real-world deployment.
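The minimum and average nearest-neighbor distances reported here can be computed with scikit-learn as sketched below; the helper name is ours, and we assume the distances are measured within the synthetic set, as the Fig. 2 caption describes.

```python
# Nearest-neighbor diversity statistics within the synthetic set.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_diversity(X_syn):
    # k=2 because the closest neighbor of each point is the point itself (distance 0).
    nn = NearestNeighbors(n_neighbors=2).fit(X_syn)
    dist, _ = nn.kneighbors(X_syn)
    d = dist[:, 1]  # distance to the nearest *other* sample
    return float(d.min()), float(d.mean())
```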
Table 3 PHANTOM Evaluation Results with Performance Interpretation <table><tr><td>Metric</td><td>Value</td><td>Interpretation</td></tr><tr><td colspan="3">Utility (Downstream Detection Performance)</td></tr><tr><td>Real Data Only (F1)</td><td>1.0000</td><td>Perfect baseline performance</td></tr><tr><td>Synthetic Data Only (F1)</td><td>0.9792</td><td>Near-perfect, minor degradation</td></tr><tr><td>Combined Data (F1)</td><td>1.0000</td><td>Perfect, no negative impact</td></tr><tr><td>Real Data Only (AUC)</td><td>1.0000</td><td>Perfect baseline</td></tr><tr><td>Synthetic Data Only (AUC)</td><td>0.9966</td><td>Excellent performance</td></tr><tr><td>Combined Data (AUC)</td><td>1.0000</td><td>Perfect combination</td></tr><tr><td colspan="3">Fidelity (Statistical Similarity)</td></tr><tr><td>KS Statistic: \(D_{\mathrm{KS}}=\sup_{x}\left|F_{p}(x)-F_{q}(x)\right|\)</td><td>0.4618</td><td>Moderate similarity, room for improvement</td></tr><tr><td>Wasserstein Distance: \(W_{1}(p,q)=\int_{\mathbb{R}}\left|F_{p}(x)-F_{q}(x)\right|\mathrm{d}x\)</td><td>0.2586</td><td>Acceptable distribution alignment</td></tr><tr><td colspan="3">Diversity (Sample Variation)</td></tr><tr><td>Min NN Distance: \(d_{\min}(X,Y)=\min_{i,j}d(x_i,y_j)\)</td><td>0.3963</td><td>Good spacing, no duplicates</td></tr><tr><td>Avg NN Distance: \(d(X,Y)=\frac{1}{n}\sum_{i=1}^{n}\left[\min_{j}d(x_i,y_j)\right]\)</td><td>0.5798</td><td>Healthy diversity in samples</td></tr></table>

The $t$-SNE visualization at the bottom provides compelling evidence of the approach's ability to generate high-fidelity synthetic cyberattack data. The clear separation of distinct attack classes, visible as clusters in both real and synthetic distributions, demonstrates that the model preserves the inherent categorical structure of cybersecurity threats. Importantly, the substantial overlap between blue (real) and orange (synthetic) points within each cluster indicates that PHANTOM-generated attacks occupy similar regions of the feature space as real attacks, rather than creating artifacts and outliers. This spatial congruence is crucial for downstream security applications, as synthetic samples that diverge significantly from real data distributions would provide misleading training signals for intrusion detection systems.

# 5 Conclusion

This paper presents PHANTOM, a progressive high-fidelity adversarial network designed specifically for generating synthetic cyberattack data. By integrating VAEs, GANs, and domain-specific feature preservation mechanisms, PHANTOM addresses the critical shortage of diverse, labeled cybersecurity datasets that impedes the development of effective intrusion detection systems. Our experimental results demonstrate that PHANTOM successfully generates synthetic attack data with statistical properties closely matching real cyberattack distributions, as evidenced by kernel density alignment, diverse nearest neighbor distance profiles, and overlapping $t$-SNE cluster formations. The framework's ability to preserve temporal causality, behavioral semantics, and multi-resolution attack patterns through progressive training represents a significant advancement over traditional synthetic data generation techniques.
However, our findings also illuminate important limitations. The complete failure to detect Class 4 (U2R) attacks (0% precision and recall) reveals that PHANTOM struggles with extremely rare attack types, reflecting a fundamental challenge in generative modeling under severe class imbalance. The disparity between the macro-average (77%) and weighted-average (98%) F1-scores highlights that while the framework performs excellently on the majority classes, minority attack categories require specialized attention. This limitation is particularly concerning for cybersecurity applications, where rare attacks such as advanced persistent threats and zero-day exploits often pose the most significant operational risks.

In future work, we hope to address these limitations through several directions. Firstly, implementing class-conditional training with targeted oversampling strategies could enhance the generation of rare attacks. Secondly, incorporating semi-supervised learning techniques that leverage unlabeled attack indicators may improve the representation of novel threat patterns. Thirdly, extending the progressive training paradigm to include attack campaign sequences rather than isolated incidents could better capture the temporal evolution of sophisticated intrusions. Finally, validating PHANTOM on diverse real-world datasets beyond synthetic benchmarks would strengthen confidence in its generalizability across different network environments and threat landscapes. Despite these challenges, PHANTOM establishes a principled framework for generating high-fidelity synthetic cyberattacks that balances statistical realism, operational utility, and ethical data sharing.

# Declarations

- Funding: This research was supported by grant number 23070, provided by Zayed University.
- Conflict of interest/Competing interests: The authors declare that there are no conflicts of interest.
- Ethics approval and consent to participate: Not applicable.
- Consent for publication: The authors grant full consent to the journal to publish this article.
- Data availability: The data that support the findings of this study are available upon a reasonable request from the corresponding author.
- Materials availability: Not applicable.
- Code availability: The code developed for this study is available from the corresponding author upon reasonable request.
- Author contribution: All authors have contributed equally to this research.
{"title": "PHANTOM: Progressive High-fidelity Adversarial Network for Threat Object Modeling", "raw_content": "# PHANTOM: Progressive High-fidelity Adversarial Network for Threat Object Modeling\n\nJamal Al-Karaki $^{1,2}$ , Muhammad Al-Zafar Khan $^{1*}$ , Rand Derar Mohammad Al Athamneh $^{1}$\n\n$^{1}$ College of Interdisciplinary Studies, Zayed University, Abu Dhabi, UAE. $^{2}$ College of Engineering, The Hashemite University Zarqa, Jordan.\n\n*Corresponding author(s). E-mail(s): Muhammad.Khan@zu.ac.ae; Contributing authors: Jamal.Al-Karaki@zu.ac.ae;\n\n# Abstract\n\nThe scarcity of high-quality cyberattack datasets poses a fundamental challenge to developing robust machine learning-based intrusion detection systems. Real-world attack data is difficult to obtain due to privacy regulations, organizational reluctance to share breach information, and the rapidly evolving threat landscape. This paper introduces PHANTOM (Progressive High-fidelity Adversarial Network for Threat Object Modeling), a novel multi-task adversarial variational framework specifically designed for generating synthetic cyberattack datasets. PHANTOM addresses the unique challenges of cybersecurity data through three key innovations: Progressive training that captures attack patterns at multiple resolutions, dual-path learning that combines VAE stability with GAN fidelity, and domain-specific feature matching that preserves temporal causality and behavioral semantics. We implement a Multi-Task Adversarial VAE with Progressive Feature Matching (MAV-PFM) architecture that incorporates specialized loss functions for reconstruction, adversarial training, feature preservation, classification accuracy, and cyber-specific constraints. Experimental validation on a realistic synthetic dataset of 100 000 network traffic samples across five attack categories demonstrates that PHANTOM achieves $98\\%$ weighted accuracy when used to train intrusion detection models tested on real attack samples. Statistical analyses, including kernel density estimation, nearest neighbor distance distributions, and $t$ -SNE visualizations, confirm that generated attacks preserve the distributional properties, diversity, and class separability of authentic cyberattack patterns. However, results also reveal limitations in generating rare attack types, highlighting the need for specialized handling of severely imbalanced classes. This work advances the state-of-the-art in synthetic cybersecurity data generation, providing a foundation for training more robust threat detection systems while maintaining privacy and security.\n\nKeywords: Synthetic Cyberattack Generation, Adversarial Generative Modeling, Cybersecurity Data Scarcity, Intrusion Detection Augmentation\n\n# 1 Introduction\n\nThe exponential growth of cyber threats in recent years has created an urgent demand for robust cybersecurity systems capable of detecting and mitigating sophisticated attacks [1-3]. Machine Learning (ML) and Deep Learning (DL) models have emerged as powerful tools for threat detection [4, 5], enabling automated analysis of network traffic [6], system logs [7], and user behavior patterns [8]. However, the effectiveness of these models hinges critically on the availability of diverse, representative training data that captures the full spectrum of attack vectors and techniques employed by adversaries.\n\nDespite this need, obtaining high-quality cyberattack datasets remains one of the most significant challenges in cybersecurity research and practice. 
Real-world attack data is inherently scarce due to several factors:\n\n1. Organizations are often reluctant to share sensitive breach information due to legal and reputational concerns [9]. \n2. Privacy regulations restrict the dissemination of network traffic containing potentially identifiable information [10]. \n3. The rapidly evolving threat landscape means that historical datasets quickly become obsolete [11].\n\nAdditionally, even when attack data is available, it often suffers from severe class imbalance, with benign traffic vastly outnumbering malicious samples, leading to biased models that struggle to detect novel or rare attack patterns.\n\nSynthetic data generation has emerged as a promising solution to address these limitations [12, 13]. By artificially creating realistic cyberattack samples, researchers can augment existing datasets, balance class distributions, and generate examples of rare or emerging threats that may not yet exist in operational environments. However, traditional synthetic data generation techniques, such as rule-based simulation and simple statistical sampling, often produce oversimplified attack patterns that lack the complexity and variability of real-world threats. Models trained on such synthetic data frequently exhibit poor generalization when deployed in production environments, as they fail to capture the nuanced behavioral characteristics of actual attackers.\n\nRecent advances in generative modeling, particularly Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), offer a paradigm shift in synthetic data generation. These deep generative models learn the underlying probability distribution of real data and can generate novel samples that preserve the statistical properties and complex patterns of the original dataset. GANs, through their adversarial training mechanism between a generator and a discriminator network, have demonstrated remarkable success in generating high-fidelity synthetic data across various domains, including image synthesis, natural language processing, and time-series forecasting. Similarly, VAEs utilize probabilistic latent representations to facilitate the controlled generation of diverse samples while preserving the interpretability of the learned feature space.\n\nIn this paper, we propose specialized GAN and VAE architectures tailored specifically for generating high-fidelity synthetic cyberattack datasets. Our approach addresses the unique challenges of cybersecurity data, including temporal dependencies in attack sequences, multi-modal feature distributions spanning categorical and continuous variables, and the need to preserve attack semantics while introducing realistic variations. We develop novel architectural components and training strategies that enhance the diversity, realism, and utility of generated attack samples for downstream security applications.\n\nThis work is divided as follows:\n\nIn Sec. 2, we describe analogous attempts to address the major challenge that we set out to address in our research.\n\nIn Sec. 3, we described our proposed PHANTOM framework and the mechanics of how the algorithm works.\n\nIn Sec. 4, we describe the experiment performed, first by describing the dataset used, second by explaining the choice and motivation for the hyperparameter values selected in the algorithm implementation, and finally by presenting the results obtained through experimentation.\n\nIn Sec. 
5, we reflect upon what was achieved in this work, the drawbacks, and provide direction for future works.\n\n# 2 Related Work\n\nIn [14], the authors address the key problem for critical space systems: The lack of high-fidelity, shareable datasets that include both nominal and malicious activity. Specifically, they propose a GAN-based system that creates realistic synthetic cyberattack datasets by training on small samples of real-world nominal and malicious data and then using the generator to produce new, high-fidelity synthetic samples. They evaluate the realism of the generated data and test its usefulness across three datasets.\n\nIn [15], the authors focus on improving cybersecurity in Internet of Things (IoT) and Wireless Sensor Networks (WSNs) by using GANs. Due to the rise of sophisticated threats, especially DDoS and spoofing attacks, traditional security systems are no longer sufficient. To address this, the authors\n\npropose a new GAN-based model, called Dynamic Adaptive Threat Simulation GAN (DATS-GAN), which generates realistic synthetic cyberattack scenarios that mimic real-world attacks, thereby enabling security systems to better detect, learn from, and adapt to evolving threats. The novelty in this work lies not only in its focus on generating such datasets but also in its ability to dynamically detect cybersecurity attacks.\n\nIn [16], the authors address cybersecurity challenges in modern power systems, particularly the threat of stealthy false data injection (FDI) attacks that can cause operational problems such as congestion and voltage instability by proposing a defense framework that uses Wasserstein Generative Adversarial Networks (WGANs) to generate synthetic Phasor Measurement Unit (PMU) data. The workflow creates uncertainty, making it more difficult for attackers to understand, predict, or exploit the system. This work is innovative because it strategically injects realistic synthetic data into the communication stream.\n\n# 3 The Proposed Approach\n\nThe generation of high-fidelity synthetic cyberattack data presents unique challenges that surpass those of conventional image or text synthesis. Cyberattack patterns exhibit complex temporal dependencies, causal relationships between attack stages, multi-scale features (from packet-level to campaign-level), and highly imbalanced class distributions. To address these challenges holistically, below we introduce PHANTOM (Progressive High-fidelity Adversarial Network for Threat Object Modeling), a multi-task adversarial variational framework specifically designed for synthesizing cyberattack data.\n\nOur approach is predicated on three fundamental insights about cyberattack data generation:\n\n1. Cyberattacks are hierarchical and manifest at multiple resolutions simultaneously, from low-level network packet features to high-level behavioral patterns. \n2. Attack semantics are causal, which implies that actions follow logical sequences that must be preserved in synthetic data to maintain realism and utility. \n3. Fidelity must be multi-dimensional. This translates to temporal, behavioral, and structural aspects that must all be preserved for synthetic data to be operationally useful.\n\nPHANTOM addresses these insights through an integrated architecture that combines the stability of VAEs with the high-fidelity generation capabilities of GANs, incorporating domain-specific feature preservation mechanisms. 
At its core, PHANTOM implements a Multi-Task Adversarial VAE with Progressive Feature Matching (MAV-PFM), which operates through three synergistic components:

1. Unlike conventional GANs that operate at fixed resolutions, PHANTOM employs a progressive training strategy that begins with coarse-grained attack features and gradually incorporates finer-grained details. This hierarchical approach mirrors how security analysts investigate incidents, from broad indicators to specific artifacts, and ensures that both macro- and micro-patterns are faithfully reproduced.
2. The VAE component provides stable reconstruction and meaningful latent representations, while the GAN component ensures high perceptual fidelity. Crucially, both pathways share the same generator, enabling knowledge transfer between reconstruction and pure generation tasks. This dual-path approach mitigates mode collapse, which is a critical failure mode in cybersecurity contexts where rare attack types must still be generated.
3. We introduce specialized feature extractors that encode domain-specific invariants, including temporal causality, attack graph structures, and behavioral sequences. These extractors inform a novel feature matching loss that ensures synthetic attacks maintain the essential characteristics of their real counterparts: not just statistical similarity but operational realism.

Algorithm 1: PHANTOM
1 input:
2   real-world cyberattack dataset $\mathcal{D} = \{x_i, y_i\}$
3   latent dimension $Z$
4   batch size $m$
5   progressive levels $L$
6   feature extractors $\mathcal{F} = \{F_{\mathrm{network}}, F_{\mathrm{temporal}}, F_{\mathrm{behavioral}}\}$
7 initialize: $G, D, E, C$ with weights $\theta_G, \theta_D, \theta_E, \theta_C$; replay buffer $\mathcal{B}$
8 for current_level $l = 1{:}L$ do
9   $\alpha \gets$ fade_in_factor($l$)
10   $\mathcal{D}_l \gets$ resize_samples($\mathcal{D}, l$)
11   for iteration $t$ do
12     sample batch: $\{x_r, y_r\} \sim \mathcal{D}_l$, $z \sim \mathcal{N}(0, I)$, $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$
13     encode: $\mu, \sigma = E(x_r)$, $z_c = \mu + \sigma \odot \epsilon$
14     // generate
15     $x_{\mathrm{recon}} = G(z_c, y_r, l, \alpha)$ ▷ reconstructed
16     $y_s \sim p(y)$, $x_{\mathrm{syn}} = G(z, y_s, l, \alpha)$ ▷ synthesized
17     // extract features
18     $F_r, F_{\mathrm{recon}}, F_{\mathrm{syn}}$ using $\mathcal{F}$
19     // compute losses
20     $\mathcal{L}_{\mathrm{recon}} = \|x_r - x_{\mathrm{recon}}\|^2 + \beta\,\mathrm{KL}(q \| p)$ ▷ VAE
21     $\mathcal{L}_{\mathrm{adv}}^{G} = -\mathbb{E}[D(x_{\mathrm{syn}})]$ ▷ generator
22     $\mathcal{L}_{\mathrm{adv}}^{D} = \mathbb{E}[D(x_{\mathrm{syn}})] - \mathbb{E}[D(x_r)] + \lambda_{\mathrm{gp}}\mathcal{R}_{\mathrm{gp}}$ ▷ discriminator
23     $\mathcal{L}_{\mathrm{fm}} = \sum_i \omega_i \|F_r^{(i)} - F_{\mathrm{syn}}^{(i)}\|$ ▷ feature matching
24     $\mathcal{L}_{\mathrm{class}} = \mathrm{CE}[C(x_{\mathrm{syn}}), y_s] + \mathrm{CE}[C(x_r), y_r]$ ▷ classification
25     $\mathcal{L}_{\mathrm{cyber}} = \mathcal{L}_{\mathrm{temporal}} + \mathcal{L}_{\mathrm{causal}} + \mathcal{L}_{\mathrm{div}}$ ▷ cyber-specific loss
26     // updates
27     $G, E \gets \nabla(\lambda_1\mathcal{L}_{\mathrm{adv}}^G + \lambda_2\mathcal{L}_{\mathrm{recon}} + \lambda_3\mathcal{L}_{\mathrm{fm}} + \lambda_4\mathcal{L}_{\mathrm{class}} + \lambda_5\mathcal{L}_{\mathrm{cyber}})$
28     $D \gets \nabla\mathcal{L}_{\mathrm{adv}}^{D}$
29     $C \gets \nabla\mathcal{L}_{\mathrm{class}}$
30     update $\mathcal{B}$ with $x_{\mathrm{syn}}$
31   end
32   // stabilization
33   freeze $D$, refine $G$ and $E$ with $\|x_r - G[E(x_r)]\|_1$
34 end
35
36 return generator $G$, discriminator $D$, encoder $E$, classifier $C$

At a high level, Algorithm 1 operates by training progressively across multiple resolution levels, starting with coarse attack features, such as packet headers, and gradually incorporating finer details, including behavioral patterns. At each level, the algorithm processes batches of real cyberattack data $\mathcal{D}_l$, which contain attack samples $x_r$ along with their corresponding labels $y_r$. The encoder $E$ compresses real attacks into latent distributions $(\mu, \sigma)$, enabling reconstruction via the generator $G$. Simultaneously, $G$ synthesizes new attacks from random noise $z$ conditioned on sampled attack labels $y_s$. The discriminator $D$ distinguishes real from synthetic samples, while the classifier $C$ ensures generated attacks match their intended categories. Three specialized feature extractors ($F_{\mathrm{network}}, F_{\mathrm{temporal}}, F_{\mathrm{behavioral}}$) capture network, temporal, and behavioral characteristics for domain-specific feature matching.

The outer progressive loop (lines 8-34) implements hierarchical multi-resolution training, starting with coarse network features and gradually introducing finer behavioral details through the $\alpha$ parameter. At each level, batches are sampled and processed through dual generation paths: the VAE path encodes real attacks into latent distributions $(\mu, \sigma)$ for reconstruction, while the GAN path synthesizes novel attacks from random noise conditioned on attack parameters $y_{s}$. This dual approach ensures both stable learning through reconstruction and high-fidelity generation through adversarial training. The feature extraction block applies domain-specific transforms to capture network topology, temporal patterns, and behavioral sequences, which are critical for maintaining cyberattack semantics.

Five specialized loss functions collectively optimize different aspects of cyberattack synthesis. The VAE loss $\mathcal{L}_{\mathrm{recon}}$ ensures latent space structure and reconstruction fidelity, while $\mathcal{L}_{\mathrm{adv}}$ implements Wasserstein adversarial training with gradient penalty for stable GAN dynamics. Crucially, $\mathcal{L}_{\mathrm{fm}}$ preserves domain-specific characteristics by matching features across real and synthetic data in the network, temporal, and behavioral subspaces. $\mathcal{L}_{\mathrm{class}}$ maintains attack type accuracy, while $\mathcal{L}_{\mathrm{cyber}}$ enforces cyber-specific constraints, including temporal consistency across attack stages, causal relationships between attack actions, and diversity in generated threats. The multi-task update balances these objectives through the $\lambda$ weights, while the replay buffer prevents discriminator overfitting. Finally, the stabilization phase (line 33) refines the generator-encoder pair without adversarial pressure, ensuring convergence at each resolution level before progression.

![](images/9b625d99edeba6e8ddd5419d7678ddd9a15a078c5ed6ecd207cac6122406cfc1.jpg)
Fig. 1 Network architecture diagram of the PHANTOM algorithm (Algorithm 1).

In Fig. 1, we see a graphical rendition of Algorithm 1.
The core system features parallel data flows: a VAE reconstruction path, where real attacks are encoded into latent distributions and reconstructed to ensure stability, and a GAN generation path, where random noise is transformed into novel synthetic attacks. These flows converge through a shared conditional generator that preserves attack semantics, while domain-specific feature extractors (network, temporal, behavioral) enforce cyberattack invariants through feature matching losses. The architecture is governed by a multi-objective loss function combining reconstruction fidelity (MSE + KL divergence), adversarial competition (WGAN-GP), attack classification accuracy, and cyber-specific constraints (temporal consistency, causal relationships, diversity). All of these objectives are orchestrated through progressive multi-resolution training that gradually refines attack patterns from coarse to fine-grained features, enabling the generation of diverse, realistic cyberattacks while maintaining the statistical and operational properties of real threat data.

We observe the spatial complexity of Algorithm 1 to be $\mathcal{O}(m\cdot (|x| + Z) + P)$, where $m$ is the batch size, $|x|$ is the input dimension, $Z$ is the latent space dimension, and $P$ is the total number of parameters across $G, D, E, C, \mathcal{F}$. Similarly, we observe the temporal complexity to be $\mathcal{O}(T\cdot L\cdot m\cdot (|x|^2 + Z^2))$, where $T$ is the number of iterations per level and $L$ is the number of progressive levels. The spatial complexity thus grows linearly with model capacity and batch processing requirements, making large-scale cyberattack synthesis memory-intensive but manageable with modern GPU architectures. The temporal complexity exhibits a quadratic dependence on the feature dimensions due to attention mechanisms in the cyber-specific extractors; however, progressive training mitigates this by gradually increasing resolution across levels.

# 4 Experiments

# 4.1 “Real” Dataset

To test Algorithm 1, we programmatically generated a surrogate “real” cyberattack dataset consisting of 100 000 network traffic samples across five distinct attack categories, each characterized by 40 engineered features. The dataset exhibits a deliberate, realistic class imbalance mirroring real-world network environments: $70\%$ benign traffic (70 000 samples), $15\%$ Denial-of-Service (DoS) attacks (15 000 samples), $10\%$ probing activities (10 000 samples), $4\%$ remote-to-local attacks (4 000 samples), and only $1\%$ user-to-root (U2R) privilege escalation attempts (1 000 samples). Each class contains statistically distinct feature patterns derived from domain knowledge; for instance, DoS attacks show exceptionally high source byte volumes and connection counts, while U2R attacks demonstrate prolonged durations and specific protocol usage. The features include transformed network metrics (log-scaled byte counts and normalized rates), categorical encodings of protocol types and service flags, and engineered attributes such as failed login counts and session continuity measures, providing a comprehensive representation of attack signatures.

To substantiate why we generated the dataset synthetically, it is worth noting that real-world, labeled cyberattack data of this scale and diversity is exceptionally difficult to obtain due to multiple constraints. Firstly, organizations that experience attacks rarely disclose detailed network logs due to security policies, regulatory concerns (such as GDPR [17] and HIPAA [18]), and reputational risks.
Secondly, even when incident data is shared through threat intelligence platforms, it is typically anonymized, incomplete, or lacking ground-truth labels; security analysts often cannot definitively categorize every attack, especially novel or sophisticated threats. Thirdly, the extreme class imbalance observed here (U2R attacks constituting only $1\%$ of the samples) reflects reality but creates data scarcity for training robust ML models; collecting sufficient samples of rare attacks would require monitoring thousands of networks over years. Finally, operational networks cannot ethically be attacked for research purposes, making controlled experimentation with real attacks impossible.

It is worth mentioning that the synthetic generation approach enables reproducible cybersecurity research while addressing critical gaps in available data. By programmatically creating attacks with known ground truth, one can validate detection algorithms without violating privacy or raising legal concerns. The controlled class distribution enables a systematic investigation of imbalance-handling techniques, while the feature engineering incorporates domain expertise on attack signatures. Importantly, it is our intention that this dataset serves as a benchmark for evaluating synthetic data generation methods like PHANTOM (Algorithm 1): if a GAN can reproduce the statistical properties and class separability of this known distribution, it demonstrates the capability to generate useful synthetic data where real data is unavailable. The inclusion of realistic noise, protocol distributions, and attack-specific patterns creates a challenging testbed that bridges the gap between academic research and operational security needs, which we believe enables advancement in intrusion detection without compromising real network security or privacy.

# 4.2 Hyperparameter Values

In Tab. 1, we describe all the hyperparameters used for testing Algorithm 1 and provide a rationale for their choice.

Table 1 PHANTOM Algorithm Hyperparameters and Their Rationale

<table><tr><td>Parameter</td><td>Value</td><td>Reason for Choice</td></tr><tr><td>Latent Dimension</td><td>Z=64</td><td>Balances expressiveness (capturing complex attack patterns) and computational efficiency. Common choice in VAE/GAN literature for tabular data.</td></tr><tr><td>Batch Size</td><td>m=64</td><td>Provides stable gradient estimates while fitting within GPU memory constraints.
Powers of 2 optimize memory alignment on GPUs.</td></tr><tr><td>Progressive Levels</td><td>L=1</td><td>Simplified for initial testing; full implementation utilizes 3-4 levels for hierarchical feature learning (packet→flow→session patterns).</td></tr><tr><td>Iterations per Level</td><td>imax=500</td><td>Reduced for demonstration; typical training requires 5 000-10 000 iterations per level for convergence.</td></tr><tr><td>KL Weight</td><td>β=1.0</td><td>Standard β-VAE setting balancing reconstruction fidelity and latent space regularization for disentangled representations.</td></tr><tr><td>Gradient Penalty Weight</td><td>λgp=10.0</td><td>Standard WGAN-GP value ensuring Lipschitz continuity of the discriminator for stable adversarial training.</td></tr><tr><td>Adversarial Weight</td><td>λ1=1.0</td><td>Base weight for the generator&#x27;s adversarial loss relative to other objectives.</td></tr><tr><td>Reconstruction Weight</td><td>λ2=10.0</td><td>Prioritizes VAE reconstruction to ensure synthetic samples preserve essential attack characteristics.</td></tr><tr><td>Feature Matching Weight</td><td>λ3=5.0</td><td>Emphasizes preservation of domain-specific features (network, temporal, behavioral) crucial for cyberattack realism.</td></tr><tr><td>Classification Weight</td><td>λ4=1.0</td><td>Ensures generated attacks are classifiable with correct labels, maintaining attack type integrity.</td></tr><tr><td>Cyber Loss Weight</td><td>λ5=0.1</td><td>Lower weight for domain-specific losses (temporal consistency, causality) during initial training phases.</td></tr><tr><td>Learning Rate</td><td>η=0.0002</td><td>Standard GAN learning rate from the DCGAN/WGAN literature, providing stable convergence without oscillations.</td></tr><tr><td>Discriminator Beta-1</td><td>β1D=0.0</td><td>WGAN-GP recommendation (first momentum coefficient) for the discriminator to prevent mode-seeking behavior.</td></tr><tr><td>Generator/Encoder Beta-1</td><td>β1G,β1E=0.0</td><td>Consistent with the WGAN-GP architecture for stable generator training against the critic.</td></tr><tr><td>Classifier Beta-1</td><td>β1C=0.5</td><td>Standard Adam setting for the auxiliary classifier to balance exploration and exploitation.</td></tr><tr><td>Beta-2 (all models)</td><td>β2=0.9</td><td>Standard second momentum coefficient for the Adam optimizer across all components.</td></tr><tr><td>Feature Extractor Dimension</td><td>|F|=32</td><td>Dimensionality for domain-specific feature representations, balancing information retention and model complexity.</td></tr><tr><td>Feature Matching Weights</td><td>ωi=[1.0,1.0,1.0]</td><td>Equal importance for network, temporal, and behavioral feature preservation in the initial implementation.</td></tr><tr><td>Label Prior</td><td>Uniform: U</td><td>Assumes balanced sampling across attack types; in practice, this would follow the empirical distribution of the training data.</td></tr><tr><td>Fade-in Factor</td><td>α=l/L</td><td>Linear progression from coarse to fine features in the progressive training paradigm.</td></tr><tr><td>Noise Scale</td><td>ε~N(0,1)</td><td>Standard Gaussian noise for the VAE reparameterization trick and latent space sampling.</td></tr></table>

# 4.3 Results

The classification report in Tab. 2 presents the performance of a trained intrusion detection model when evaluated on a real cyberattack test set after being trained on PHANTOM-generated synthetic data.
The results demonstrate strong overall performance, with an overall accuracy of $98\%$ and excellent F1-scores (1.00) for the majority classes (Classes 0 and 1), indicating that the synthetic data successfully preserves the distinctive patterns of the most prevalent classes, namely benign traffic and DoS attacks. However, the complete failure on Class 4 (precision $= 0.00$, recall $= 0.00$, F1 $= 0.00$) reveals a critical limitation: PHANTOM failed to generate representative samples for rare attack types, likely due to insufficient examples in the training distribution. The disparity between the high weighted average ($98\%$) and the lower macro average ($77\%$) highlights the class imbalance problem common in cybersecurity, where performance metrics weighted by class prevalence can mask poor detection of minority attack classes. This finding underscores the necessity for specialized techniques in synthetic data generation to ensure adequate representation of rare yet critical threats such as advanced persistent threats (APTs).

Table 2 Classification report: synthetic training data vs. real test set. $TP$ = True Positive, $FP$ = False Positive, $FN$ = False Negative.

<table><tr><td>Class</td><td>Precision</td><td>Recall</td><td>F1-Score</td><td>Support</td></tr><tr><td></td><td>TP/(TP+FP)</td><td>TP/(TP+FN)</td><td>2·TP/(2·TP+FP+FN)</td><td>TP+FN</td></tr><tr><td>0</td><td>1.00</td><td>1.00</td><td>1.00</td><td>14 000</td></tr><tr><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>3 000</td></tr><tr><td>2</td><td>0.88</td><td>0.99</td><td>0.93</td><td>2 000</td></tr><tr><td>3</td><td>1.00</td><td>0.87</td><td>0.93</td><td>800</td></tr><tr><td>4</td><td>0.00</td><td>0.00</td><td>0.00</td><td>200</td></tr><tr><td>Accuracy</td><td></td><td></td><td>0.98</td><td>20 000</td></tr><tr><td>Macro Avg</td><td>0.77</td><td>0.77</td><td>0.77</td><td>20 000</td></tr><tr><td>Weighted Avg</td><td>0.98</td><td>0.98</td><td>0.98</td><td>20 000</td></tr></table>

In Tab. 3, we observe that the utility metrics demonstrate that PHANTOM generates synthetic cyberattack data with exceptional practical value for downstream security applications. Training intrusion detection models exclusively on synthetic data achieves near-perfect performance (F1: 0.9792, AUC: 0.9966), with only marginal degradation compared to models trained on real data (F1/AUC: 1.0000). More significantly, combining real and synthetic data maintains perfect detection capability (F1/AUC: 1.0000), indicating that the synthetic samples complement rather than contaminate the training distribution. These results suggest that PHANTOM-generated data can effectively substitute for real attack data in scenarios where labeled samples are scarce, while also serving as a valuable augmentation resource to expand training datasets without introducing harmful bias or reducing model accuracy.

The fidelity metrics reveal a moderate statistical alignment between the real and synthetic distributions, with a Kolmogorov-Smirnov (KS) statistic of 0.4618 and a Wasserstein distance of 0.2586, indicating room for refinement in capturing exact statistical properties while maintaining operational utility. This minor statistical divergence may actually benefit practical cybersecurity applications by introducing controlled variation that enhances model robustness against novel attack variants.
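As a concrete reference for how fidelity and diversity numbers of this kind can be computed, the sketch below uses SciPy and scikit-learn to obtain per-feature two-sample KS statistics, per-feature 1-D Wasserstein distances, and nearest-neighbor distances. It is only an illustration under stated assumptions: the arrays are random stand-ins, and the per-feature averaging and synthetic-to-real nearest-neighbor pairing reflect one plausible reading of the metrics in Tab. 3, not the authors' exact evaluation code.

```python
# Illustrative computation of fidelity (KS, Wasserstein) and diversity (NN distance) metrics.
# X_real and X_syn stand in for (n_samples, n_features) arrays of real and synthetic attacks.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_real = rng.normal(size=(1000, 40))
X_syn = rng.normal(loc=0.1, size=(1000, 40))

# Fidelity: two-sample KS statistic and 1-D Wasserstein distance per feature, then averaged
ks = np.mean([ks_2samp(X_real[:, j], X_syn[:, j]).statistic for j in range(X_real.shape[1])])
w1 = np.mean([wasserstein_distance(X_real[:, j], X_syn[:, j]) for j in range(X_real.shape[1])])

# Diversity: Euclidean distance from each synthetic sample to its nearest real sample
nn = NearestNeighbors(n_neighbors=1).fit(X_real)
dists, _ = nn.kneighbors(X_syn)
print(f"KS={ks:.4f}  W1={w1:.4f}  min NN={dists.min():.4f}  avg NN={dists.mean():.4f}")
```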
Meanwhile, the diversity metrics show excellent sample variation, with a minimum nearest neighbor distance of 0.3963, confirming the absence of duplicate synthetic samples, and an average distance of 0.5798, indicating healthy dispersion throughout the feature space. This combination of adequate statistical fidelity for realistic training and sufficient diversity to avoid mode collapse positions PHANTOM as particularly valuable for generating rare attack types where real samples are insufficient for robust model training, while maintaining detection performance comparable to that of real-world data.

In Fig. 2, the graph in the top left offers quantitative validation of the approach's statistical fidelity by comparing the normalized distributions of a representative network feature. The close alignment between the real (blue) and synthetic (orange) distributions across the entire feature range indicates that the PHANTOM algorithm successfully captures both central tendencies and distribution tails. The minor discrepancies observed in the mid-range values most likely represent the model's intentional diversification strategy, which ensures coverage of less frequent but operationally important attack patterns. This balanced approach, which maintains overall statistical fidelity while strategically expanding coverage, is particularly valuable for cybersecurity applications where rare attack types must be adequately represented, despite their scarcity in real-world datasets.

![](images/32f80ffe585a5daed9eccaa4f214aa326a075e4a8405768a0114bb2b3fe673d8.jpg)

![](images/dc5eff36911a62f70e16cbf89e8e3cbe33d56cb6d474b8070ab77d683b852ea7.jpg)

![](images/7288c71e2bb4320811fe1063de0ae453acfe7349a265158bb55af48cbbf77b6e.jpg)
Fig. 2 Top Left: Density profile comparison showing the density distributions of a representative network traffic feature for real and synthetic datasets. The close alignment between distributions indicates PHANTOM successfully captures the statistical properties of real cyberattack patterns. Top Right: Histogram distribution of Euclidean distances between each synthetic sample and its nearest neighbor in the synthetic dataset. The varied distance profile indicates diverse attack pattern generation, with distinct clusters of both densely and sparsely populated regions in the feature space. Bottom: $t$-SNE projection showing the latent space distribution of real cyberattack samples (blue) and PHANTOM-generated synthetic attacks (orange). The overlapping clusters demonstrate that the synthetic data preserves the natural separation between different attack classes while covering similar regions of the feature space.

The nearest neighbor distance analysis in the top right reveals the effectiveness of the approach in generating diverse attack patterns while avoiding mode collapse. The multimodal distance distribution, with several peaks, indicates that synthetic samples naturally form clusters of varying densities, mimicking the heterogeneous structure of real attack data, where certain attack types exhibit more intra-class variation than others. The relative absence of samples with extremely small nearest neighbor distances demonstrates that PHANTOM avoids generating near-identical duplicates.
The presence of samples with relatively larger distances confirms coverage of less populated regions of the attack space. This diversity profile ensures that synthetic training data will expose ML models to a broad spectrum of attack variations, improving their robustness against novel attack vectors in real-world deployment.

Table 3 PHANTOM Evaluation Results with Performance Interpretation. $P$ and $Q$ denote the empirical cumulative distribution functions of the real and synthetic feature distributions; $d(\cdot,\cdot)$ is the Euclidean distance.

<table><tr><td>Metric</td><td>Value</td><td>Interpretation</td></tr><tr><td colspan="3">Utility (Downstream Detection Performance)</td></tr><tr><td>Real Data Only (F1)</td><td>1.0000</td><td>Perfect baseline performance</td></tr><tr><td>Synthetic Data Only (F1)</td><td>0.9792</td><td>Near-perfect, minor degradation</td></tr><tr><td>Combined Data (F1)</td><td>1.0000</td><td>Perfect, no negative impact</td></tr><tr><td>Real Data Only (AUC)</td><td>1.0000</td><td>Perfect baseline</td></tr><tr><td>Synthetic Data Only (AUC)</td><td>0.9966</td><td>Excellent performance</td></tr><tr><td>Combined Data (AUC)</td><td>1.0000</td><td>Perfect combination</td></tr><tr><td colspan="3">Fidelity (Statistical Similarity)</td></tr><tr><td>KS Statistic: $D_{\mathrm{KS}}=\sup_{x}|P(x)-Q(x)|$</td><td>0.4618</td><td>Moderate similarity, room for improvement</td></tr><tr><td>Wasserstein Distance: $W_{1}(P,Q)=\int_{\mathbb{R}}|P(x)-Q(x)|\,\mathrm{d}x$</td><td>0.2586</td><td>Acceptable distribution alignment</td></tr><tr><td colspan="3">Diversity (Sample Variation)</td></tr><tr><td>Min NN Distance: $d_{\min}(X,Y)=\min_{i,j} d(x_i,y_j)$</td><td>0.3963</td><td>Good spacing, no duplicates</td></tr><tr><td>Avg NN Distance: $d(X,Y)=\frac{1}{n}\sum_{i=1}^{n}\left[\min_{j} d(x_i,y_j)\right]$</td><td>0.5798</td><td>Healthy diversity in samples</td></tr></table>

The $t$-SNE visualization at the bottom provides compelling evidence of the approach's ability to generate high-fidelity synthetic cyberattack data. The clear separation of distinct attack classes, visible as clusters in both the real and synthetic distributions, demonstrates that the model preserves the inherent categorical structure of cybersecurity threats. Importantly, the substantial overlap between blue (real) and orange (synthetic) points within each cluster indicates that PHANTOM-generated attacks occupy similar regions of the feature space as real attacks, rather than creating artifacts and outliers. This spatial congruence is crucial for downstream security applications, as synthetic samples that diverge significantly from real data distributions would provide misleading training signals for intrusion detection systems.

# 5 Conclusion

This paper presents PHANTOM, a progressive high-fidelity adversarial network designed specifically for generating synthetic cyberattack data. By integrating VAEs, GANs, and domain-specific feature preservation mechanisms, PHANTOM addresses the critical shortage of diverse, labeled cybersecurity datasets that impedes the development of effective intrusion detection systems.

Our experimental results demonstrate that PHANTOM successfully generates synthetic attack data with statistical properties closely matching real cyberattack distributions, as evidenced by kernel density alignment, diverse nearest neighbor distance profiles, and overlapping $t$-SNE cluster formations.
The framework's ability to preserve temporal causality, behavioral semantics, and multi-resolution attack patterns through progressive training represents a significant advancement over traditional synthetic data generation techniques.

However, our findings also illuminate important limitations. The complete failure to detect Class 4 attacks (0% precision and recall) reveals that PHANTOM struggles with extremely rare attack types, reflecting a fundamental challenge in generative modeling under severe class imbalance. The disparity between the macro-average (77%) and weighted-average (98%) F1-scores highlights that while the framework performs excellently on the majority classes, minority attack categories require specialized attention. This limitation is particularly concerning for cybersecurity applications, where rare attacks such as advanced persistent threats and zero-day exploits often pose the most significant operational risks.

In future work, we plan to address these limitations through several directions. Firstly, implementing class-conditional training with targeted oversampling strategies could enhance the generation of rare attacks. Secondly, incorporating semi-supervised learning techniques that leverage unlabeled attack indicators may improve the representation of novel threat patterns. Thirdly, extending the progressive training paradigm to include attack campaign sequences rather than isolated incidents could better capture the temporal evolution of sophisticated intrusions. Finally, validating PHANTOM on diverse real-world datasets beyond synthetic benchmarks would strengthen confidence in its generalizability across different network environments and threat landscapes.

Despite these challenges, PHANTOM establishes a principled framework for generating high-fidelity synthetic cyberattacks that balances statistical realism, operational utility, and ethical data sharing.

# Declarations

- Funding: This research was supported by grant number 23070, provided by Zayed University.
- Conflict of interest/Competing interests: The authors declare that there are no conflicts of interest.
- Ethics approval and consent to participate: Not applicable.
- Consent for publication: The authors grant full consent to the journal to publish this article.
- Data availability: The data that support the findings of this study are available upon reasonable request from the corresponding author.
- Materials availability: Not applicable.
- Code availability: The code developed for this study is available from the corresponding author upon reasonable request.
- Author contribution: All authors have contributed equally to this research.

# References

[1] Jang-Jaccard, J., Nepal, S.: A survey of emerging threats in cybersecurity. Journal of Computer and System Sciences 80(5), 973-993 (2014)
[2] Jimmy, F.: Emerging threats: The latest cybersecurity risks and the role of artificial intelligence in enhancing cybersecurity defenses. Valley International Journal Digital Library 1, 564-574 (2021)
[3] Admass, W.S., Munaye, Y.Y., Diro, A.A.: Cyber security: State of the art, challenges and future directions. Cyber Security and Applications 2, 100031 (2024)
[4] Shaukat, K., Luo, S., Chen, S., Liu, D.: Cyber threat detection using machine learning techniques: A performance evaluation perspective. In: 2020 International Conference on Cyber Warfare and Security (ICCWS), pp. 1-6 (2020).
IEEE
[5] Alzaabi, F.R., Mehmood, A.: A review of recent advances, challenges, and opportunities in malicious insider threat detection using machine learning methods. IEEE Access 12, 30907-30927 (2024)
[6] Pacheco, F., Exposito, E., Gineste, M., Baudoin, C., Aguilar, J.: Towards the deployment of machine learning solutions in network traffic classification: A systematic survey. IEEE Communications Surveys & Tutorials 21(2), 1988-2014 (2018)
[7] Du, M., Li, F., Zheng, G., Srikumar, V.: DeepLog: Anomaly detection and diagnosis from system logs through deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1285-1298 (2017)
[8] Alshehri, A., Khan, N., Alowayr, A., Alghamdi, M.Y.: Cyberattack detection framework using machine learning and user behavior analytics. Computer Systems Science & Engineering 44(2) (2023)
[9] Wu, W., Konstantinidis, G.: Trust and reputation in data sharing: A survey. The VLDB Journal (2025)
[10] Liu, Y., Song, H.H., Bermudez, I., Mislove, A., Baldi, M., Tongaonkar, A.: Identifying personal information in internet traffic. In: Proceedings of the 2015 ACM on Conference on Online Social Networks, pp. 59-70 (2015)
[11] Kedys, A.: Fast-changing cyber threat landscape and a new reality of cyber security. Cyber Security: A Peer-Reviewed Journal 8(3), 273-280 (2025)
[12] Agrawal, G., Kaur, A., Myneni, S.: A review of generative models in generating synthetic attack data for cybersecurity. Electronics 13(2), 322 (2024)
[13] Kumar, V., Sinha, D.: Synthetic attack data generation model applying generative adversarial network for intrusion detection. Computers & Security 125, 103054 (2023)
[14] Le, J., Viswanathan, A., Zhang, Y.: Generating high-fidelity cybersecurity data with generative adversarial networks. In: ASCEND 2020, p. 4117 (2020)
[15] Rao, P.K., Chatterjee, S., Prakash, P.S., Ramana, K.S.: Adaptive cyber defence: Leveraging GANs for simulating and mitigating advanced network attacks in IoT environments. In: International Symposium on Applied Computing for Software and Smart Systems, pp. 309-322 (2024). Springer
[16] Gondhi, S.R., Janak, U.R., Raja, A., Asrari, A.: WGAN-based synthetic data generation for modeling cyberattacks in power transmission systems. In: 2024 8th International Symposium on Innovative Approaches in Smart Technologies (ISAS), pp. 1-5 (2024). IEEE
[17] Voigt, P., Bussche, A.: The EU General Data Protection Regulation (GDPR): A Practical Guide, 1st edn. Springer International Publishing, Cham (2017)
[18] Ness, R.B., Joint Policy Committee, et al.: Influence of the HIPAA Privacy Rule on health research. JAMA 298(18), 2164-2170 (2007)
# D3G: Diverse Demographic Data Generation Increases Zero-Shot Image Classification Accuracy within Multimodal Models

Abstract

Image classification is a task essential for machine perception to achieve human-level image understanding. Multimodal models such as CLIP have been able to perform well on this task by learning semantic similarities across vision and language; however, despite these advances, image classification is still a challenging task. Models with low capacity often suffer from underfitting and thus underperform on fine-grained image classification. Along with this, it is important to ensure high-quality data with rich cross-modal representations of each class, which is often difficult to generate. When datasets do not enforce balanced demographics, the predictions will be biased toward the more represented classes, while others will be neglected. We focus on how these issues can lead to harmful bias in zero-shot image classification, and explore how to combat demographic bias in particular. We propose Diverse Demographic Data Generation (D3G), a training-free, zero-shot method of boosting classification accuracy while reducing demographic bias in pre-trained multimodal models. With this method, we utilize CLIP as our base multimodal model and Stable Diffusion XL as our generative model. We demonstrate that providing diverse demographic data at inference time improves performance for these models, and explore the impact of individual demographics on the resulting accuracy metric.

# 1 Introduction

Deep Learning systems have been a promising new paradigm in the field of image classification. Vision-Language systems are able to utilize multiple modalities to create models that are more generalizable to a broad range of downstream tasks. Despite this performance, issues such as data redundancy, noise, and class imbalance are just a few of the many difficulties that arise from collecting large amounts of training data. Multimodal models, in particular, require large amounts of high-quality training data with rich cross-modal representations in order to perform well compared to their unimodal counterparts. These models leverage the massive amounts of image-text pairs available online by learning to associate the images with their correct caption, leading to greater flexibility during inference (Pratt et al. 2023); however, class imbalances can often lead to gender and racial bias depending on the desired task. Existing public face image datasets are strongly biased toward Caucasian faces; meanwhile, other races (e.g., Latino) are significantly underrepresented (Figure 4). As a result, the models trained from such datasets suffer from inconsistent classification accuracies, limiting the applicability of such systems to predominantly white racial groups (Karkkainen and Joo 2021). This means that minority subpopulations can potentially be further marginalized when such models are applied to certain downstream tasks without calibration. This is a core tenet of machine learning: poor data produces poor models.

Figure 1: Images Generated with D3G for Race 4

It is important to note that we acknowledge that not all bias is harmful; bias is often necessary for models to generalize. This is why, within this work, we focus on demographic bias, which frequently has harmful implications when applied within society. Image classification is particularly pressing because it is the core of a myriad of computer vision tasks.
Facial recognition, object detection, image search, content moderation, sentiment analysis, and many more tasks are grounded in accurate image classification systems. This is compounded by the fact that many widely used Foundational Models require multiple modalities, such as DALL-E (Ramesh et al. 2021) and Stable Diffusion (Rombach et al. 2022). In order to train these models, other models such as CLIP (Radford et al. 2021) are used to classify the data that the model will be trained on, in order to enforce a strong cross-modal correlation. This means that demographic biases will be compounded as the images continue to be utilized in training processes.

Figure 2: Google Photos misclassifying black people within the Photos application.

When there are strong harmful demographic biases, these models can cause tremendous harm. One example was when Google Photos classified black people as gorillas within a user's album, as shown in Figure 2. This is the direct result of demographic bias. The dataset used to quantify the classification accuracy of Google's model likely contained the standard biases, where images of non-Caucasian faces are underrepresented (Figure 4). This leads to a misleading evaluation, which is likely why this model was deployed to the public with this significant issue.

In this work, we explore how demographic bias affects image classification accuracy for multimodal models. We also propose D3G, a zero-shot, training-free framework to balance demographic bias and boost classification accuracies for multimodal models used for image classification.

# 2 Related Work

# 2.1 Model Ensembling for Image Classification

Many state-of-the-art techniques for image classification leverage a methodology known as model ensembling. Ensemble learning, broadly, is an approach that aims to improve performance by combining the predictions of multiple models. There are many such ensemble learning methods, but the one most relevant to our proposed technique is called bagging.

Bagging predictors, originally published by Breiman (1996), introduced "bootstrap aggregating," or bagging, the ensemble learning method that combines multiple models trained on different subsets of the training data. The formulation is presented as follows: a learning set $L$ consists of data $(y_{n},\mathbf{x}_{n})$, $n = 1,\dots,N$, where the $y$'s are class labels. Assume we form a predictor $\phi (\mathbf{x},L)$: if the input is $\mathbf{x}$, we predict $y$ by $\phi (\mathbf{x},L)$. Now suppose we form a sequence of replicate learning sets $L^{(B)}$, each consisting of $N$ observations drawn at random with replacement from $L$. Since $y$ is a class label in our scenario, the predictions from each of the predictors $\phi (\mathbf{x},L^{(B)})$ vote to form $\phi_B(\mathbf{x})$, the aggregated final prediction. Bagging has been shown, both empirically and theoretically, to improve accuracy for a given set of weak classifiers, or "weak learners." This technique effectively mirrors our proposed method within a controlled environment. In the implementation of D3G, we employ a strategy similar to bagging, but across modalities and with generated data. Our approach is motivated by the theoretical guarantees of bagging: even though each model is trained on a subset of the data, the aggregation of the predictions begins to approximate the true distribution of the data.
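As a small illustration of the bagging formulation above (a toy sketch, not part of the D3G pipeline), the following snippet builds B bootstrap replicates, fits a weak learner on each, and aggregates their votes into $\phi_B(\mathbf{x})$. The synthetic dataset and the shallow decision-tree learner are illustrative choices.

```python
# Toy bagging: B predictors fit on bootstrap replicates L^(B), aggregated by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

B = 25
predictors = []
for _ in range(B):
    idx = rng.integers(0, len(X), size=len(X))   # N observations drawn with replacement
    predictors.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# phi_B(x): each predictor votes; the most common class label wins
votes = np.stack([p.predict(X) for p in predictors])                 # shape (B, N)
phi_B = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("bagged training accuracy:", (phi_B == y).mean())
```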
A model that applies this concept of model ensembling was introduced in Learning to Navigate for Fine-grained Classification by Yang et al. (2018). This is a state-of-the-art paper that attempts to reduce misclassification rates by developing a model called NTS-Net (Navigator-Teacher-Scrutinizer Network) that teaches itself methods of identifying and scrutinizing fine-grained image details. The Navigator directs the model to focus on the most informative regions (denoted by yellow rectangles in Figure 3), while the Teacher evaluates the regions proposed by the Navigator and provides feedback. After that, the Scrutinizer scrutinizes those regions to make predictions. NTS-Net achieves high classification accuracy on a pre-defined dataset; however, we wanted to explore whether these foundational concepts could be applied in a training-free, zero-shot environment. In order to achieve this, we leverage pretrained open-vocabulary models with advanced attention mechanisms to discriminate the fine-grained features of a given image, improving performance on a variety of classes without additional training.

Figure 3: The NTS-Net architecture (Yang et al. 2018)

Similarly, the paper Sus-X: Training-Free Name-Only Transfer of Vision-Language Models by Udandarao, Gupta, and Albanie (2022) achieves state-of-the-art zero-shot classification results on 19 benchmark datasets, outperforming other training-free adaptation methods. It also demonstrates strong performance in the training-free few-shot setting, surpassing previous state-of-the-art methods. That paper is focused on general image classification improvements; however, we aim to explore how its idea of synthetic support set generation affects the fairness of predictions from a classification model. We will employ a similar strategy but also explore how to offset existing harmful biases within the zero-shot setting.

# 2.2 Data Filtering and Generation

Neural Priming for Sample-Efficient Adaptation by Wallingford et al. (2024) proposes a technique to adapt large pretrained models to distribution shifts. This paper demonstrates that we can leverage an open-vocabulary model's own pretraining data in order to improve performance on downstream tasks. Even though we don't aim to utilize the model's training data in our method, the generated images will likely be sampled from a similar distribution as the multimodal model. This paper shows that even if that is the case, we can still use filtering and guidance in order to improve performance. In our case, our custom prompting method plays the role of guiding the image generation process, resulting in the same kind of empirical performance improvements.

DATACOMP: In search of the next generation of multimodal datasets by Gadre et al. (2024) has completely different goals from Neural Priming, but achieves them similarly. The paper introduces DataComp, a test bed for dataset-related experiments that contains 12.8 billion image-text pairs retrieved from Common Crawl. Upon retrieving this pool, the authors proceed to train a new CLIP model with a fixed architecture and hyperparameters. The paper concludes that CommonPool and LAION-2B are comparable under the same filtering. This means that image-based filtering and CLIP score filtering excel on most tasks and can be used effectively to retrain other models. Despite this, the paper mentions that the authors found demographic biases in models trained using their pool, but their goal was not to reduce these harmful biases.
In this paper, we aim to offset this demographic bias found in models trained on large-scale filtered data pools such as DataComp.

# 2.3 Ethics and Fairness

The FairFace dataset and classifier were first published in *FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation* by Karkkainen and Joo (2021). This project focused on creating a dataset and classifier that are balanced across race, gender, and age, as shown in Figure 4. This balance is crucial because the paper demonstrates that it allows for improved generalization of classification performance on the defined demographics, even on novel datasets that contain more non-White faces than typical datasets. The fact that simply balancing these demographics allows for increased accuracy and generalizability is extremely important. This is the core of D3G, and FairFace shows that balancing demographics results in performance improvements. The primary difference is that we aim to show similar improvements without any additional training. Alongside creating a balanced dataset, the authors also demonstrated that their classifier produces balanced accuracy across the specified demographics, which is crucial because we use this classifier to create new labels for the IdenProf dataset.

Figure 4: Racial compositions in face datasets (Karkkainen and Joo 2021)

# 3 Methods

We aim to create an ensemble of models to improve multimodal image classification accuracy, especially for models that are trained on data with a class imbalance. We test this method on standard benchmark datasets, such as ImageNet as shown in Figure 6, and then expand our technique to classify demographic-focused datasets. CLIP (Contrastive Language-Image Pretraining) (Radford et al. 2021) will be used for image-to-text retrieval, and Stable Diffusion XL 1.0 (Podell et al. 2023) for image generation. Our approach is as follows:

# 3.1 Datasets

For all the results shown in this paper, we classify images from the IdenProf test dataset. We selected this dataset because it provides a simple, applicable downstream task and because all the images were collected and filtered by hand via Google Image search. Each image in the dataset can belong to one of ten classes: Chef, Doctor, Engineer, Farmer, Firefighter, Judge, Mechanic, Pilot, Police, or Waiter. In total, there are 2,000 images for testing, with 200 images for each class.

Finally, it is important to note the demographic distribution published by the dataset authors. The IdenProf dataset consists of $80.6\%$ male subjects and $19.4\%$ female subjects. Along with this, $91.1\%$ of the people within the dataset are White, while $8.9\%$ are of another race. The dataset author also noted that there were more images of Asian and White people obtainable than of black people. Similarly, there were more images of men obtainable than of women. This reflects the demographic biases discussed previously.

Figure 5: The D3G Framework

Along with IdenProf, we also leverage information collected from the FairFace dataset (Karkkainen and Joo 2021). This dataset defines common demographics and forms them into classification categories. The authors constructed a dataset containing 108,501 images, and even though we do not utilize this dataset within this paper, the demographic information is still useful. We leverage the classification model that they trained on their own dataset. As a result, the classifier's predictions are highly balanced and less likely to contain demographic bias.
We use this classifier to assign additional labels to the images within IdenProf. There are three primary demographics that will be assigned as labels: race, gender, and age. Along with this, the race category has two versions: race 4 is coarse-grained, with only four races to choose from, while race 7 is fine-grained, with seven races to choose from. Combining the classes from IdenProf and FairFace, every image in the dataset can be classified with any of the labels identified in Table 1.

# 3.2 Creating Prompts

To generate our prompts, we leverage a set of templates constructed based on the demographics identified in Table 1. These templates are designed to expose and leverage a specific demographic bias, based on whatever image is currently being classified. For instance, if we were attempting to classify the profession of the person within the image, our prompts would be as shown in Table 2. This process is pictured within Figure 5.

<table><tr><td>Class</td><td>Values</td></tr><tr><td>profession</td><td>Chef, Doctor, Engineer, Farmer, Firefighter, Judge, Mechanic, Pilot, Police, or Waiter</td></tr><tr><td>race 7</td><td>White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino</td></tr><tr><td>race 4</td><td>White, Black, Indian, Asian</td></tr><tr><td>gender</td><td>Male, Female</td></tr><tr><td>age</td><td>0-2, 3-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+</td></tr></table>

Table 1: All potential classes for an image from IdenProf

<table><tr><td>Demographic</td><td>Prompt</td><td>Text</td></tr><tr><td>Profession</td><td>&quot;A photo of a &lt;prof&gt;&quot;</td><td>A photo of a doctor</td></tr><tr><td>Race 7</td><td>&quot;A photo of a &lt;race&gt;&lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Race 4</td><td>&quot;A photo of a &lt;race&gt;&lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Gender</td><td>&quot;A photo of a &lt;gender&gt;&lt;prof&gt;&quot;</td><td>A photo of a male doctor</td></tr><tr><td>Age</td><td>&quot;A photo of a &lt;age&gt; year old &lt;prof&gt;&quot;</td><td>A photo of a 30-39 year old doctor</td></tr></table>

Table 2: Example diverse demographic texts for classifying profession. Note that each prompt starts with "A photo of a," and that all the correct nouns and adjectives are added to the prompts as shown in the right column. More examples are provided in Section 9.

# 3.3 Generate Class Images

Upon creating diverse demographic prompts from templates for each of the classes, each of these prompts is used to generate an image. We employ Stable Diffusion XL, a diffusion-based image generation model, to conditionally generate an image of each class in the dataset. The result is a set of images that emphasize the diverse demographics across the classes. For Standard D3G, we generate one image per prompt and then average the embeddings of the images generated from all the prompts; for Average Image D3G, we generate five images per prompt and then perform the same process of averaging the embeddings of these images. Generating these diverse images is crucial because our goal is to combat prediction bias by generating diverse images and utilizing them in the next step when predicting labels. This step is depicted by the images on the right of Figure 6.

# 3.4 Weighted Sum

Using the prompts created earlier, we start the classification phase. We use the image and text encoders from CLIP ViT-L/14, our multimodal model, in order to get the embeddings for the generated images.
# 3.4 Weighted Sum

Using the prompts created earlier, we start the classification phase. We use the image and text encoders from CLIP ViT-L/14, our multimodal model, to obtain embeddings for the generated images.

<table><tr><td>Demographic</td><td>Prompt</td><td>Text</td></tr><tr><td>Profession</td><td>&quot;A photo of a &lt;prof&gt;&quot;</td><td>A photo of a doctor</td></tr><tr><td>Race 7</td><td>&quot;A photo of a &lt;race&gt; &lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Race 4</td><td>&quot;A photo of a &lt;race&gt; &lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Gender</td><td>&quot;A photo of a &lt;gender&gt; &lt;prof&gt;&quot;</td><td>A photo of a male doctor</td></tr><tr><td>Age</td><td>&quot;A photo of a &lt;age&gt; year old &lt;prof&gt;&quot;</td><td>A photo of a 30-39 year old doctor</td></tr></table>

Table 2: Example diverse demographic texts for classifying profession. Note that each prompt starts with "A photo of a," and that all the correct nouns and adjectives are added to the prompts as shown in the right column. More examples are provided in Section 9.

Upon getting these embeddings, we scan values from 0 to 1 with a step size of 0.01 in order to find an optimal weight for a weighted sum of the text and image embeddings. The text embedding is given a weight of $w$ while the image embedding is weighted by $1 - w$. This step allows us to bridge the semantic gap between text and images, because image embeddings tend to lie closer to other image embeddings than to text embeddings. After performing this step, we obtain a new embedding that represents the weighted combination of the text and image embeddings.

# 3.5 Classification

Finally, we get the embedding of the query image by passing it through the CLIP image encoder. At this point, we simply compute the cosine similarity between the query image embedding and the combined image-text embedding of each class. To classify the image, we take the class with the highest similarity score as the prediction. This step is depicted by the blue arrows within Figure 6.

Figure 6: A demo example of D3G on difficult fine-grained classes from the ImageNet dataset (Note: we utilized ImageNet for this example to showcase the fine-grained classification capabilities of D3G. IdenProf does not have such fine-grained classes).
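The weighted-sum and classification steps in Sections 3.4 and 3.5 reduce to a small amount of tensor arithmetic. The sketch below uses the Hugging Face `openai/clip-vit-large-patch14` checkpoint as an assumed stand-in for the paper's CLIP ViT-L/14 encoder and scans $w$ from 0 to 1 in steps of 0.01; selecting $w$ by accuracy on a labeled image set is our reading of "finding an optimal weight," not a detail the paper specifies.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def embed_texts(prompts):
    inputs = processor(text=prompts, return_tensors="pt", padding=True)
    with torch.no_grad():
        z = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(z, dim=-1)

def embed_images(pil_images):
    inputs = processor(images=pil_images, return_tensors="pt")
    with torch.no_grad():
        z = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(z, dim=-1)

def class_embedding(prompts, generated_images, w):
    """Weighted sum of the mean text embedding (weight w) and the mean
    embedding of the generated images (weight 1 - w) for one class."""
    text_emb = embed_texts(prompts).mean(dim=0)
    img_emb = embed_images(generated_images).mean(dim=0)
    combined = w * text_emb + (1.0 - w) * img_emb
    return torch.nn.functional.normalize(combined, dim=-1)

def classify(query_image, class_prompts, class_images, w):
    """Cosine-similarity classification of a single query image (Section 3.5)."""
    q = embed_images([query_image])[0]
    classes = list(class_prompts)
    refs = torch.stack([class_embedding(class_prompts[c], class_images[c], w)
                        for c in classes])
    sims = refs @ q  # cosine similarity, since all vectors are unit-normalized
    return classes[int(sims.argmax())]

def scan_weight(val_images, val_labels, class_prompts, class_images):
    """Scan w from 0 to 1 in steps of 0.01 and keep the best-performing value.
    (Class embeddings could be cached per w; recomputation is kept for clarity.)"""
    best_w, best_acc = 0.0, -1.0
    for step in range(101):
        w = step / 100.0
        preds = [classify(im, class_prompts, class_images, w) for im in val_images]
        acc = sum(p == y for p, y in zip(preds, val_labels)) / len(val_labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```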
# 4 Results

# 4.1 Metrics

We use top-1 accuracy as the standard metric for our results. We selected this metric for a variety of reasons, the most prominent being that this paper aims to increase zero-shot classification accuracy. There are many metrics that represent zero-shot accuracy; however, top-1 accuracy is the most common.

Figure 7: The process of classifying an image with CLIP at inference time (Radford et al. 2021).

# 4.2 Evaluation Breakdown

We study three primary classification methods in this paper. CLIP ViT-L/14 is our baseline, the standard method of multimodal classification as outlined in Figure 7. The second method is Standard D3G, as shown in Figure 5. In this method, for every class in our dataset, we generate one prompt for each of the specified demographics, then use these prompts to generate images and average their embeddings. Finally, our third method is Average Image D3G. This follows the exact same process as Standard D3G; however, instead of generating one image per demographic prompt, we generate five, then average the embeddings of all the generated images for a given class.

Along with our three classification methods, we outline five prompting strategies for creating our demographic prompts within the D3G framework, as shown in the top row of Table 3.

<table><tr><td>Demographic</td><td>Method</td><td>Profession</td><td>Race 7</td><td>Race 4</td><td>Gender</td><td>Age</td></tr><tr><td rowspan="3">Profession</td><td>CLIP</td><td>95.14</td><td>94.73</td><td>95.22</td><td>96.52</td><td>94.81</td></tr><tr><td>Standard D3G</td><td>95.54</td><td>95.22</td><td>95.30</td><td>96.52</td><td>95.06</td></tr><tr><td>Average Image D3G</td><td>95.87</td><td>95.62</td><td>95.38</td><td>96.76</td><td>95.54</td></tr><tr><td rowspan="3">Race 7</td><td>CLIP</td><td>44.65</td><td>28.20</td><td>-</td><td>28.61</td><td>25.69</td></tr><tr><td>Standard D3G</td><td>45.38</td><td>31.85</td><td>-</td><td>32.90</td><td>30.96</td></tr><tr><td>Average Image D3G</td><td>45.46</td><td>32.33</td><td>-</td><td>33.55</td><td>32.25</td></tr></table>

Table 3: Results when classifying the specified demographic of the people within the IdenProf dataset. The far left column shows the demographic that will be classified. The second column dictates the method used for classification, while the remaining columns dictate the prompting structure, as discussed in Section 4.2. Additional results are shown in Section 9.

<table><tr><td>Demographic</td><td>Method</td><td>Profession</td><td>Race 7</td><td>Race 4</td><td>Gender</td><td>Age</td></tr><tr><td rowspan="2">Profession</td><td>Standard D3G</td><td>0.85 / 0.15</td><td>0.84 / 0.16</td><td>0.90 / 0.10</td><td>0.91 / 0.09</td><td>0.90 / 0.10</td></tr><tr><td>Average Image D3G</td><td>0.71 / 0.29</td><td>0.74 / 0.26</td><td>0.84 / 0.16</td><td>0.91 / 0.09</td><td>0.67 / 0.33</td></tr><tr><td rowspan="2">Race 7</td><td>Standard D3G</td><td>0.90 / 0.10</td><td>0.68 / 0.32</td><td>-</td><td>0.68 / 0.32</td><td>0.67 / 0.33</td></tr><tr><td>Average Image D3G</td><td>0.92 / 0.08</td><td>0.69 / 0.31</td><td>-</td><td>0.67 / 0.33</td><td>0.68 / 0.32</td></tr></table>

Table 4: The weight values used to achieve the results in Table 3. For each evaluation, the left value is the text embedding weight and the right is the image embedding weight. CLIP is not included because no images are weighted with the text embeddings. Note that the sum of the text and image weights for a given evaluation should equal 1.

It is important to note that all of these prompting strategies add demographic information in addition to the specified classification category. For instance, as shown in Table 2, when the task is to classify profession, we can add information regarding race or gender in addition to the standard profession class. This allows us to study how specific demographics affect classification accuracy.

Finally, we also explore and analyze the per-class accuracy results when classifying the specified demographic, as discussed in later sections.

# 4.3 Top-1 Results

The top-1 results when classifying two of the five demographics are shown in Table 3 (additional results for the other demographics are included in Section 9).

CLIP performs fairly well when classifying profession, which was to be expected because CLIP's training data likely includes richer cross-modal representations related to profession. As a result, all the accuracies are quite high. Despite this already high performance, D3G is still able to improve performance. This implies that providing diverse demographics can still improve CLIP's understanding of well-established concepts.

We gain a better understanding of D3G's efficacy when we look at the performance gains on Race 7. For this demographic, CLIP's performance is much worse. Once again, the model performs better when the prompt contains information regarding profession, due to the increased likelihood of this information appearing within the training data; however, when this information is omitted, the performance on the other demographics is abysmal. With accuracies only around $10 - 15\%$ above random guessing (which should be around $14\%$ top-1 accuracy for seven classes), CLIP does not appear to have a deep understanding of race and other demographics.

With this in mind, by simply applying D3G, we are able to push the accuracies up by $4 - 7\%$. This indicates that coupling the standard embeddings with generated diverse data improves CLIP's understanding of concepts that were previously misunderstood. In addition, it is important to note that Average Image D3G typically performs better than the standard method.
Once again, this makes sense and conforms to our hypothesis: generating diverse data pushes the embeddings closer to the ground-truth position within embedding space, resulting in more accurate predictions for classes the model may not fully understand.

These results are highly promising, and we can learn a bit more about the effect of D3G on these results by looking at the weights utilized to produce these scores.

# 4.4 Weighting Strategy

Recall that in order to classify a query image, we form a weighted sum of the text prompt embedding and the generated image embeddings. The ratio of text-to-image weighting is determined by scanning values until an optimal state is found. This is necessary because, for certain images, the text embeddings contribute more to the classification result than the image embeddings, and vice versa. With this in mind, the weighting ratio between images and text is also an indicator of how much the generated images from D3G actually help for a given demographic. Knowing this information, we can start to understand exactly what our results mean in the broader scope.

<table><tr><td>Demographic</td><td>Prompt</td><td>White</td><td>Black</td><td>Latino</td><td>East Asian</td><td>South East Asian</td><td>Indian</td><td>Middle Eastern</td></tr><tr><td rowspan="5">Race 7</td><td>Profession</td><td>68.19</td><td>70.90</td><td>15.38</td><td>43.46</td><td>20.59</td><td>57.58</td><td>13.80</td></tr><tr><td>Race 7</td><td>8.92</td><td>66.42</td><td>11.54</td><td>52.74</td><td>8.82</td><td>60.61</td><td>32.68</td></tr><tr><td>Race 4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Gender</td><td>18.07</td><td>73.13</td><td>11.54</td><td>35.02</td><td>17.65</td><td>51.52</td><td>34.93</td></tr><tr><td>Age</td><td>12.29</td><td>68.66</td><td>11.54</td><td>43.46</td><td>14.71</td><td>69.70</td><td>29.58</td></tr></table>

Table 5: Standard D3G per-class results when classifying the specified demographic. Note that all the prompts are as described in Table 2 (e.g. "A photo of a black person", or "A photo of a 30-39 year old doctor"). More examples are provided in Section 9.

When viewing the weights in Table 4, we see the same trends that were displayed in Table 3, but we also get a glimpse as to why D3G's performance gains were sometimes minimal. When classifying profession, most of the text weights for Standard D3G are quite high, roughly around $85 - 90\%$; however, whenever we see larger increases in accuracy for D3G, we also see an increased weighting of the generated images. This is especially evident when classifying race 7. Once again, the prompts that utilized professions were able to achieve somewhat higher accuracies, due to the structure of the dataset; however, for every other race 7 evaluation, the generated images played a major role in the classification results. The fact that images were consistently weighted around $30\%$ shows that diversity matters when classifying demographics.

Our top-1 results and their corresponding weighting strategies show that the method works; however, the per-class results give us a deeper understanding of why the method works.

# 4.5 Per-Class Results

For these results, we primarily reference Table 5; however, note that additional per-class results are included in Section 9. When classifying race 7, we know that the best performance gains came from including gender and age in the prompts. Focusing on these rows, we can see a few interesting trends. For instance, including information about gender improves the accuracy for Black and Middle Eastern people the most. This is likely because gender is underrepresented for these populations within CLIP's training data. In Section 5.2, we discuss future methods of confirming this hypothesis.
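Per-class numbers like those in Table 5 follow directly from the prediction and label lists; the short sketch below is our own bookkeeping for reproducing that breakdown, not the authors' evaluation code.

```python
from collections import defaultdict

def per_class_accuracy(predictions, labels):
    """Top-1 accuracy computed separately for each ground-truth class."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label in zip(predictions, labels):
        total[label] += 1
        correct[label] += int(pred == label)
    return {cls: 100.0 * correct[cls] / total[cls] for cls in total}

# Toy example with race-7 predictions against FairFace-derived labels.
preds  = ["Black", "White", "Black", "Indian", "East Asian"]
labels = ["Black", "Black", "Black", "Indian", "Latino"]
print(per_class_accuracy(preds, labels))
# -> Black: ~66.7, Indian: 100.0, Latino: 0.0
```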
Now that we understand which demographics help classification accuracy, we can start to extend these inferences across demographics. Images and text related to East Asian people likely did not have rich cross-modal representations, because race 7 helped the most for this demographic. This means that simply generating images of diverse races was able to significantly boost the accuracy. Similarly, age was the most useful demographic when classifying people in images as Indian. This was quite surprising, and as we discuss in Section 5.2, we intend to further explore the impact of these results by including additional metrics such as precision, recall, specificity, and F1 score.

Another pattern in these per-class results emerges when we compare accuracies across demographic columns. For instance, Black generally achieves a higher per-class accuracy than the other demographics, with Indian and East Asian obtaining the second and third-highest per-class accuracies across all the prompts. Meanwhile, Latino, Southeast Asian, and White achieve some of the lowest per-class accuracies across all prompts. We were very surprised by this outcome, especially by the fact that race 7 was the worst-performing prompt for White, which had the majority representation within the dataset. Intuitively, this may imply that providing diverse representations can also move embeddings away from the correct position in embedding space. To combat this, we may be able to strategically weight generated image and text prompt embeddings in relation to their demographic proportions within the dataset (e.g., if Latino is underrepresented within the dataset, then we will up-weight the Latino embeddings). This idea is further explored in Section 5.2.

Finally, we did not analyze the per-class results for profession, because we cannot infer why particular demographics performed best: CLIP leverages profession information to make its predictions, and the dataset itself is catered toward profession. This means that the increased accuracies could be due either to the profession information within the prompt or to the images generated of each profession. Either way, we will need to run more tests to fully understand this. We intend to evaluate on other datasets so we can understand whether this correlation indicates causation; however, these are very promising results.

# 5 Discussion

# 5.1 Assumptions

Within this paper, two prominent assumptions are made:

1. The generative model has a better learned representation of the true distribution of the data (due to its increased complexity and data diversity).
2. The base multimodal model can distinguish between similar classes. Our method will not improve performance if this is not the case.

These assumptions are necessary for D3G to function properly, but they are not unreasonable for a zero-shot setting. The generative model must have a better learned representation of the true data distribution because it needs to be able to generate images that accurately represent the desired concept. If the model cannot generate useful images, then D3G reverts to the baseline CLIP method, with text-based classification.

In addition, we need our base model to be able to distinguish between similar classes, because if two classes correspond to the same point within embedding space, then our model cannot distinguish them. Similarly, we need this assumption so that the weighted sum of the image and text embeddings actually pushes the embedding toward the true embedding, and not just in a random direction. If the base model could not distinguish between certain classes, then we would have no guarantee that forming a weighted sum actually improves classification, because the model would be completely guessing in that case. In the future, we may be able to validate this assumption by comparing the embeddings within embedding space to ensure they are an adequate distance apart, but for now this is maintained as an assumption.
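As a first step toward validating the second assumption, one could simply check how close the class prompts sit in CLIP's text embedding space: a pair with near-identical embeddings would be a warning sign. The sketch below is a minimal version of such a check using the Hugging Face CLIP checkpoint; any similarity threshold for "adequate distance" would be an empirical choice rather than a value from this paper.

```python
import itertools
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def most_similar_pair(class_names, template="A photo of a {}"):
    """Find the pair of class prompts whose CLIP text embeddings are closest.
    A cosine similarity near 1.0 suggests the base model may not separate
    those two classes, which would violate assumption 2."""
    inputs = processor(text=[template.format(c) for c in class_names],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = torch.nn.functional.normalize(emb, dim=-1)
    worst = (None, None, -1.0)
    for i, j in itertools.combinations(range(len(class_names)), 2):
        sim = float(emb[i] @ emb[j])
        if sim > worst[2]:
            worst = (class_names[i], class_names[j], sim)
    return worst

# Example usage: most_similar_pair(["doctor", "judge", "pilot"]) returns the
# closest pair of prompts and their cosine similarity.
```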
These assumptions on their own are not unreasonable; however, in certain circumstances they may become limitations, as discussed later.

As mentioned previously, this research is crucial because models such as CLIP are frequently used to filter large datasets such as DataComp-1B or LAION-2B. If CLIP performs this poorly when classifying demographics, then these biases will be reinforced in all models trained on those datasets. This issue has compounding effects, so to reduce demographic bias within image generation, object detection, and content moderation models, we must start with image classification.

# 5.2 Future Work

With such promising results from this project, there are many steps we intend to take in the future to ensure this method is as robust as possible.

To start, we aim to include additional metrics that properly quantify the balance between demographics, to better understand how D3G balanced the predictions of the multimodal classifier. We specifically hope to investigate the robustness of our approach to class imbalance, data redundancy, and noise levels.

For this paper, we decided to simply average the embeddings of all images generated with D3G; however, this may not be the most effective process. Even though we generate images of a diverse range of demographics, these demographics are not weighted equally by CLIP (as demonstrated previously in Table 5), due to its training data. This means that by utilizing the CLIP image encoder to get embeddings for all of our images, we are only offsetting the existing bias. This does not create a neutral embedding; rather, it creates an embedding that still emphasizes the existing bias but is slightly more balanced across demographics. To combat this, we aim to explore how we can create a weighted sum of the embeddings from individual images that is informed by the demographics of the training data and of the broader world. Intuitively, if CLIP tends to favor one demographic, then we will down-weight those images, and vice versa if CLIP rarely selects another demographic. In this way, we can robustly enforce equity within CLIP's predictions.
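One concrete way to realize this re-weighting idea, sketched under the assumption that rough demographic proportions are available for the reference data, is to replace the plain mean over generated-image embeddings with an inverse-frequency weighted mean. This is a speculative illustration of the future direction described above, not something evaluated in this paper.

```python
import torch

def demographic_weighted_mean(image_embs: dict, proportions: dict) -> torch.Tensor:
    """Weighted mean of per-demographic generated-image embeddings.

    image_embs:  demographic name -> tensor of shape (n_images, d), unit-normalized
    proportions: demographic name -> estimated share in the reference data (sums to 1)
    Demographics that are rare in the reference data receive larger weights.
    """
    names = list(image_embs)
    inverse = torch.tensor([1.0 / max(proportions[n], 1e-6) for n in names])
    weights = inverse / inverse.sum()               # normalize weights to sum to 1
    per_demo = torch.stack([image_embs[n].mean(dim=0) for n in names])
    combined = (weights.unsqueeze(1) * per_demo).sum(dim=0)
    return torch.nn.functional.normalize(combined, dim=0)

# Hypothetical example: Latino is underrepresented in the reference data, so its
# generated images are up-weighted relative to White.
# embs = {"white": torch.randn(5, 768), "latino": torch.randn(5, 768)}
# combined = demographic_weighted_mean(embs, {"white": 0.9, "latino": 0.1})
```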
In addition to this step, we also aim to utilize OpenCLIP in the future so that we can accurately draw conclusions about the model's predictions in relation to its training data. Since we solely used CLIP as a baseline for this paper, we are unable to confidently state that the distribution of the training data led to the model's sometimes biased predictions; however, this is strongly implied. By utilizing a model with open training data and architecture, we can draw these conclusions with certainty. Researchers are starting to explore demographic bias within LAION-2B and DataComp-1B (the training data for certain OpenCLIP models), and we aim to leverage this knowledge for future implementations.

We would also like to expand our evaluation suite to multiple datasets. Currently, we only evaluate on the 2,000 test images from IdenProf, but we could start by utilizing the full dataset of 11,000 training and test images, since we are not training and want a wider pool of images. In addition, we intend to perform similar tests on the FairFace dataset. These results would more effectively isolate CLIP's capabilities in predicting the demographics outlined in this paper, since the FairFace dataset was constructed with these demographics in mind. This is especially important because we found that CLIP was able to leverage the semantic information regarding professions within the dataset in order to classify race 7 more accurately. By removing professions as a factor, we will be able to fully explore CLIP's performance on such tasks.

We were also particularly intrigued by CLIP's inadequate performance when classifying demographics such as race 7, so we aim to conduct an analysis of the individual classification results, combined with metrics such as precision, recall, specificity, and overall F1 score, in order to better understand whether CLIP's performance on these demographics is statistically significant. If the positive predictions are solely informed by demographic stereotypes, then we aim to expose these weaknesses and combat them with D3G.

Finally, in addition to generating images based on the demographics, we also aim to explore methods of retrieving images, or of modifying the demographics of the query image in place. Modifying the existing query image to obtain diverse demographics may reduce the impact of stereotypes enforced by the image generation model and result in classifications that are much more accurate.

# 6 Conclusion

Image classification remains a challenging task despite advancements in multimodal models like CLIP that leverage semantic similarities across vision and language. Low-capacity models often suffer from underfitting, leading to poor performance; however, the generation of high-quality data with rich cross-modal representations is also difficult. Imbalanced demographics in datasets can cause predictions to bias toward more represented classes, leaving those who are underrepresented by the wayside. Our study highlights these issues and their impact on zero-shot image classification, proposing Diverse Demographic Data Generation (D3G) as a solution. This training-free, zero-shot method enhances classification accuracy and reduces demographic bias in pre-trained multimodal models by providing diverse demographic data at inference time, demonstrating improved performance for these models.

# 7 Ethics Statement

Our use of image generation models in D3G carries significant potential for negative societal impact. For instance, the images generated by the model can often reinforce certain demographic biases. This is to be expected, because the prompts used within this paper are quite vague; however, it also shows that the generative model has learned visual stereotypes from its training data.
These stereotypes within the generated images are why the images should only be used in a weighted sum with the text, and never as the sole ground-truth signal. Excessive up-weighting of the images provides opportunity for unethical image generations.

One potential way to combat this issue of stereotypes within generation is to utilize the method discussed in Section 5.2, where we modify the query image in place in order to reduce the room for error while still increasing demographic diversity.

Along with this, our use of generative modelling allows for potentially unethical prompting. The only restrictions on prompting are those enforced by Stable Diffusion XL; however, due to the open-source nature of the model, many of these restrictions can be circumvented. We do not condone the use of D3G to generate any hateful, demeaning, or otherwise unethical data. This method should only be used within appropriate contexts, and primarily as a means of increasing pre-trained model diversity ad hoc.

The selection of demographics used within our classification process was mainly a result of the process used to create the FairFace dataset (Karkkainen and Joo 2021). The authors based the race categories on commonly accepted race classifications from the U.S. Census Bureau; however, we acknowledge that this does not properly represent the racial landscape of the world. It is important to note that the authors decided to use skin color as a proxy for race, combined with annotations about physical attributes. This means that the annotations used to construct the dataset and to train the FairFace classification model, which we use to create labels for IdenProf, may contain annotator bias. This is evident in the gender demographic: the authors mentioned it would be impossible to perfectly balance the gender predictions of their model outside a lab setting. Finally, ages were simply segmented into common age groups. The decision to use these demographic categories limits the conclusions we can draw in this paper regarding the impact of all relevant demographics on classification accuracy.

Finally, D3G is a technique that does not remove demographic biases; rather, it offsets learned biases. This means that the method can either reduce or accentuate human bias, and it should not be used as a universal architecture to improve multimodal model fairness and accuracy. If the generated images contain harmful bias, then this technique could make performance worse and much more inequitable.

# 8 Limitations

Because this paper is focused on classification, a significant limitation concerns demographic intersectionality. People who fit into multiple demographics within the same category (e.g., people who are biracial) will suffer from being classified as only a single demographic. This is a limitation because it is a known issue that cannot be surmounted using standard metrics within image classification. Future methods may be able to explore intersectionality by retrieving the top-k classified demographics; however, this would be difficult in a zero-shot setting, where no additional information about the query image is provided.

A second major limitation is that D3G can only perform well if the pretrained models are able to effectively distinguish between the demographics being classified.
As mentioned previously, if the multimodal model embeds two demographics to the same point in embedding space, or if the image generation model cannot generate good images for a given demographic, the technique will fail. This is typically not an issue for the broad demographics covered within this paper; however, it may become more difficult as the classes become more fine-grained.

A final limitation is the fact that D3G utilizes pre-trained models for every step of the pipeline. This is, in part, the most useful aspect of the technique; however, it also means that the limitations of the pretrained models extend to D3G. The abilities or inabilities of the generative model will be reflected in the final classification accuracies. Similarly, the quality of the embeddings produced by the multimodal model will dictate the effect D3G has on classification accuracy.
{"title": "D3G: Diverse Demographic Data Generation Increases Zero-Shot Image Classification Accuracy within Multimodal Models", "raw_content": "# D3G: Diverse Demographic Data Generation Increases Zero-Shot Image Classification Accuracy within Multimodal Models\n\nJavon Hickmon\n\nDepartment of Computer Science, University of Washington, Seattle WA\n\njavonh@cs.washington.edu\n\n# Abstract\n\nImage classification is a task essential for machine perception to achieve human-level image understanding. Multimodal models such as CLIP have been able to perform well on this task by learning semantic similarities across vision and language; however, despite these advances, image classification is still a challenging task. Models with low capacity often suffer from underfitting and thus underperform on fine-grained image classification. Along with this, it is important to ensure high-quality data with rich cross-modal representations of each class, which is often difficult to generate. When datasets do not enforce balanced demographics, the predictions will be biased toward the more represented class, while others will be neglected. We focus on how these issues can lead to harmful bias for zero-shot image classification, and explore how to combat these issues in demographic bias. We propose Diverse Demographic Data Generation (D3G), a training-free, zero-shot method of boosting classification accuracy while reducing demographic bias in pre-trained multimodal models. With this method, we utilize CLIP as our base multimodal model and Stable Diffusion XL as our generative model. We demonstrate that providing diverse demographic data at inference time improves performance for these models, and explore the impact of individual demographics on the resulting accuracy metric.\n\n# 1 Introduction\n\nDeep Learning systems have been a promising new paradigm in the field of image classification. Vision-Language systems are able to utilize multiple modalities to create models that are more generalizable to a broad range of downstream tasks. Despite this performance, issues such as data redundancy, noise, and class imbalance are just a few of the many difficulties that arise from collecting large amounts of training data. Multimodal models, in particular, require large amounts of high-quality training data with rich cross-modal representations in order to perform well compared to their unimodal counterparts. These models leverage the massive amounts of image-text pairs available online by learning to associate the images with their correct caption, leading to greater flexibility during inference (Pratt et al. 2023); however, class imbalances can often lead to gender and racial bias depending on the desired task.\n\nExisting public face image datasets are strongly biased toward Caucasian faces; meanwhile, other races (i.e., Latino) are significantly underrepresented Figure 4. As a result, the models trained from such datasets suffer from inconsistent classification accuracies, limiting the applicability of such systems to predominantly white racial groups Karkkainen and Joo (2021). This means that minority subpopulations can potentially be further marginalized when applied to certain downstream tasks without calibration. 
This is a core tenet of machine learning: poor data produces poor models.\n\n![](images/d0d99888c390a4ab1a69161c82045a10b6952a6e901b81d0c9fc99969cd89fb7.jpg) \nFigure 1: Images Generated with D3G for Race 4\n\nIt is important to note that we acknowledge that not all bias is harmful and often is necessary for models to generalize. This is why within this work, we focus on demographic bias which can frequently have harmful implications when applied within society. Image classification is particularly pressing, because it is the core of a myriad of computer vision tasks. Facial recognition, object detection, image search, content moderation, sentiment analysis, and many more tasks are grounded in accurate image classification\n\n![](images/2b93165b99db9c7030a0d42236836ab7176dc18b3c7ffa065cc6a231d22a3340.jpg) \nFigure 2: The image of Google Photos misclassifying black people within the Photos application.\n\nsystems. This is compounded by the fact that many widely used Foundational Models require multiple modalities, such as DALL-E Ramesh et al. (2021) and Stable Diffusion Rombach et al. (2022). In order to train these models, other models such as CLIP Radford et al. (2021) are used to classify the data that the model will be trained on, in order to enforce a strong cross-modal correlation. This means that demographic biases will be compounded as the images continue to be utilized in training processes.\n\nWhen there are strong harmful demographic biases, these models can cause tremendous harms. One example was when Google Photo's classified black people as gorillas within a user's album, as shown in Figure 2. This is the direct result of demographic bias. The dataset, used to quantify the classification accuracies of Google's model, likely contained the standard biases where images of non-Caucasian faces are underrepresented Figure 4. This leads to a misleading evaluation, which was likely why this model was deployed to the public with this significant issue.\n\nIn this work, we explore how demographic bias affects image classification accuracy for multimodal models. We also propose D3G, a zero-shot, training-free framework to balance demographic bias and boost classification accuracies for multimodal models used for image classification.\n\n# 2 Related Work\n\n# 2.1 Model Ensembling for Image Classification\n\nMany state-of-the-art techniques for image classification leverage a methodology known as model ensembling. Ensemble learning broadly is an approach that aims to improve performance by combining the predictions of multiple models. There are many such ensemble learning methods, but the one most relevant to our proposed technique is called bagging.\n\nBagging predictors originally published by Breiman (1996) introduced \"bootstrap aggregating\" or bagging,\n\nthe ensemble learning method that combines multiple models trained on different subsets of the training data. The formulation is presented as follows: A learning set of $L$ consists of data $(y_{n},\\mathbf{x}_{n})$ , $n = 1,\\dots,N$ where the $y$ 's are class labels, assume we form a predictor $\\phi (\\mathbf{x},L)$ where if the input is $\\mathbf{x}$ , we predict $\\mathbf{y}$ by $\\phi (\\mathbf{x})$ . Now suppose we form a sequence of replicate learning sets, $L^{(B)}$ each consisting of $N$ observations drawn at random with replacement from $L$ . 
Since $y$ is a class label in our scenario, the predictions from each of the predictors $\\phi (\\mathbf{x},L^{(B)})$ will vote to form $\\phi_B(\\mathbf{x})$ , the aggregated final prediction. Bagging both empirically and theoretically prove improves accuracy for a given set of weak classifiers or \"weak learners.\" This technique effectively replicates our proposed method within a controlled environment. In the implementation of D3G, we are employing a strategy similar to bagging, but across modalities and with generated data. Our goal is based on the theoretical guarantees of bagging, where even though each model is trained on a subset of the data, the aggregation of the predictions begins to approximate the true distribution of the data.\n\nA model that applies this concept of model ensembling was introduced in Learning to Navigate for Fine-grained Classification by Yang et al. (2018). This is a state-of-the-art paper that attempts to reduce misclassification rates by developing a model called NTS-Net (Navigator-Teacher-Scrutinizer Network) to teach itself methods of identifying and scrutinizing fine-grained image details. The Navigator navigates the model to focus on the most informative regions (denoted by yellow rectangles in Figure 3), while the Teacher evaluates the regions proposed by the Navigator and provides feedback. After that, the Scrutinizer scrutinizes those regions to make predictions. NTS-Net achieves high classification accuracy on a pre-defined dataset; however, we wanted to explore if these foundational concepts could be applied to a training-free, zero-shot environment. In order to achieve this, we leverage pretrained open-vocabulary models with advanced attention mechanisms to discriminate the fine-grained features of a given image, to improve performance on a variety of classes without additional training.\n\n![](images/be28003919080575830145a97d8525f67088cb54486b8198850e26cd6652b17b.jpg) \nFigure 3: The NTSNet Architecture (Yang et al. 2018)\n\nSimilarly, within the paper Sus-X: Training-Free Name-Only Transfer of Vision-Language Models by Udandarao, Gupta, and Albanie (2022) achieves state-of-the-art zero-shot classification results on 19 benchmark datasets, outperforming other training-free adaptation methods. It also demonstrates strong performance in the training-free few-shot setting, surpassing previous state-of-the-art methods. This paper is focused on general image classification improvements; however, we aim to explore how this idea of synthetic support set generation affects the fairness of predictions from a classification model. We will employ a similar strategy but also explore how to offset existing harmful biases within the zero-shot setting.\n\n# 2.2 Data Filtering and Generation\n\nNeural Priming for Sample-Efficient Adaptation by Wallingford et al. (2024) proposes a technique to adapt large pretrained models to distribution shifts. This paper demonstrates that we can leverage an open-vocabulary model's own pretraining data in order to improve performance on downstream tasks. Even though we don't aim to utilize the model's training data in our method, the generated images will likely be sampled from a similar distribution as the multimodal model. This paper shows that even if that is the case, we can still use filtering and guidance in order to improve performance. 
In our case, our custom prompting method plays the role of guiding the image generation process, resulting in the same empirical performance improvements.\n\nDATACOMP: In search of the next generation of multimodal datasets by Gadre et al. (2024) has completely different goals from Neural Priming, but achieves them similarly. The paper introduces DataComp, which is a test bed for dataset-related experiments that contains 12.8 billion image-text pairs retrieved from Common Crawl. Upon retrieving this pool, they proceed to train a new clip model with a fixed architecture and hyper-parameters. The paper concludes that CommonPool and LAION-2B are comparable with the same filtering. This means that image-based filtering and CLIP score filtering excels on most tasks, and can be effectively used to retrain other models. Despite this, the paper mentions that they found demographic biases in models trained using their pool, but their goal was not to reduce these harmful biases. In this paper we aim to offset this demographic bias found in models trained on large-scale filtered data pools such as DataComp.\n\n# 2.3 Ethics and Fairness\n\nThe FairFace dataset and classifier were first published in *FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation* by Karkkainen and Joo (2021). This project focused on creating a dataset and classifier that were balanced across race, gender, and age as shown in Figure 4. This balance is crucial because the paper demonstrates that the balance allows for improved generalization classification performance on the defined demographics, even on novel datasets that contain\n\nmore non-White faces than typical datasets. The fact the simply balancing these demographics allows for increased accuracy and generalizability is extremely important. This is the core of D3G, and FairFace shows that balancing demographics results in performance improvements. The primary difference is that we aim to show similar improvements without any additional training. Alongside creating a balanced dataset, they also demonstrated their classifier produces balanced accuracy across the specified demographics, which is crucial because I use this classifier to create new labels for the IdenProf dataset.\n\n![](images/7e089674ea9b092c89807fe97f4f316c93e8278a3f08c13ddbbb09e29bd7368f.jpg) \nFigure 4: Racial compositions in face datasets (Karkkainen and Joo 2021)\n\n# 3 Methods\n\nWe aim to create an ensemble of models to improve multimodal image classification accuracy, especially for models that are trained on data with a class imbalance. We test this method on standard benchmark datasets, such as ImageNet as shown in Figure 6, then we expand our technique to classify demographic-focused datasets. CLIP (Contrastive Language-Image Pretraining) (Radford et al. 2021) will be used for image-to-text retrieval, and Stable Diffusion XL 1.0 (Podell et al. 2023) for image generation. Our approach is as follows:\n\n# 3.1 Datasets\n\nFor all the results shown in this paper, we classify images from the IdenProf test dataset. We selected this dataset because it provides a simple, applicable downstream task and because all the images were collected and filtered by hand via Google Image search. Each image in the dataset can belong to one of ten classes: Chef, Doctor, Engineer, Farmer, Firefighter, Judge, Mechanic, Pilot, Police, or Waiter. In total, there are 2,000 images for testing, with 200 images for each class. 
Finally, it is important to note the demographic distribution published by the dataset authors. The IdenProf dataset consists of $80.6\\%$ male subjects, and $19.4\\%$ female. Along with this, $91.1\\%$ of the people within the dataset are White, while $8.9\\%$ are of another race. The dataset author also noted that there were more images of Asian and White people obtainable, when compared to that of black people. Similarly, there were more images of men\n\n![](images/ab616172a09c70d6680336f9af1952d94ec63b577024c90d466f281a5a363ef7.jpg) \nFigure 5: The D3G Framework\n\nobtainable than of women. This reflects this demographic biases discussed previously.\n\nAlong with IdenProf, we also leverage information collected from the FairFace dataset Karkkainen and Joo (2021). This dataset defines common demographics and forms them into classification categories. The authors constructed their dataset containing 108,501 images, and even though we do not utilize this dataset within this paper, the demographic information is still useful. We leverage the classification model that they trained on their own dataset. As a result, the results from the classifier are highly balanced and less likely to contain demographic bias. We use this classifier to assign additional labels to the images within IdenProf. There are 3 primary demographics that will be assigned as labels: race, gender, and age. Along with this, the race category has two versions, with race 4 being coarse-grained with only four races to choose from, and race 7 being fine-grained with 7 races to choose from. Combining the classes from IdenProf and FairFace, every image in the dataset can be classified with any of the labels identified in Table 1.\n\n# 3.2 Creating Prompts\n\nTo generate our prompts, we leverage a set of templates constructed based on demographics identified in Table 1. These templates are designed to expose and leverage a specific demographic bias, based on the whatever image is currently being classified. For instance, if we were attempting to classify the profession of the person within the image, our prompts would be as shown in Table 2. This process is pictured within Figure 5.\n\n<table><tr><td>Class</td><td>Values</td></tr><tr><td>profession</td><td>Chef, Doctor, Engineer, Farmer, Firefighter, Judge, Mechanic, Pilot, Police, or Waiter</td></tr><tr><td>race 7</td><td>White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino</td></tr><tr><td>race 4</td><td>White, Black, Indian, Asian</td></tr><tr><td>gender</td><td>Male, Female</td></tr><tr><td>age</td><td>0-2, 3-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+</td></tr></table>\n\nTable 1: All potential classes for an image from IdenProf\n\n# 3.3 Generate Class Images\n\nUpon creating diverse demographic prompts from templates for each of the classes, each of these prompts are used to generate an image. We employ Stable Diffusion XL, a diffusion-based image generation model, to conditionally generate an image of each class in the dataset. Our result will be images that emphasizes the diverse demographics between the classes. For standard D3G we will generate 1 image per prompt then average the embeddings of all the prompts, and for average image D3G we will generate 5 images per prompts, then perform the same process of averaging the embeddings of these images. Generating these diverse images is crucial because our goal is to combat the issue of prediction bias by generating diverse images and utilizing them for the next step when predicting labels. 
This step is depicted by images on the right of Figure 6.\n\n# 3.4 Weighted Sum\n\nUsing the prompts created earlier, we start the classification phase. We use the image and text encoders from CLIP ViT-\n\n<table><tr><td>Demographic</td><td>Prompt</td><td>Text</td></tr><tr><td>Profession</td><td>&quot;A photo of a &lt;prof&gt;&quot;</td><td>A photo of a doctor</td></tr><tr><td>Race 7</td><td>&quot;A photo of a &lt;race&gt;&lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Race 4</td><td>&quot;A photo of a &lt;race&gt;&lt;prof&gt;&quot;</td><td>A photo of a white doctor</td></tr><tr><td>Gender</td><td>&quot;A photo of a &lt;gender&gt;&lt;prof&gt;&quot;</td><td>A photo of a male doctor</td></tr><tr><td>Age</td><td>&quot;A photo of a &lt;age&gt; year old &lt;prof&gt;&quot;</td><td>A photo of a 30-39 year old doctor</td></tr></table>\n\nTable 2: Example diverse demographic texts for classifying profession. Note that each prompt starts with \"A photo of a,\" and that all the correct nouns and adjectives are added to the prompts as shown in the right column. More examples are provided in Section 9.\n\nL/14, our multimodal model, in order to get the embeddings for the generated images. Upon getting these embeddings, we scan values from 0 to 1 using a step value of 0.01 in order to find an optimal weight to create a weighted sum of the text and image embeddings. The text embedding will have a weight of $w$ while the image embedding is weighted by $1 - w$ . This step allows us to bridge the semantic gap between text and images, because images are always closer in embedding space to one another than text. After performing this step, we will get a new embedding that represents the weighted combination of the text and image embeddings.\n\n# 3.5 Classification\n\nFinally, we get the embedding of the query image by passing it through the CLIP image encoder. At this point, we simply getting the cosine similarity between the query image embedding, and the combined image-text embeddings from each class. In order to classify the image, we just get the highest similarity score and use that class as the prediction. This step is depicted by the blue arrows within Figure 6.\n\n![](images/285374630b97cd1e3877b80e2e691c7ac9667b378d9a0e6610296ff01b3bb7f9.jpg) \nFigure 6: A demo example of D3G on difficult fine-grained classes from the ImageNet dataset (Note: we utilized ImageNet for this example to showcase the fine-grained classification capabilities of D3G. IdenProf does not have such fine-grained classes).\n\n# 4 Results\n\n# 4.1 Metrics\n\nWe choose to use top-1 accuracy as the standard metric for our results. We selected this metric for a variety of reasons, the most prominent being that this paper aims to increase zero-shot classification accuracy. There are many metrics that represent zero-shot accuracy; however, top-1 accuracy is the most common.\n\n![](images/316d570d04b1676d7c526633c72fb26ad0218385dc10064324b2126a13050d94.jpg) \n2. Create dataset classifier from label text \nFigure 7: The process of classifying an image with CLIP at inference-time (Radford et al. 2021).\n\n# 4.2 Evaluation Breakdown\n\nWe study three primary classification methods in this paper. CLIP ViT-L/14 is our baseline, the standard method of multimodal classification as outlined in Figure 7. The second method of image classification is Standard D3G as shown in Figure 5. 
In this method, for every class in our dataset, we generate one prompt for each of the specified demographics, then use these prompts to generate images and average their embeddings. Finally, our third method is Average Image D3G. This is the exact same process as Standard D3G; however, instead of generating one image per demographic prompt, we will generate 5, then average the embeddings of all the prompts for a given class.\n\nAlong with our three classification methods, we outline 5 prompting strategies when creating our demographic\n\n<table><tr><td>Demographic</td><td>Method</td><td>Profession</td><td>Race 7</td><td>Race 4</td><td>Gender</td><td>Age</td></tr><tr><td rowspan=\"3\">Profession</td><td>CLIP</td><td>95.14</td><td>94.73</td><td>95.22</td><td>96.52</td><td>94.81</td></tr><tr><td>Standard D3G</td><td>95.54</td><td>95.22</td><td>95.30</td><td>96.52</td><td>95.06</td></tr><tr><td>Average Image D3G</td><td>95.87</td><td>95.62</td><td>95.38</td><td>96.76</td><td>95.54</td></tr><tr><td rowspan=\"3\">Race 7</td><td>CLIP</td><td>44.65</td><td>28.20</td><td>-</td><td>28.61</td><td>25.69</td></tr><tr><td>Standard D3G</td><td>45.38</td><td>31.85</td><td>-</td><td>32.90</td><td>30.96</td></tr><tr><td>Average Image D3G</td><td>45.46</td><td>32.33</td><td>-</td><td>33.55</td><td>32.25</td></tr></table>\n\nTable 3: Results when classifying the specified demographic of the people within the IdenProf dataset. The far left column shows the demographic that will be classified. The second column on the left dictates the method used for classification, while the other columns dictate the prompting structure, as discussed in Section 4.2. Additional results are shown in Section 9. \n\n<table><tr><td>Demographic</td><td>Method</td><td>Profession</td><td>Race 7</td><td>Race 4</td><td>Gender</td><td>Age</td></tr><tr><td rowspan=\"2\">Profession</td><td>Standard D3G</td><td>0.85 / 0.15</td><td>0.84 / 0.16</td><td>0.90 / 0.10</td><td>0.91 / 0.09</td><td>0.90 / 0.10</td></tr><tr><td>Average Image D3G</td><td>0.71 / 0.29</td><td>0.74 / 0.26</td><td>0.84 / 0.16</td><td>0.91 / 0.09</td><td>0.67 / 0.33</td></tr><tr><td rowspan=\"2\">Race 7</td><td>Standard D3G</td><td>0.90 / 0.10</td><td>0.68 / 0.32</td><td>-</td><td>0.68 / 0.32</td><td>0.67 / 0.33</td></tr><tr><td>Average Image D3G</td><td>0.92 / 0.08</td><td>0.69 / 0.31</td><td>-</td><td>0.67 / 0.33</td><td>0.68 / 0.32</td></tr></table>\n\nTable 4: The weight values used to achieve the results in Table 3. For each evaluation, the left value is the text embedding weight, and the right is the image embedding weight. CLIP is not included because no images are weighted with the text embeddings. Note that the sum of the text and image weights for a given evaluation should equal 1.\n\nprompts within the D3G framework, as shown in the top row of Table 3. It is important to note that for all of these prompting strategies, they add demographic information, in addition to the specified classification category. For instance, as shown in Table 2, when the task is to classify profession, we can add information regarding race or gender in addition to the standard profession class. 
This allows us to study how specific demographics affect the classification accuracy.\n\nFinally, we also explore and analyze the per-class accuracy results when classifying the specified demographic, as discussed in later sections.\n\n# 4.3 Top-1 Results\n\nThe top-1 results when classifying two out of the 5 demographics are shown in Table 3 (Additional results for the other demographics will be included in Section 9).\n\nCLIP performs fairly well when classifying profession, which was to be expected because CLIP's training data likely includes richer cross-modal representations related to profession. As a result, all the accuracies are quite high. Despite this already high performance, D3G is able to still improve performance. This implies that providing diverse demographics can still improve CLIP's understanding of well-established concepts.\n\nWe are able to gain a better understanding of D3G's efficacy when we look at the performance gains on Race 7. For this demographic, CLIP performance is much worse. Once again, the model performs better when the prompt contains information regarding profession, due to the increased likelihood of this information within the training\n\ndata; however, when this information is omitted, the performance on the other demographics is abysmal. With the accuracy only scoring around $10 - 15\\%$ more than random guessing (which should be around $14\\%$ top-1 accuracy), this shows that CLIP does not have a deep understanding of race and other demographics.\n\nWith this in mind, by simply implementing D3G, we are able to push the accuracies up to by $4 - 7\\%$ . This indicates that coupling the standard embeddings with diverse data that has been generated, improves CLIP's understanding of concepts that were previously misunderstood. In addition, it is important to note that Average Image D3G typically performs better than the standard method. Once again, this makes sense and conforms to our hypothesis. Generating diverse data pushes the embeddings closer to the ground-truth position within embedding space, resulting in more accurate predictions for classes the model may not fully understand.\n\nThese results are highly promising, and we can learn a bit more about the effect of D3G on these results by looking at the weights utilized to produce these scores.\n\n# 4.4 Weighting Strategy\n\nRecall that in order to classify a query image, we form a weighted sum of the embeddings between the text prompt, and the generated images. The ratio of text-to-image weighting is dictated by scanning values until an optimal state is found. This is necessary, because for certain images, the text embeddings will contribute more to the classification result than the image embeddings and vice versa. With this in mind, the weighting ration between images and text is also an indicator of how much the generated images from D3G actually help given a defined demographic. 
Knowing\n\n<table><tr><td>Demographic</td><td>Prompt</td><td>White</td><td>Black</td><td>Latino</td><td>East Asian</td><td>South East Asian</td><td>Indian</td><td>Middle Eastern</td></tr><tr><td rowspan=\"5\">Race 7</td><td>Profession</td><td>68.19</td><td>70.90</td><td>15.38</td><td>43.46</td><td>20.59</td><td>57.58</td><td>13.80</td></tr><tr><td>Race 7</td><td>8.92</td><td>66.42</td><td>11.54</td><td>52.74</td><td>8.82</td><td>60.61</td><td>32.68</td></tr><tr><td>Race 4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Gender</td><td>18.07</td><td>73.13</td><td>11.54</td><td>35.02</td><td>17.65</td><td>51.52</td><td>34.93</td></tr><tr><td>Age</td><td>12.29</td><td>68.66</td><td>11.54</td><td>43.46</td><td>14.71</td><td>69.70</td><td>29.58</td></tr></table>\n\nTable 5: Standard D3G per-class results when classifying the specified demographic. Note that all the prompts are as described in Table 2 (e.g. \"A photo of a black person\", or \"A photo of a 30-39 year old doctor\"). More examples are provided in Section 9\n\nthis information, we can then start to understand exactly what our results mean in the broader scope.\n\nWhen viewing the weights from Table 4, we see the same trends that were displayed within Table 3, but we get a glimpse as to why D3G had minimal performance gains. When classifying profession, most of the text weights for Standard D3G are quite high, being roughly around $85 - 90\\%$ ; however, whenever see larger increases in accuracy for D3G, we also see an increased weighting of the generated images. This is especially evident when classifying race 7. Once again, the prompts that utilized professions were able to get somewhat higher accuracies, due to the structure of the dataset; however, for every other race 7 evaluation, the generated images played a major role in the classification results. The fact that images were consistently weighted around $30\\%$ shows that the diversity matters when classifying demographics.\n\nThe results analyzed from our top-1 results and their corresponding weighting strategies, show that the method works; however, the per-class results give us a deeper understanding of why the method works.\n\n# 4.5 Per-Class Results\n\nFor these results, we primarily reference Table 5; however, note that additional per-class results are included within Section 9. When classifying race 7, we know that the best performance gains were from including gender and age into the prompts. Focusing on these rows, we can see a few interesting trends. For instance, including information about gender improves the accuracy for black and middle eastern people the most. This is likely due to the fact that within CLIP's training data, these populations have gender underrepresented. Within Section 5.2, we will later discuss future methods of confirming this hypothesis.\n\nNow that we understand which demographics help classification accuracies, we can now start to extend these inferences across demographics. Images and text related to East Asian people likely did not have rich cross-modal representations because race 7 helped the most for this demographic. This means that simply generating images of diverse races was able to significantly boost the accuracy. Similarly, age was the most useful demographic when classifying people in images as Indian. 
This was quite surprising, and as we discuss in Section 5.2, we intend to\n\nfurther explore the impact of these results by including additional metrics such as precision, recall, specificity, and F1 score.\n\nAnother trait of these per-class results emerges when we compare the accuracy ratios across demographic columns. For instance, black generally achieves a higher per-class accuracy than the other demographics, with Indian and East Asian obtaining the second and third-highest overall per-class accuracies across all the prompts. Alongside this, Latino, South East Asian, and White achieve some of the lowest per-class accuracies overall across all prompts. We were very surprised by this outcome, especially by the fact that race 7 was the worst performing prompt for White, which had the majority representation within the dataset. Intuitively, this may imply that providing diverse representations can also move embeddings away from the correct position in embedding space. In order to combat this, we may be able to strategically weight generated image and text prompt embeddings in relation to their demographic proportions within the dataset (e.g., if Latino is underrepresented within the dataset, then we will up-weight the Latino embeddings). This idea is further explored within Section 5.2.\n\nFinally, we did not describe the results for profession, due to the fact that we cannot infer why these demographics performed best, due to the fact that CLIP leverages profession information to make its predictions, but the dataset is catered towards profession. This means that the increased accuracies could be either due to the profession information within the prompt, or the images generated of each profession. Either way, we will need to run more tests to fully understand this. We intend to evaluate on other datasets, so we can understand whether this correlation indicates causation; however, these are very promising results.\n\n# 5 Discussion\n\n# 5.1 Assumptions\n\nWithin this paper, two prominent assumptions are made:\n\n1. The generative model has a better learned representation of the true distribution of the data (due to its increased complexity and data diversity). \n2. The base multimodal model can distinguish between similar classes. Our method will not improve performance if this is not the case.\n\nThese assumptions are necessary for D3G to function properly, but they are not unreasonable for a zero-shot setting. The generative model must have a better learned representation of the true data distribution, because it needs to be able to generate images that accurately represent the desired concept. If the model cannot generate useful images, then D3G will revert to using the baseline CLIP method, with text-based classification.\n\nIn addition, we need our base model to be able to distinguish between similar classes, because if two classes correspond to the same point within embedding space, then our model cannot distinguish them. Similarly, we need this assumption so that the weighted sum of the image and text embeddings actually pushes the embedding towards the true embedding, and not just in a random direction. If the base model couldn't distinguish between certain classes, then we would have no guarantee that creating a weighted sum actually improves classification, because the model would be completely guessing in that case. 
Going forward, we may be able to validate this assumption more systematically by comparing the embeddings within embedding space to ensure they are an adequate distance apart; for now, it is maintained as an assumption.

These assumptions on their own are not unreasonable; however, in certain circumstances they may become limitations, as discussed later.

As mentioned previously, this research is crucial because models such as CLIP are frequently used to filter large datasets such as DataComp-1B and LAION-2B. If CLIP performs poorly when classifying demographics, then these biases will be reinforced in every model trained on those datasets. This issue has compounding effects, so to reduce demographic bias within image generation, object detection, and content moderation models, we must start with image classification.

# 5.2 Future Work

With such promising results from this project, there are many steps we intend to take in the future to make this method as robust as possible.

To start, we aim to include additional metrics that properly quantify the balance between demographics to better understand how D3G balanced the predictions of the multimodal classifier. We specifically hope to investigate the robustness of our approach to class imbalance, data redundancy, and noise levels.

For this paper, we decided to simply average the embeddings of all images generated with D3G; however, this may not be the most effective approach. Even though we generate images of a diverse range of demographics, these demographics are not weighted equally by CLIP (as demonstrated previously in Table 5), due to the training data. This means that by utilizing the CLIP image encoder to get embeddings for all of our images, we are only offsetting the existing bias; this does not create a neutral embedding, but rather an embedding that still emphasizes the existing bias while being slightly more balanced across demographics. To combat this, we aim to explore how we can create a weighted sum of the embeddings from individual images that is informed by the demographics of the training data and of the broader world. Intuitively, if CLIP tends to favor one demographic, then we down-weight those images, and vice versa if CLIP rarely selects another demographic (a brief sketch of this weighting scheme appears below). In this way, we can more robustly enforce equity within CLIP's predictions.

In addition to this step, in the future we also aim to utilize OpenCLIP so that we can accurately draw conclusions about the model's predictions in relation to the training data. Since we solely used CLIP as a baseline for this paper, we are unable to confidently state that the distribution of the training data led to the model's sometimes biased predictions; however, this is strongly implied. By utilizing a model with open training data and architecture, we can draw these conclusions with certainty. Researchers are starting to explore demographic bias within LAION-2B and DataComp-1B (the training data for certain OpenCLIP models), and we aim to leverage this knowledge for future implementations.

We would also like to expand our evaluation suite to multiple datasets. Currently, we only evaluate on 2,000 images from IdenProf, but we could start by utilizing the full dataset of 11,000 training and test images, since we are not training and we want a wider pool of images. In addition, we intend to perform similar tests over the FairFace dataset.
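As referenced above, a minimal sketch of the proportion-informed weighting we have in mind follows; the inverse-frequency rule and the proportions themselves are illustrative assumptions rather than choices made in this paper.

```python
# Illustrative sketch: weight generated-image embeddings inversely to each demographic's
# share in some reference distribution, then combine them into one class embedding.
import numpy as np

def weighted_demographic_embedding(embs_by_demo: dict[str, np.ndarray],
                                   proportions: dict[str, float]) -> np.ndarray:
    """embs_by_demo: demographic -> (num_images, dim) generated-image embeddings;
    proportions: demographic -> share in the reference data (roughly summing to 1)."""
    weights = {d: 1.0 / max(proportions[d], 1e-6) for d in embs_by_demo}  # up-weight rare groups
    total = sum(weights.values())
    parts = [(weights[d] / total) * embs_by_demo[d].mean(axis=0) for d in embs_by_demo]
    fused = np.sum(parts, axis=0)
    return fused / np.linalg.norm(fused)
```

How the reference proportions should be estimated (from the training corpus, the evaluation set, or broader population statistics) is exactly the open question noted above.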
Evaluating on FairFace would more effectively isolate CLIP's capabilities in predicting the demographics outlined in this paper, since that dataset was constructed with these demographics in mind. This is especially important because we found that CLIP was able to leverage the semantic information regarding professions within IdenProf in order to classify race 7 more accurately. By removing professions as a factor, we will be able to fully explore CLIP's performance on such tasks.

Notably, we were particularly intrigued by CLIP's inadequate performance when classifying demographics such as race 7, so we also aim to conduct an analysis of the individual classification results, combined with metrics such as precision, recall, specificity, and overall F1 score, in order to better understand whether CLIP's performance on these demographics is statistically significant. If the positive predictions are informed solely by demographic stereotypes, then we aim to expose these weaknesses and combat them with D3G.

Finally, in addition to generating images based on the demographics, we also aim to explore methods of retrieving images, or of modifying the demographics of the query image in place. Modifying the existing query image to obtain diverse demographics may reduce the impact of stereotypes enforced by the image generation model and result in classifications that are much more accurate.

# 6 Conclusion

Image classification remains a challenging task despite advancements in multimodal models like CLIP that leverage semantic similarities across vision and language. Low-capacity models often suffer from underfitting, leading to poor performance; however, the generation of high-quality data with rich cross-modal representations is also difficult. Imbalanced demographics in datasets can cause predictions to bias toward more represented classes, pushing those who are underrepresented to the wayside. Our study highlights these issues and their impact on zero-shot image classification, proposing Diverse Demographic Data Generation (D3G) as a solution. This training-free, zero-shot method enhances classification accuracy and reduces demographic bias in pre-trained multimodal models by providing diverse demographic data at inference time, demonstrating improved performance for these models.

# 7 Ethics Statement

The fact that we are utilizing image generation models for D3G carries significant potential for negative societal impact. For instance, the images generated by the model can often reinforce certain demographic biases. This is to be expected, because the prompts used within this paper are quite vague; however, it also shows that the generative model has learned visual stereotypes from its training data. The stereotypes within the generated images are why they should only be used as a weighted sum with the text, and never as the sole ground-truth signal. Excessive up-weighting of the images creates an opening for unethical image generations to dominate the prediction.

One potential way to combat this issue of stereotypes within generation is to utilize the method discussed in Section 5.2, where we modify the query image in place in order to reduce the room for error while still increasing demographic diversity.

Along with this, our use of generative modelling allows for potentially unethical prompting. The only restrictions on prompting are those enforced by Stable Diffusion XL; however, due to the open-source nature of the model, many of these restrictions can be circumvented.
We do not condone the use of D3G to generate any hateful, demeaning, or otherwise unethical data. This method should only be used within appropriate contexts, and primarily as a means of increasing pre-trained model diversity ad hoc.

The selection of demographics used within our classification process was mainly a result of the process used to create the FairFace dataset (Karkkainen and Joo 2021). The authors defined the races used to be based on commonly accepted race classifications from the U.S. Census Bureau; however, we acknowledge that this does not properly represent the racial landscape of the world. It is important to note that the authors decided to use skin color as a proxy for race, combined with annotations about physical attributes. This means that the annotations used to construct the dataset, and to train the FairFace classification model that created the labels for IdenProf, may contain annotator bias. This is evident in the gender demographic: the authors mentioned it would be impossible to perfectly balance the gender predictions of their model outside a lab setting. Finally, ages were simply segmented into common age groups. The decision to use these demographic categories limits the conclusions we can draw in this paper regarding the impact of all relevant demographics on classification accuracy.

Finally, D3G is a technique that does not remove demographic biases; rather, it offsets learned biases. This means that the method can either reduce or accentuate human bias, and it should not be used as a universal architecture to improve multimodal model fairness and accuracy. If the generated images contain harmful bias, then this technique could make performance worse and much more inequitable.

# 8 Limitations

Because this paper is focused on classification, a significant limitation concerns demographic intersectionality. People who fit into multiple demographics within the same category (e.g., people who are biracial) will suffer from only being classified as a single demographic. This is a known issue that cannot be surmounted using standard metrics within image classification. Future methods may be able to explore intersectionality by retrieving the top-k classified demographics; however, this would be difficult in a zero-shot setting, where no additional information about the query image is provided.

A second major limitation is that D3G can only perform well if the pre-trained models are able to effectively distinguish between the demographics being classified. As mentioned previously, if the multimodal model embeds two demographics to the same point in embedding space, or if the image generation model cannot generate good images for a given demographic, the technique will fail. This is typically not an issue for the broad demographics covered within this paper; however, it may become more difficult as the classes become more fine-grained.

A final limitation is that D3G utilizes pre-trained models for every step of the pipeline. This is, in part, also the most useful aspect of the technique; however, it means that the limitations of the pre-trained models extend to D3G. The abilities or inabilities of the generative model will be reflected in the final classification accuracies. Similarly, the quality of the embeddings produced by the multimodal model will dictate the effect D3G has on classification accuracy.

# References
Breiman, L. 1996. Bagging predictors. Machine learning, 24: 123-140.
Gadre, S. Y.; Ilharco, G.; Fang, A.; Hayase, J.; Smyrnis, G.; Nguyen, T.; Marten, R.; Wortsman, M.; Ghosh, D.; Zhang, J.; et al. 2024. DataComp: In search of the next generation of multimodal datasets. Advances in Neural Information Processing Systems, 36.
Karkkainen, K.; and Joo, J. 2021. FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1548-1558.
Podell, D.; English, Z.; Lacey, K.; Blattmann, A.; Dockhorn, T.; Müller, J.; Penna, J.; and Rombach, R. 2023. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952.
Pratt, S.; Covert, I.; Liu, R.; and Farhadi, A. 2023. What does a platypus look like? Generating customized prompts for zero-shot image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15691-15701.
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748-8763. PMLR.
Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821-8831. PMLR.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684-10695.
Udandarao, V.; Gupta, A.; and Albanie, S. 2022. SuS-X: Training-free name-only transfer of vision-language models. arXiv preprint arXiv:2211.16198.
Wallingford, M.; Ramanujan, V.; Fang, A.; Kusupati, A.; Mottaghi, R.; Kembhavi, A.; Schmidt, L.; and Farhadi, A. 2024. Neural priming for sample-efficient adaptation. Advances in Neural Information Processing Systems, 36.
Yang, Z.; Luo, T.; Wang, D.; Hu, Z.; Gao, J.; and Wang, L. 2018. Learning to navigate for fine-grained classification. In Proceedings of the European Conference on Computer Vision (ECCV), 420-435.

# 9 Appendix

| Demographic | Prompt | Text |
|---|---|---|
| Gender | "A photo of a \<gender\>" | A photo of a female |
| Profession | "A photo of a \<gender\> doctor" | A photo of a female doctor |
| Race 7 | "A photo of a \<race\> \<gender\>" | A photo of a black female |
| Race 4 | "A photo of a \<race\> \<gender\>" | A photo of a black female |
| Age | "A photo of a \<age\> year old \<gender\> person" | A photo of a 30-39 year old female |

Table 7: Example diverse demographic texts for classifying gender.
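As a concrete companion to the templates in Tables 7 and 8, the following sketch expands them into a diverse prompt set; the template strings mirror the tables, while the demographic value lists are illustrative stand-ins rather than the exact label sets used in our experiments.

```python
# Illustrative expansion of the prompt templates from Tables 7 and 8 into the diverse
# text set used when classifying a single target demographic (here: gender).
from itertools import product

races = ["white", "black", "latino", "east asian",
         "south east asian", "indian", "middle eastern"]   # race 7 values
ages = ["0-2", "3-9", "10-19", "20-29", "30-39",
        "40-49", "50-59", "60-69", "70+"]                   # illustrative age buckets
genders = ["male", "female"]

def race_gender_prompts() -> list[str]:
    """Table 7 template: 'A photo of a <race> <gender>'."""
    return [f"A photo of a {race} {gender}" for race, gender in product(races, genders)]

def age_gender_prompts() -> list[str]:
    """Table 7 template: 'A photo of a <age> year old <gender> person'."""
    return [f"A photo of a {age} year old {gender} person"
            for age, gender in product(ages, genders)]
```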
| Demographic | Prompt | Text |
|---|---|---|
| Age | "A photo of a \<age\> year old" | A photo of a 30-39 year old |
| Profession | "A photo of a \<age\> year old doctor" | A photo of a 30-39 year old doctor |
| Race 7 | "A photo of a \<age\> year old \<race\> person" | A photo of a 30-39 year old black person |
| Race 4 | "A photo of a \<age\> \<race\>" | A photo of a 30-39 year old black person |
| Gender | "A photo of a \<age\> \<gender\>" | A photo of a 30-39 year old female |

Table 8: Example diverse demographic texts for classifying age.

| Demographic | Prompt | White | Black | Latino | East Asian | South East Asian | Indian | Middle Eastern |
|---|---|---|---|---|---|---|---|---|
| Race 7 | Profession | 68.92 | 70.90 | 15.38 | 43.46 | 20.59 | 57.58 | 13.24 |
| | Race 7 | 9.40 | 61.94 | 15.38 | 66.24 | 5.88 | 60.61 | 26.48 |
| | Race 4 | - | - | - | - | - | - | - |
| | Gender | 20.72 | 67.91 | 15.38 | 44.73 | 11.76 | 51.52 | 29.86 |
| | Age | 11.57 | 65.67 | 11.54 | 59.92 | 14.71 | 69.70 | 25.07 |

Table 9: Average D3G per-class results when classifying the specified demographic. Note that all the prompts are as described in Table 2.

| Demographic | Prompt | Chef | Doctor | Engineer | Farmer | Firefighter | Judge | Mechanic | Pilot | Police | Waiter |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Profession | Profession | 95.54 | 98.80 | 97.62 | 98.36 | 94.12 | 98.20 | 86.49 | 99.37 | 100.0 | 84.34 |
| | Race 7 | 94.27 | 98.80 | 96.43 | 98.36 | 96.08 | 99.40 | 81.08 | 98.74 | 100.0 | 84.94 |
| | Race 4 | 94.90 | 98.80 | 96.43 | 98.36 | 96.08 | 99.40 | 82.43 | 98.74 | 100.0 | 84.34 |
| | Gender | 96.82 | 98.80 | 96.43 | 96.72 | 98.04 | 99.40 | 90.54 | 99.37 | 100.0 | 87.35 |
| | Age | 97.45 | 98.19 | 96.43 | 98.36 | 98.04 | 98.80 | 85.14 | 97.48 | 100.0 | 80.72 |

Table 10: Standard D3G per-class results when classifying the specified demographic. Note that all the prompts are as described in Table 2. The corresponding race 7 results are shown in Table 5.
| Demographic | Prompt | Chef | Doctor | Engineer | Farmer | Firefighter | Judge | Mechanic | Pilot | Police | Waiter |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Profession | Profession | 93.63 | 98.80 | 98.81 | 100.0 | 94.12 | 96.41 | 83.78 | 99.37 | 100.0 | 90.36 |
| | Race 7 | 93.63 | 98.80 | 97.62 | 98.36 | 94.12 | 97.60 | 81.08 | 98.74 | 100.0 | 90.36 |
| | Race 4 | 93.63 | 98.80 | 96.43 | 98.36 | 94.11 | 98.80 | 82.43 | 98.74 | 100.0 | 87.35 |
| | Gender | 96.18 | 98.80 | 96.43 | 98.36 | 98.04 | 99.40 | 90.54 | 99.37 | 100.0 | 89.16 |
| | Age | 94.90 | 98.80 | 95.24 | 100.0 | 94.12 | 96.41 | 82.43 | 95.60 | 100.0 | 92.77 |

Table 11: Average D3G per-class results when classifying the specified demographic. Note that all the prompts are as described in Table 2.
# A SPECIAL CASE OF QUADRATIC EXTRAPOLATION UNDER THE NEURAL TANGENT KERNEL

# ABSTRACT

It has been demonstrated both theoretically and empirically that the ReLU MLP tends to extrapolate linearly for an out-of-distribution evaluation point. The machine learning literature provides ample analysis of the mechanisms by which this linearity is induced. However, extrapolation at the origin under the NTK regime remains a more unexplored special case. In particular, the infinite-dimensional feature map induced by the neural tangent kernel is not translation invariant. This means that the study of an out-of-distribution evaluation point very far from the origin is not equivalent to the evaluation of a point very near the origin. And since the feature map is rotation invariant, these two special cases may represent the most canonically extreme bounds of ReLU NTK extrapolation. Ultimately, it is this loose recognition of the two special cases of extrapolation that motivates the discovery of quadratic extrapolation for an evaluation close to the origin.

# 1 Introduction

The work of Xu et al. proves that an over-parameterized ReLU-activated multilayer perceptron (MLP) will extrapolate linearly when evaluated along any direction very distant from the origin. They formally prove extrapolative linearity by analyzing the learned regressor's functional form in the neural tangent kernel (NTK) reproducing kernel Hilbert space (RKHS). And, since the infinite-dimensional feature map induced by the neural tangent kernel is rotation invariant, the analysis covers the general case of an evaluation point very distant from the origin. However, it is not difficult to recognize that the same feature map is not translation invariant. It follows by geometric reasoning that the origin of the RKHS must be a distinct special case whose analysis departs from Theorem 1 of Xu et al. That is, in the limit of a large relative distance between the training point set and the evaluation point, one observes that there must be two special locations of the evaluation point with respect to the NTK-induced feature map: a location cast along a single feature direction, and a location which intersects all feature directions.

It is this recognition of the distinguishable cases that motivates the extrapolative analysis at the origin. The lack of translation invariance of the feature map implies that the extrapolative analyses at the origin and far from the origin are not equivalent problems. It can be reasoned that they are two canonical cases of a more complete analysis of extrapolation. However, inducing extrapolation at the origin must be done carefully to ensure that the evaluation data is pushed out of the support of the training distribution. This is achieved by this paper's definition of a labeled training set, which is formally presented in the problem setup of Section 2. The desired effect of said definition is a problem setup in which all members of the training set are sent infinitely far away from the origin whilst the evaluation data is fixed at the origin. Under this variant setting, we state Theorem 1, which finds that an over-parameterized neural network extrapolates quadratically when evaluated near the origin. This finding contrasts with, but does not conflict with, Xu et al., which concerns itself with an evaluation point far from the origin.

The paper is organized as follows.
The proof of Theorem 1 is presented in §A.4 and depends on the results of Lemmas 1 and 2, which are proven in §A.2 and §A.3, respectively. Our problem setup induces a special case of the NTK gram matrix which must be studied in §A.1 to set the stage for the remainder of the mathematics.

# 2 Preliminaries

Background on NTK: Suppose that a neural network performs nonlinear regression $f(\pmb{\theta}, \pmb{x}) : \mathcal{X} \rightarrow \mathbb{R}$ where $\pmb{\theta}$ is a vectorization of the network parameters and $\pmb{x} \in \mathcal{X}$. Let there be $n$ training points which form a labeled set $\{(\pmb{x}_i, y_i)\}_{i=1}^n$. If we train the network on the labeled set to minimize the squared loss $\frac{1}{2} \sum_{i=1}^{n} (f(\pmb{\theta}, \pmb{x}_i) - y_i)^2$ via gradient descent, then we can derive a kernel method from the network by first considering an affine approximation of the network output in parameter space. If we denote the time-dependent parameter vector induced by gradient descent as $\pmb{\theta}^{(t)}$ for some iteration $t$, then we define the feature map $\phi(\pmb{x})$ as the gradient of the network output with respect to $\pmb{\theta}$ evaluated at $\pmb{\theta}^{(0)}$, denoted $\nabla_{\pmb{\theta}} f(\pmb{\theta}^{(0)}, \pmb{x})$. The corresponding kernel, called the neural tangent kernel (NTK), then yields an affine model that is linear in the network parameters. Under particular constraints such as infinite width and an infinitesimal learning rate, the NTK becomes an expectation:

$$
\mathrm{NTK}(\pmb{x}_i, \pmb{x}_j) = \mathbb{E}_{\pmb{\theta} \sim \mathcal{N}} \left\langle \nabla_{\pmb{\theta}} f(\pmb{\theta}^{(0)}, \pmb{x}_i), \nabla_{\pmb{\theta}} f(\pmb{\theta}^{(0)}, \pmb{x}_j) \right\rangle,
$$

where the expectation emerges by the law of large numbers induced by the network's infinite width. Interestingly, the affine approximation is correct under the NTK constraints in parameter space, and is closely tied to the notion of lazy training. Ultimately, since training is linear in the often high-dimensional, possibly infinite, feature space, the neural network behaves as an affine kernel regression. We take all such pairwise NTK evaluations over the labeled training set to produce the positive semi-definite NTK gram matrix denoted $NTK_{train}$.

Background on Neural Network Extrapolation: Xu et al. build on the established NTK equivalence between neural network training and kernel regression to more precisely analyze extrapolation. However, using the NTK directly requires analysis of the point-wise form as a kernel regression fit over the labeled training set. It can be more advantageous to work in the NTK-induced feature space instead, deriving a functional representation of the learned network that may be more analytically manageable. This is precisely the route they take, and they formalize this equivalence between point-wise NTK regression and the learned function in the NTK-induced feature space in their Lemma 2:

$$
\begin{array}{l}
f_{NTK}(\pmb{x}) = \phi(\pmb{x})^{\top} \beta_{\mathrm{NTK}} \\
\text{where} \quad \beta_{\mathrm{NTK}} = \arg\min_{\beta} \|\beta\|_2 \\
\text{s.t.} \quad \phi(\pmb{x}_i)^{\top} \beta = y_i \quad \text{for } i = 1, \dots, n,
\end{array}
$$

where $f_{NTK}(\pmb{x}) = \phi(\pmb{x})^\top \beta_{\mathrm{NTK}}$ is the min-norm functional form equivalent to NTK kernel regression fitted over the training data for any $\pmb{x} \in \mathcal{X}$. Further, they derive the precise closed form of the NTK-induced feature map for a two-layer ReLU MLP in their Lemma 3:

$$
\phi(\pmb{x}) = c' \left( \pmb{x} \cdot \mathbb{I}\left(\pmb{w}^{(k)\top} \pmb{x} \geq 0\right), \; \pmb{w}^{(k)\top} \pmb{x} \cdot \mathbb{I}\left(\pmb{w}^{(k)\top} \pmb{x} \geq 0\right), \; \dots \right),
$$

where $c'$ is a constant, $\mathbb{I}$ is the Heaviside indicator function, and $\pmb{w}^{(k)} \sim \mathcal{N}(\pmb{0}, \pmb{I})$ for $k \to \infty$. By analyzing the functional representation in the NTK RKHS, they discovered that for a labeled training set $\{(\pmb{x}_i, y_i)\}_{i=1}^n$ and evaluation point $\pmb{x}_0 = t\pmb{v}$ for any direction $\pmb{v} \in \mathbb{R}^d$, the network converges to a linear function.

Problem Setup: Our problem setup inherits from both Jacot, Gabriel, and Hongler and Xu et al., primarily through the notation of the latter for compatibility. Let $\mathcal{X}$ be a $d$-dimensional Euclidean input space and $\varphi$ be a set of $n$ training inputs such that $\varphi = \{\pmb{x}_i\}_{i=1}^n$ with $\pmb{x}_i \in \mathcal{X}$ for $i \in [n]$. If we translate $\varphi$ by the vector $-t\pmb{v}_{\varphi}$ for any direction $\pmb{v}_{\varphi}$, then we will have formed a new set $\varphi^\infty = \{\pmb{x}_i - t\pmb{v}_{\varphi} : \pmb{x}_i \in \varphi\}$ whose members are denoted $\pmb{x}_i^\infty = \pmb{x}_i - t\pmb{v}_{\varphi}$. The labeled training set can then be constructed as $\{(\pmb{x}_i^\infty, y_i^\infty)\}_{i=1}^n$ where $y_i^\infty = g(\pmb{x}_i^\infty)$ for a target function $g : \mathcal{X} \to \mathbb{R}$. We train a single-output two-layer ReLU MLP $f_{NTK} : \mathcal{X} \to \mathbb{R}$ in the NTK regime using gradient descent to minimize the squared loss over the labeled training set. We reintroduce the hat notation, which denotes a data vector augmented with a bias term: $\hat{\pmb{x}} = [\pmb{x} | 1]$. We also introduce the check notation, which denotes the explicit exclusion of the bias weight with respect to the $k$-th hidden neuron, $\check{\pmb{w}}^{(k)} \in \mathbb{R}^d$.

Clarifying the Training Data: Note that the definition of the labeled training set, which is constructed from $\varphi^{\infty}$, facilitates an analysis of extrapolation at the origin. If extrapolation can be configured by defining the labeled training set $\{(\pmb{x}_i, y_i)\}_{i=1}^n$ and the evaluation point $\pmb{x}_0 = t\pmb{v}$ for any direction $\pmb{v}$, then it is not difficult to attain the "inverted" setup of the new training set $\{(\pmb{x}_i - t\pmb{v}, g(\pmb{x}_i - t\pmb{v}))\}_{i=1}^n$ and evaluation point $\pmb{x}_0 = \pmb{0}$. Critically, the coordinate shift preserves the sufficient condition that induces extrapolation as $t \to \infty$.
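To make the problem setup and Lemma 3's feature map concrete, the following sketch builds a shifted realization $\varphi^\infty$ and a finite-width analogue of the ReLU NTK feature map. The width, the $1/\sqrt{\text{width}}$ scaling, and the target function $g$ are illustrative choices, not quantities fixed by the paper.

```python
# Sketch of the problem setup: shift a training realization far from the origin
# (phi_inf = {x_i - t * v_phi}) and form a finite-width analogue of the two-layer
# ReLU NTK feature map from Lemma 3. All concrete values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n, width, t = 2, 8, 4096, 1e4

X = rng.normal(size=(n, d))                   # original realization phi
v_phi = np.array([1.0, 0.0])                  # training direction v_phi
X_inf = X - t * v_phi                         # shifted set phi^infinity
y_inf = np.sin(X_inf @ np.ones(d) / t)        # labels y_i = g(x_i^infinity), illustrative g

W = rng.normal(size=(width, d + 1))           # feature directions w^(k) ~ N(0, I), bias included

def feature_map(x: np.ndarray) -> np.ndarray:
    """Finite-width analogue of phi(x) = c'(x * 1[w^T x >= 0], w^T x * 1[w^T x >= 0], ...)."""
    x_hat = np.append(x, 1.0)                           # augmented input [x | 1]
    act = (W @ x_hat >= 0).astype(float)                # Heaviside indicators, one per neuron
    block1 = (act[:, None] * x_hat[None, :]).ravel()    # x_hat * indicator blocks
    block2 = act * (W @ x_hat)                          # (w^T x_hat) * indicator entries
    return np.concatenate([block1, block2]) / np.sqrt(width)

# Pairwise inner products of these feature maps approximate the NTK gram over phi^infinity.
gram = np.array([[feature_map(a) @ feature_map(b) for b in X_inf] for a in X_inf])
```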
Clarifying the Notation: This paper refers to the set $\varphi$ as a (point) realization, which is a convention related to point processes and stochastic geometry. Since the NTK regime deals with finite datasets, it may be useful to explicitly describe or analyze $\varphi$ as a point process in later related work. The set $\varphi$ may be ascribed an underlying mathematical data generator for the purposes of a neural scaling law analysis, for instance.

# 2.1 Related Work

This paper identifies and deeply explores a special case of nonlinear NTK extrapolation. To the best of our knowledge, since our work is based on Xu et al. as a special coordinate-shifted case of their problem setup, discovering additional relevant literature in the space of NTK extrapolation is challenging. However, there are previous works that strongly align with the themes of this paper insofar as they explore special nonlinear regimes or NTK configurations. One notable work is Bai and Lee, who discover a special learning process using randomization that results in a dominant quadratic Taylor term, as opposed to the standard linear dominance in a Taylor expansion. It must be made clear, however, that the results of Bai and Lee do not specifically address extrapolation.

Furthermore, various elements of this manuscript align with existing work in their application of mathematical techniques for machine learning. We consider, for instance, how Rangamani, Rosasco, and Poggio's Remark 3 justifies our usage of Tikhonov regularization to pseudo-invert a special geometrically constrained NTK gram matrix in §A.1. Ultimately, this paper is an analysis of asymptotic quadratic extrapolation for over-parameterized neural networks and serves as a complementary work to Xu et al.

# 3 Theoretical Contributions

Remark 1. If all $n$ inputs of the training set $\varphi^{\infty}$ are located infinitely far from the origin along the same direction, then the asymptotic pseudo-inverse of the NTK gram matrix is a difference between the identity and all-ones matrices: $\frac{1}{\delta}\pmb{I} - \frac{t^2\kappa}{\delta(n\kappa t^2 + \delta)}\pmb{J}$, where $\delta \rightarrow 0$, $t \rightarrow \infty$, and $\kappa$ is a constant.

Special Case of the NTK Gram: We discover this closed form in §A.1 by first recognizing that, under the definition of $\varphi^{\infty}$, the indicators for any training input become input agnostic insofar as the indicating logic depends strictly on a feature direction $\pmb{w}$ and the training direction $\pmb{v}_{\varphi}$. The definition of $\varphi^{\infty}$ induces the otherwise singular asymptotic NTK gram matrix $\kappa t^{2}\pmb{J}$. We then use Tikhonov regularization to pseudo-invert this asymptotic NTK gram, expressed as $(\kappa t^{2}\pmb{J} + \Gamma)^{-1}$. We leverage this special case of the asymptotic NTK gram matrix induced by $\varphi^{\infty}$ and its pseudo-inverse to express the components of $\beta_{NTK}$ induced by training inputs $\varphi^{\infty}$ that are distant from the origin. These results are then used to prove Lemma 2 and ultimately Theorem 1.

Theorem 1. An over-parameterized two-layer ReLU MLP $f_{NTK} : \mathbb{R}^d \to \mathbb{R}$ that is trained in the NTK regime, minimizing squared loss on a labeled set $\{(\pmb{x}_i^\infty, y_i^\infty)\}_{i=1}^n$ with $\pmb{x}_i^\infty = \pmb{x}_i - t\pmb{v}_\varphi$ for $\pmb{x}_i \in \mathcal{X}$ and any direction $\pmb{v}_\varphi$, will converge to a quadratic extrapolator when evaluated at a point near the origin $\pmb{0}$ as $t \to \infty$.
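Before turning to the proof sketch, Remark 1's closed form can be sanity-checked numerically with finite stand-ins for $n$, $\kappa$, $t$, and $\delta$; the values below are arbitrary and the check is an illustration, not part of the formal argument.

```python
# Numerical check of Remark 1: with Gamma = delta * I, the Tikhonov pseudo-inverse of the
# rank-one asymptotic gram kappa * t^2 * J matches
# (1/delta) I - t^2 kappa / (delta (n kappa t^2 + delta)) J.
import numpy as np

n, kappa, t, delta = 5, 0.7, 1e2, 1e-3        # illustrative finite stand-ins
J = np.ones((n, n))
I = np.eye(n)

lhs = np.linalg.inv(kappa * t**2 * J + delta * I)
rhs = I / delta - (t**2 * kappa) / (delta * (n * kappa * t**2 + delta)) * J

assert np.allclose(lhs, rhs, rtol=1e-6)
```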
Theorem 1 Proof Sketch: Theorem 1 is the main contribution of this paper. It states that an extremely wide NTK predictor with ReLU activations, trained on a dataset that is extremely distant from the origin, will converge to a quadratic extrapolator when evaluated near the origin. That is, Theorem 1 states that the predictor's first and second directional derivatives exist and all higher-order derivatives are 0. The proof of Theorem 1, which is presented in §A.4, depends on the results of Lemmas 1 and 2. Lemma 1 is a generalized algebraic manipulation and states that the directional derivative of the NTK predictor can be expressed in terms of the derivatives of the indicator. The significance of Lemma 1 is most clear when we leverage the Dirac delta's so-called sifting property, also known as the sampling property. We note that the derivative of the Heaviside indicator is the Dirac delta, which applies itself nicely when the predictor's derivative is viewed as an integral. Lemma 2 completes the second half of the Theorem 1 proof by stating that the partial derivatives of the beta components with respect to the bias component of a feature direction $\pmb{w}_{d+1}$ vanish for any order of derivative past the second. The significance of Lemma 2 is clear when we see in §A.2 that the $z$-th derivative of the predictor depends on the $(z-1)$-th and $(z-2)$-th partial derivatives of the beta components. It is then not difficult to see that the quadratic order persists when taking the $z$-th derivative of $f_{NTK}$.

Lemma 1. The feature map of the $z$-th directional derivative of $f_{NTK}$ for any direction $\pmb{v}_0$ can be expressed in terms of the $z$-th and $(z-1)$-th directional derivatives of the indicator for $\pmb{v}_0$ such that:

$$
D_{\pmb{v}}^{z} f_{NTK}(\pmb{x}_0) = \beta_{NTK}^{\top} \Big( \hat{\pmb{x}}_0 \cdot D_{\pmb{v}}^{z} \mathbb{I}^{(k)} - z \hat{\pmb{v}} \cdot D_{\pmb{v}}^{z-1} \mathbb{I}^{(k)}, \;\; \pmb{w}^{(k)\top} \hat{\pmb{x}}_0 \cdot D_{\pmb{v}}^{z} \mathbb{I}^{(k)} - z \pmb{w}^{(k)\top} \hat{\pmb{v}} \cdot D_{\pmb{v}}^{z-1} \mathbb{I}^{(k)}, \; \dots \Big)
$$

Lemma 2. The components of the NTK representation coefficient $\beta_{NTK}$ induced by a training input set $\varphi^{\infty} = \{\pmb{x}_i^\infty\}_{i=1}^n$, where $\pmb{x}_i^\infty = \pmb{x}_i - t\pmb{v}_\varphi$ for some $\pmb{x}_i \in \mathcal{X}$ and any direction $\pmb{v}_\varphi$, are constant with respect to the bias component of any given feature direction $\pmb{w}_{d+1}$, such that:

$$
\frac{\partial^{z} \beta_{\pmb{w}}^{1}}{\partial \pmb{w}_{d+1}^{z}}, \;\; \frac{\partial^{z} \beta_{\pmb{w}}^{2}}{\partial \pmb{w}_{d+1}^{z}} \rightarrow 0 \quad \text{for all } z \geq 1.
$$
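As a side note, the two distributional facts the proof sketch leans on (the Heaviside derivative being the Dirac delta, and the delta's sifting property) can be illustrated symbolically; this is an aside for intuition, not part of the proof.

```python
# Symbolic illustration: d/dx Heaviside(x) = DiracDelta(x), and the sifting property
# that the integral of DiracDelta(x - a) * f(x) over the real line equals f(a) (here a = 2).
import sympy as sp

x = sp.symbols('x', real=True)

heaviside_derivative = sp.diff(sp.Heaviside(x), x)                           # DiracDelta(x)
sifted = sp.integrate(sp.DiracDelta(x - 2) * sp.cos(x), (x, -sp.oo, sp.oo))  # cos(2)

assert heaviside_derivative == sp.DiracDelta(x)
assert sifted == sp.cos(2)
```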
# 4 Conclusion

This paper identifies a special case of nonlinearity for NTK extrapolation at the origin of the RKHS. More specifically, it finds that at the origin, the infinitely wide two-layer MLP retains a non-zero second-order Taylor term in the limit. The quadratic behavior is highly dependent on the degree of similarity between the vector orientations $\pmb{w}$ (which represents the direction of a feature) and $\pmb{v}$ (the vector which defines the evaluation point). If, for instance, the two orientations are orthogonal, then the second derivative is unconditionally zero. The second derivative may also be zero depending on the beta components, i.e., if the beta 1 component is orthogonal to $\pmb{v}$ and the beta 2 component is zero; however, this condition is less strict. The results are distinct from, but complementary to, the existing ML literature, which primarily concerns the linearity of neural network extrapolation. That is, since the feature map induced by the neural tangent kernel is not translation invariant, extrapolation at a point far from the origin is not equivalent to extrapolation at a point close to the origin. We prove our results by deriving a closed form of the asymptotic pseudo-inverse of the NTK gram matrix in order to determine the components of $\beta_{NTK}$ induced by the definition of $\varphi^{\infty}$. We then use a neat algebraic trick in Lemma 1 to rewrite the directional derivative of the predictor in terms of partial derivatives of the beta components, using the equivalence between the distributional derivative and the directional derivative of the indicator.
{"title": "A Special Case of Quadratic Extrapolation Under the Neural Tangent Kernel", "raw_content": "# A SPECIAL CASE OF QUADRATIC EXTRAPOLATION UNDER THE NEURAL TANGENT KERNEL\n\nAbiel J. Kim\n\nDecember 19, 2025\n\n# ABSTRACT\n\nIt has been demonstrated both theoretically and empirically that the ReLU MLP tends to extrapolate linearly for an out-of-distribution evaluation point. The machine learning literature provides ample analysis with respect to the mechanisms to which linearity is induced. However, the analysis of extrapolation at the origin under the NTK regime remains a more unexplored special case. In particular, the infinite-dimensional feature map induced by the neural tangent kernel is not translationally invariant. This means that the study of an out-of-distribution evaluation point very far from the origin is not equivalent to the evaluation of a point very near the origin. And since the feature map is rotation invariant, these two special cases may represent the most canonically extreme bounds of ReLU NTK extrapolation. Ultimately, it is this loose recognition of the two special cases of extrapolation that motivate the discovery of quadratic extrapolation for an evaluation close to the origin.\n\n# 1 Introduction\n\nThe work of Xu et al. [20] proves that an over-parameterized ReLU-activated multilayer perceptron (MLP) will extrapolate linearly when evaluated along any direction very distant from the origin. They formally prove extrapolative linearity by analysis of the learned regressor's functional form in the neural tangent kernel (NTK) reproducing kernel hilbert space (RKHS) [9]. And, since the infinite dimensional feature map induced by the neural tangent kernel is rotation invariant, the analysis covers the generalizable case of an evaluation point very distant from the origin. However, it is not difficult to recognize that the same feature map is not translation invariant. It is by a geometric reasoning that the origin of the RKHS must be a distinct special case whose analysis departs from Theorem 1 of Xu et al. [20]. That is, in the limit of a large relative distance between the training point set and the evaluation point, one observes that there must be two special locations of the evaluation point with respect to the NTK induced feature map: A location casted along a singular feature direction, and a location which intersects all feature directions.\n\nIt is this recognition of the distinguishable cases that motivates the extrapolative analysis at the origin location. The non translation invariance of the feature map implies that the extrapolative analysis at the origin and far from origin are not equivalent problems. It can be reasoned that they are two canonical cases of a more complete analysis of extrapolation. However, inducing extrapolation at the origin must be done carefully to ensure that the evaluation data is pushed out of the support of the training distribution space. This is achieved by this paper's definition of a labeled training set, which is formally presented in the problem setup of section 2. The desired effect of said definition is to induce a problem setup where all members of the training set are sent infinitely far away from the origin whilst fixing the evaluation data at the origin. Under this variant setting, we state Theorem 1, which discovers that an overparameterized neural network extrapolates quadratically when evaluated near the origin. This finding contrasts, but does not conflict with, Xu et al. 
[20], which contrastingly concerns itself an evaluation point far from the origin.\n\nThe paper is organized as follows. The proof of Theorem 1 is presented in §A.4 and will depend on the results of Lemmas 1 and 2, which are proven with continuity in §A.2 and §A.3 respectively. Our problem setup induces a special case of the NTK gram matrix which must be studied in §A.1 to set the stage for the remainder of the mathematics.\n\n# 2 Preliminaries\n\nBackground on NTK: Suppose that a neural network performs nonlinear regression $f(\\pmb{\\theta}, \\pmb{x}) : \\mathcal{X} \\rightarrow \\mathbb{R}$ where $\\pmb{\\theta}$ is a vectorization of the network parameters and $\\pmb{x} \\in \\mathcal{X}$ . Let there be $n$ training points which form a labeled set $\\{(x_i, y_i)\\}_{i=1}^n$ . If we train the network on the labeled set to minimize the squared loss function $\\frac{1}{2} \\sum_{i=1}^{n} (f(\\pmb{\\theta}, x_i) - y_i)^2$ via gradient descent, then we can derive a kernel method from the network by first considering an affine approximation of the network output in parameter space. If we denote the time-dependent parameter vector induced by gradient descent as $\\pmb{\\theta}^{(t)}$ for some iteration $t$ , then we define the feature map $\\phi(\\pmb{x})$ as the gradient of the network output with respect to $\\pmb{\\theta}$ evaluated at $\\pmb{\\theta}^{(0)}$ denoted as $\\nabla_\\pmb{\\theta} f(\\pmb{\\theta}^{(0)}, \\pmb{x})$ . The corresponding kernel, called the neural tangent kernel (NTK), is then an affine model that is linear in the network parameters. Under particular constraints such as the infinite width and infinitesimal learning rate, the NTK becomes an expectation:\n\n$$\nN T K (\\boldsymbol {x} _ {i}, \\boldsymbol {x} _ {j}) = \\mathbb {E} _ {\\boldsymbol {\\theta} \\sim \\mathcal {N}} \\left\\langle \\nabla_ {\\boldsymbol {\\theta}} f (\\boldsymbol {\\theta} ^ {(0)}, \\boldsymbol {x} _ {i}), \\nabla_ {\\boldsymbol {\\theta}} f (\\boldsymbol {\\theta} ^ {(0)}, \\boldsymbol {x} _ {j}) \\right\\rangle ,\n$$\n\nwhere the expectation emerges by the law of large numbers induced by the network's infinite width. Interestingly, the affine approximation is correct under the NTK constraints in parameter space, and is closely tied to the network's notion of lazy training. Ultimately, since training is linear in the often high-dimensional, possibly infinite, feature space, the neural network behaves as an affine kernel regression. We take all such pairwise NTK evaluations from the labeled training set to produce the positive semi-definite NTK gram matrix denoted as $NTK_{train}$ .\n\nBackground on Neural Network Extrapolation: Xu et al. [20] builds on the established results of the NTK equivalence between neural network training and kernel regression to more precisely analyze extrapolation. However, using the NTK directly requires analysis of the point-wise form as a kernel regression fit over the labeled training set. It can be more advantageous to work in the NTK induced feature space instead to derive a functional representation of the learned network, which may be more analytically manageable. 
This is precisely the route they take and formalize this equivalence between point-wise NTK regression and the learned function in the NTK induced feature space in their Lemma 2:\n\n$$\n\\begin{array}{l} f _ {N T K} (\\boldsymbol {x}) = \\phi (\\boldsymbol {x}) ^ {\\top} \\beta_ {\\mathrm {N T K}} \\\\ \\text {w h e r e} \\quad \\beta_ {\\mathrm {N T K}} = \\min _ {\\beta} \\| \\beta \\| _ {2} \\\\ s. t. \\quad \\phi (\\boldsymbol {x}) ^ {\\top} \\boldsymbol {\\beta} = y _ {i} \\quad \\text {f o r} i = 1, \\dots , n, \\\\ \\end{array}\n$$\n\nwhere $f_{NTK}(\\boldsymbol{x}) = \\phi(\\boldsymbol{x})^\\top \\beta_{\\mathrm{NTK}}$ is the min-norm functional form equivalent to NTK kernel regression fitted over the training data for any $\\boldsymbol{x} \\in \\mathcal{X}$ . Further, they derive the precise closed-form of the NTK induced feature map for a ReLU two-layer MLP in their Lemma 3:\n\n$$\n\\phi (\\boldsymbol {x}) = c ^ {\\prime} \\left(\\boldsymbol {x} \\cdot \\mathbb {I} \\left(\\boldsymbol {w} ^ {(k) ^ {\\top}} \\boldsymbol {x} \\geq 0\\right), \\boldsymbol {w} ^ {(k) ^ {\\top}} \\boldsymbol {x} \\cdot \\mathbb {I} \\left(\\boldsymbol {w} ^ {(k) ^ {\\top}} \\boldsymbol {x} \\geq 0\\right), \\dots\\right),\n$$\n\nwhere $c'$ is a constant, $\\mathbb{I}$ is the Heaviside indicator function, and $\\pmb{w}^{(k)}\\sim \\mathcal{N}(\\pmb {0},\\pmb {I})$ for $k\\to \\infty$ . By analyzing the functional representation in the NTK RKHS, they discovered that for a labeled training set $\\{(x_i,y_i)\\}_{i = 1}^n$ and evaluation point $\\pmb{x}_0 = t\\pmb{v}$ for any direction $\\pmb{v}\\in \\mathbb{R}^d$ , the network converges to a linear function.\n\nProblem Setup: Our problem setup inherits from both Jacot, Gabriel, and Hongler [9] and Xu et al. [20] primarily through the notation of the latter for compatibility. Let $\\mathcal{X}$ be a $d$ -dimensional Euclidean input space and $\\varphi$ be a set of $n$ training inputs such that $\\varphi = \\{\\pmb{x}_i\\}_{i=1}^n$ with $\\pmb{x}_i \\in \\mathcal{X}$ for $i \\in [n]$ . If we translate $\\varphi$ by the vector $-t\\pmb{v}_{\\varphi}$ for any direction $\\pmb{v}_{\\varphi}$ , then we will have formed a new set $\\varphi^\\infty = \\{\\pmb{x}_i - t\\pmb{v}_{\\varphi} : \\pmb{x}_i \\in \\varphi\\}$ where a member is denoted $\\pmb{x}_i^\\infty = \\pmb{x}_i - t\\pmb{v}_{\\varphi}$ for any $\\pmb{x}_i^\\infty \\in \\varphi^\\infty$ . The labeled training set can then be constructed as $\\{(x_i^\\infty, y_i^\\infty)\\}_{i=1}^n$ where $y_i^\\infty = g(x_i^\\infty)$ for target function $g : \\mathcal{X} \\to \\mathbb{R}$ . We train a single-output two-layer ReLU MLP $f_{NTK} : \\mathcal{X} \\to \\mathbb{R}$ in the NTK regime using gradient descent to minimize the squared loss function over the labeled training set. We reintroduce the hat notation which denotes a data vector augmented with bias term: $\\hat{\\pmb{x}} = [\\pmb{x}|1]$ . We also introduce the check notation which denotes the explicit exclusion of the bias weight with respect to the $k$ -th hidden neuron, $\\check{\\pmb{w}}^{(k)} \\in \\mathbb{R}^d$ .\n\nClarifying the Training Data: Please agree that the definition of the labeled training set, which is constructed from $\\varphi^{\\infty}$ , facilitates an analysis of extrapolation at the origin location. 
If extrapolation can be configured by defining the labeled training set $\\{(x_i,y_i)\\}_{i=1}^n$ and the evaluation point $x_0 = t\\mathbf{v}$ for any direction $\\mathbf{v}$ , then it is not difficult to attain the \"inverted\" setup of the new training set $\\{(x_i - t\\mathbf{v}, g(x_i - t\\mathbf{v}))\\}_{i=1}^n$ and evaluation point $\\mathbf{x}_0 = \\mathbf{0}$ . Critically, the coordinate shift preserves the sufficient condition that induces extrapolation as $t \\to \\infty$ .\n\nClarifying the Notation: This paper refers to the set $\\varphi$ as a (point) realization, which is a convention related to point processes and stochastic geometry [5]. Since the NTK regime deals with finite datasets, it may be useful to explicitly describe or analyze $\\varphi$ as a point process in a later related work. The set $\\varphi$ may be ascribed an underlying mathematical data generator for the purposes of a neural scaling law analysis, for instance.\n\n# 2.1 Related Work\n\nThis paper identifies and deeply explores a special case of nonlinear NTK extrapolation. To the best of our knowledge, since our work is based on Xu et al. [20] as a special coordinate-shifted case of their problem setup, discovering additional relevant literature in the space of NTK extrapolation is challenging. However, there are previous works that strongly align with the themes of this paper insofar as the exploration of special nonlinear regimes or NTK configurations. One notable work is Bai and Lee [3] where they discover a special learning process using randomization that results in a dominant quadratic Taylor term as opposed to the standard linear dominance in a Taylor expansion. But, it must be made clear that the results of Bai and Lee [3] do not specifically address extrapolation.\n\nFurthermore, various elements of this manuscript align with existing work insofar as the application of mathematical techniques for machine learning. We consider, for instance, how Rangamani, Rosasco, and Poggio [17]'s Remark 3 justifies our usage of Tikhonov regularization to pseudo-invert a special geometrically constrained NTK gram matrix in $\\S A.1$ . Ultimately, this paper is an analysis of asymptotic quadratic extrapolation for over-parameterized neural networks and serves a complementary work to Xu et al. [20].\n\n# 3 Theoretical Contributions\n\nRemark 1. If all $n$ inputs of the training set $\\varphi^{\\infty}$ are located infinitely far from the origin along the same direction, then the asymptotic pseudo-inverse of the NTK gram matrix is a difference between the identity and all-ones matrix: $\\frac{1}{\\delta}\\pmb {I} - \\frac{t^2\\kappa}{\\delta(n\\kappa t^2 + \\delta)}\\pmb {J}$ , where $\\delta \\rightarrow 0$ , $t\\rightarrow \\infty$ , and $\\kappa$ is a constant.\n\nSpecial Case of the NTK Gram: We discover this closed-form in §A.1 by first recognizing that under the definition of $\\varphi^{\\infty}$ , the indicators for any training input become input agnostic insofar that the indicating logic strictly depends on a feature direction $\\pmb{w}$ and training direction $\\pmb{v}_{\\varphi}$ . The definition of $\\varphi^{\\infty}$ induces the otherwise singular asymptotic NTK gram matrix $\\kappa t^{2}J$ . We then use Tikhonov regularization to pseudo-invert this asymptotic NTK gram expressed as $(\\kappa t^{2}J + \\Gamma)^{-1}$ . 
We leverage this special case of the asymptotic NTK gram matrix induced by $\varphi^{\infty}$ and its pseudo-inverse to express the components of $\beta_{NTK}$ induced by training inputs $\varphi^{\infty}$ that are distant from the origin. These results are then used to prove Lemma 2 and ultimately Theorem 1.

Theorem 1. An over-parameterized two-layer ReLU MLP $f_{NTK} : \mathbb{R}^d \to \mathbb{R}$ that is trained in the NTK regime by minimizing squared loss on a labeled set $\{(\boldsymbol{x}_i^\infty, y_i^\infty)\}_{i=1}^n$ with $\boldsymbol{x}_i^\infty = \boldsymbol{x}_i - t\boldsymbol{v}_\varphi$ for $\boldsymbol{x}_i \in \mathcal{X}$ and any direction $\boldsymbol{v}_\varphi$ will converge to a quadratic extrapolator when evaluated at a point near the origin $\boldsymbol{0}$ as $t \to \infty$.

Theorem 1 Proof Sketch: Theorem 1 is the main contribution of this paper and states that an extremely wide NTK predictor with ReLU activations, trained on a dataset that is extremely distant from the origin, converges to a quadratic extrapolator when evaluated near the origin. That is, Theorem 1 states that the predictor's first and second directional derivatives exist and all higher-order derivatives are 0. The proof of Theorem 1, presented in §A.4, depends on the results of Lemmas 1 and 2. Lemma 1 is a generalized algebraic manipulation and states that the directional derivative of the NTK predictor can be expressed in terms of the derivatives of the indicator. The significance of Lemma 1 is most clear when we leverage the Dirac delta's so-called sifting property, also known as the sampling property: the distributional derivative of the Heaviside indicator is the Dirac delta, which applies nicely once the predictor's derivative is viewed as an integral. Lemma 2 completes the second half of the Theorem 1 proof by stating that the partial derivatives of the beta components with respect to the bias component of a feature direction $\boldsymbol{w}_{d+1}$ vanish at every order, which forces every directional derivative of $f_{NTK}$ past the second to vanish. The significance of Lemma 2 is clear when we see in §A.2 that the $z$-th derivative of the predictor depends on the $(z-1)$-th and $(z-2)$-th partial derivatives of the beta components. It is not difficult to see that the quadratic order persists when taking the $z$-th derivative of $f_{NTK}$.

Lemma 1. The feature map of the $z$-th directional derivative of $f_{NTK}$ for any direction $\boldsymbol{v}_0$ can be expressed in terms of the $z$-th and $(z - 1)$-th directional derivatives of the indicator for $\boldsymbol{v}_0$ such that:

$$
D_{\boldsymbol{v}}^{z} f_{NTK}(\boldsymbol{x}_0) = \beta_{NTK}^{\top}\Big(\hat{\boldsymbol{x}}_0 \cdot D_{\boldsymbol{v}}^{z}\mathbb{I}^{(k)} - z\hat{\boldsymbol{v}}\cdot D_{\boldsymbol{v}}^{z-1}\mathbb{I}^{(k)},\ \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0 \cdot D_{\boldsymbol{v}}^{z}\mathbb{I}^{(k)} - z\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{v}}\cdot D_{\boldsymbol{v}}^{z-1}\mathbb{I}^{(k)},\ \dots\Big)
$$

Lemma 2.
The components of the NTK representation coefficient $\beta_{NTK}$ induced by a training input set $\varphi^{\infty} = \{\boldsymbol{x}_i^\infty\}_{i=1}^n$, where $\boldsymbol{x}_i^\infty = \boldsymbol{x}_i - t\boldsymbol{v}_\varphi$ for some $\boldsymbol{x}_i \in \mathcal{X}$ and any direction $\boldsymbol{v}_\varphi$, are constant with respect to the bias component of any given feature direction $\boldsymbol{w}_{d+1}$ such that:

$$
\frac{\partial^{z}\boldsymbol{\beta}_{\boldsymbol{w}}^{1}}{\partial \boldsymbol{w}_{d+1}^{z}},\ \frac{\partial^{z}\boldsymbol{\beta}_{\boldsymbol{w}}^{2}}{\partial \boldsymbol{w}_{d+1}^{z}} \rightarrow 0 \quad \text{for all } z \geq 1.
$$

# 4 Conclusion

This paper identifies a special case of nonlinearity for NTK extrapolation at the origin of the RKHS. More specifically, this paper finds that, at the origin, the infinitely wide two-layer MLP retains a non-zero second-order Taylor term in the limit. The quadratic behavior depends strongly on the degree of similarity between the vector orientations $\boldsymbol{w}$, which represents the direction of a feature, and $\boldsymbol{v}$, the vector which defines the evaluation point. If, for instance, the two orientations are orthogonal, then the second derivative is unconditionally zero. The second derivative may also be zero depending on the beta components, i.e., if the beta 1 component is orthogonal to $\boldsymbol{v}$ and the beta 2 component is zero; however, this condition is less strict. The results are distinct from but complementary to the existing ML literature, which primarily concerns the linearity of neural network extrapolation. That is, since the feature map induced by the neural tangent kernel is not translation invariant, extrapolation at a point far from the origin is not equivalent to extrapolation at a point close to the origin. We prove our results by determining a closed form of the asymptotic pseudo-inverse of the NTK gram matrix, which yields the components of $\beta_{NTK}$ induced by the definition of $\varphi^{\infty}$. Then we use a neat algebraic trick in Lemma 1 to rewrite the directional derivative of the predictor in terms of partial derivatives of the beta components, using the equivalence between the distributional derivative and the directional derivative of the indicator.

# Acknowledgments

In accordance with the arXiv policy on author usage of generative AI language tools, we report the use of Gemini 2.5 as a first-line tool to check for potential mistakes in mathematical derivations. The language and format of this manuscript were generally written and prepared without the assistance of generative AI. The author asserts complete leadership over the formal proofs of this manuscript, with all usage of generative AI limited to an assistive capacity. The author hereby assumes full responsibility for the content of this manuscript.

# References

[1] Atish Agarwala, Fabian Pedregosa, and Jeffrey Pennington. Second-order regression models exhibit progressive sharpening to the edge of stability. 2022. arXiv: 2210.04860 [cs.LG]. URL: https://arxiv.org/abs/2210.04860.
[2] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. 2020. arXiv: 1811.04918 [cs.LG]. URL: https://arxiv.org/abs/1811.04918.
[3] Yu Bai and Jason D. Lee. Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks. 2020. arXiv: 1910.01619 [cs.LG].
URL: https://arxiv.org/abs/1910.01619.
[4] Minshuo Chen et al. Towards Understanding Hierarchical Learning: Benefits of Neural Representations. 2021. arXiv: 2006.13436 [cs.LG]. URL: https://arxiv.org/abs/2006.13436.
[5] Sung Nok Chiu et al. Stochastic geometry and its applications. John Wiley & Sons, 2013.
[6] Ramzi Dakhmouche and Hossein Gorji. Why Cannot Neural Networks Master Extrapolation? Insights from Physical Laws. 2025. arXiv: 2510.04102 [cs.LG]. URL: https://arxiv.org/abs/2510.04102.
[7] Adrian Doicu, Thomas Trautmann, and Franz Schreier. "Tikhonov regularization for linear problems". In: Numerical Regularization for Atmospheric Inverse Problems. Springer, 2010, pp. 39-106.
[8] Martin Haenggi. Stochastic geometry for wireless networks. Cambridge University Press, 2013.
[9] Arthur Jacot, Franck Gabriel, and Clément Hongler. "Neural Tangent Kernel: Convergence and Generalization in Neural Networks". In: CoRR abs/1806.07572 (2018). arXiv: 1806.07572. URL: http://arxiv.org/abs/1806.07572.
[10] Ram P Kanwal. Generalized functions: theory and applications. Springer Science & Business Media, 2012.
[11] Sun Yuan Kung. Kernel methods and machine learning. Cambridge University Press, 2014.
[12] Yicheng Li, Haobo Zhang, and Qian Lin. On the Asymptotic Learning Curves of Kernel Ridge Regression under Power-law Decay. 2023. arXiv: 2309.13337 [cs.LG]. URL: https://arxiv.org/abs/2309.13337.
[13] Hengrui Luo and Yunzhang Zhu. Asymptotic Optimism of Random-Design Linear and Kernel Regression Models. 2025. arXiv: 2502.12999 [stat.ML]. URL: https://arxiv.org/abs/2502.12999.
[14] Pierluigi Maponi. "The solution of linear systems by using the Sherman-Morrison formula". In: Linear Algebra and its Applications 420.2-3 (2007), pp. 276-294.
[15] Jan Mikusinski. Operational calculus. Vol. 109. Elsevier, 2014.
[16] Rachana Mysore et al. Mathematical Foundations of Neural Tangents and Infinite-Width Networks. 2025. arXiv: 2512.08264 [cs.LG]. URL: https://arxiv.org/abs/2512.08264.
[17] Akshay Rangamani, Lorenzo Rosasco, and Tomaso Poggio. For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability. 2020. arXiv: 2006.15522 [stat.ML]. URL: https://arxiv.org/abs/2006.15522.
[18] Jascha Sohl-Dickstein et al. On the infinite width limit of neural networks with a standard parameterization. 2020. arXiv: 2001.07301 [cs.LG]. URL: https://arxiv.org/abs/2001.07301.
[19] Yong-Ming Tian et al. Depth-induced NTK: Bridging Over-parameterized Neural Networks and Deep Neural Kernels. 2025. arXiv: 2511.05585 [cs.LG]. URL: https://arxiv.org/abs/2511.05585.
[20] Keyulu Xu et al. "How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks". In: CoRR abs/2009.11848 (2020). arXiv: 2009.11848. URL: https://arxiv.org/abs/2009.11848.
[21] Juliusz Ziomek, George Whittle, and Michael A. Osborne. Just One Layer Norm Guarantees Stable Extrapolation. 2025. arXiv: 2505.14512 [cs.LG]. URL: https://arxiv.org/abs/2505.14512.

# A Proofs

# A.1 Special Case of the NTK Gram Matrix

We begin our analysis by making clear the form of $\beta$, the coefficient vector in the NTK RKHS that is fit over the labeled training data.
We begin with the point-wise form of NTK regression to write $\beta$ in terms of the NTK gram:

$$
\begin{aligned}
f_{NTK}(\hat{\boldsymbol{x}}) &= \left(\langle \phi(\hat{\boldsymbol{x}}), \phi(\hat{\boldsymbol{x}}_1^{\infty})\rangle, \dots, \langle \phi(\hat{\boldsymbol{x}}), \phi(\hat{\boldsymbol{x}}_n^{\infty})\rangle\right)^{\top}\cdot \boldsymbol{NTK}_{train}^{-1}\boldsymbol{Y} \quad (1) \\
&= \phi(\hat{\boldsymbol{x}})^{\top}\boldsymbol{\Phi}_{train}^{\top}\boldsymbol{NTK}_{train}^{-1}\boldsymbol{Y} \quad (2) \\
&= \phi(\hat{\boldsymbol{x}})^{\top}\boldsymbol{\beta}. \quad (3)
\end{aligned}
$$

Attaining a closed-form expression of $\boldsymbol{NTK}_{train}^{-1}$ is desirable but non-trivial. Fortunately, later in this section, we will see how the definition of $\varphi^{\infty}$ induces a closed-form asymptotic pseudo-inverse of the NTK gram. But first, we recognize the application of Tikhonov regularization, which ensures the invertibility of the NTK gram matrix and induces a choice of $\beta$ equivalent to the min-norm definition of the unique $\beta_{NTK}$. Tikhonov regularization was chosen for its simple usage, but it is also an approach supported by Rangamani, Rosasco, and Poggio [17]. We express $\beta_{NTK}$ in terms of the Tikhonov-regularized NTK gram matrix:

$$
\boldsymbol{\beta}_{NTK} = \boldsymbol{\Phi}_{train}^{\top}\left(\boldsymbol{NTK}_{train} + \boldsymbol{\Gamma}\right)^{-1}\boldsymbol{Y} \tag{4}
$$

for Tikhonov matrix $\boldsymbol{\Gamma} = \delta \boldsymbol{I}$, $\delta \to 0^{+}$. Before we solve for the pseudo-inverse of $\boldsymbol{NTK}_{train}$, we take note of the induced behavior of the indicator function for a training data point under the definition of $\varphi^{\infty}$: the indication depends solely on the dot product between any particular feature direction $\boldsymbol{w}$ and the special direction $\boldsymbol{v}_{\varphi}$ that translates $\varphi$. In other words, under the definition of $\varphi^{\infty}$, ReLU indicators for any training data point $\boldsymbol{x}_i^\infty \in \varphi^\infty$ become input agnostic insofar as they become independent of $\boldsymbol{x}_i \in \varphi$:

$$
\begin{aligned}
&\mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right) \quad (5) \\
&= \mathbb{I}\left(\boldsymbol{w}^{\top}\left(\hat{\boldsymbol{x}}_i - t\hat{\boldsymbol{v}}\right) \geq 0\right) \quad (6) \\
&= \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right). \quad (7)
\end{aligned}
$$

The independence that arises between the indicators and the training inputs is a crucial insight and will repeatedly assist in the pseudo-inversion of the NTK gram.
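The input-agnostic behavior in lines (5)-(7) is easy to probe numerically. The following is a small check of our own (variable names are illustrative): for large $t$, the indicator of a shifted point agrees with $\mathbb{I}(\boldsymbol{w}^\top(-\hat{\boldsymbol{v}}) \geq 0)$ with overwhelming probability over the draw of $\boldsymbol{w}$.

```python
import numpy as np

# Eq. (7) check: for large t the indicator of a shifted training point depends only
# on the sign of w^T(-v_hat), not on the original x_i.
rng = np.random.default_rng(1)
d, t = 4, 1e6
v_hat = np.append(rng.standard_normal(d), 0.0)    # direction with bias component 0
x_hat = np.append(rng.standard_normal(d), 1.0)    # training point with bias component 1
for _ in range(5):
    w = rng.standard_normal(d + 1)
    print((w @ (x_hat - t * v_hat) >= 0) == (w @ (-v_hat) >= 0))
```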
Speaking of which, by definition of the neural tangent kernel, the $(i,j)$-th entry of $\boldsymbol{NTK}_{train}$ can be expressed as:

$$
\begin{aligned}
\boldsymbol{NTK}_{train}[i,j] &= \left\langle \phi\left(\hat{\boldsymbol{x}}_i^{\infty}\right), \phi\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\right\rangle \quad (8) \\
&= \int \hat{\boldsymbol{x}}_i^{\infty}\cdot \hat{\boldsymbol{x}}_j^{\infty}\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j^{\infty} \geq 0\right) \quad (9) \\
&\quad + \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty}\right)\cdot \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j^{\infty}\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j^{\infty} \geq 0\right)\, d\mathbb{P}(\boldsymbol{w}) \quad (10)
\end{aligned}
$$

for any pair $(\boldsymbol{x}_i^\infty, \boldsymbol{x}_j^\infty)$ taken from the labeled training set. We observe the emergence of an indicator pair in lines (9)-(10). But since indicators become input agnostic, we greatly simplify their indicating logic using equation (7):

$$
\begin{aligned}
&\mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j^{\infty} \geq 0\right) \quad (11) \\
&= \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i - t\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right) \geq 0\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j - t\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right) \geq 0\right) \quad (12) \\
&= \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)^{2} \quad (13) \\
&= \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right). \quad (14)
\end{aligned}
$$

Then we apply the definition of $\varphi^{\infty}$ to expand the dot product in equation (9):

$$
\begin{aligned}
&\hat{\boldsymbol{x}}_i^{\infty}\cdot \hat{\boldsymbol{x}}_j^{\infty} \quad (15) \\
&= \left(\hat{\boldsymbol{x}}_i - t\hat{\boldsymbol{v}}\right)\cdot \left(\hat{\boldsymbol{x}}_j - t\hat{\boldsymbol{v}}\right) \quad (16) \\
&= \hat{\boldsymbol{v}}^{2}t^{2} - \left(\hat{\boldsymbol{x}}_i + \hat{\boldsymbol{x}}_j\right)\cdot \hat{\boldsymbol{v}}\,t + \hat{\boldsymbol{x}}_i\cdot \hat{\boldsymbol{x}}_j, \quad (17)
\end{aligned}
$$

as well as the dot product in equation (10):

$$
\begin{aligned}
&\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty}\right)\cdot \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j^{\infty}\right) \quad (18) \\
&= \left(\boldsymbol{w}^{\top}\left(\hat{\boldsymbol{x}}_i - t\hat{\boldsymbol{v}}\right)\right)\cdot \left(\boldsymbol{w}^{\top}\left(\hat{\boldsymbol{x}}_j - t\hat{\boldsymbol{v}}\right)\right) \quad (19) \\
&= \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)^{2}t^{2} - \boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i + \boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j\right)t + \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i\right)\cdot \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j\right), \quad (20)
\end{aligned}
$$

to rewrite the $(i,j)$-th entry of the NTK gram matrix using lines (14), (17), and (20) as:

$$
\begin{aligned}
&\boldsymbol{NTK}_{train}[i,j] \quad (21) \\
&= t^{2}\int \left(\hat{\boldsymbol{v}}^{2} + \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)^{2}\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)d\mathbb{P}(\boldsymbol{w}) \quad (22) \\
&\quad - t\int \left(\left(\hat{\boldsymbol{x}}_i + \hat{\boldsymbol{x}}_j\right)\cdot \hat{\boldsymbol{v}} + \boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i + \boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j\right)\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)d\mathbb{P}(\boldsymbol{w}) \quad (23) \\
&\quad + \int \left(\hat{\boldsymbol{x}}_i\cdot \hat{\boldsymbol{x}}_j + (\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i)\cdot (\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_j)\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)d\mathbb{P}(\boldsymbol{w}). \quad (24)
\end{aligned}
$$

The quadratic form that emerges in lines (22)-(24) is a direct consequence of applying the definition of $\varphi^{\infty}$; it defines the NTK gram matrix induced by the limiting training set. The structure is convenient because the leading-order term is the only term in the quadratic form that does not depend on the indices $i$ and $j$. Without this particular structure, pseudo-inverting the matrix $(\boldsymbol{NTK}_{train} + \boldsymbol{\Gamma})$ in closed form would be a more difficult analysis. Since line (22) is the leading-order term, the resulting form intuitively suggests that as $\varphi$ is shifted further from the origin along some direction $\boldsymbol{v}_{\varphi}$, the kernel regression solution depends less on the inputs of $\varphi$ and more on the direction $\boldsymbol{v}_{\varphi}$:

$$
\boldsymbol{NTK}_{train}[i,j] \asymp t^{2}\kappa, \tag{25}
$$

where $\kappa$ is a constant equal to the integral in line (22). Therefore, in the limit as $t\to \infty$, the $(i,j)$-th entry of the NTK gram does not depend on $\varphi$. The asymptotic form is then a constant matrix, meaning that $\boldsymbol{NTK}_{train}[i,j]$ is constant for any pair $(i,j)$.
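Equation (25) can also be checked by Monte Carlo. The snippet below is a rough sketch of our own (variable names are illustrative; a finite sample of directions stands in for the exact integral): it estimates $\kappa$ and confirms that a gram entry built from far-shifted inputs is approximately $t^{2}\kappa$.

```python
import numpy as np

# Monte Carlo estimate of kappa = E_w[(|v_hat|^2 + (w^T v_hat)^2) * 1{w^T(-v_hat) >= 0}],
# then compare a gram entry for far-shifted inputs against t^2 * kappa (eq. (25)).
rng = np.random.default_rng(3)
d, t = 3, 1e4
v_hat = np.append(rng.standard_normal(d), 0.0)
W = rng.standard_normal((200_000, d + 1))                     # w ~ N(0, I)
kappa = np.mean((v_hat @ v_hat + (W @ v_hat) ** 2) * (W @ (-v_hat) >= 0))

x_i = np.append(rng.standard_normal(d), 1.0) - t * v_hat      # shifted training points
x_j = np.append(rng.standard_normal(d), 1.0) - t * v_hat
act = ((W @ x_i >= 0) & (W @ x_j >= 0)).astype(float)
entry = np.mean((x_i @ x_j + (W @ x_i) * (W @ x_j)) * act)
print(entry / (t**2 * kappa))    # close to 1 for large t, up to Monte Carlo error
```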
We can finally invert the regularized NTK gram from line (4) as:

$$
\begin{aligned}
&\left(\boldsymbol{NTK}_{train} + \boldsymbol{\Gamma}\right)^{-1} \quad (26) \\
&\asymp \left(t^{2}\kappa \boldsymbol{J} + \boldsymbol{\Gamma}\right)^{-1} \quad (27) \\
&= \left(\delta \boldsymbol{I} + \left(t^{2}\mathbf{1}\right)(\kappa \mathbf{1})^{\top}\right)^{-1} \quad (28) \\
&= \frac{1}{\delta}\boldsymbol{I} - \frac{t^{2}\kappa}{\delta (n\kappa t^{2} + \delta)}\boldsymbol{J}, \quad (29)
\end{aligned}
$$

where $\boldsymbol{J}[i,j] = 1$ for any pair of indices $(i,j)$. The penultimate equality writes $\delta \boldsymbol{I}$ from our definition of $\boldsymbol{\Gamma}$ together with the outer product between $t^2\mathbf{1}$ and $\kappa \mathbf{1}$. In the final equality, one inverts the matrix using the Sherman-Morrison formula [14]. It follows from line (29) that the $(i,j)$-th entry of the asymptotic pseudo-inverse of the NTK gram is:

$$
\left(\boldsymbol{NTK}_{train} + \boldsymbol{\Gamma}\right)^{-1}[i,j] \asymp \begin{cases} -\dfrac{\kappa t^{2}}{\delta (n\kappa t^{2} + \delta)}, & \text{if } i \neq j \\[2mm] \dfrac{1}{\delta} - \dfrac{\kappa t^{2}}{\delta (n\kappa t^{2} + \delta)}, & \text{if } i = j \end{cases} \tag{30}
$$

Using the piecewise definition of equation (30), let $\boldsymbol{\alpha}_{NTK} \asymp \left(\frac{1}{\delta}\boldsymbol{I} - \frac{t^2\kappa}{\delta(n\kappa t^2 + \delta)}\boldsymbol{J}\right)\boldsymbol{Y}$ denote the matrix-vector product between the label vector $\boldsymbol{Y}$ and the asymptotic pseudo-inverse. Note that $\boldsymbol{\alpha}_{NTK}$ is subscripted as such so that the applied regularization is explicit. It is not difficult to calculate the closed form of the $i$-th entry of $\boldsymbol{\alpha}_{NTK}$:

$$
\begin{aligned}
\boldsymbol{\alpha}_{NTK}[i] &= \left(\frac{1}{\delta}\boldsymbol{I} - \frac{t^{2}\kappa}{\delta (n\kappa t^{2} + \delta)}\boldsymbol{J}\right)[i]\cdot \boldsymbol{Y} \quad (31) \\
&= \sum_{j=1}^{n}\left(\frac{1}{\delta}\boldsymbol{I} - \frac{t^{2}\kappa}{\delta (n\kappa t^{2} + \delta)}\boldsymbol{J}\right)[i,j]\cdot \boldsymbol{Y}[j] \quad (32) \\
&= \sum_{j=1}^{n}\left(-\frac{t^{2}\kappa}{\delta (n\kappa t^{2} + \delta)}\right)g\left(\hat{\boldsymbol{x}}_j^{\infty}\right) + \frac{1}{\delta}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right) \quad (33) \\
&= -\frac{t^{2}\kappa}{\delta (n\kappa t^{2} + \delta)}\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right) + \frac{1}{\delta}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right). \quad (34)
\end{aligned}
$$

Finally, we should make the values of the $\beta$ components clear. There are two components associated with a feature direction $\boldsymbol{w}^{(k)}$ for any $k$. Following the notation of Xu et al. [20], we denote the first (vector) beta component as $\boldsymbol{\beta}_{\boldsymbol{w}}^{1}$ and the second (scalar) beta component as $\boldsymbol{\beta}_{\boldsymbol{w}}^{2}$, using $\boldsymbol{w}$ as a shorthand for any particular $\boldsymbol{w}^{(k)}$.
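The closed form of line (34) is easy to verify against a direct solve of the regularized asymptotic gram system. The following is our own numerical sketch with arbitrary constants.

```python
import numpy as np

# Eq. (31)-(34) check: the entries of alpha_NTK from the closed-form pseudo-inverse
# match a direct solve of (kappa t^2 J + delta I) alpha = Y.
n, kappa, t, delta = 6, 0.8, 1e2, 1e-3
rng = np.random.default_rng(2)
Y = rng.standard_normal(n)
G = kappa * t**2 * np.ones((n, n)) + delta * np.eye(n)
alpha_direct = np.linalg.solve(G, Y)
C = -(t**2 * kappa) / (delta * (n * kappa * t**2 + delta))
alpha_closed = C * Y.sum() + Y / delta               # eq. (34), vectorized over i
print(np.max(np.abs(alpha_direct - alpha_closed)))   # small numerical error
```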
See line (38) below:

$$
\begin{aligned}
\boldsymbol{\beta}_{NTK} &= \boldsymbol{\Phi}_{train}^{\top}\boldsymbol{\alpha} \quad (35) \\
&= \alpha_1 \phi\left(\hat{\boldsymbol{x}}_1^{\infty}\right) + \alpha_2 \phi\left(\hat{\boldsymbol{x}}_2^{\infty}\right) + \dots + \alpha_n \phi\left(\hat{\boldsymbol{x}}_n^{\infty}\right) \quad (36) \\
&= \alpha_1 \begin{bmatrix} \hat{\boldsymbol{x}}_1^{\infty}\cdot \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) \\ \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty}\cdot \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) \\ \vdots \end{bmatrix} + \dots + \alpha_n \begin{bmatrix} \hat{\boldsymbol{x}}_n^{\infty}\cdot \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty}\cdot \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \vdots \end{bmatrix} \quad (37) \\
&= \begin{bmatrix} \alpha_1 \hat{\boldsymbol{x}}_1^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) + \dots + \alpha_n \hat{\boldsymbol{x}}_n^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \alpha_1 \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) + \dots + \alpha_n \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \alpha_1 \hat{\boldsymbol{x}}_1^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) + \dots + \alpha_n \hat{\boldsymbol{x}}_n^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \alpha_1 \boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_1^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_1^{\infty} \geq 0\right) + \dots + \alpha_n \boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_n^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{(k+1)\top}\hat{\boldsymbol{x}}_n^{\infty} \geq 0\right) \\ \vdots \end{bmatrix}. \quad (38)
\end{aligned}
$$

It follows from line (38) that the components of $\beta_{NTK}$ can be written as:

$$
\boldsymbol{\beta}_{\boldsymbol{w}}^{1} = \sum_{i=1}^{n}\boldsymbol{\alpha}_{NTK}[i]\,\hat{\boldsymbol{x}}_i^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right) \tag{39}
$$

$$
\boldsymbol{\beta}_{\boldsymbol{w}}^{2} = \sum_{i=1}^{n}\boldsymbol{\alpha}_{NTK}[i]\,\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty}\,\mathbb{I}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \geq 0\right). \tag{40}
$$

Lastly, we use lines (7), (34), and the definition of $\varphi^{\infty}$ to rewrite equations (39)-(40) as a closed form of the first and second beta components induced by the definition of $\varphi^{\infty}$:

$$
\boldsymbol{\beta}_{\boldsymbol{w}}^{1} = \mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0)\cdot \left(C(t,\delta,\kappa)\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\sum_{i=1}^{n}\hat{\boldsymbol{x}}_i^{\infty} + \frac{1}{\delta}\sum_{i=1}^{n}\hat{\boldsymbol{x}}_i^{\infty}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right)\right) \tag{41}
$$

$$
\boldsymbol{\beta}_{\boldsymbol{w}}^{2} = \mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0)\cdot \left(C(t,\delta,\kappa)\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\sum_{i=1}^{n}\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} + \frac{1}{\delta}\sum_{i=1}^{n}\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right)\right) \tag{42}
$$

where $C(t, \delta, \kappa) = -\frac{t^2 \kappa}{\delta (n \kappa t^2 + \delta)}$ with $t \to \infty$, $\delta \to 0^+$, and $\kappa$ depending on $\boldsymbol{w}$. One final note as an aside: a separate but related analysis is possible if we take the target $g$ to be linear, i.e., we can apply the equivalence $g(\hat{\boldsymbol{x}}_i^\infty) = g(\hat{\boldsymbol{x}}_i) - tg(\hat{\boldsymbol{v}}_\varphi)$ to lines (41)-(42) and analyze the resulting forms. Although this is irrelevant to the present paper, it may lead to an alternate proof of somewhat interesting findings.

# A.2 Proof of Lemma 1

If $\boldsymbol{x}_0 \in \mathcal{X}$ is an evaluation point, then let $\boldsymbol{x}_1 = \boldsymbol{x}_0 + h\boldsymbol{v}$ for some direction $\boldsymbol{v}$. We can compute the $z$-th directional derivative of $f_{NTK}$ recursively using the standard limit definition:

$$
D_{\boldsymbol{v}}^{z} f_{NTK}\left(\boldsymbol{x}_0\right) = \lim_{h\rightarrow 0}\frac{D_{\boldsymbol{v}}^{z-1} f_{NTK}\left(\boldsymbol{x}_1\right) - D_{\boldsymbol{v}}^{z-1} f_{NTK}\left(\boldsymbol{x}_0\right)}{h}. \tag{43}
$$

Using the definition of $f_{NTK}$, we can expand the numerator of equation (43) for the first directional derivative:

$$
\begin{aligned}
&f_{NTK}(\hat{\boldsymbol{x}}_1) - f_{NTK}(\hat{\boldsymbol{x}}_0) \quad (44) \\
&= \boldsymbol{\beta}_{NTK}^{\top}\left(\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} - \hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)},\ \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} - \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)},\ \dots\right), \quad (45)
\end{aligned}
$$

where $\mathbb{I}_0^{(k)} = \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0 \geq 0\right)$, $\mathbb{I}_1^{(k)} = \mathbb{I}\left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1 \geq 0\right)$, and so on are defined for notational brevity. The numerator for the second directional derivative is similarly expanded, omitting an $h$ which has been factored out:

$$
\begin{aligned}
&\left(f_{NTK}\left(\hat{\boldsymbol{x}}_2\right) - f_{NTK}\left(\hat{\boldsymbol{x}}_1\right)\right) - \left(f_{NTK}\left(\hat{\boldsymbol{x}}_1\right) - f_{NTK}\left(\hat{\boldsymbol{x}}_0\right)\right) \quad (46) \\
&= \boldsymbol{\beta}_{NTK}^{\top}\Big(\left(\hat{\boldsymbol{x}}_2\cdot \mathbb{I}_2^{(k)} - \hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)}\right) - \left(\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} - \hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)}\right), \quad (47) \\
&\qquad\quad \left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_2\cdot \mathbb{I}_2^{(k)} - \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)}\right) - \left(\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} - \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)}\right),\ \dots\Big) \quad (48) \\
&= \boldsymbol{\beta}_{NTK}^{\top}\Big(\hat{\boldsymbol{x}}_2\cdot \mathbb{I}_2^{(k)} - 2\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} + \hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)}, \quad (49) \\
&\qquad\quad \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_2\cdot \mathbb{I}_2^{(k)} - 2\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_1\cdot \mathbb{I}_1^{(k)} + \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0\cdot \mathbb{I}_0^{(k)},\ \dots\Big), \quad (50)
\end{aligned}
$$

and so forth.
The point is that the $z$ -th directional derivative of $f_{NTK}$ will contain the terms $\\pmb{x}_0, \\pmb{x}_1, \\dots, \\pmb{x}_z$ where $\\pmb{x}_z = \\pmb{x}_0 + zh\\pmb{v}$ where we repeatedly differentiate along the same direction $\\pmb{v}$ .\n\nNext, let $\\Sigma_{\\hat{\\pmb{x}},\\mathbb{I}^{(k)}}^{(z)}$ be defined as:\n\n$$\n\\Sigma_ {\\hat {\\boldsymbol {x}}, \\mathbb {I} ^ {(k)}} ^ {(z)} = P _ {z} ^ {(z)} \\hat {\\boldsymbol {x}} _ {z} \\mathbb {I} _ {z} ^ {(k)} + P _ {z - 1} ^ {(z)} \\hat {\\boldsymbol {x}} _ {z - 1} \\mathbb {I} _ {z - 1} ^ {(k)} + \\dots + P _ {0} ^ {(z)} \\hat {\\boldsymbol {x}} _ {0} \\mathbb {I} _ {0} ^ {(k)}, \\tag {51}\n$$\n\nwhere the coefficients $P_{z}^{(z)}, P_{z-1}^{(z)}, \\ldots, P_{0}^{(z)}$ represent the sign-alternating Pascal coefficients of the $z$ -th line in a 0-indexed Pascal triangle, e.g., $P_{1}^{(1)} = 1$ and $P_{0}^{(1)} = -1$ . We can now generally rewrite the $z$ -th directional derivative of $f_{NTK}$ using equation (51) as:\n\n$$\nD _ {\\boldsymbol {v}} ^ {z} f \\left(\\hat {\\boldsymbol {x}} _ {0}\\right) = \\lim _ {h \\rightarrow 0} \\frac {\\boldsymbol {\\beta} _ {N T K} ^ {\\top} \\left(\\Sigma_ {\\hat {\\boldsymbol {x}}, \\mathbb {I} ^ {(k)}} ^ {(z)}, \\boldsymbol {w} ^ {(k)} ^ {\\top} \\Sigma_ {\\hat {\\boldsymbol {x}}, \\mathbb {I} ^ {(k)}}, \\dots\\right)}{h ^ {z}}. \\tag {52}\n$$\n\nThe last definition in preparation for the proof of Lemma 1 will be the sum $\\Sigma_{\\mathbb{I}^{(k)}}^{(z)}$ , defined in terms of the indicators:\n\n$$\n\\Sigma_ {\\mathbb {I} ^ {(k)}} ^ {(z)} = P _ {z} ^ {(z)} \\mathbb {I} _ {z} ^ {(k)} + P _ {z - 1} ^ {(z)} \\mathbb {I} _ {z - 1} ^ {(k)} + \\dots + P _ {0} ^ {(z)} \\mathbb {I} _ {0} ^ {(k)}. \\tag {53}\n$$\n\nLemma 1. The feature map of the $z$ -th directional derivative of $f_{NTK}$ for any direction $\\mathbf{v}_0$ can be expressed in terms of the $z$ -th and $(z - 1)$ -th directional derivatives of the indicator for $\\mathbf{v}_0$ such that:\n\n$$\nD _ {\\boldsymbol {v}} ^ {z} f _ {N T K} (\\boldsymbol {x} _ {0}) = \\beta_ {N T K} ^ {\\top} \\left(\\hat {\\boldsymbol {x}} _ {0} \\cdot D _ {\\boldsymbol {v}} ^ {z} \\mathbb {I} ^ {(k)} - z \\hat {\\boldsymbol {v}} \\cdot D _ {\\boldsymbol {v}} ^ {z - 1} \\mathbb {I} ^ {(k)}, \\boldsymbol {w} ^ {(k)} ^ {\\top} \\hat {\\boldsymbol {x}} _ {0} \\cdot D _ {\\boldsymbol {v}} ^ {z} \\mathbb {I} ^ {(k)} - z \\boldsymbol {w} ^ {(k) ^ {\\top}} \\hat {\\boldsymbol {v}} \\cdot D _ {\\boldsymbol {v}} ^ {z - 1} \\mathbb {I} ^ {(k)}, \\dots\\right)\n$$\n\nProof. The first term of $\\Sigma_{\\hat{\\mathbf{x}},\\mathbb{I}^{(k)}}^{(z)}$ is $P_z^{(z)}\\hat{x}_z\\mathbb{I}_z^{(k)}$ . Since $P_z^{(z)}$ always lies on the left edge of the Pascal triangle, we always have $P_z^{(z)}\\hat{x}_z\\mathbb{I}_z^{(k)} = \\hat{x}_z\\mathbb{I}_z^{(k)}$ . 
We use the trick

$$
\begin{aligned}
&\hat{\boldsymbol{x}}_z\mathbb{I}_z^{(k)} \quad (54) \\
&= \hat{\boldsymbol{x}}_z\left(\mathbb{I}_z^{(k)} + \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)} - \mathbb{I}_z^{(k)}\right) - \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)} - \mathbb{I}_z^{(k)}\right)\right) \quad (55) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - \hat{\boldsymbol{x}}_z\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)} - \mathbb{I}_z^{(k)}\right) \quad (56)
\end{aligned}
$$

so that

$$
\begin{aligned}
&\Sigma_{\hat{\boldsymbol{x}},\mathbb{I}^{(k)}}^{(z)} \quad (57) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - \hat{\boldsymbol{x}}_z\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)} - \mathbb{I}_z^{(k)}\right) + \left(\Sigma_{\hat{\boldsymbol{x}},\mathbb{I}^{(k)}}^{(z)} - \hat{\boldsymbol{x}}_z\mathbb{I}_z^{(k)}\right) \quad (58) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} + \Sigma_{\hat{\boldsymbol{x}},\mathbb{I}^{(k)}}^{(z)} - \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} \quad (59) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} + \left(P_{z-1}^{(z)}\hat{\boldsymbol{x}}_{z-1}\mathbb{I}_{z-1}^{(k)} - P_{z-1}^{(z)}\hat{\boldsymbol{x}}_z\mathbb{I}_{z-1}^{(k)}\right) + \dots + \left(P_0^{(z)}\hat{\boldsymbol{x}}_0\mathbb{I}_0^{(k)} - P_0^{(z)}\hat{\boldsymbol{x}}_z\mathbb{I}_0^{(k)}\right) \quad (60) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} + P_{z-1}^{(z)}\mathbb{I}_{z-1}^{(k)}\left(\hat{\boldsymbol{x}}_{z-1} - \hat{\boldsymbol{x}}_z\right) + \dots + P_0^{(z)}\mathbb{I}_0^{(k)}\left(\hat{\boldsymbol{x}}_0 - \hat{\boldsymbol{x}}_z\right) \quad (61) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} + P_{z-1}^{(z)}\mathbb{I}_{z-1}^{(k)}\left([-h\boldsymbol{v}\,|\,0]\right) + \dots + P_0^{(z)}\mathbb{I}_0^{(k)}\left([-zh\boldsymbol{v}\,|\,0]\right) \quad (62) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - \left(\sum_{i=0}^{z-1}P_i^{(z)}\mathbb{I}_i^{(k)}(z-i)\,h[\boldsymbol{v}\,|\,0]\right) \quad (63) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - \left(h[\boldsymbol{v}\,|\,0]\sum_{i=0}^{z-1}P_i^{(z)}\mathbb{I}_i^{(k)}(z-i)\right) \quad (64) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - \left(h[\boldsymbol{v}\,|\,0]\sum_{i=0}^{z-1}zP_i^{(z-1)}\mathbb{I}_i^{(k)}\right) \quad (65) \\
&= \hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - zh[\boldsymbol{v}\,|\,0]\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z-1)}, \quad (66)
\end{aligned}
$$

where we use the algebraic trick of lines (54)-(56) to obtain the secondary trick on line (59), which would otherwise be more difficult to see. Then, lines (60)-(62) follow from the definitions of lines (51) and (53).
Another critical step in the proof sequence is the penultimate equality, which uses the equivalence between $P_i^{(z)}(z - i)$ and $zP_i^{(z - 1)}$ for $i = 0,\dots,z - 1$. This equivalence maps a coefficient on the $z$-th row of the Pascal triangle to the coefficient to its left on the previous $(z - 1)$-th row. And since the equivalence is only invoked for $i = 0,\dots,z - 1$, it is well defined: the correspondence from a coefficient $P_i^{(z)}$ to the coefficient $P_i^{(z - 1)}$ on the previous row fails only when $i = z$, which would be out of bounds on the $(z - 1)$-th row. Also note that this equivalence implicitly requires $z\geq 1$, since we are computing derivatives.

The significance of the result on line (66) is that we can reformulate equation (52) in terms of the definition (53), effectively re-expressing the binomial expansion coefficients, which come from the limit definition of the directional derivative of $f_{NTK}$, in terms of the indicators. And since we only manipulated equation (51), the limit is now taken with respect to the indicators. The point is that we can now write the $z$-th derivative of $f_{NTK}$ in terms of the $z$-th and $(z - 1)$-th directional derivatives of $\mathbb{I}$:

$$
\begin{aligned}
&D_{\boldsymbol{v}}^{z} f(\hat{\boldsymbol{x}}_0) \quad (67) \\
&= \lim_{h\rightarrow 0}\frac{\boldsymbol{\beta}_{NTK}^{\top}}{h^{z}}\left(\hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - zh\hat{\boldsymbol{v}}\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z-1)},\ \boldsymbol{w}^{(k)\top}\left(\hat{\boldsymbol{x}}_z\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z)} - zh\hat{\boldsymbol{v}}\cdot \Sigma_{\mathbb{I}^{(k)}}^{(z-1)}\right),\ \dots\right) \quad (68) \\
&= \lim_{h\rightarrow 0}\boldsymbol{\beta}_{NTK}^{\top}\left(\hat{\boldsymbol{x}}_z\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)}/h^{z}\right) - z\hat{\boldsymbol{v}}\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z-1)}/h^{z-1}\right),\ \boldsymbol{w}^{(k)\top}\left(\hat{\boldsymbol{x}}_z\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z)}/h^{z}\right) - z\hat{\boldsymbol{v}}\cdot \left(\Sigma_{\mathbb{I}^{(k)}}^{(z-1)}/h^{z-1}\right)\right),\ \dots\right) \quad (69) \\
&= \boldsymbol{\beta}_{NTK}^{\top}\left(\hat{\boldsymbol{x}}_0\cdot D_{\boldsymbol{v}}^{z}\mathbb{I}^{(k)} - z\hat{\boldsymbol{v}}\cdot D_{\boldsymbol{v}}^{z-1}\mathbb{I}^{(k)},\ \boldsymbol{w}^{(k)\top}\hat{\boldsymbol{x}}_0\cdot D_{\boldsymbol{v}}^{z}\mathbb{I}^{(k)} - z\boldsymbol{w}^{(k)\top}\hat{\boldsymbol{v}}\cdot D_{\boldsymbol{v}}^{z-1}\mathbb{I}^{(k)},\ \dots\right), \quad (70)
\end{aligned}
$$

where line (70) follows from the limit definition of the directional derivative of the indicator evaluated at $\hat{\boldsymbol{x}}_0$, which completes our proof of Lemma 1.

Let us look closer at line (70). It is well known that the indicator (Heaviside function) $\mathbb{I}$, or any step function for that matter, does not have a classically well-defined derivative.
This fact makes the analysis beyond equation (70) difficult because we are interested in the derivative of the indicator evaluated at $\boldsymbol{x}_0 = \boldsymbol{0}$, which is precisely where the discontinuity sits.

Fortunately, we have a workaround. By generalizing the notion of the indicator's derivative, we can consider the distributional derivative of the indicator, which is the Dirac delta (an impulse located at $\boldsymbol{x}_0 = \boldsymbol{0}$). This is a similar workaround to how we pseudo-inverted the otherwise singular constant matrix $\boldsymbol{J}$ in equation (27) by generalizing the notion of the matrix inverse. Using the chain rule, the $z$-th directional derivative of $\mathbb{I}$ evaluated at $\boldsymbol{x}_0 = \boldsymbol{0}$ is:

$$
D_{\boldsymbol{v}}^{z}\mathbb{I}(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_0 \geq 0) = \langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle^{z}\cdot \delta^{(z-1)}(\boldsymbol{w}_{d+1}). \tag{71}
$$

Equation (71) gives us a cleaner expression for the $z$-th derivative of $f_{NTK}$ by the sifting property of the Dirac delta. Continuing from equation (70), we use line (71) to get:

$$
\begin{aligned}
&= \int \left(\boldsymbol{\beta}_{\boldsymbol{w}}^{1}\right)_{d+1}\langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle^{z}\cdot \delta^{(z-1)}\left(\boldsymbol{w}_{d+1}\right)d\mathbb{P}(\boldsymbol{w}) - \int z\left\langle \boldsymbol{\beta}_{\boldsymbol{w}}^{1}, \hat{\boldsymbol{v}}\right\rangle \langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle^{z-1}\cdot \delta^{(z-2)}\left(\boldsymbol{w}_{d+1}\right)d\mathbb{P}(\boldsymbol{w}) \quad (72) \\
&\quad + \int \boldsymbol{\beta}_{\boldsymbol{w}}^{2}\boldsymbol{w}_{d+1}\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z}\cdot \delta^{(z-1)}\left(\boldsymbol{w}_{d+1}\right)d\mathbb{P}(\boldsymbol{w}) - \int z\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z}\cdot \delta^{(z-2)}\left(\boldsymbol{w}_{d+1}\right)d\mathbb{P}(\boldsymbol{w}) \quad (73) \\
&= (-1)^{z-1}\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z}\left[\frac{\partial^{z-1}}{\partial \boldsymbol{w}_{d+1}^{z-1}}\left(\boldsymbol{\beta}_{\boldsymbol{w}}^{1}\right)_{d+1}\right]_{\boldsymbol{w}_{d+1}=0} + (-1)^{z-1}z\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z-1}\left[\frac{\partial^{z-2}}{\partial \boldsymbol{w}_{d+1}^{z-2}}\left\langle \boldsymbol{\beta}_{\boldsymbol{w}}^{1}, \hat{\boldsymbol{v}}\right\rangle\right]_{\boldsymbol{w}_{d+1}=0} \quad (74) \\
&\quad + (-1)^{z-1}\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z}\left[\frac{\partial^{z-1}}{\partial \boldsymbol{w}_{d+1}^{z-1}}\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\boldsymbol{w}_{d+1}\right]_{\boldsymbol{w}_{d+1}=0} + (-1)^{z-1}z\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{z}\left[\frac{\partial^{z-2}}{\partial \boldsymbol{w}_{d+1}^{z-2}}\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\right]_{\boldsymbol{w}_{d+1}=0}, \quad (75)
\end{aligned}
$$

which rewrites the derivative of $f_{NTK}$ in terms of the higher derivatives of the beta components.
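For reference, the sifting (sampling) property used to pass from lines (72)-(73) to lines (74)-(75) is the standard one-variable identity for the $m$-th distributional derivative of the Dirac delta:

$$
\int f(w_{d+1})\,\delta^{(m)}(w_{d+1})\,dw_{d+1} = (-1)^{m}\left[\frac{\partial^{m} f}{\partial w_{d+1}^{m}}\right]_{w_{d+1}=0},
$$

which is why each integral collapses to a signed partial derivative of the corresponding beta component evaluated at $\boldsymbol{w}_{d+1} = 0$ (up to the density of $\mathbb{P}(\boldsymbol{w})$ and the remaining coordinates of $\boldsymbol{w}$, which the bracketed terms absorb).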
The check notation $\check{\boldsymbol{w}}$ emphasizes that the direction vector no longer depends on the bias component $\boldsymbol{w}_{d+1}$. In fact, the equivalence between $\check{\boldsymbol{w}}^{\top}\boldsymbol{v}$ and $\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}$ is a useful consideration.

At this point, the emergence of the Dirac impulse suggests that nonlinearity may be preserved at high orders. Nonlinearity, if preserved, depends closely on the relationship between the directions $\boldsymbol{w}$ and $\hat{\boldsymbol{v}}$, as well as on the derivatives of the $\beta_{NTK}$ components with respect to $\boldsymbol{w}_{d+1}$. However, the precise degree of nonlinearity remains unknown because we have not yet accounted for the influence of $\varphi^{\infty}$. The upcoming section demonstrates that solving for the derivatives of the $\beta_{NTK}$ components simultaneously accounts for the positions of $\varphi^{\infty}$.

# A.3 Proof of Lemma 2

Following lines (74)-(75), we must solve for the partial derivatives of the beta components. We first consider the terms with a dependence on $\boldsymbol{w}$; recall the forms of the beta components from equations (41)-(42) in §A.1:

$$
\begin{aligned}
\boldsymbol{\beta}_{\boldsymbol{w}}^{1} &= \mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0)\cdot \left(C(t,\delta,\kappa)\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\sum_{i=1}^{n}\hat{\boldsymbol{x}}_i^{\infty} + \frac{1}{\delta}\sum_{i=1}^{n}\hat{\boldsymbol{x}}_i^{\infty}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right)\right) \\
\boldsymbol{\beta}_{\boldsymbol{w}}^{2} &= \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)\cdot \left(C(t,\delta,\kappa)\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\sum_{i=1}^{n}\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} + \frac{1}{\delta}\sum_{i=1}^{n}\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right)\right).
\end{aligned}
$$

Firstly, the partial derivative of the indicator is a trivial analysis. We recall that the distributional derivative of the indicator is the Dirac delta. And, since the bias component of a direction vector $\hat{\boldsymbol{v}}_{d+1}$ is 0, it is not difficult to see that the $z$-th partial derivative of the indicator with respect to $\boldsymbol{w}_{d+1}$ is 0 for all $z \geq 1$:

$$
\frac{\partial}{\partial \boldsymbol{w}_{d+1}}\mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0) = \delta(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}))\cdot (-\hat{\boldsymbol{v}}_{d+1}). \tag{76}
$$

Secondly, the partial derivative of the dot product $\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^\infty$ is also trivial to solve. For clarity, we apply the definition of $\varphi^{\infty}$ before computing the partial derivative.
Since the bias components of a data point and a direction vector are 1 and 0 respectively, it is clear that the partial derivative equals 1:

$$
\begin{aligned}
&\frac{\partial}{\partial \boldsymbol{w}_{d+1}}\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i^{\infty} \quad (77) \\
&= \frac{\partial}{\partial \boldsymbol{w}_{d+1}}\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{x}}_i - t\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)\right) \quad (78) \\
&= \left(\hat{\boldsymbol{x}}_i\right)_{d+1} - t\hat{\boldsymbol{v}}_{d+1} = 1. \quad (79)
\end{aligned}
$$

Lastly, we want the partial derivative of the constant $C(t,\delta,\kappa)$. Recalling its definition, $\kappa$ is the only term in $C(t,\delta,\kappa)$ that depends on $\boldsymbol{w}$. We find that the $z$-th derivative of $\kappa$ is 0 for any $z \geq 1$:

$$
\begin{aligned}
&\frac{\partial^{z}\kappa}{\partial \boldsymbol{w}_{d+1}^{z}} \quad (80) \\
&= \frac{\partial^{z}}{\partial \boldsymbol{w}_{d+1}^{z}}\int \left(\hat{\boldsymbol{v}}^{2} + (\boldsymbol{w}^{\top}\hat{\boldsymbol{v}})^{2}\right)\cdot \mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0)\,d\mathbb{P}(\boldsymbol{w}) \quad (81) \\
&= \int \frac{\partial^{z}}{\partial \boldsymbol{w}_{d+1}^{z}}\left(\hat{\boldsymbol{v}}^{2} + \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)^{2}\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)d\mathbb{P}(\boldsymbol{w}) \quad (82) \\
&= \int \frac{\partial^{z-1}}{\partial \boldsymbol{w}_{d+1}^{z-1}}\left(2\left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)\mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right)\hat{\boldsymbol{v}}_{d+1} - \left(\hat{\boldsymbol{v}}^{2} + \left(\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\right)^{2}\right)\delta\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}})\right)\hat{\boldsymbol{v}}_{d+1}\right)d\mathbb{P}(\boldsymbol{w}), \quad (83)
\end{aligned}
$$

where the third equality is 0 because $\hat{\boldsymbol{v}}_{d+1}$ is 0. We are now prepared to differentiate the beta components.

Lemma 2. The components of the NTK representation coefficient $\beta_{NTK}$ induced by a training input set $\varphi^{\infty} = \{\boldsymbol{x}_i^\infty\}_{i=1}^n$, where $\boldsymbol{x}_i^\infty = \boldsymbol{x}_i - t\boldsymbol{v}_\varphi$ for some $\boldsymbol{x}_i \in \mathcal{X}$ and any direction $\boldsymbol{v}_\varphi$, are constant with respect to the bias component of any given feature direction $\boldsymbol{w}_{d+1}$ such that:

$$
\frac{\partial^{z}\boldsymbol{\beta}_{\boldsymbol{w}}^{1}}{\partial \boldsymbol{w}_{d+1}^{z}},\ \frac{\partial^{z}\boldsymbol{\beta}_{\boldsymbol{w}}^{2}}{\partial \boldsymbol{w}_{d+1}^{z}} = 0 \quad \text{for all } z \geq 1.
$$

Proof. Differentiating the first beta component is relatively straightforward. By the product rule, we analyze the derivative of the indicator on the left and the derivative of the sum on the right. We already know that the derivative of the indicator for a training point induced by $\varphi^{\infty}$ is 0. We also know that the derivative of $\kappa$ is 0.
And, since no other terms depend on $\boldsymbol{w}$, the $z$-th derivative of the first beta component with respect to $\boldsymbol{w}_{d+1}$ is simply 0 for all $z \geq 1$:

$$
\frac{\partial \boldsymbol{\beta}_{\boldsymbol{w}}^{1}}{\partial \boldsymbol{w}_{d+1}} = \left(\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)\sum_{i=1}^{n}\hat{\boldsymbol{x}}_i^{\infty}\frac{\partial C(t,\delta,\kappa)}{\partial \boldsymbol{w}_{d+1}}\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right), \tag{84}
$$

where

$$
\frac{\partial C(t,\delta,\kappa)}{\partial \boldsymbol{w}_{d+1}} = \left(t^{2}\kappa\right)\left(n\delta t^{2}\frac{\partial \kappa}{\partial \boldsymbol{w}_{d+1}}\right) + \left(t^{2}\frac{\partial \kappa}{\partial \boldsymbol{w}_{d+1}}\right)\left(\delta\left(n\kappa t^{2} + \delta\right)\right). \tag{85}
$$

Similarly, we differentiate the second beta component by the product rule. We observe the summation on the right, where the dependence on $\boldsymbol{w}$ is more elaborate. Using equation (79), the derivative of the dot product in the second term of the summation reduces to 1. Then, for the first term, we once again leverage equation (79) and the fact that the derivative of $\kappa$ is 0 to discover, by a straightforward algebraic manipulation, that the $z$-th derivative of the second beta component approaches 0 for all $z \geq 1$:

$$
\begin{aligned}
&\frac{\partial \boldsymbol{\beta}_{\boldsymbol{w}}^{2}}{\partial \boldsymbol{w}_{d+1}} \quad (86) \\
&= \left(C(t,\delta,\kappa)\sum_{j=1}^{n}g\left(\hat{\boldsymbol{x}}_j^{\infty}\right)n + \frac{1}{\delta}\sum_{i=1}^{n}g\left(\hat{\boldsymbol{x}}_i^{\infty}\right)\right)\cdot \mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right) \quad (87) \\
&= \left(C(t,\delta,\kappa)\,n\,g_{\text{sum}} + \frac{1}{\delta}g_{\text{sum}}\right)\mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right) \quad (88) \\
&= \left(-\frac{n\kappa g_{\text{sum}}t^{2}}{\delta (n\kappa t^{2} + \delta)} + \frac{1}{\delta}g_{\text{sum}}\right)\mathbb{I}\left(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0\right) \quad (89) \\
&= \left(\frac{g_{\text{sum}}}{\delta} - \frac{n\kappa g_{\text{sum}}t^{2}}{\delta (n\kappa t^{2} + \delta)}\right)\mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0) \quad (90) \\
&= \frac{g_{\text{sum}}\,\delta}{\delta\left(n\kappa t^{2} + \delta\right)}\mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0) \quad (91) \\
&= \frac{g_{\text{sum}}}{n\kappa t^{2} + \delta}\mathbb{I}(\boldsymbol{w}^{\top}(-\hat{\boldsymbol{v}}) \geq 0), \quad (92)
\end{aligned}
$$

where $g_{\text{sum}} = \sum_{i=1}^{n}g(\hat{\boldsymbol{x}}_i^{\infty})$. Inspecting the final equality, it is not difficult to see that the first derivative of the second beta component approaches 0 as $\delta \to 0^{+}$ and $t\to \infty$.
Furthermore, since the derivatives of the indicator and $\kappa$ are both 0, it is clear that, by the chain rule, the second derivative of the second beta component is also 0. Therefore, the $z$-th derivative of the second beta component is 0 for all $z\geq 1$. This completes our proof of Lemma 2.

# A.4 Proof of Theorem 1

Theorem 1. An over-parameterized two-layer ReLU MLP $f_{NTK}:\mathbb{R}^d\to \mathbb{R}$ that is trained in the NTK regime by minimizing squared loss on a labeled set $\{(\boldsymbol{x}_i^\infty ,y_i^\infty)\}_{i = 1}^n$ with $\boldsymbol{x}_i^\infty = \boldsymbol{x}_i - t\boldsymbol{v}_\varphi$ for $\boldsymbol{x}_i\in \mathcal{X}$ and any direction $\boldsymbol{v}_{\varphi}$ will converge to a quadratic extrapolator when evaluated at a point near the origin $\boldsymbol{0}$ as $t\rightarrow \infty$.

Proof. Under the definition of $\varphi^{\infty}$, Lemma 2 states that $\frac{\partial^{z}\beta_{\boldsymbol{w}}^{1}}{\partial \boldsymbol{w}_{d + 1}^{z}}$ and $\frac{\partial^z\beta_{\boldsymbol{w}}^2}{\partial \boldsymbol{w}_{d + 1}^z}$ are 0 for all orders $z\geq 1$. But since Lemma 1 shows that $D_{\boldsymbol{v}_0}^z f_{NTK}$ for any direction $\boldsymbol{v}_0$ actually depends on the lower-order $(z - 1)$-th and $(z - 2)$-th derivatives $\frac{\partial^{z - 1}\beta_{\boldsymbol{w}}^1}{\partial \boldsymbol{w}_{d + 1}^{z - 1}}$, $\frac{\partial^{z - 2}\beta_{\boldsymbol{w}}^1}{\partial \boldsymbol{w}_{d + 1}^{z - 2}}$, $\frac{\partial^{z - 1}\beta_{\boldsymbol{w}}^2}{\partial \boldsymbol{w}_{d + 1}^{z - 1}}$, and $\frac{\partial^{z - 2}\beta_{\boldsymbol{w}}^2}{\partial \boldsymbol{w}_{d + 1}^{z - 2}}$, it is not difficult to see that the third and all higher-order derivatives are automatically 0. Then, taking $z = 1$, we simplify equation (74) to get an examinable form of the first derivative:

$$
\begin{aligned}
&D_{\boldsymbol{v}_0} f_{NTK}(\hat{\boldsymbol{0}}) \\
&= \langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle \left[\left(\boldsymbol{\beta}_{\boldsymbol{w}}^{1}\right)_{d+1}\right]_{\boldsymbol{w}_{d+1}=0} - \int \boldsymbol{\beta}_{\boldsymbol{w}}^{1\top}\hat{\boldsymbol{v}}\cdot \mathbb{I}(\boldsymbol{w}_{d+1} \geq 0)\,d\mathbb{P}(\boldsymbol{w}) - \int \boldsymbol{\beta}_{\boldsymbol{w}}^{2}\boldsymbol{w}^{\top}\hat{\boldsymbol{v}}\cdot \mathbb{I}(\boldsymbol{w}_{d+1} \geq 0)\,d\mathbb{P}(\boldsymbol{w}).
\end{aligned}
$$

But more interestingly, we take $z = 2$ and simplify equation (75) for the second derivative:

$$
\begin{aligned}
&D_{\boldsymbol{v}_0}^{2} f_{NTK}(\hat{\boldsymbol{0}}) \\
&= -\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{2}\left[\frac{\partial}{\partial \boldsymbol{w}_{d+1}}\left(\boldsymbol{\beta}_{\boldsymbol{w}}^{1}\right)_{d+1}\right]_{\boldsymbol{w}_{d+1}=0} - 2\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle\left[\left\langle \boldsymbol{\beta}_{\boldsymbol{w}}^{1}, \hat{\boldsymbol{v}}\right\rangle\right]_{\boldsymbol{w}_{d+1}=0} \\
&\quad - \left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{2}\left[\frac{\partial}{\partial \boldsymbol{w}_{d+1}}\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\boldsymbol{w}_{d+1}\right]_{\boldsymbol{w}_{d+1}=0} - 2\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{2}\left[\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\right]_{\boldsymbol{w}_{d+1}=0} \\
&= -2\langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle\left[\langle \boldsymbol{\beta}_{\boldsymbol{w}}^{1}, \hat{\boldsymbol{v}}\rangle\right]_{\boldsymbol{w}_{d+1}=0} - \langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle^{2}\left[\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\right]_{\boldsymbol{w}_{d+1}=0} - 2\langle \check{\boldsymbol{w}}, \boldsymbol{v}\rangle^{2}\left[\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\right]_{\boldsymbol{w}_{d+1}=0} \\
&= -2\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle\left[\left\langle \boldsymbol{\beta}_{\boldsymbol{w}}^{1}, \hat{\boldsymbol{v}}\right\rangle\right]_{\boldsymbol{w}_{d+1}=0} - 3\left\langle \check{\boldsymbol{w}}, \boldsymbol{v}\right\rangle^{2}\left[\boldsymbol{\beta}_{\boldsymbol{w}}^{2}\right]_{\boldsymbol{w}_{d+1}=0},
\end{aligned}
$$

and the final equality reveals a strong dependence on the beta components and on the dot product between any particular $\boldsymbol{w}$ and the direction of evaluation $\boldsymbol{v}_0$. Thus, for the special case of a training input set $\varphi^{\infty}$ whose members are located far from the origin, the regressor becomes a quadratic extrapolator when evaluated near the origin. This completes our proof of Theorem 1.
# RAMBO: Reliability Analysis for Mamba through Bit-flip attack Optimization

# Abstract

State-space models (SSMs), exemplified by the Mamba architecture, have recently emerged as state-of-the-art sequence-modeling frameworks, offering linear-time scalability together with strong performance in long-context settings. Owing to their unique combination of efficiency, scalability, and expressive capacity, SSMs have become compelling alternatives to transformer-based models, which suffer from the quadratic computational and memory costs of attention mechanisms. As SSMs are increasingly deployed in real-world applications, it is critical to assess their susceptibility to both software- and hardware-level threats to ensure secure and reliable operation. Among such threats, hardware-induced bit-flip attacks (BFAs) pose a particularly severe risk by corrupting model parameters through memory faults, thereby undermining model accuracy and functional integrity. To investigate this vulnerability, we introduce RAMBO, the first BFA framework specifically designed to target Mamba-based architectures. Through experiments on the Mamba-1.4b model with the LAMBADA benchmark—a cloze-style word-prediction task—we demonstrate that flipping merely a single critical bit can catastrophically reduce accuracy from $74.64\%$ to $0\%$ and increase perplexity from 18.94 to $3.75 \times 10^{6}$ . These results reveal the pronounced fragility of SSMs to adversarial perturbations. The framework is open-sourced at https://anonymous.4open.science/r/RAMBO-DA22.

Keywords: State-space models, Mamba, Bit-flip Attack, Hardware faults, Adversarial robustness.

# 1 Introduction

The increasing popularity of natural language processing (NLP) models has fundamentally expanded the capabilities of Artificial Intelligence (AI), demonstrating remarkable proficiency in generating human-like text, interpreting nuanced context, and executing complex reasoning tasks. These advancements have not only reshaped natural language processing but have also extended AI applications into diverse fields such as computer vision and scientific research, heralding a new era of AI-driven solutions. State-space models (SSMs), such as Mamba, have emerged as leading sequence-modeling architectures, offering linear-time scalability and strong performance on long-context tasks. Therefore, they have garnered attention as highly attractive alternatives to conventional transformer-based large language models due to their unique combination of efficiency, scalability, and representational strength. Unlike transformers, whose attention mechanism incurs quadratic computational and memory costs, SSMs operate in linear time, enabling fast processing of extremely long sequences.
Even with advances in memory technology, recent techniques allow for remote, non-physical bit-flip manipulations, thereby expanding the threat surface available to BFAs. Bit-flip attacks (BFAs) have been studied extensively in the context of conventional deep neural networks (DNNs). However, traditional BFA strategies often require iterative gradient recomputation after each individual bit-flip. While this is feasible for comparatively small models, it becomes computationally intractable as model size increases. Recent work has begun to expose severe vulnerabilities of transformer-based LLMs to BFAs via alternative strategies, demonstrating that a single bit-flip can catastrophically degrade LLM performance. However, BFA implications for alternative sequence modeling paradigms - in particular structured state-space models (SSMs) - remain largely unexplored.

State-space architectures such as Mamba implement fundamentally different information-propagation and parameterization mechanisms, trading attention mechanisms for recurrence and selectivity. Due to these architectural distinctions, it is not appropriate to assume that attack strategies and defenses developed for transformers transfer directly to SSMs. Consequently, the absence of a systematic study of BFAs on SSMs constitutes a substantive gap in the current AI hardware robustness and security literature. To address this gap, we propose the framework "RAMBO" - a first-of-its-kind SSM-aware BFA pipeline. The primary contributions of the paper are as follows:

- We identify and formalize a previously unexplored vulnerability of state-space models (e.g. Mamba) to bit-flip attacks, and propose RAMBO, a first-of-its-kind SSM-aware attack approach, bridging the gap between BFA research and structured sequence models.
- RAMBO leverages the structural properties of Mamba-style SSM layers to prioritize critical parameter regions, adapts gradient-estimation and search heuristics, and uses the resulting perturbation effects to identify minimal bit-flip sets that maximally disrupt model behavior.
- RAMBO uncovers a significant vulnerability of Mamba models. A single bit-flip ($7.14 \times 10^{-10}\%$ of all bits) in Mamba-1.4b can reduce the LAMBADA word prediction accuracy from $74.64\%$ to $0\%$ , while increasing WikiText perplexity from 18.94 to $3.75 \times 10^{6}$ .

The rest of the paper is organized as follows: Section 2 provides relevant background information. The threat model is discussed in Section 3 and the proposed methodology is detailed in Section 4. Section 5 outlines the experimental setup and discusses the results. The concluding remarks are offered in Section 6.

# 2 Background

State-space models such as Mamba are architecturally composed of stacked Mamba blocks, each integrating selective state-space layers and projection components that collectively enable the model's long-range sequence modeling capabilities.

# 2.1 State-Space Dynamics

A Mamba block operates as a parameter-efficient state-space model (SSM) augmented with convolution, projection, and normalization layers.
At each time-step $t$ , the latent state $h_t \in \mathbb{R}^n$ evolves according to the recurrence

$$
h_{t+1} = \left(I + \Delta_t A\right) h_t + \Delta_t B_t x_t, \tag{1}
$$

and produces an output

$$
y_t = C_t h_t + D x_t, \tag{2}
$$

where:

- $A \in \mathbb{R}^{n \times n}$ is the state-transition matrix, parameterized for stability and fixed after training,
- $B_{t}, C_{t} \in \mathbb{R}^{n}$ are input-dependent write and read vectors,
- $D \in \mathbb{R}$ is a fixed skip coefficient,
- $\Delta_t \in \mathbb{R}_+$ is a token- and channel-dependent step size controlling the effective timescale,
- $x_{t} \in \mathbb{R}$ is the projected token input.

This recurrence is analogous to a recurrent neural network (RNN), but with structured dynamics that can be parallelized efficiently via diagonalization and convolutional scan operations.

# 2.2 Parameterization via Projection Layers

Let $m$ denote the model embedding dimension, $n$ the latent state dimension per channel, and $r$ the low-rank dimension used for step-size generation. The projection pipeline operates as follows:

$$
u_t = W_{\mathrm{in}} x_t \in \mathbb{R}^c, \quad c \approx m, \tag{3}
$$

$$
p_t = W_{\mathrm{proj}} u_t \in \mathbb{R}^{2n+r}, \tag{4}
$$

$$
p_t = \left(B_t^{\mathrm{raw}}, C_t^{\mathrm{raw}}, \Delta_t^{\mathrm{low}}\right), \tag{5}
$$

$$
\Delta_t = \operatorname{softplus}\left(W_{\Delta} \Delta_t^{\mathrm{low}}\right) \in \mathbb{R}^c. \tag{6}
$$

Here:

- $W_{\mathrm{in}} \in \mathbb{R}^{m \times c}$ projects model embeddings into an intermediate space,
- $W_{\mathrm{proj}} \in \mathbb{R}^{c \times (2n + r)}$ generates raw seeds for $B_t, C_t$ , and the low-rank step-size representation,
- $W_{\Delta} \in \mathbb{R}^{r \times c}$ expands the low-rank $\Delta_t^{\mathrm{low}}$ into a per-channel step-size vector.

Thus, although $A$ and $D$ are fixed parameters of the model, the effective dynamics are governed by input-dependent $B_{t}$ , $C_{t}$ , and $\Delta_{t}$ that vary across tokens. Therefore, these unique structural characteristics and projection parameterization of Mamba models must be considered in the development of efficient bit-flip attack strategies.
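To make this parameter surface concrete, the following minimal NumPy sketch steps a toy Mamba-style SSM through the recurrence of equations (1)-(2), with $B_t$, $C_t$, and $\Delta_t$ produced by a projection pipeline in the spirit of equations (4)-(6). It is an illustration only: the dimensions, random initialization, the omission of the $W_{\mathrm{in}}$ embedding projection, the collapse of $\Delta_t$ to a single scalar per step, and the shapes of `W_proj` and `W_dt` (chosen so the matrix-vector products type-check) are our assumptions, not the reference Mamba implementation.

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

rng = np.random.default_rng(0)
n, c, r = 16, 32, 4          # state dim, intermediate dim, low-rank dim (illustrative)

# Static tensors (the fixed targets discussed later in Section 4.1)
A      = -np.abs(rng.normal(size=(n, n))) * np.eye(n)   # stable, diagonal-style transition
D      = 0.5                                            # skip coefficient
W_proj = rng.normal(size=(2 * n + r, c)) * 0.1          # produces raw B, C and low-rank delta seed
W_dt   = rng.normal(size=(c, r)) * 0.1                  # expands low-rank delta to per-channel steps

def step(h, u):
    """One recurrence step for a toy channel given intermediate features u."""
    p = W_proj @ u                                       # eq. (4): raw seeds
    B_t, C_t, d_low = p[:n], p[n:2 * n], p[2 * n:]       # eq. (5): split the seeds
    delta = softplus(W_dt @ d_low)                       # eq. (6): per-channel step sizes
    dt = float(delta.mean())                             # simplification: one scalar step size
    x_t = float(u.mean())                                # simplification: scalar token input
    h_next = (np.eye(n) + dt * A) @ h + dt * B_t * x_t   # eq. (1)
    y_t = float(C_t @ h_next) + D * x_t                  # eq. (2)
    return h_next, y_t

h = np.zeros(n)
for _ in range(5):                                       # a few tokens of a toy sequence
    h, y = step(h, rng.normal(size=c))
```

Every token reuses the static tensors `A`, `D`, `W_proj`, and `W_dt`, so a memory fault in any of them is replayed at every time step of every sequence, which is precisely what makes these parameters attractive bit-flip targets.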
Figure 1. Bit-flip attack on state space models like Mamba.

# 3 Threat Model

The proliferation of large-scale NLP models has heightened concerns over security risks, including backdoor and inference-time attacks. Beyond these vectors, a more insidious threat arises from direct manipulation of model parameters. Under this threat model, an adversary with low-level memory access can alter stored weights of deployed models to induce malicious behavior. Hardware-based attacks, such as RowHammer and Laser Fault Injection, enable such bit-level perturbations to critical model parameters. These bit-flips inject critical errors into the model's computational flow, propagating and compounding across layers and blocks, ultimately producing erroneous outputs, as illustrated for SSMs such as Mamba in Figure 1. The risk is further amplified in Machine Learning as a Service (MLaaS) settings, where shared hardware resources can expose models to co-residency and cross-process vulnerabilities.

We categorize the threats into four levels based on the extent of knowledge an attacker can extract from the victim. A full-knowledge attack occurs when the attacker has complete information about the target model, training data and its defenses, enabling them to devise highly optimized attack strategies. In a white-box attack, attackers have full knowledge of the model parameters but lack direct memory access, training data, or its defenses, enabling them to execute fault-injection attacks such as RowHammer. A gray-box attack involves partial knowledge of the model, allowing adversaries to exploit known vulnerabilities based on available information. Finally, in a black-box attack, attackers have no direct access to the target model's architecture or parameters. Attackers can only observe model outputs to infer model properties or extract sensitive information.

In this work, we consider both white-box and gray-box threat models. The white-box setting applies because the targeted models are open-sourced, granting full access to their architecture and parameters. The gray-box setting reflects scenarios in which the adversary has only partial visibility into model internals. Within this setting, we focus on untargeted attacks, which aim to induce indiscriminate degradation in overall model performance rather than force specific outputs. Such attacks are particularly insidious, as they compromise accuracy across a broad range of inputs while avoiding distinct failure patterns, making them more challenging to detect and defend against than targeted attacks. Therefore, RAMBO strategically employs an untargeted attack to maximize disruption and degrade model performance. This attack also fundamentally differs from denial-of-service attacks, which aim to overwhelm system resources to render services unavailable.

# 4 Proposed RAMBO Methodology

In this section, we present a theoretical sensitivity analysis followed by a detailed description of the proposed attack framework.

# 4.1 Theoretical Vulnerability Analysis

Here, we present a parameter-ranking procedure to identify plausible targets for the RAMBO attack. We rank the SSM parameters based on their importance as follows:

- Transition backbone $A$: This component determines how information evolves over time within each channel. Because $A$ is applied at every timestep and across all layers, its parameters strongly influence the model's stability and its ability to capture long-range dependencies.
- Projection seeds: These seeds generate the raw projection vectors $B_{\mathrm{raw}}$ , $C_{\mathrm{raw}}$ , and the low-rank update $\Delta_{\mathrm{lowrank}}$ . If the seeds carry insufficient information, the model cannot effectively adapt to the input context.
- Seed expansion parameters: These parameters convert the low-rank $\Delta$ seed into a detailed per-channel timescale vector. Poor calibration can produce unusable timescales—values that are too small prevent meaningful updates, while values that are too large destabilize the model.
- Read/write vectors $B_{t}$ and $C_t$ : Derived from the seeds, these vectors govern how new information is written into the model's state and how existing information is retrieved.

From the ranking above, the $A$ modules and the projection seeds emerge as the most critical components. However, projection seeds are input-dependent and incur substantial runtime overhead to probe and identify, making them impractical targets for real-time attacks. Consequently, it is more practical to target fixed model parameters such as the $A$ and $D$ modules and the projection layers, which are static and can be reliably identified and exploited.
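Before describing the attack framework, it helps to make the fault model itself concrete. The sketch below, our own illustration rather than part of RAMBO's released code, flips a single bit of an FP16 weight tensor by reinterpreting its storage as 16-bit integers; flipping a high exponent bit turns a small, well-behaved parameter into an enormous one. The helper name, example values, and chosen bit index are illustrative assumptions.

```python
import numpy as np

def flip_bit_fp16(weights: np.ndarray, index: int, pos: int) -> np.ndarray:
    """Return a copy of an FP16 weight tensor with bit `pos` (0 = LSB) of
    element `index` inverted, mimicking a memory fault such as RowHammer."""
    corrupted = weights.copy()
    bits = corrupted.view(np.uint16)      # reinterpret the same bytes as unsigned integers
    bits[index] ^= np.uint16(1 << pos)    # XOR toggles exactly one bit in place
    return corrupted

w = np.array([-0.0125, 0.031, -0.27], dtype=np.float16)   # a few small FP16 parameters (toy values)
w_attacked = flip_bit_fp16(w, index=0, pos=14)            # bit 14 = most significant exponent bit

print(w[0], "->", w_attacked[0])   # roughly -0.0125 -> -819: one fault, ~65,000x larger magnitude
```

A single such fault in a static, heavily reused tensor is therefore enough to derail the recurrence at every time step, which motivates the parameter ranking above.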
# 4.2 Attack Framework

In this section, we delineate our attack methodology for compromising Mamba model performance. We employ the standard cross-entropy loss, computed between the output logits of the model and the corresponding ground-truth token IDs, as a measure of model performance. A lower cross-entropy loss indicates that the model assigns higher likelihood to the correct tokens and thus performs better. Our objective is to identify the most critical subset of parameter bits such that, when flipped, they cause a substantial increase in the cross-entropy loss. This increase directly translates into severe degradation of model performance, thereby highlighting the vulnerability of the model to targeted bit-flip manipulations.

4.2.1 Proxy for Parameter Sensitivity. For performing BFA, it is essential to analyze model parameter sensitivity profiles independently of robustness assumptions. In particular, parameters with larger gradients or higher magnitudes may exhibit amplified sensitivity, whereby perturbations yield disproportionately large effects on the output. Therefore, inspired by prior BFA work, we adopt a hybrid sensitivity metric that considers both magnitude and gradient influences to capture the sensitivity profile holistically. It is expressed as:

$$
\mathbf{S} = \alpha \cdot |\nabla \mathbf{W}| + (1 - \alpha) \cdot |\mathbf{W}|, \tag{7}
$$

where $\nabla \mathbf{W}$ denotes the parameter gradients, $\mathbf{W}$ the parameter magnitudes, and $\alpha$ is a tunable parameter balancing the importance of magnitude and gradient.

Algorithm 1 Layer Ranking & Weight Subset Selection
Input: Model parameters $\mathbf{W}$ , gradients $\nabla \mathbf{W}$ , trade-off $\alpha$ , subsampling rate $r$ , bit position $pos$ , and number of top layers $n$
Output: Sensitivity scores $\mathcal{L}_{\mathrm{sens}}$ , critical weight indices $\mathcal{I}_{init}$
1: Initialize sensitivity list $\mathcal{L}_{\mathrm{sens}} \gets [\,]$
2: for all layers $l \in L$ do
3: $k \gets \lfloor r \times |\mathbf{W}^{(l)}| / 100 \rfloor$
4: $\mathbf{S}^{(l)} \gets \alpha |\nabla \mathbf{W}^{(l)}| + (1 - \alpha)|\mathbf{W}^{(l)}|$
5: $I_{hyb}^{(l)} \gets \mathrm{TopKIndex}(\mathbf{S}^{(l)}, k)$
6: $\mathcal{L}^{(l)} \gets \mathrm{BFlipLoss}(\mathbf{W}^{(l)}, pos, I_{hyb}^{(l)})$
7: Append $[\mathcal{L}^{(l)}, l]$ to $\mathcal{L}_{\mathrm{sens}}$
8: end for
9: $\mathcal{L}_{\mathrm{sens}} \gets \mathrm{SORT}(\mathcal{L}_{\mathrm{sens}})$
10: $\mathcal{L}_{top} \gets \mathrm{TopN}(\mathcal{L}_{\mathrm{sens}}, n)$
11: $\mathcal{I}_{init} \gets [l, I_{hyb}^{(l)}]$ extracted from $\mathcal{L}_{top}$

4.2.2 Layer Sensitivity Analysis. Determining critical parameters in Mamba models is complex due to the size of their parameter space. However, the identification of a sensitive layer is more manageable due to the reduced number of layers compared to the total number of parameters. To quantify layer sensitivity, we sample the top-$k$ candidate bit-flips from each layer at a rate $r$ , guided by the hybrid sensitivity score $\mathbf{S}$ . These selected bit-flips are applied, and the resulting model loss $\mathcal{L}$ is measured to assess the layer's sensitivity. $k$ is computed as:

$$
k = \operatorname{cardinality}\left(\mathbf{W}^{(l)}\right) \times \frac{r}{100}. \tag{8}
$$

Here, $\mathbf{W}^{(l)}$ signifies the parameters within layer $l$ .
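The following is a minimal sketch of the per-layer scoring in equations (7) and (8), corresponding to lines 3-5 of Algorithm 1: it blends gradient and weight magnitudes with the trade-off $\alpha$ and keeps the top-$k$ candidates of each layer. Gradients are assumed to be precomputed and passed in as plain arrays, and the helper names `hybrid_score` and `topk_indices` are ours.

```python
import numpy as np

def hybrid_score(w: np.ndarray, grad: np.ndarray, alpha: float) -> np.ndarray:
    """Eq. (7): S = alpha * |grad| + (1 - alpha) * |W|.
    alpha = 0 gives the gradient-free variant discussed in Section 5.7."""
    return alpha * np.abs(grad) + (1.0 - alpha) * np.abs(w)

def topk_indices(scores: np.ndarray, rate_percent: float) -> np.ndarray:
    """Eq. (8): keep the top-k candidates of a layer, k = |W^(l)| * r / 100."""
    k = max(1, int(scores.size * rate_percent / 100.0))
    flat = scores.ravel()
    # argpartition finds the k largest entries without a full sort
    return np.argpartition(flat, -k)[-k:]

# toy layer: weights and (assumed precomputed) gradients of the same shape
rng = np.random.default_rng(0)
w_layer = rng.normal(size=4096).astype(np.float32)
g_layer = rng.normal(size=4096).astype(np.float32)

scores = hybrid_score(w_layer, g_layer, alpha=0.5)
candidates = topk_indices(scores, rate_percent=0.1)   # 0.1% of the layer, as used in Section 5.2.1
```

Flipping a chosen bit position in the selected candidates and re-measuring the loss (BFlipLoss in Algorithm 1) then yields the per-layer sensitivity $\mathcal{L}^{(l)}$ used for ranking.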
To rank layers, we first quantify each layer's parameter sensitivity using a scoring function (Eq. 7). The resulting scores are ranked in descending order to identify the top-$k$ most critical weights (Eq. 8). Bit-flip perturbations are then applied to the most significant bits of these weights to maximize deviation. The resulting loss, $\mathcal{L}^{(l)}$ , indicates each layer's sensitivity.

Algorithm 1 outlines a systematic procedure for layer sensitivity analysis and ranking. The algorithm begins by initializing an empty list, $\mathcal{L}_{\text{sens}}$ , to store layer sensitivity scores (line 1). It uses a function BFlipLoss to calculate the model loss $\mathcal{L}$ when weight perturbations are applied to a specified layer (line 6). BFlipLoss accepts the parameters $\mathbf{W}^{(l)}$ , the bit position $pos$ , and the perturbation indices $I$ as inputs, and returns the computed loss $\mathcal{L}$ . The process iterates over each model layer to evaluate its sensitivity to parameter faults (lines 2-8). For each layer $l$ , a hybrid sensitivity score $\mathbf{S}^{(l)}$ is computed using the weighted combination of parameter magnitudes and gradient magnitudes (Eq. 7, line 4). The TopKIndex function then selects the top-$k$ most sensitive weights, forming the index set $I_{\text{hyb}}^{(l)}$ (line 5). These indices, together with the layer weights $\mathbf{W}^{(l)}$ and the bit position $pos$ , are passed to the BFlipLoss function, which injects controlled bit-flips and recomputes the corresponding model loss. The losses are recorded in the sensitivity list $\mathcal{L}_{\text{sens}}$ (line 7). After processing all layers, the sensitivity list is sorted, and the top-$n$ most vulnerable layers are identified (lines 9-10). Finally, the corresponding weight indices from these layers are extracted to form the critical weight indices $\mathcal{I}_{init}$ , which, along with $\mathcal{L}_{\mathrm{sens}}$ , constitute the algorithm's output (line 11).

4.2.3 Critical Parameter Set Optimization. The initial set of critical parameters can be large and computationally prohibitive for an efficient BFA. Therefore, it becomes necessary to identify a minimal subset that still preserves the original attack effectiveness. Let the initial set of indices be denoted as $\mathcal{I}_{\mathrm{init}}$ , associated with a baseline attack loss $L_{\mathrm{orig}} = L(\mathcal{I}_{\mathrm{init}})$ . The optimization objective is to find the smallest subset $\mathcal{I} \subseteq \mathcal{I}_{\mathrm{init}}$ that maintains the attack performance within a small tolerance:

$$
\min_{\mathcal{I} \subseteq \mathcal{I}_{\text{init}}} |\mathcal{I}| \quad \text{s.t.} \quad L(\mathcal{I}) \geq L_{\text{orig}} - \varepsilon. \tag{9}
$$

This problem can be reformulated as a combinatorial optimization task over binary selection variables $z_i \in \{0, 1\}$ , where $z_i = 1$ indicates inclusion of the $i$-th parameter in the subset. The corresponding formulation becomes:

$$
\min_{z_i \in \{0, 1\}} \sum_i z_i \quad \text{s.t.} \quad L(z) \geq L_{\text{orig}} - \varepsilon. \tag{10}
$$

Since this problem is NP-hard, a continuous relaxation can be applied by allowing $z_i \in [0, 1]$ and using a differentiable surrogate loss $\hat{L}(z)$ . The relaxed formulation is expressed as:

$$
\min_{z \in [0, 1]^n} \sum_i z_i + \lambda \cdot \max\left(0, L_{\text{orig}} - \hat{L}(z) - \varepsilon\right), \tag{11}
$$

where $\lambda$ is a regularization parameter that balances sparsity and loss preservation.
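Purely to make the relaxed objective of equation (11) concrete, the sketch below evaluates the sparsity-plus-penalty trade-off for a candidate relaxation $z \in [0,1]^n$. The surrogate loss is passed in as a callable because, as discussed next, this objective is not what RAMBO ultimately optimizes; the function name and the toy values are ours.

```python
import numpy as np
from typing import Callable

def relaxed_objective(z: np.ndarray,
                      surrogate_loss: Callable[[np.ndarray], float],
                      loss_orig: float,
                      eps: float,
                      lam: float) -> float:
    """Eq. (11): sum_i z_i + lambda * max(0, L_orig - L_hat(z) - eps)."""
    z = np.clip(z, 0.0, 1.0)                              # enforce the box relaxation z_i in [0, 1]
    penalty = max(0.0, loss_orig - surrogate_loss(z) - eps)
    return float(z.sum() + lam * penalty)

# toy usage with a dummy surrogate loss, starting from 9 half-selected candidates
value = relaxed_objective(np.full(9, 0.5), lambda z: 11.0,
                          loss_orig=12.0, eps=0.5, lam=10.0)
```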
Although this relaxation enables gradient-based optimization, the underlying loss landscape remains highly nonconvex and computationally expensive to evaluate in large-scale neural models. To address this issue, a randomized exclusionary heuristic is adopted, as described in Algorithm 2. The algorithm iteratively refines the subset by randomly excluding groups of indices and re-evaluating the loss. In each iteration, a candidate exclusion set $\Delta$ is selected, where $|\Delta|$ varies between 1 and $\lfloor |\mathcal{I}| / 2 \rfloor$ . The modified subset is tested as $\mathcal{I}' = \mathcal{I} \setminus \Delta$ , and the new loss $L(\mathcal{I}')$ is compared against the baseline. If the loss satisfies $L(\mathcal{I}') \geq L_{\mathrm{orig}} - \varepsilon$ , the exclusion is accepted, permanently removing those indices from the set. The process continues until no further exclusion satisfies the constraint or the maximum number of iterations is reached.

This exclusionary optimization approach provides a computationally efficient method for subset reduction. Although it does not guarantee global optimality, it achieves a significant reduction in the number of critical indices while maintaining nearly identical attack loss - offering a practical balance between optimization cost and attack efficacy.

Algorithm 2 presents the optimization procedure.

Algorithm 2 Exclusionary Weight Subset Optimization
Input: Model parameters $\mathcal{W}_{\text{orig}}$ , weight indices $\mathcal{I}_{init}$ , loss tolerance $\epsilon$ , maximum iterations $N_{max}$
Output: Reduced index set $\mathcal{I}_{red}$
1: $\mathcal{I}_{red} \gets \mathcal{I}_{init}$ , $\mathcal{P} \gets \{\}$
2: $\mathcal{L}_{\text{orig}} \gets \mathrm{BFlipLoss}(\mathcal{W}_{\text{orig}}, pos, \mathcal{I}_{init})$
3: improved ← True, $t \gets 0$
4: while improved and $t < N_{max}$ do
5: improved ← False, $t \gets t + 1$
6: for $i = 1$ to 100 do
7: Randomly exclude $n_{\text{exc}} \in [1, |\mathcal{I}|/2]$ indices
8: Form $\mathcal{I}_{test} = \mathcal{I} \setminus \mathcal{I}_{\text{exc}}$
9: $\mathcal{L}_{\text{test}} \gets \mathrm{BFlipLoss}(\mathcal{W}_{\text{orig}}, pos, \mathcal{I}_{test})$
10: if $\mathcal{L}_{\text{test}} \geq \mathcal{L}_{\text{orig}} - \epsilon$ then
11: $\mathcal{I} \gets \mathcal{I}_{test}$ , improved ← True
12: break
13: end if
14: end for
15: Record progress $(t, |\mathcal{I}|, \mathcal{L}_{\text{test}})$
16: end while
17: $\mathcal{I}_{red} \gets \mathcal{I}$ , $\mathcal{P} \gets$ recorded progress
18: return $\{\mathcal{I}_{red}, \mathcal{P}\}$

The algorithm begins by initializing the reduced index set $\mathcal{I}_{\mathrm{red}}$ with the initial subset $\mathcal{I}_{\mathrm{init}}$ and the structures for tracking progress $\mathcal{P}$ (lines 1-2). The baseline loss $\mathcal{L}_{\mathrm{orig}}$ is computed using the BFlipLoss function (line 2). At each iteration, random exclusion patterns are explored to identify indices that can be safely removed without violating the predefined loss tolerance $\epsilon$ (lines 4-6). Specifically, a random subset of indices $\mathcal{I}_{\mathrm{exc}} \subseteq \mathcal{I}$ of size up to half of the current subset is excluded to form a test subset $\mathcal{I}_{\mathrm{test}} = \mathcal{I} \setminus \mathcal{I}_{\mathrm{exc}}$ (lines 7-8). The resulting model loss $\mathcal{L}_{\mathrm{test}}$ is then evaluated (line 9). If the condition $\mathcal{L}_{\mathrm{test}} \geq \mathcal{L}_{\mathrm{orig}} - \epsilon$ holds, indicating negligible degradation, the exclusion is accepted and $\mathcal{I}$ is updated accordingly (lines 10-12). The process continues until no further improvement is observed or the maximum number of iterations $N_{\mathrm{max}}$ is reached (lines 4-16). Finally, the optimized reduced index set $\mathcal{I}_{\mathrm{red}}$ and the recorded progress $\mathcal{P}$ are obtained (lines 17-18).
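The sketch below mirrors the randomized exclusion loop of Algorithm 2 under simplifying assumptions: `bflip_loss` is a stand-in callable for BFlipLoss (it must apply the bit-flips at the given indices and return the resulting model loss), the 100-trial inner loop and acceptance rule follow lines 4-16, and the default iteration cap is our choice since the paper does not fix $N_{max}$.

```python
import random
from typing import Callable, List, Tuple

def exclusionary_reduce(idx_init: List[int],
                        bflip_loss: Callable[[List[int]], float],
                        eps: float,
                        n_max: int = 50,
                        trials: int = 100,
                        seed: int = 0) -> Tuple[List[int], List[Tuple[int, int, float]]]:
    """Randomized exclusionary subset reduction (Algorithm 2, sketched)."""
    rng = random.Random(seed)
    idx = list(idx_init)
    loss_orig = bflip_loss(idx)                 # baseline attack loss L_orig
    progress = []
    improved, t = True, 0
    while improved and t < n_max and len(idx) > 1:
        improved, t = False, t + 1
        loss_test = loss_orig
        for _ in range(trials):
            n_exc = rng.randint(1, max(1, len(idx) // 2))
            excluded = set(rng.sample(idx, n_exc))
            idx_test = [i for i in idx if i not in excluded]
            loss_test = bflip_loss(idx_test)
            if loss_test >= loss_orig - eps:    # negligible degradation: accept the exclusion
                idx = idx_test
                improved = True
                break
        progress.append((t, len(idx), loss_test))
    return idx, progress
```

In RAMBO's experiments this style of reduction shrinks the 9-bit initial subset for Mamba-1.4b down to a single critical bit (Figure 4a).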
# 5 Evaluation Results

# 5.1 Experimental Setup

We evaluated RAMBO on a diverse set of models, including Mamba and Mamba2, ranging from 370 million to 2.8 billion parameters. We further assess RAMBO on quantized models such as Quamba-1.4b-w8a8 (8-bit weights and 8-bit activations), Quamba-1.4b-w4a16, Quamba-2.8b-w8a8, and Quamba-2.8b-w4a16 to evaluate its effectiveness under low-precision settings. Model performance was assessed using standard benchmarks, such as the tasks from the Language Model Evaluation Harness, including ARC-Easy, HellaSwag, PIQA, and Winogrande, which probe reasoning and generalization across a variety of domains. Additionally, we tested on the LAMBADA dataset, a cloze-style word-prediction natural language understanding task. We extended our evaluation to include Vision-Mamba models, such as MambaVision-S-1K and MambaVision-L-21K in half-precision (16-bit floating-point, FP16) format trained on ImageNet data, to showcase the multimodal effectiveness of RAMBO. Furthermore, we assess RAMBO on Hymba, a hybrid-head parallel architecture that combines transformer attention mechanisms and SSMs, to show the versatility of the proposed framework.

We report both perplexity (on WikiText) and accuracy as evaluation metrics. Perplexity, defined as the exponential of the average negative log-likelihood over a sequence, measures predictive capability, whereas accuracy quantifies the proportion of correct predictions.
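Since both reported metrics derive from token-level predictions, a small sketch of how they could be computed from per-token negative log-likelihoods and predicted tokens follows; actual evaluation-harness implementations differ in tokenization and aggregation details.

```python
import numpy as np

def perplexity(nll_per_token: np.ndarray) -> float:
    """Perplexity = exp(average negative log-likelihood over the sequence)."""
    return float(np.exp(np.mean(nll_per_token)))

def accuracy(predictions: np.ndarray, targets: np.ndarray) -> float:
    """Fraction of exactly correct predictions (e.g., LAMBADA final-word accuracy)."""
    return float(np.mean(predictions == targets))
```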
Table 1. RAMBO evaluation on various models and datasets (WikiText perplexity and % accuracy, before / after attack).

| Model | # Bit-Flips | WikiText Perplexity | ARC-Easy | LAMBADA | HellaSwag | PIQA | Winogrande | ImageNet-1K |
|---|---|---|---|---|---|---|---|---|
| Mamba-370m | 4 | 24.87 / $1.26 \times 10^{11}$ | 53.125% / 28.67% | 67.86% / 5.33% | 43.04% / 22% | 67.03% / 44.66% | 52.72% / 46% | NA |
| Mamba2-370m | 4 | 26.69 / $2.48 \times 10^{4}$ | 46.975% / 12.67% | 67.98% / 17.33% | 43.08% / 8% | 68.77% / 24% | 51.46% / 7.33% | NA |
| Mamba-1.4b | 1 | 18.94 / $3.75 \times 10^{6}$ | 62.5% / 14% | 74.64% / 0% | 55.76% / 0% | 73.34% / 18% | 54.54% / 46% | NA |
| Quamba-1.4b-w8a8 | 4 | 22.193 / 50.08 | 63.15% / 24% | 54.67% / 46.67% | 49.39% / 22.66% | 71.76% / 49.33% | 53.51% / 27.33% | NA |
| Quamba-1.4b-w4a16 | 1 | 23.62 / 606.673 | 61.75% / 0% | 73.39% / 57.33% | 55.17% / 0% | 72.47% / 6% | 53.57% / 6.67% | NA |
| Mamba2-1.3b | 5 | 18.76 / $2.41 \times 10^{6}$ | 62.5% / 0% | 75.39% / 30.67% | 56.69% / 0% | 72.25% / 0% | 49.33% / 4.67% | NA |
| Mamba-2.8b | 1 | 16.31 / $1.23 \times 10^{5}$ | 65.625% / 26.67% | 78.34% / 4% | 62.59% / 26% | 74.48% / 52% | 56.99% / 44.67% | NA |
| Quamba-2.8b-w8a8 | 1 | 19.53 / $1.26 \times 10^{10}$ | 65.26% / 0% | 77.55% / 22% | 62.06% / 0% | 73.99% / 0% | 56.59% / 0% | NA |
| Quamba-2.8b-w4a16 | 1 | 9.91 / $7.65 \times 10^{5}$ | 68.7% / 60% | 76.03% / 0% | 61.66% / 0% | 73.29% / 25.33% | 58.01% / 46% | NA |
| Mamba2-2.7b | 9 | 16.59 / $2.48 \times 10^{4}$ | 66.41% / 2% | 77.78% / 15.33% | 62.69% / 1.33% | 74.86% / 17.33% | 56.59% / 5.33% | NA |
| Hymba-1.5b | 2 | 14.40 / $1.762 \times 10^{4}$ | 76.94% / 0% | 82.20% / 30% | 53.55% / 0.67% | 77.31% / 2.67% | 66.61% / 10.67% | NA |
| MambaVision-S-1K 50m | 22 | NA | NA | NA | NA | NA | NA | 83.2% / 46.1% |
| MambaVision-L-21K 200m | 24 | NA | NA | NA | NA | NA | NA | 86.1% / 47.2% |

Figure 2. Layer type-based sensitivity analysis: (a) loss distribution, (b) bit-flip efficiency in the Mamba-1.4b FP16 model.

# 5.2 Preliminary Analysis

5.2.1 Layer Sensitivity Analysis. We perform bit-flip injections at a fixed rate to quantify how different layer types affect model degradation (loss increase and consequent accuracy drop). Figure 2 summarizes these results. Figure 2a reports the absolute increase in model loss caused by bit-flips per layer type in the Mamba-1.4b model upon the introduction of $0.1\%$ bit-flips in ranked critical parameters (see Section 4.2.1). Larger losses indicate greater criticality. It is observed that the layer type $A_{\log}$ is more critical compared to other layer types, as bit-flips in these layers result in higher model losses and large accuracy degradation. This can be ascribed to the fact that $A_{\log}$ directly parameterizes the state-transition matrix $A$ through an exponential mapping, so perturbations in $A_{\log}$ are amplified in $A$ , leading to large model loss, as discussed in Section 4.1.

Layer sensitivity must also account for bit-flip efficiency, as layer sizes vary significantly and raw loss alone is not a sufficient indicator. We define bit-flip efficiency as the loss increase per flipped bit (Figure 2b); higher values denote greater impact per perturbation. Layers exhibiting both high raw loss upon bit-flips and high efficiency are the most favorable BFA targets. Consistent with the theoretical analysis in Section 4.1, the $A_{\log}$ layers in Mamba/Quamba models, the $D$ and input/output projection layers in Mamba2, and the Conv1d layers in Hymba and MambaVision models show the highest criticality and efficiency, making them the most sensitive layer types.

5.2.2 Weight-bit Subset Selection. We select the most-sensitive layer type identified in the preceding analysis (e.g., the $A_{\log}$ layer in Mamba-1.4b) and determine a critical subset of parameters whose bit-flip perturbations yield a substantially high model loss.

Figure 3. Critical (a) layer, and (b) weight subset selection in the Mamba-1.4b FP16 model.

Model loss (y-axis), as depicted in Figure 3a, increases progressively with fixed $0.1\%$ bit-flips in the $A_{\log}$ layers of the Mamba-1.4b model, from the initial to the final Mamba blocks (x-axis). This indicates an increase in layer criticality from the initial to the final blocks. Therefore, we target the $A_{\log}$ layer in the final Mamba block for attack. Figure 3b shows loss (y-axis) as a function of injected bit-flips (x-axis): the loss surpasses our predefined threshold of 10 after six bit-flips. We conservatively use the parameter set identified at an operating point of 9 bit-flips (loss $\geq 21$) as the initial critical weight subset for the subsequent exclusionary optimization.

5.2.3 Weight-bit Subset Optimization.
In this section, we refine the previously identified critical weight-bit subset to isolate the most critical bits. As shown in Figure 4a, the bit-flip attack optimization in the Mamba-1.4b FP16 model reduces the initial 9-bit subset to a single bit, indicated by the blue line, while maintaining a model loss (red line) above the loss threshold of 10 (green dotted line). This demonstrates the necessity and effectiveness of the optimization process in identifying the most critical bits.

# 5.3 Results

This segment presents the degradation of model performance across benchmarks achieved by RAMBO (Table 1). The attributes of each model, specifically the model name and parameter count, are presented in the first column. The second column presents the count of bit-flips injected to induce performance degradation. Subsequent columns furnish benchmark results, before and after the attack, for each task. All models demonstrated strong baseline performance, with LAMBADA accuracies up to $82.20\%$ , and WikiText perplexities between 16.31 and 26.69. Similarly, high accuracies were observed across reasoning and commonsense benchmarks such as ARC-Easy (53.12-76.94%), HellaSwag (43.04-62.69%), PIQA (67.03-77.31%), and Winogrande (51.46-66.61%). "NA" indicates that the benchmark is incompatible and does not apply to the model.

Following the injection of only a few targeted bit-flips, RAMBO produced severe accuracy degradation across all tasks. In the Mamba-1.4b model, a single bit-flip caused complete collapse on LAMBADA (accuracy dropped from $74.64\%$ to $0\%$ ) and raised perplexity from 18.94 to $3.75 \times 10^{6}$ , while also reducing ARC-Easy accuracy from $62.5\%$ to $14\%$ . The Mamba-2.8b model exhibited similar vulnerability, where one bit-flip reduced LAMBADA accuracy from $78.34\%$ to $4\%$ and increased perplexity from 16.31 to $1.23 \times 10^{5}$ . Even smaller models, such as Mamba-370m, experienced notable drops, with four bit-flips lowering ARC-Easy accuracy from $53.125\%$ to $28.67\%$ and increasing perplexity from 24.87 to $1.26 \times 10^{11}$ . Similarly, Mamba2 models are also highly sensitive: Mamba2-1.3b required 5 bit-flips to collapse ARC-Easy accuracy to $0\%$ , while Mamba2-2.7b requires 32 bit-flips to induce a similar attack impact. Overall, these results reveal that Mamba architectures exhibit extreme bit-level fragility, where even a single bit-flip in critical parameters can induce catastrophic failures in model performance.

# 5.4 Attack Efficacy across Quantization Levels

Our evaluation has, up to this point, concentrated on RAMBO using half-precision FP16 models. However, a model's vulnerability to bit-flip attacks is largely determined by quantization precision. Since diverse numerical formats display varying sensitivities to perturbations, the effect of a bit-flip, along with the quantity of flips needed to cause model disruption, can differ significantly across quantization formats. Furthermore, previous BFA defenses commonly suggest quantization as a mitigation strategy, necessitating the evaluation of RAMBO under quantized conditions. To this end, we evaluate RAMBO on INT4 (w4a16) and INT8 (w8a8) variants of Mamba-1.4b and Mamba-2.8b. Our findings show that these quantized models remain highly vulnerable to bit-flip attacks. Notably, a single bit-flip is sufficient to reduce the accuracy of the Quamba-2.8b-w4a16 model on PIQA from $73.29\%$ to $25.33\%$ (see Table 1), demonstrating that RAMBO remains extremely effective even under aggressive quantization.
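To see why low-precision formats remain exposed, the sketch below flips the most significant bit of a signed INT8 weight stored in two's complement, shifting the decoded value by 128 quantization levels. The per-tensor scale and the stored integers are illustrative assumptions, not values from Quamba.

```python
import numpy as np

def flip_bit_int8(q_weights: np.ndarray, index: int, pos: int) -> np.ndarray:
    """Flip bit `pos` of one element in a signed INT8 weight tensor
    (w8a8/w4a16 weights are stored as small integers plus a scale)."""
    corrupted = q_weights.copy()
    bits = corrupted.view(np.uint8)              # reinterpret the two's-complement bytes
    bits[index] ^= np.uint8(1 << pos)
    return corrupted

scale = 0.02                                     # per-tensor dequantization scale (illustrative)
q = np.array([3, -7, 12], dtype=np.int8)         # stored integer weights (toy values)
q_attacked = flip_bit_int8(q, index=0, pos=7)    # toggle the sign/MSB bit

print(q[0] * scale, "->", q_attacked[0] * scale)   # 0.06 -> -2.5: a 128-level jump from one fault
```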
# 5.5 Attack Efficacy across Task Modality

To assess RAMBO's effectiveness beyond NLP settings, we applied it to FP16 MambaVision models with 50M-200M parameters trained on ImageNet.

Figure 4. (a) Weight-bit set optimization in the Mamba-1.4b FP16 model, and (b) optimization performance comparison with GenBFA of the AttentionBreaker framework on MambaVision-S-1K.

Our results show that RAMBO remains highly effective even in vision tasks, successfully identifying critical bit positions whose perturbation leads to substantial performance degradation. For example, in the MambaVision-L-21K model, RAMBO identified 24 critical bits that, when flipped, reduced the model's accuracy from $86.1\%$ to $47.2\%$ (see Table 1). This demonstrates that RAMBO generalizes effectively beyond NLP and remains a potent bit-flip attack methodology for Mamba-based architectures on vision tasks.

# 5.6 Attack Transferability across Benchmarks

This experiment evaluates the cross-task transferability of bit-flip attack effects. Specifically, we examine whether an attack designed to degrade performance on Task A also induces comparable degradation on Task B. Such transferability would indicate a fundamental architectural vulnerability that is independent of the specific task. Using the Mamba-1.4b FP16 model, we first execute a single bit-flip attack on the ARC-Easy benchmark, reducing accuracy from $62.5\%$ to $14\%$ . We then measure the resulting impact on other language benchmarks, including HellaSwag, PIQA, and Winogrande. As summarized in Table 1, the attack exhibits substantial transferability: for example, HellaSwag accuracy drops from $55.76\%$ to $0\%$ , and PIQA accuracy declines from $73.34\%$ to $18\%$ . These results demonstrate that the effects of the attack propagate broadly across tasks and domains, underscoring a fundamental and widespread vulnerability in the model architecture.

# 5.7 Gradient-free Attack Results

We evaluate RAMBO in a fully gradient-free setting, where gradient information is unavailable throughout the attack. Parameter sensitivity scores are derived solely from weight magnitudes by setting $\alpha = 0$ in Equation 7. Under this configuration, only a single bit-flip on the 8-bit Quamba-2.8b-w8a8 model reduces ARC-Easy accuracy from $65.26\%$ to $0\%$ , demonstrating that magnitude-based scoring remains highly effective in identifying critical bits. This flexibility allows adversaries to select either gradient-free or gradient-based modes depending on their goals and computational constraints. Notably, many existing BFA defenses rely on restricting or obfuscating gradient access. RAMBO circumvents such defenses entirely by operating on magnitude-based importance alone. This adaptability underscores RAMBO's robustness in gradient-restricted environments and establishes it as a resilient BFA framework.

# 5.8 Gray-box Attack Results

To evaluate RAMBO's effectiveness under gray-box scenarios with partial access to model parameters, we simulate such a condition by restricting access to only a subset of model layers, specifically the final two Mamba blocks in the Mamba-1.4b model. Even with this limited access, RAMBO remains highly potent. On the ARC-Easy dataset, flipping only a single bit in this partially visible Mamba-1.4b model is sufficient to reduce accuracy from $62.5\%$ to $15.2\%$ , demonstrating the strength of our approach under constrained access.
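A small sketch of how these two restricted settings could be expressed on top of the earlier scoring helper: with $\alpha = 0$, Eq. (7) collapses to $\mathbf{S} = |\mathbf{W}|$ and needs no gradients at all, and a gray-box run simply filters the candidate parameters to the layers the attacker can see. The name-matching rule and parameter names below are hypothetical, not taken from the Mamba codebase.

```python
import numpy as np

def magnitude_score(w: np.ndarray) -> np.ndarray:
    """Gradient-free scoring (Section 5.7): alpha = 0 reduces Eq. (7) to |W|."""
    return np.abs(w)

def visible_layers(named_params: dict, visible_prefixes: tuple) -> dict:
    """Gray-box restriction (Section 5.8): keep only parameters whose
    (hypothetical) names start with a visible-layer prefix."""
    return {name: w for name, w in named_params.items()
            if name.startswith(visible_prefixes)}

# toy, made-up parameter names standing in for the final two Mamba blocks
params = {"blocks.0.A_log": np.ones(4),
          "blocks.46.A_log": np.ones(4),
          "blocks.47.A_log": np.ones(4)}
graybox = visible_layers(params, ("blocks.46.", "blocks.47."))
scores = {name: magnitude_score(w) for name, w in graybox.items()}
```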
# 5.9 Comparison with State-of-the-art LLM Bit-flip Attacks

Although RAMBO is the first SSM-aware BFA and thus lacks a direct SSM-specific baseline, we assess how its exclusionary optimization strategy compares with LLM-based BFAs. To this end, we evaluate it against the GenBFA optimization employed in the AttentionBreaker framework. As shown in Figure 4b, we observe that both methods identify the same 22 critical bits in MambaVision-S-1K, but RAMBO converges significantly faster, within 26 iterations (green line) compared to GenBFA's 70 (blue line), demonstrating superior efficiency despite its simpler design. Prior works, such as AttentionBreaker and SBFA, show that single or few bit-flips can collapse models with billions of parameters. Similarly, RAMBO achieves catastrophic degradation with just one bit-flip in Mamba-1.4b (FP16) and Quamba-2.8b (INT4). Even the hybrid Hymba architecture is observed to be extremely vulnerable, and its performance collapses with merely two bit-flips. These results highlight the severe security risk posed by bit-flip attacks in both transformer- and state-space-based models.

# 6 Conclusion

This research investigates the vulnerability of state-space-based models such as Mamba and presents RAMBO, a novel BFA framework targeting these models. RAMBO introduces a sensitivity analysis tailored to state-space model architectures and uses it to locate a minimal set of critical bits whose corruption compromises model performance. Specifically, perturbing merely one bit ($7.14 \times 10^{-10}\%$ of all bits) in the Mamba-1.4b model causes LAMBADA prediction accuracy to drop from $74.64\%$ to $0\%$ . The results highlight RAMBO's utility in demonstrating Mamba's vulnerability to such adversarial interventions.

# 7 Ethical Considerations

This study aims to advance research on the safety and robustness of NLP models, with a particular focus on state-space architectures such as Mamba. It is imperative that the hardware fault vulnerabilities and attack methodologies presented here are used solely for constructive purposes, and not for malicious exploitation.
{"title": "RAMBO: Reliability Analysis for Mamba through Bit-flip attack Optimization", "raw_content": "# RAMBO: Reliability Analysis for Mamba through Bit-flip attack Optimization\n\nSanjay Das $^{1*}$ , Swastik Bhattacharya $^{1}$ , Shamik Kundu $^{2}$ , Arnab Raha $^{2}$ , Souvik Kundu $^{2}$ , and Kanad Basu $^{3}$\n\n<sup>1</sup>University of Texas at Dallas, USA; <sup>2</sup>Intel Corporation, USA; <sup>3</sup>Rensselaer Polytechnic Institute, USA\n\n*Corresponding author: Sanjay Das (Email: sanjay.das@utdallas.edu)\n\n# Abstract\n\nState-space models (SSMs), exemplified by the Mamba architecture, have recently emerged as state-of-the-art sequence-modeling frameworks, offering linear-time scalability together with strong performance in long-context settings. Owing to their unique combination of efficiency, scalability, and expressive capacity, SSMs have become compelling alternatives to transformer-based models, which suffer from the quadratic computational and memory costs of attention mechanisms. As SSMs are increasingly deployed in real-world applications, it is critical to assess their susceptibility to both software- and hardware-level threats to ensure secure and reliable operation.\n\nAmong such threats, hardware-induced bit-flip attacks (BFAs) pose a particularly severe risk by corrupting model parameters through memory faults, thereby undermining model accuracy and functional integrity. To investigate this vulnerability, we introduce RAMBO, the first BFA framework specifically designed to target Mamba-based architectures. Through experiments on the Mamba-1.4b model with LAMBADA benchmark—a cloze-style word-prediction task—we demonstrate that flipping merely a single critical bit can catastrophically reduce accuracy from $74.64\\%$ to $0\\%$ and increase perplexity from 18.94 to $3.75 \\times 10^{6}$ . These results demonstrate the pronounced fragility of SSMs to adversarial perturbations. The framework is open-sourced at https://anonymous.4open.science/r/ RAMBO-DA22.\n\nKeywords: State-space models, Mamba, Bit-flip Attack, Hardware faults, Adversarial robustness.\n\n# 1 Introduction\n\nThe increasing popularity of natural language processing (NLP) models has fundamentally expanded the capabilities of Artificial Intelligence (AI), demonstrating remarkable proficiency in generating human-like text, interpreting nuanced context, and executing complex reasoning tasks [36]. These advancements have not only reshaped natural language processing but have also extended AI applications into diverse fields such as computer vision and scientific research, heralding a new era of AI-driven solutions [2, 36]. Statespace models (SSMs), such as Mamba, have emerged as leading sequence-modeling architectures, offering linear-time scalability and strong performance on long-context tasks [4, 11]. Therefore, they have garnered attention as highly attractive alternatives to conventional transformer-based large language models due to their unique combination of efficiency, scalability, and representational strength. Unlike transformers, whose attention mechanism incurs quadratic computational and memory costs, SSMs operate in linear time, enabling fast processing of extremely long sequences [11]. 
As\n\nSSMs continue to be integrated into real-world systems at an accelerated pace, it becomes increasingly important to analyze their vulnerability against both software-based and hardware-based threats to ensure their secure and reliable deployment [5, 6].\n\nA major concern in the reliability of deep learning models arises from hardware-level attacks such as bit-flip attacks (BFAs), which exploit vulnerabilities in memory to corrupt the model's weight parameters. Such corruption can severely degrade model performance and violate its integrity. For example, BFA methodologies including DeepHammer inject faults into DRAM, flipping specific bits in stored weights to impair functionality [37]. Even with advances in memory technology, recent techniques allow for remote, non-physical bit-flip manipulations, thereby expanding the threat surface available to BFAs [15, 33]. Bit-flip attacks (BFAs) have been studied extensively in the context of conventional deep neural networks (DNN) [3, 28, 29]. However, traditional BFA strategies often require iterative gradient recomputation after each individual bit-flip. While this is feasible for comparatively small models, it becomes computationally intractable as model size increases [19, 29]. Recent work has begun to expose severe vulnerabilities of transformer-based LLMs to BFAs via alternative strategies, demonstrating that as few as a single bit-flip can catastrophically degrade LLM performance [6, 24]. However, BFA implications for alternative sequence modeling paradigms - in particular structured state-space models (SSMs) - remain largely unexplored.\n\nState-space architectures such as Mamba implement fundamentally different information-propagation and parameterization mechanisms, trade off recurrence and selectivity for attention mechanisms [12]. Due to these architectural distinctions, it is not appropriate to assume that attack strategies and defenses developed for transformers transfer directly to SSMs. Consequently, the absence of a systematic study of BFAs on SSMs constitutes a substantive gap in the current AI hardware robustness and security literature. To address this gap, we propose the framework \"RAMBO\" - a first of its kind SSM-aware BFA pipeline. The primary contribution of the paper are as follows:\n\n- We identify and formalize a previously unexplored vulnerability of state-space models (e.g. Mamba) to bit-flip attacks, and propose RAMBO, a first of its kind SSM-aware attack approach, bridging the gap between BFA research and structured sequence models. \n- RAMBO leverages the structural properties of Mamba-style SSM layers to prioritize critical parameter regions, adapts gradient-estimation and search heuristics, and uses the resulting perturbation effects to identify minimal bit-flip sets that maximally disrupt model behavior. \n- RAMBO uncovers a significant vulnerability of Mamba models. A mere one bit-flip $(7.14 \\times 10^{-10}\\%$ of all bits) in Mamba-1.4b, can reduce the LAMBADA word prediction accuracy\n\nfrom $74.64\\%$ to $0\\%$ , while increasing Wikitext perplexity from 18.94 to $3.75 \\times 10^{6}$ .\n\nThe rest of the paper is organized as follows: Section 2 provides relevant background information. The threat model is discussed in Section 3 and the proposed methodology is detailed in Section 4. Section 5 outlines the experimental setup and discusses the results. 
The concluding remarks are offered in Section 6.

# 2 Background

State-space models such as Mamba are architecturally composed of stacked Mamba blocks, each integrating selective state-space layers and projection components that collectively enable the model's long-range sequence modeling capabilities.

# 2.1 State-Space Dynamics

A Mamba block operates as a parameter-efficient state-space model (SSM) augmented with convolution, projection, and normalization layers [11]. At each time-step $t$, the latent state $h_t \in \mathbb{R}^n$ evolves according to the recurrence

$$
h_{t+1} = \left(I + \Delta_t A\right) h_t + \Delta_t B_t x_t, \tag{1}
$$

and produces an output

$$
y_t = C_t h_t + D x_t, \tag{2}
$$

where:

- $A \in \mathbb{R}^{n \times n}$ is the state-transition matrix, parameterized for stability and fixed after training, 
- $B_{t}, C_{t} \in \mathbb{R}^{n}$ are input-dependent write and read vectors, 
- $D \in \mathbb{R}$ is a fixed skip coefficient, 
- $\Delta_t \in \mathbb{R}_+$ is a token- and channel-dependent step size controlling the effective timescale, 
- $x_{t} \in \mathbb{R}$ is the projected token input.

This recurrence is analogous to a recurrent neural network (RNN), but with structured dynamics that can be parallelized efficiently via diagonalization and convolutional scan operations.

# 2.2 Parameterization via Projection Layers

Let $m$ denote the model embedding dimension, $n$ the latent state dimension per channel, and $r$ the low-rank dimension used for step-size generation. The projection pipeline operates as follows:

$$
u_t = W_{\mathrm{in}} x_t \in \mathbb{R}^{c}, \quad c \approx m, \tag{3}
$$

$$
p_t = W_{\mathrm{proj}} u_t \in \mathbb{R}^{2n + r}, \tag{4}
$$

$$
p_t = \left(B_t^{\mathrm{raw}}, C_t^{\mathrm{raw}}, \Delta_t^{\mathrm{low}}\right), \tag{5}
$$

$$
\Delta_t = \operatorname{softplus}\left(W_{\Delta} \Delta_t^{\mathrm{low}}\right) \in \mathbb{R}^{c}. \tag{6}
$$

Here:

- $W_{\mathrm{in}} \in \mathbb{R}^{m \times c}$ projects model embeddings into an intermediate space, 
- $W_{\mathrm{proj}} \in \mathbb{R}^{c \times (2n + r)}$ generates raw seeds for $B_t, C_t$, and the low-rank step-size representation, 
- $W_{\Delta} \in \mathbb{R}^{r \times c}$ expands the low-rank $\Delta_t^{\mathrm{low}}$ into a per-channel step-size vector.

Thus, although $A$ and $D$ are fixed parameters of the model, the effective dynamics are governed by input-dependent $B_{t}$, $C_{t}$, and $\Delta_{t}$ that vary across tokens. Therefore, these unique structural characteristics and projection parameterization of Mamba models must be considered in the development of efficient bit-flip attack strategies.
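
To make the recurrence in Eqs. (1)-(2) concrete, the following minimal NumPy sketch simulates a single selective-SSM channel for a handful of time-steps. The dimensions, the random parameters, and the scaled-softplus step size are illustrative assumptions for exposition, not values or code from the Mamba release.

```python
import numpy as np

# Minimal simulation of the selective-SSM recurrence in Eqs. (1)-(2) for one channel.
# All shapes and parameter values below are illustrative, not Mamba's actual ones.
rng = np.random.default_rng(0)

n, T = 4, 6                                   # latent state size, sequence length
A = -np.diag(rng.uniform(0.5, 1.5, n))        # stable (negative-diagonal) transition matrix
D = 0.1                                       # fixed skip coefficient

h = np.zeros(n)                               # latent state h_0
outputs = []
for t in range(T):
    x_t = rng.standard_normal()               # projected scalar token input x_t
    B_t = rng.standard_normal(n)              # input-dependent write vector B_t
    C_t = rng.standard_normal(n)              # input-dependent read vector C_t
    delta_t = 0.2 * np.log1p(np.exp(rng.standard_normal()))  # scaled softplus -> positive step size

    # Eq. (1): h_{t+1} = (I + delta_t * A) h_t + delta_t * B_t * x_t
    h = (np.eye(n) + delta_t * A) @ h + delta_t * B_t * x_t
    # Eq. (2): y_t = C_t . h_t + D * x_t
    outputs.append(C_t @ h + D * x_t)

print(np.round(outputs, 3))
```
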
![](images/7825ddd2a4f053bb7ea01c6d3b36320ad7905dd901c245174caaa64b5ee08440.jpg) 
Figure 1. Bit-flip attack on state space models like Mamba.

# 3 Threat Model

The proliferation of large-scale NLP models has heightened concerns over security risks, including backdoor and inference-time attacks [10, 23, 31, 35]. Beyond these vectors, a more insidious threat arises from direct manipulation of model parameters [1]. Under this threat model, an adversary with low-level memory access can alter stored weights of deployed models to induce malicious behavior. Hardware-based attacks, such as RowHammer [18] and Laser Fault Injection [32], enable such bit-level perturbations to critical model parameters. These bit-flips inject critical errors into the model's computational flow, propagating and compounding across layers and blocks, ultimately producing erroneous outputs, as illustrated for SSMs such as Mamba in Figure 1. The risk is further amplified in Machine Learning as a Service (MLaaS) settings, where shared hardware resources can expose models to co-residency and cross-process vulnerabilities [6].

We categorize the threats into four levels based on the extent of knowledge an attacker can extract from the victim. A full-knowledge attack occurs when the attacker has complete information about the target model, its training data, and its defenses, enabling them to devise highly optimized attack strategies [17, 38]. In a white-box attack, attackers have full knowledge of the model parameters but lack direct memory access, knowledge of the training data, or knowledge of the deployed defenses; they can nevertheless execute fault-injection attacks such as RowHammer [20, 29]. A gray-box attack involves partial knowledge of the model, allowing adversaries to exploit known vulnerabilities based on available information [34, 39]. Finally, in a black-box attack, attackers have no direct access to the target model's architecture or parameters. They can only observe model outputs to infer model properties or extract sensitive information [21, 27].

In this work, we consider both white-box and gray-box threat models. The white-box setting applies because the targeted models are open-sourced, granting full access to their architecture and parameters. The gray-box setting reflects scenarios in which the adversary has only partial visibility into model internals. Under both settings, we focus on untargeted attacks, which aim to induce indiscriminate degradation in overall model performance rather than forcing specific outputs. Such attacks are particularly insidious, as they compromise accuracy across a broad range of inputs while avoiding distinct failure patterns, making them more challenging to detect and defend against than targeted attacks [24, 25, 28]. Therefore, RAMBO strategically employs an untargeted attack to maximize disruption and degrade model performance. This attack also fundamentally differs from denial-of-service attacks, which aim to overwhelm system resources to render services unavailable.

# 4 Proposed RAMBO Methodology

In this section, we present a theoretical sensitivity analysis followed by a detailed description of the proposed attack framework.

# 4.1 Theoretical Vulnerability Analysis

Here, we present a parameter-ranking procedure to identify plausible targets for the RAMBO attack. We rank the SSM parameters by their importance as follows:

- Transition backbone ($A$): This component determines how information evolves over time within each channel. Because $A$ is applied at every timestep and across all layers, its parameters strongly influence the model's stability and its ability to capture long-range dependencies. 
- Projection seeds: These seeds generate the raw projection vectors $B^{\mathrm{raw}}$, $C^{\mathrm{raw}}$, and the low-rank update $\Delta^{\mathrm{low}}$. If the seeds carry insufficient information, the model cannot effectively adapt to the input context. 
- Seed expansion parameters: These parameters convert the low-rank $\Delta$ seed into a detailed per-channel timescale vector.
Poor calibration can produce unusable timescales: values that are too small prevent meaningful updates, while values that are too large destabilize the model. 
- Read/write vectors $B_t$ and $C_t$: Derived from the seeds, these vectors govern how new information is written into the model's state and how existing information is retrieved.

From the ranking above, the $A$ modules and the projection seeds emerge as the most critical components. However, projection seeds are input dependent and incur substantial runtime overhead to probe and identify, making them impractical targets for real-time attacks. Consequently, it is more practical to target fixed model parameters such as the $A$ and $D$ modules and the projection layers, which are static and can be reliably identified and exploited.

# 4.2 Attack Framework

In this section, we delineate our attack methodology for impacting Mamba model performance. We employ the standard cross-entropy loss [22], computed between the output logits of the model and the corresponding ground-truth token IDs, as a measure of model performance. A lower cross-entropy loss indicates that the model assigns higher likelihood to the correct tokens and thus performs better. Our objective is to identify the most critical subset of parameter bits such that, when flipped, they cause a substantial increase in the cross-entropy loss. This increase directly translates into severe degradation of model performance, thereby highlighting the vulnerability of the model to targeted bit-flip manipulations.

4.2.1 Proxy for Parameter Sensitivity. For performing a BFA, it is essential to analyze model parameter sensitivity profiles independently of robustness assumptions. In particular, parameters with larger gradients or higher magnitudes may exhibit amplified sensitivity, whereby perturbations yield disproportionately large effects on the output. Therefore, inspired by [6], we adopt a hybrid sensitivity metric that considers both magnitude and gradient influences to capture the sensitivity profile holistically:

$$
\mathbf{S} = \alpha \cdot |\nabla \mathbf{W}| + (1 - \alpha) \cdot |\mathbf{W}| \tag{7}
$$

where $\nabla \mathbf{W}$ denotes the parameter gradients, $\mathbf{W}$ the parameter magnitudes, and $\alpha$ is a tunable parameter balancing the importance of magnitude and gradient.

Algorithm 1 Layer Ranking & Weight Subset Selection 
Input: Model parameters $\mathbf{W}$, gradients $\nabla \mathbf{W}$, trade-off $\alpha$, subsampling rate $r$, and number of top layers $n$ 
Output: Sensitivity scores $\mathcal{L}_{\mathrm{sens}}$, critical weight indices $\mathcal{I}_{init}$ 
1: Initialize sensitivity list $\mathcal{L}_{\mathrm{sens}}\gets [\,]$ 
2: for all layers $l\in L$ do 
3: $k\gets \lfloor r\times |\mathbf{W}^{(l)}| / 100\rfloor$ 
4: $\mathbf{S}^{(l)}\gets \alpha |\nabla \mathbf{W}^{(l)}| + (1 - \alpha)|\mathbf{W}^{(l)}|$ 
5: $I_{hyb}^{(l)}\gets \mathrm{TopKIndex}(S^{(l)},k)$ 
6: $\mathcal{L}^{(l)}\gets \mathrm{BFlipLoss}(\mathbf{W}^{(l)},pos,I_{hyb}^{(l)})$ 
7: Append $[\mathcal{L}^{(l)},l]$ to $\mathcal{L}_{\mathrm{sens}}$ 
8: end for 
9: $\mathcal{L}_{\mathrm{sens}}\gets \mathrm{SORT}(\mathcal{L}_{\mathrm{sens}})$ 
10: $\mathcal{L}_{top}\gets \mathrm{TopN}(\mathcal{L}_{\mathrm{sens}},n)$ 
11: $\mathcal{I}_{init}\gets [l,I_{hyb}^{(l)}]$ extracted from $\mathcal{L}_{top}$
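
As a concrete illustration of Eq. (7) and the top-$k$ selection used in Algorithm 1, the short PyTorch sketch below scores a single weight tensor and picks candidate positions; the tensor, its gradient, $\alpha$, and the sampling rate are hypothetical stand-ins rather than the released RAMBO code.

```python
import torch

def hybrid_sensitivity(weight: torch.Tensor, grad: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Eq. (7): S = alpha * |grad W| + (1 - alpha) * |W|."""
    return alpha * grad.abs() + (1.0 - alpha) * weight.abs()

def top_k_indices(scores: torch.Tensor, rate_percent: float) -> torch.Tensor:
    """Top-k candidate weights with k = |W| * r / 100, as in Eq. (8) / Algorithm 1."""
    k = max(1, int(scores.numel() * rate_percent / 100.0))
    return torch.topk(scores.flatten(), k).indices

# Hypothetical layer weights and gradients (stand-ins for, e.g., a Mamba A_log layer).
torch.manual_seed(0)
W = torch.randn(256, 256)
G = torch.randn(256, 256)

scores = hybrid_sensitivity(W, G, alpha=0.5)
candidates = top_k_indices(scores, rate_percent=0.1)    # 0.1% of the weights
print(f"selected {candidates.numel()} candidate weights out of {W.numel()}")
```

Setting `alpha=0` in this sketch recovers the purely magnitude-based, gradient-free scoring evaluated later in Section 5.7.
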
4.2.2 Layer Sensitivity Analysis. Determining critical parameters in Mamba models is complex due to the size of their parameter space. However, identifying a sensitive layer is more manageable, since the number of layers is far smaller than the number of parameters. To quantify layer sensitivity, we sample the top-$k$ candidate bit-flips from each layer at a rate $r$, guided by the hybrid sensitivity score $\mathbf{S}$. These selected bit-flips are applied, and the resulting model loss $\mathcal{L}$ is measured to assess the layer's sensitivity. $k$ is computed as:

$$
k = \operatorname{cardinality}\left(\mathbf{W}^{(l)}\right) \times \frac{r}{100} \tag{8}
$$

Here, $\mathbf{W}^{(l)}$ signifies the parameters within layer $l$.

To rank layers, we first quantify each layer's parameter sensitivity using a scoring function (Eq. 7). The resulting scores are ranked in descending order to identify the top-$k$ most critical weights (Eq. 8). Bit-flip perturbations are then applied to the most significant bits of these weights to maximize deviation. The resulting loss, $\mathcal{L}^{(l)}$, indicates each layer's sensitivity.

Algorithm 1 outlines a systematic procedure for layer sensitivity analysis and ranking. The algorithm begins by initializing an empty set, $\mathcal{L}_{\text{sens}}$, to store layer sensitivity scores (line 1). It uses a function BFlipLoss to calculate the model loss $\mathcal{L}$ when weight perturbations are applied to a specified layer (line 6). The function BFlipLoss accepts the layer parameters $\mathbf{W}^{(l)}$, the bit position $pos$, and the perturbation indices $I$ as inputs, and returns the computed loss $\mathcal{L}$. The process iterates over each model layer to evaluate its sensitivity to parameter faults (lines 2-8). For each layer $l$, a hybrid sensitivity score $\mathbf{S}^{(l)}$ is computed using the weighted combination of parameter magnitudes and gradient magnitudes (Eq. 7, line 4). The TopKIndex function then selects the top-$k$ most sensitive weights, forming the index set $I_{\text{hyb}}^{(l)}$ (line 5). These indices, together with the layer weights $\mathbf{W}^{(l)}$ and the bit position $pos$, are passed to the BFlipLoss function, which injects controlled bit-flips and recomputes the corresponding model loss. The losses are recorded in the sensitivity list $\mathcal{L}_{\text{sens}}$ (line 7). After processing all layers, the sensitivity list is sorted, and the top-$n$ most vulnerable layers are identified (lines 9-10). Finally, the corresponding weight indices from these layers are extracted to form the critical weight indices $\mathcal{I}_{init}$, which, along with $\mathcal{L}_{\mathrm{sens}}$, constitutes the algorithm's output (line 11).

4.2.3 Critical Parameter Set Optimization. The initial set of critical parameters can be large, making an efficient BFA computationally prohibitive. Therefore, it becomes necessary to identify a minimal subset that still preserves the original attack effectiveness. Let the initial set of indices be denoted as $\mathcal{I}_{\mathrm{init}}$, associated with a baseline attack loss $L_{\mathrm{orig}} = L(\mathcal{I}_{\mathrm{init}})$. The optimization objective is to find the smallest subset $\mathcal{I} \subseteq \mathcal{I}_{\mathrm{init}}$ that maintains the attack performance within a small tolerance:

$$
\min_{\mathcal{I} \subseteq \mathcal{I}_{\mathrm{init}}} |\mathcal{I}| \quad \text{s.t.} \quad L(\mathcal{I}) \geq L_{\mathrm{orig}} - \varepsilon. \tag{9}
$$
This problem can be reformulated as a combinatorial optimization task over binary selection variables $z_{i} \in \{0,1\}$, where $z_{i} = 1$ indicates inclusion of the $i$-th parameter in the subset. The corresponding formulation becomes:

$$
\min_{z_{i} \in \{0,1\}} \sum_{i} z_{i} \quad \text{s.t.} \quad L(z) \geq L_{\mathrm{orig}} - \varepsilon. \tag{10}
$$

Since this problem is NP-hard, a continuous relaxation can be applied by allowing $z_{i} \in [0,1]$ and using a differentiable surrogate loss $\hat{L}(z)$. The relaxed formulation is expressed as:

$$
\min_{z \in [0,1]^{n}} \sum_{i} z_{i} + \lambda \cdot \max\left(0, L_{\mathrm{orig}} - \hat{L}(z) - \varepsilon\right), \tag{11}
$$

where $\lambda$ is a regularization parameter that balances sparsity and loss preservation. Although this relaxation enables gradient-based optimization, the underlying loss landscape remains highly nonconvex and computationally expensive to evaluate in large-scale neural models.

To address this issue, a randomized exclusionary heuristic is adopted, as described in Algorithm 2. The algorithm iteratively refines the subset by randomly excluding groups of indices and re-evaluating the loss. In each iteration, a candidate exclusion set $\Delta$ is selected, where $|\Delta|$ varies between 1 and $\lfloor |\mathcal{I}| / 2\rfloor$. The modified subset is tested as $\mathcal{I}' = \mathcal{I} \setminus \Delta$, and the new loss $L(\mathcal{I}')$ is compared against the baseline. If the loss satisfies $L(\mathcal{I}') \geq L_{\mathrm{orig}} - \varepsilon$, the exclusion is accepted, permanently removing those indices from the set. The process continues until no further exclusion satisfies the constraint or the maximum number of iterations is reached.

This exclusionary optimization approach provides a computationally efficient method for subset reduction. Although it does not guarantee global optimality, it achieves a significant reduction in the number of critical indices while maintaining nearly identical attack loss, offering a practical balance between optimization cost and attack efficacy.

Algorithm 2 Exclusionary Weight Subset Optimization 
Input: Model parameters $\mathcal{W}_{\text{orig}}$, weight indices $I_{init}$, loss tolerance $\epsilon$, maximum iterations $N_{max}$ 
Output: Reduced index set $I_{red}$ 
1: $I_{red} \gets I_{init}, \mathcal{P} \gets \{\}$ 
2: $\mathcal{L}_{\text{orig}} \gets BFlipLoss(\mathcal{W}_{\text{orig}}, \text{pos}, I_{init})$ 
3: improved ← True, $t \gets 0$ 
4: while improved and $t < N_{max}$ do 
5: improved ← False, $t \gets t + 1$ 
6: for $i = 1$ to 100 do 
7: Randomly exclude $n_{\text{exc}} \in [1, |\mathcal{I}|/2]$ indices 
8: Form $I_{test} = I \setminus I_{\text{exc}}$ 
9: $\mathcal{L}_{\text{test}} \gets BFlipLoss(\mathcal{W}_{\text{orig}}, \text{pos}, I_{test})$ 
10: if $\mathcal{L}_{\text{test}} \geq \mathcal{L}_{\text{orig}} - \epsilon$ then 
11: $I \gets I_{test}$, improved ← True 
12: break 
13: end if 
14: end for 
15: Record progress $(t, |\mathcal{I}|, \mathcal{L}_{\text{test}})$ 
16: end while 
17: $I_{red} \gets I, \mathcal{P} \gets \text{recorded progress}$ 
18: return $\{I_{red}, \mathcal{P}\}$

Algorithm 2 presents the optimization procedure. The algorithm begins by initializing the reduced index set $\mathcal{I}_{\mathrm{red}}$ with the initial subset $\mathcal{I}_{\mathrm{init}}$ and the structure for tracking progress $\mathcal{P}$ (line 1). The baseline loss $\mathcal{L}_{\mathrm{orig}}$ is computed using the BFlipLoss function (line 2). At each iteration, random exclusion patterns are explored to identify indices that can be safely removed without violating the predefined loss tolerance $\epsilon$ (lines 4-6). Specifically, a random subset of indices $\mathcal{I}_{\mathrm{exc}} \subseteq \mathcal{I}$ of size up to half of the current subset is excluded to form a test subset $\mathcal{I}_{\mathrm{test}} = \mathcal{I} \setminus \mathcal{I}_{\mathrm{exc}}$ (lines 7-8). The resulting model loss $\mathcal{L}_{\mathrm{test}}$ is then evaluated (line 9).
If the condition $\mathcal{L}_{\mathrm{test}} \geq \mathcal{L}_{\mathrm{orig}} - \epsilon$ holds, indicating negligible degradation, the exclusion is accepted and $\mathcal{I}$ is updated accordingly (lines 10-12). The process continues until no further improvement is observed or the maximum number of iterations $N_{\mathrm{max}}$ is reached (lines 4-16). Finally, the optimized reduced index set $\mathcal{I}_{\mathrm{red}}$ and the recorded progress $\mathcal{P}$ are obtained (lines 17-18).
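
The sketch below mirrors the structure of Algorithm 2 on a toy objective: flipping the sign bit of selected FP16 weights stands in for the BFlipLoss perturbation, and random index groups are excluded as long as a surrogate loss stays within the tolerance. The weights, the surrogate loss, and all numeric settings are illustrative assumptions, not the paper's implementation.

```python
import random
import numpy as np

def flip_msb_fp16(weights: np.ndarray, indices) -> np.ndarray:
    """Toy stand-in for the perturbation inside BFlipLoss: flip the most
    significant (sign) bit of the selected FP16 weights."""
    w = weights.copy()
    w.view(np.uint16)[indices] ^= 0x8000      # bit 15 of the IEEE-754 half-precision encoding
    return w

def surrogate_loss(perturbed: np.ndarray, reference: np.ndarray) -> float:
    """Toy surrogate for the post-attack model loss: total deviation from the clean weights."""
    return float(np.abs(perturbed.astype(np.float32) - reference.astype(np.float32)).sum())

def exclusionary_reduce(W, init_idx, eps, max_iters=50, trials=100, seed=0):
    """Randomized exclusionary subset reduction in the spirit of Algorithm 2."""
    rng = random.Random(seed)
    idx = list(init_idx)
    loss_orig = surrogate_loss(flip_msb_fp16(W, idx), W)
    improved, t = True, 0
    while improved and t < max_iters and len(idx) > 1:
        improved, t = False, t + 1
        for _ in range(trials):
            n_exc = rng.randint(1, max(1, len(idx) // 2))
            excluded = set(rng.sample(idx, n_exc))
            test = [i for i in idx if i not in excluded]
            if surrogate_loss(flip_msb_fp16(W, test), W) >= loss_orig - eps:
                idx, improved = test, True    # exclusion kept the loss within tolerance
                break
    return idx

np_rng = np.random.default_rng(0)
W = (0.01 * np_rng.standard_normal(1024)).astype(np.float16)
W[5] = np.float16(3.0)                        # one dominant candidate weight
initial = [5] + [int(i) for i in np_rng.choice(np.arange(6, 1024), size=8, replace=False)]
print("reduced subset:", exclusionary_reduce(W, initial, eps=0.2))
```

With these toy numbers the subset typically collapses toward the single dominant weight, which mirrors the 9-bit-to-1-bit reduction reported later for Mamba-1.4b (Section 5.2.3).
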
# 5 Evaluation Results

# 5.1 Experimental Setup

We evaluated RAMBO on a diverse set of models, including Mamba and Mamba2, ranging from 370 million to 2.8 billion parameters [4, 11]. We further assess RAMBO on quantized models such as Quamba-1.4b-w8a8 (8-bit weights and 8-bit activations), Quamba-1.4b-w4a16, Quamba-2.8b-w8a8, and Quamba-2.8b-w4a16 to evaluate its effectiveness under low-precision settings. Model performance was assessed using standard benchmarks, such as the tasks from the Language Model Evaluation Harness [9], including ARC-Easy, HellaSwag, PIQA and Winogrande, which probe reasoning and generalization across a variety of domains. Additionally, we tested on the LAMBADA dataset [26], a cloze-style word-prediction natural language understanding task. We extended our evaluation to include Vision-Mamba models [14], such as Mambavision-S-1K and Mambavision-L-21K in half-precision (16-bit floating-point, or FP16) format trained on ImageNet data [7], to showcase the multimodal effectiveness of RAMBO. Furthermore, we assess RAMBO on Hymba [8], a hybrid-head parallel architecture that combines transformer attention mechanisms and SSMs, to show the versatility of the proposed framework. We report both perplexity (on WikiText [16]) and accuracy as evaluation metrics. Perplexity, defined as the exponential of the average negative log-likelihood over a sequence, measures predictive capability [16], whereas accuracy quantifies the proportion of correct predictions.

Table 1. RAMBO evaluation on various models and datasets.

<table><tr><td rowspan="2">Model</td><td rowspan="2"># Bit-Flips</td><td colspan="7">Benchmarks (WikiText perplexity and % Accuracy before/after attack)</td></tr><tr><td>Perplexity</td><td>ARC-Easy</td><td>LAMBADA</td><td>HellaSwag</td><td>PIQA</td><td>Winogrande</td><td>ImageNet-1K</td></tr><tr><td>Mamba-370m</td><td>4</td><td>24.87 / 1.26 × 10<sup>11</sup></td><td>53.125% / 28.67%</td><td>67.86% / 5.33%</td><td>43.04% / 22%</td><td>67.03% / 44.66%</td><td>52.72% / 46%</td><td>NA</td></tr><tr><td>Mamba2-370m</td><td>4</td><td>26.69 / 2.48 × 10<sup>4</sup></td><td>46.975% / 12.67%</td><td>67.98% / 17.33%</td><td>43.08% / 8%</td><td>68.77% / 24%</td><td>51.46% / 7.33%</td><td>NA</td></tr><tr><td>Mamba-1.4b</td><td>1</td><td>18.94 / 3.75 × 10<sup>6</sup></td><td>62.5% / 14%</td><td>74.64% / 0%</td><td>55.76% / 0%</td><td>73.34% / 18%</td><td>54.54% / 46%</td><td>NA</td></tr><tr><td>Quamba-1.4b-w8a8</td><td>4</td><td>22.193 / 50.08</td><td>63.15% / 24%</td><td>54.67% / 46.67%</td><td>49.39% / 22.66%</td><td>71.76% / 49.33%</td><td>53.51% / 27.33%</td><td>NA</td></tr><tr><td>Quamba-1.4b-w4a16</td><td>1</td><td>23.62 / 606.673</td><td>61.75% / 0%</td><td>73.39% / 57.33%</td><td>55.17% / 0%</td><td>72.47% / 6%</td><td>53.57% / 6.67%</td><td>NA</td></tr><tr><td>Mamba2-1.3b</td><td>5</td><td>18.76 / 2.41 × 10<sup>6</sup></td><td>62.5% / 0%</td><td>75.39% / 30.67%</td><td>56.69% / 0%</td><td>72.25% / 0%</td><td>49.33% / 4.67%</td><td>NA</td></tr><tr><td>Mamba-2.8b</td><td>1</td><td>16.31 / 1.23 × 10<sup>5</sup></td><td>65.625% / 26.67%</td><td>78.34% / 4%</td><td>62.59% / 26%</td><td>74.48% / 52%</td><td>56.99% / 44.67%</td><td>NA</td></tr><tr><td>Quamba-2.8b-w8a8</td><td>1</td><td>19.53 / 1.26 × 10<sup>10</sup></td><td>65.26% / 0%</td><td>77.55% / 22%</td><td>62.06% / 0%</td><td>73.99% / 0%</td><td>56.59% / 0%</td><td>NA</td></tr><tr><td>Quamba-2.8b-w4a16</td><td>1</td><td>9.91 / 7.65 × 10<sup>5</sup></td><td>68.7% / 60%</td><td>76.03% / 0%</td><td>61.66% / 0%</td><td>73.29% / 25.33%</td><td>58.01% / 46%</td><td>NA</td></tr><tr><td>Mamba2-2.7b</td><td>9</td><td>16.59 / 2.48 × 10<sup>4</sup></td><td>66.41% / 2%</td><td>77.78% / 15.33%</td><td>62.69% / 1.33%</td><td>74.86% / 17.33%</td><td>56.59% / 5.33%</td><td>NA</td></tr><tr><td>Hymba-1.5b</td><td>2</td><td>14.40 / 1.762 × 10<sup>4</sup></td><td>76.94% / 0%</td><td>82.20% / 30%</td><td>53.55% / 0.67%</td><td>77.31% / 2.67%</td><td>66.61% / 10.67%</td><td>NA</td></tr><tr><td>Mambavision-S-1K 50m</td><td>22</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>83.2% / 46.1%</td></tr><tr><td>Mambavision-L-21K 200m</td><td>24</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>NA</td><td>86.1% / 47.2%</td></tr></table>

![](images/48e9a3c465f4f7a25f56bd6efb2589f4f7cf0e0222bd342100719f17b2e5ead4.jpg) 
(a)

![](images/028343f49e7d502c20fbed9178255c7c5d8363cde85672eb0666c55140e2732b.jpg) 
(b) 
Figure 2. Layer type-based sensitivity analysis (a) loss distribution, (b) bit-flip efficiency in Mamba 1.4b FP16 model.

# 5.2 Preliminary Analysis

5.2.1 Layer Sensitivity Analysis. We perform bit-flip injections at a fixed rate to quantify how different layer types affect model degradation (loss increase and consequent accuracy drop). Figure 2 summarizes these results. Figure 2a reports the absolute increase in model loss caused by bit-flips per layer type in the Mamba-1.4b model upon the introduction of $0.1\%$ bit-flips in ranked critical parameters (refer Section 4.2.1). Larger losses indicate greater criticality.
It is observed that the layer type $A_{\log}$ is more critical compared to other layer types, as bit-flips in these layers result in higher model losses and larger accuracy degradation. This can be ascribed to the fact that $A_{\log}$ directly parameterizes the state-transition matrix $A$ through an exponential mapping, so perturbations in $A_{\log}$ are amplified in $A$, leading to large model loss, as discussed in Section 4.1.

Layer sensitivity must also account for bit-flip efficiency, as layer sizes vary significantly and raw loss alone is not a sufficient indicator. We define bit-flip efficiency as the loss increase per flipped bit (Figure 2b); higher values denote greater impact per perturbation. Layers exhibiting both high raw loss upon bit-flips and high efficiency are the most favorable BFA targets. Consistent with the theoretical analysis in Section 4.1, the $A_{\log}$ layers in the Mamba/Quamba models, the $D$ and input/output projection layers in Mamba2, and the Conv1d layers in the Hymba and Mambavision models show the highest criticality and efficiency, making them the most sensitive layer types.

5.2.2 Weight-bit Subset Selection. We select the most sensitive layer type identified in the preceding analysis (e.g., the $A_{\log}$ layer in Mamba-1.4b) and determine a critical subset of parameters whose bit-flip perturbations yield a substantially high model loss.

![](images/9147b48ae77b34281919987d87737a902f1c57e3fb7a16063842d2a7be7cb366.jpg) 
(a)

![](images/eae64192e7dcbf7d5dc6975a9751ec57ac3f07c7b838816dca4aa7e0e269df21.jpg) 
(b) 
Figure 3. Critical (a) layer, and (b) weight subset selection in Mamba 1.4b FP16 model.

Model loss (y-axis), as depicted in Figure 3a, increases progressively with fixed $0.1\%$ bit-flips in the $A_{\log}$ layers of the Mamba-1.4b model, from the initial to the final Mamba blocks (x-axis). This indicates an increase in layer criticality from the initial to the final blocks. Therefore, we target the $A_{\log}$ layer in the final Mamba block for attack. Figure 3b shows loss (y-axis) as a function of injected bit-flips (x-axis): the loss surpasses our predefined threshold of 10 after six bit-flips. We conservatively use the parameter set identified at an operating point of 9 bit-flips (loss $\geq 21$) as the initial critical weight subset for the subsequent exclusionary optimization.

5.2.3 Weight-bit Subset Optimization. In this section, we refine the previously identified critical weight-bit subset to isolate the most critical bits. As shown in Figure 4a, the bit-flip attack optimization in the Mamba-1.4b FP16 model reduces the initial 9-bit subset to a single bit, indicated by the blue line, while maintaining a model loss (red line) above the loss threshold of 10 (green dotted line). This demonstrates the necessity and effectiveness of the optimization process in identifying the most critical bits.

# 5.3 Results

This segment presents the degradation of model performance across benchmarks achieved by RAMBO (Table 1). The attributes of each model, specifically the model name and parameter count, are presented in the first column. The second column presents the count of bit-flips injected to induce performance degradation. Subsequent columns furnish benchmark results, before and after the attack, for each task. All models demonstrated strong baseline performance, with LAMBADA accuracies up to $82.20\%$ and WikiText perplexities between 16.31 and 26.69.
Similarly, high accuracies were observed across reasoning and commonsense benchmarks such as ARC-Easy (53.12-76.94%), HellaSwag (43.04-62.69%), PIQA (67.03-77.31%), and Winogrande (51.46-66.61%). "NA" indicates that the benchmark does not apply to the model.

Following the injection of only a few targeted bit-flips, RAMBO produced severe accuracy degradation across all tasks. In the Mamba-1.4b model, a single bit-flip caused complete collapse on LAMBADA (accuracy dropped from $74.64\%$ to $0\%$) and raised perplexity from 18.94 to $3.75 \times 10^{6}$, while also reducing ARC-Easy accuracy from $62.5\%$ to $14\%$. The Mamba-2.8b model exhibited similar vulnerability, where one bit-flip reduced LAMBADA accuracy from $78.34\%$ to $4\%$ and increased perplexity from 16.31 to $1.23 \times 10^{5}$. Even smaller models, such as Mamba-370m, experienced notable drops, with four bit-flips lowering ARC-Easy accuracy from $53.125\%$ to $28.67\%$ and increasing perplexity from 24.87 to $1.26 \times 10^{11}$.

Similarly, Mamba2 models are also highly sensitive: Mamba2-1.3b required 5 bit-flips to collapse ARC-Easy accuracy to $0\%$, while Mamba2-2.7b required 32 bit-flips to induce a similar attack impact. Overall, these results reveal that Mamba architectures exhibit extreme bit-level fragility, where even a single bit-flip in critical parameters can induce catastrophic failures in model performance.

# 5.4 Attack Efficacy across Quantization Levels

Our evaluation has, up to this point, concentrated on RAMBO using half-precision FP16 models. However, a model's vulnerability to bit-flip attacks is largely determined by quantization precision. Since diverse numerical formats display varying sensitivities to perturbations, the effect of a bit-flip, along with the quantity of flips needed to cause model disruption, can differ significantly across quantization formats [6]. Furthermore, previous BFA defenses commonly suggest quantization as a mitigation strategy, necessitating the evaluation of RAMBO under quantized conditions [30].

To this end, we evaluate RAMBO on INT4 (w4a16) and INT8 (w8a8) variants of Mamba-1.4b and Mamba-2.8b. Our findings show that these quantized models remain highly vulnerable to bit-flip attacks. Notably, a single bit-flip is sufficient to reduce the accuracy of the Quamba-2.8b-w4a16 model on PIQA from $73.29\%$ to $25.33\%$ (refer Table 1), demonstrating that RAMBO remains extremely effective even under aggressive quantization.

# 5.5 Attack Efficacy across Task Modality

To assess RAMBO's effectiveness beyond NLP settings, we applied it to FP16 Mambavision models with 50M-200M parameters trained on ImageNet. Our results show that RAMBO remains highly effective even in vision tasks, successfully identifying critical bit positions whose perturbation leads to substantial performance degradation. For example, in the Mambavision-L-21K model, RAMBO identified 24 critical bits that, when flipped, reduced the model's accuracy from $86.1\%$ to $47.2\%$ (refer Table 1).
This demonstrates that RAMBO generalizes effectively beyond NLP and remains a potent bit-flip attack methodology for Mamba-based architectures on vision tasks.

![](images/aa0c22f0babb47ccb36d1a9a32b696257d1a6073a60404b317e05dd3f1df7e96.jpg) 
(a)

![](images/f0095e8df6898c9c07f77b032a65be2f74e34ea48970e369cc04aac6a060fe57.jpg) 
(b) 
Figure 4. (a) Weight-bit set optimization in Mamba 1.4b FP16 model, and (b) optimization performance comparison with GenBFA of the AttentionBreaker framework [6] on Mambavision-S-1K.

# 5.6 Attack Transferability across Benchmarks

This experiment evaluates the cross-task transferability of bit-flip attack effects. Specifically, we examine whether an attack designed to degrade performance on Task A also induces comparable degradation on Task B. Such transferability would indicate a fundamental architectural vulnerability that is independent of the specific task.

Using the Mamba-1.4b FP16 model, we first execute a single bit-flip attack on the ARC-Easy benchmark, reducing accuracy from $62.5\%$ to $14\%$. We then measure the resulting impact on other language benchmarks, including HellaSwag, PIQA, and Winogrande. As summarized in Table 1, the attack exhibits substantial transferability: for example, HellaSwag accuracy drops from $55.76\%$ to $0\%$, and PIQA accuracy declines from $73.34\%$ to $18\%$. These results demonstrate that the effects of the attack propagate broadly across tasks and domains, underscoring a fundamental and widespread vulnerability in the model architecture.

# 5.7 Gradient-free Attack Results

We evaluate RAMBO in a fully gradient-free setting, where gradient information is unavailable throughout the attack. Parameter sensitivity scores are derived solely from weight magnitudes by setting $\alpha = 0$ in Equation 7. Under this configuration, only a single bit-flip on the 8-bit Quamba-2.8b-w8a8 model reduces ARC-Easy accuracy from $65.26\%$ to $0\%$, demonstrating that magnitude-based scoring remains highly effective in identifying critical bits. This flexibility allows adversaries to select either gradient-free or gradient-based modes depending on their goals and computational constraints.

Notably, many existing BFA defenses rely on restricting or obfuscating gradient access. RAMBO circumvents such defenses entirely by operating on magnitude-based importance alone. This adaptability underscores RAMBO's robustness in gradient-restricted environments and establishes it as a resilient BFA framework.

# 5.8 Gray-box Attack Results

To evaluate RAMBO's effectiveness under gray-box scenarios with partial access to model parameters, we simulate such a condition by restricting access to only a subset of model layers, specifically the final two Mamba blocks in the Mamba-1.4b model. Even with this limited access, RAMBO remains highly potent. On the ARC-Easy dataset, flipping only a single bit in this partially visible Mamba-1.4b model is sufficient to reduce accuracy from $62.5\%$ to $15.2\%$, demonstrating the strength of our approach under constrained access.

# 5.9 Comparison with State-of-the-art LLM Bit-flip Attacks

Although RAMBO is the first SSM-aware BFA and thus lacks a direct SSM-specific baseline, we assess how its exclusionary optimization strategy compares with LLM-based BFAs. To this end, we evaluate it against the GenBFA optimization employed in the AttentionBreaker framework [6]. As shown in Figure 4b, we observe that both methods identify the same 22 critical bits in Mambavision-S-1K, but RAMBO converges significantly faster, within 26 iterations (green line) compared to GenBFA's 70 (blue line), demonstrating superior efficiency despite its simpler design.

Prior works, such as AttentionBreaker [6] and SBFA [13], show that single or few bit-flips can collapse models with billions of parameters.
Similarly, RAMBO achieves catastrophic degradation with just one bit-flip in Mamba-1.4b (FP16) and Quamba-2.8b (INT4). Even the hybrid Hymba architecture is observed to be extremely vulnerable, and its performance collapses with merely two bit-flips. These results highlight the severe security risk posed by bit-flip attacks in both transformer- and state-space-based models.

# 6 Conclusion

This research investigates the vulnerability of state-space-based models such as Mamba and presents RAMBO, a novel BFA framework targeting these models. RAMBO introduces a sensitivity analysis tailored to state-space architectures and uses it to identify minimal sets of critical bits whose corruption compromises model performance. Specifically, perturbing merely one bit ($7.14 \times 10^{-10}\%$ of all bits) in the Mamba-1.4b model causes its LAMBADA prediction accuracy to drop from $74.64\%$ to $0\%$. The results highlight RAMBO's utility in demonstrating Mamba's vulnerability to such adversarial interventions.

# 7 Ethical Considerations

This study aims to advance research on the safety and robustness of NLP models, with a particular focus on state-space architectures such as Mamba. It is imperative that the hardware fault vulnerabilities and attack methodologies presented here are used solely for constructive purposes, and not for malicious exploitation.

# References

[1] Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. 2018. Practical fault attack on deep neural networks. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2204-2206. 
[2] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1-45. 
[3] Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. 2021. ProFlip: Targeted trojan attack with progressive bit flips. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 7718-7727. 
[4] Tri Dao and Albert Gu. 2024. Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. In International Conference on Machine Learning (ICML). 
[5] Badhan Chandra Das, M Hadi Amini, and Yanzhao Wu. 2024. Security and privacy challenges of large language models: A survey. arXiv preprint arXiv:2402.00888 (2024). 
[6] Sanjay Das, Swastik Bhattacharya, Souvik Kundu, Shamik Kundu, Anand Menon, Arnab Raha, and Kanad Basu. 2024. AttentionBreaker: Adaptive evolutionary optimization for unmasking vulnerabilities in LLMs through bit-flip attacks. arXiv e-prints (2024), arXiv-2411. 
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 248-255. 
[8] Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, et al. 2024. Hymba: A hybrid-head architecture for small language models. arXiv preprint arXiv:2411.13676 (2024). 
[9] Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou.
2024. A framework for few-shot language model evaluation. doi:10.5281/zenodo.12608602 
[10] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014). 
[11] Albert Gu and Tri Dao. 2023. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. arXiv preprint arXiv:2312.00752 (2023). 
[12] Albert Gu and Tri Dao. 2023. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752 (2023). 
[13] Jingkai Guo, Chaitali Chakrabarti, and Deliang Fan. 2025. SBFA: Single Sneaky Bit Flip Attack to Break Large Language Models. arXiv preprint arXiv:2509.21843 (2025). 
[14] Ali Hatamizadeh and Jan Kautz. 2025. MambaVision: A hybrid Mamba-Transformer vision backbone. In Proceedings of the Computer Vision and Pattern Recognition Conference. 25261-25270. 
[15] Yu-ichi Hayashi, Naofumi Homma, Takeshi Sugawara, Takaaki Mizuki, Takafumi Aoki, and Hideaki Sone. 2011. Non-invasive EMI-based fault injection attack against cryptographic modules. In 2011 IEEE International Symposium on Electromagnetic Compatibility. IEEE, 763-767. 
[16] Yutong Hu, Quzhe Huang, Mingxu Tao, Chen Zhang, and Yansong Feng. 2024. Can Perplexity Reflect Large Language Model's Ability in Long Text Understanding? arXiv preprint arXiv:2405.06105 (2024). 
[17] Mehmet Kayaalp, Nael Abu-Ghazaleh, Dmitry Ponomarev, and Aamer Jaleel. 2016. A high-resolution side-channel attack on last-level cache. In Proceedings of the 53rd Annual Design Automation Conference. 1-6. 
[18] Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. 2014. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. ACM SIGARCH Computer Architecture News 42, 3 (2014), 361-372. 
[19] Shamik Kundu, Sanjay Das, Sayar Karmakar, Arnab Raha, Souvik Kundu, Yiorgos Makris, and Kanad Basu. 2024. Bit-by-Bit: Investigating the Vulnerabilities of Binary Neural Networks to Adversarial Bit Flipping. Transactions on Machine Learning Research (2024). 
[20] Aleksander Madry. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017). 
[21] Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, and Marten van Dijk. 2021. Back in black: A comparative evaluation of recent state-of-the-art black-box attacks. IEEE Access 10 (2021), 998-1019. 
[22] Anqi Mao, Mehryar Mohri, and Yutao Zhong. 2023. Cross-entropy loss functions: Theoretical analysis and applications. In International Conference on Machine Learning. PMLR, 23803-23828. 
[23] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1765-1773. 
[24] Najmeh Nazari, Hosein Mohammadi Makrani, Chongzhou Fang, Hossein Sayadi, Setareh Rafatirad, Khaled N Khasawneh, and Houman Homayoun. 2024. Forget and Rewire: Enhancing the Resilience of Transformer-based Models against Bit-Flip Attacks. In 33rd USENIX Security Symposium (USENIX Security 24). 1349-1366. 
[25] Ozan Özdenizci and Robert Legenstein. 2022. Improving robustness against stealthy weight bit-flip attacks by output code matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13388-13397.
[26] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1525-1534. 
[27] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 506-519. 
[28] Cheng Qian, Ming Zhang, Yuanping Nie, Shuaibing Lu, and Huayang Cao. 2023. A survey of bit-flip attacks on deep neural network and corresponding defense methods. Electronics 12, 4 (2023), 853. 
[29] Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. 2019. Bit-flip attack: Crushing neural network with progressive bit search. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1211-1220. 
[30] Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, and Deliang Fan. 2021. T-BFA: Targeted bit-flip adversarial weight attack. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 11 (2021), 7928-7939. 
[31] Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. 2020. Hidden trigger backdoor attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 11957-11965. 
[32] Bodo Selmke, Stefan Brummer, Johann Heyszl, and Georg Sigl. 2015. Precise laser fault injections into 90 nm and 45 nm SRAM cells. In International Conference on Smart Card Research and Advanced Applications. Springer, 193-205. 
[33] Amit Mazumder Shuvo, Tao Zhang, Farimah Farahmandi, and Mark Tehranipoor. 2023. A comprehensive survey on non-invasive fault injection attacks. Cryptology ePrint Archive (2023). 
[34] Yun Xiang, Yongchao Xu, Yingjie Li, Wen Ma, Qi Xuan, and Yi Liu. 2020. Side-channel gray-box attack for DNNs. IEEE Transactions on Circuits and Systems II: Express Briefs 68, 1 (2020), 501-505. 
[35] Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. 2019. DBA: Distributed backdoor attacks against federated learning. In International Conference on Learning Representations. 
[36] Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, Qipeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, et al. 2024. A survey of resource-efficient LLM and multimodal foundation models. arXiv preprint arXiv:2401.08092 (2024). 
[37] Fan Yao, Adnan Siraj Rakin, and Deliang Fan. 2020. DeepHammer: Depleting the intelligence of deep neural networks through targeted chain of bit flips. In 29th USENIX Security Symposium (USENIX Security 20). 1463-1480. 
[38] Yuval Yarom and Katrina Falkner. 2014. FLUSH+RELOAD: A high resolution, low noise, L3 cache side-channel attack. In 23rd USENIX Security Symposium (USENIX Security 14). 719-732. 
[39] Zhilu Zhang and Mert Sabuncu. 2018. Generalized cross entropy loss for training deep neural networks with noisy labels. Advances in Neural Information Processing Systems 31 (2018).

# LLaDA2.0: Scaling Up Diffusion Language Models to 100B

# Abstract

This paper presents LLaDA2.0, a family of discrete diffusion large language models (dLLMs) scaling up to 100B total parameters through systematic conversion from auto-regressive (AR) models, establishing a new paradigm for frontier-scale deployment. Instead of costly training from scratch, LLaDA2.0 upholds the principles of knowledge inheritance, progressive adaptation, and efficiency-aware design, and seamlessly converts a pre-trained AR model into a dLLM with a novel 3-phase block-level WSD-based training scheme: progressively increasing the block size in block diffusion (warm-up), large-scale full-sequence diffusion (stable), and reverting to a compact block size in block diffusion (decay). Along with post-training alignment with SFT and DPO, we obtain LLaDA2.0-mini (16B) and LLaDA2.0-flash (100B), two instruction-tuned Mixture-of-Experts (MoE) variants optimized for practical deployment. By preserving the advantages of parallel decoding, these models deliver superior performance and efficiency at the frontier scale. Both models are open-sourced.

Huggingface: https://hf.co/collections/inclusionAI/llada-20

Figure 1: LLaDA2.0-flash main results.

# 1 Introduction

Large Language Models have achieved remarkable success through the AR paradigm, modeling sequences via next-token prediction with strict left-to-right causal dependencies (Hurst et al., 2024; Grattafori et al., 2024; Yang et al., 2025). This approach naturally aligns with the sequential structure of language and enables efficient training through next-token likelihood maximization. However, the very success of this paradigm creates fundamental limitations: the sequential generation process imposes severe inference bottlenecks, precluding parallelization and increasing latency at scale, while the rigid causal structure can be suboptimal for tasks requiring bidirectional reasoning and holistic understanding.

Discrete Masked Diffusion Language Models (MDLMs) have emerged as a compelling alternative to the prevailing AR paradigm. By reconstructing sequences from random masked inputs, these models inherently support parallel generation and leverage a full bidirectional context, offering a different architectural approach (Gong et al., 2025; Yu et al., 2025). Although these conceptual advantages are clear, the field is still in an early developmental stage. Current research is actively focused on key challenges, including the refinement of specialized training regimes, the design of efficient sampling strategies, the efficient inference of open-source models, and reinforcement learning for MDLMs. As a result of this ongoing exploration, most existing diffusion models, including recent advancements like Block Diffusion Language Models (BDLMs) (Arriola et al., 2025), operate at a smaller scale (e.g., $\leq 8\mathrm{B}$ parameters). Bridging this scale difference to the hundreds of billions of parameters seen in the leading mainstream AR models is a primary frontier for enabling diffusion models to fully capture complex linguistic patterns for practical deployment.

In this work, we introduce the LLaDA2.0 series, diffusion language models with 100B and 16B total parameters, which resolve these fundamental challenges through a two-stage paradigm of continual pre-training (CPT) followed by post-training. Rather than attempting to train diffusion models from scratch, we leverage existing AR checkpoints as the foundation for a systematic conversion process that preserves linguistic knowledge while introducing diffusion capabilities.
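
For readers unfamiliar with the masked-diffusion objective that this conversion targets, the following PyTorch sketch shows one MDLM-style training step: a random fraction of tokens is replaced by a mask token, and the model is trained to reconstruct them from bidirectional context. The tiny stand-in model, vocabulary, mask-token id, and 1/t reweighting are illustrative assumptions, not LLaDA2.0's actual implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative masked-diffusion (MDLM) training step: mask a random fraction of tokens
# and train a tiny stand-in denoiser to reconstruct them from bidirectional context.
torch.manual_seed(0)
vocab_size, mask_id, seq_len, batch = 1000, 999, 32, 4

model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, 64),
    torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    torch.nn.Linear(64, vocab_size),
)

tokens = torch.randint(0, vocab_size - 1, (batch, seq_len))      # clean sequence x_0 (no mask ids)
t = torch.rand(batch, 1).clamp(min=0.15)                         # per-sample mask ratio t
mask = torch.rand(batch, seq_len) < t                            # positions to corrupt
noisy = torch.where(mask, torch.full_like(tokens, mask_id), tokens)

logits = model(noisy)                                            # predict every position in parallel
# Cross-entropy on the masked positions only, with the 1/t reweighting common in MDLM objectives.
per_token = F.cross_entropy(logits[mask], tokens[mask], reduction="none")
loss = (per_token / t.expand_as(mask)[mask]).mean()
loss.backward()
print(float(loss))
```
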
The first stage, CPT, aims to transform the foundational AR model into a capable diffusion language model. However, direct conversion is challenging due to the inherent data distribution gap between left-to-right generation and bidirectional denoising. Although the BDLM formulation partially reduces this gap through blockwise masked reconstruction, it suffers from low data utilization, limiting the effective exploitation of large-scale corpora. To this end, we introduce the Warmup-Stable-Decay (WSD) strategy, smoothly bridging the AR-to-dLLM gap while substantially improving CPT efficiency. WSD gradually expands the model's receptive field to introduce diffusion-style context (Warmup), strengthens global denoising under full-sequence training (Stable), and then refines the model into an efficient blockwise structure (Decay). This progressive adjustment enables a stable and data-efficient transition to diffusion-based learning. Additionally, under full attention in packed training sequences, diffusion models risk forming spurious dependencies across document boundaries, leading to semantic confusion and instability in bidirectional training. To prevent such cross-document interference, we introduce a document-level attention mask that restricts self-attention within individual documents, ensuring coherent context modeling.

The second stage, Post-training for Practical Deployment, transitions the model from a raw predictive engine into a capable and efficient assistant. The random masking nature of the diffusion fine-tuning objective means any single sample provides only a partial learning signal. We address this by employing a complementary masking strategy, which ensures near-$100\%$ data utilization and accelerates convergence by guaranteeing every token contributes to the model's learning. With an efficient foundation for instruction tuning, we then align the model with human preferences by adapting modern techniques such as Direct Preference Optimization (DPO), originally designed for AR models, reformulating the objective over the model's reconstruction loss. Beyond alignment, practical deployment hinges on inference speed. To realize the full promise of parallel decoding, which is often limited by a model's lack of predictive confidence, we incorporate an auxiliary confidence prediction loss. This trains the model to be "sharper" and more certain, unlocking aggressive and efficient parallel generation without degrading quality.

We release instruction-tuned variants for practical deployment: LLaDA2.0-mini (16B parameters) for resource-constrained applications and LLaDA2.0-flash (100B parameters) for high-performance scenarios. Both variants retain the parallel decoding advantages of our diffusion training while being optimized for instruction following and safety through comprehensive post-training alignment. Our contributions provide a practical recipe for the community to leverage AR stability while achieving diffusion parallelism, opening new possibilities for efficient large-scale language modeling.

# 2 Related Work

# 2.1 Training dLLMs from scratch

Auto-regressive language models (Ling et al., 2025; Moonshot, 2025; Liu et al., 2024; Meta-AI, 2025) are typically trained by maximizing the likelihood of predicting the next token. Under this paradigm, model performance has been shown to scale effectively with increasing model size, dataset volume, and computational resources, following well-established scaling laws.
Recently, MDLMs (Song et al., 2025; Ye et al., 2025; Nie et al., 2025) have emerged as an alternative generative framework, reformulating text generation as an iterative denoising process. In each forward step, a subset of tokens is randomly masked, and the model is trained to recover the original tokens conditioned on the remaining unmasked context. Encouraged by this paradigm shift, several studies have explored training MDLMs from scratch to assess their full potential. For instance, LLaDA (Nie et al., 2025) demonstrated that an 8B dense MDLM, trained entirely from scratch, achieves performance competitive with similarly sized AR counterparts. Building upon this, LLaDA-MoE (Zhu et al., 2025) introduced the Mixture-of-Experts (MoE) architecture into the MDLM for the first time, showing that a scratch-trained MoE-based MDLM can surpass dense models in both efficiency and capability, thereby validating the compatibility and scalability of MDLMs with advanced MoE designs. Moreover, due to the fundamentally different training dynamics compared to AR models, established training practices and hyperparameter recipes from the AR domain are often suboptimal for MDLMs. To address this gap, recent efforts such as Quakka (Ni et al., 2025) and OpenMoE2 (Ni & team, 2025) have begun investigating the scaling properties and optimal training strategies specifically tailored for MDLMs, laying the groundwork for principled scaling in this emerging paradigm.

However, from-scratch trained MDLMs still lag behind state-of-the-art AR models in overall performance. This gap can be largely attributed to the disparity in training data volume and the maturity of infrastructure support—factors that have been extensively optimized over years of development for AR models. Moreover, due to the high computational cost and long training cycles required for pretraining from scratch, the MDLMs mentioned above are typically limited in model scale ($\leq 8\mathrm{B}$), whereas leading AR models now routinely scale into tens or even hundreds of billions of parameters.

# 2.2 Scaling dLLMs with AR initialization

Given the strong knowledge capacity and performance of AR models, several recent studies have explored initializing dLLMs from pre-trained AR models to reduce training costs and narrow the performance gap between AR models and dLLMs. For instance, DiffusionLLaMA (Gong et al., 2025) and Dream-7B (Ye et al., 2025) adopt a mask annealing strategy to gradually transition from causal attention to bidirectional attention during training, while employing a CART-based loss reweighting scheme to balance token-level learning dynamics. In contrast, RND1 (Keshigeyan et al., 2025) takes a more direct approach by immediately converting the causal attention mechanism of the AR model into a bidirectional one upon initialization. Notably, RND1 observes that when initializing dLLM training from an AR model, preserving knowledge-intensive capabilities requires constraining updates to the model's dense layers to prevent catastrophic forgetting.

Block Diffusion Language Models (BDLMs) (Arriola et al., 2025) provide a hybrid paradigm that balances efficiency and performance by combining diffusion and AR modeling. Tokens are generated block-wise: within each block, a diffusion process reconstructs masked tokens, while blocks are produced auto-regressively. This design enables variable-length generation and supports KV-cache reuse during decoding, enhancing inference efficiency.
Consequently, BDLMs can be effectively initialized from AR models, narrowing the performance gap. For example, SDAR (Cheng et al., 2025) leverages the Qwen-3 series (Yang et al., 2025) to train more efficient BDLMs. By exploring various block sizes and optimization strategies, it achieves performance comparable to its AR base model. However, one key limitation across all existing methods is their restricted model scale—ranging only from 7B to 30B parameters—leaving the feasibility and scalability of AR-initialized diffusion models largely unexplored at larger scales. In addition, the low training efficiency of block diffusion hinders its wide application to large-scale corpora for large models. Whether such initialization strategies can effectively generalize to models beyond the 30B scale remains an open question.

# 2.3 dLLMs post-training

Beyond pre-training, post-training is crucial for unlocking the full potential of dLLMs by aligning them with specific tasks and human preferences. This process typically involves supervised fine-tuning (SFT) to instill instruction-following capabilities, reinforcement learning (RL) to enhance complex reasoning, and inference optimization to address efficiency bottlenecks.

Recent work has explored SFT to adapt dLLMs for specialized domains. For instance, Dream-Coder (Xie et al., 2025) fine-tunes a 7B dLLM for code generation, demonstrating unique abilities like adaptive "sketch-then-fill" strategies for complex algorithms. Similarly, the general-purpose model Dream-7B (Ye et al., 2025) leverages SFT to achieve performance on par with top-tier AR models, while uniquely excelling at tasks requiring complex planning and constraint satisfaction. Other studies have investigated specialized fine-tuning strategies to balance quality and efficiency. Seed-Diffusion (Song et al., 2025), for example, employs a two-stage curriculum learning strategy to train a high-speed code generation model, while LiDAR (Liu et al., 2025) introduces a hybrid "think in diffusion, generate in AR" architecture through fine-tuning, significantly boosting inference throughput while maintaining quality.

To further enhance dLLMs' reasoning abilities, researchers have begun adapting reinforcement learning techniques. However, applying standard policy gradient methods is challenging due to the intractable log-likelihood of dLLMs. To address this, SPG (Wang et al., 2025a) proposes a novel Sandwich Policy Gradient algorithm that obtains a more robust and less biased gradient by maximizing an evidence lower bound for high-reward samples and minimizing an evidence upper bound for low-reward ones. Another line of work, TraceRL (Wang et al., 2025d), focuses on aligning the training objective with the model's multi-step generation trajectory. This framework led to the TraDo series of models, which have not only surpassed strong AR models on reasoning benchmarks but also produced the first dLLM capable of long-chain-of-thought reasoning.

A significant challenge for dLLMs is their slow inference speed, stemming from the iterative nature of the denoising process. To mitigate this, several acceleration methods have been proposed. DPad (Chen et al., 2025a) offers a training-free solution by treating future tokens as a dynamic "scratchpad" and using a sliding window and distance-based pruning to reduce redundant computations, achieving a dramatic speedup, especially for long sequence generation.
In contrast, D2F (Wang et al., 2025c) introduces a hybrid autoregressive-diffusion paradigm that enables parallel denoising of future text blocks even before preceding ones are fully generated. This approach allows dLLMs to leverage KV-caching and, for the first time, surpass the inference speed of equivalently sized AR models.

Despite these advances, the field of dLLM post-training is still nascent. Systematic exploration of how these techniques—SFT, RL, and acceleration—interact with one another, and how they scale to models with hundreds of billions of parameters, remains an open and critical area for future research.

# 3 LLaDA2.0 Training Paradigm

Figure 2 illustrates the holistic training pipeline of LLaDA2.0, a staged and scalable framework designed to transform AR language models into highly efficient diffusion language models. Our paradigm follows a three-stage progression: (1) Continual Pre-training from AR to MDLM, (2) Block Diffusion Pre-training to transition from token-level to block-level diffusion modeling, and (3) Post-training for alignment and task specialization.

The process begins with a strong AR base model. We first perform continual pre-training to adapt this model into an MDLM, where it learns to reconstruct randomly masked tokens in a bidirectional, denoising fashion. This phase bridges the gap between AR and diffusion-based generation while preserving the representational geometry of the original model. Building upon the trained MDLM, we then introduce block diffusion pre-training, during which the model is further trained to denoise contiguous spans of text—referred to as "blocks"—rather than individual tokens. This shift enables higher computational efficiency and better long-range coherence during generation. Finally, after mastering non-autoregressive generation at both token and block levels, the model undergoes post-training, including SFT and DPO, to align its outputs with human intent, instruction-following capability, and downstream application requirements. This stage ensures that the powerful generative backbone developed during diffusion pre-training translates into practical performance gains across diverse tasks. Overall, LLaDA2.0's training paradigm emphasizes knowledge inheritance, progressive adaptation, and efficiency-aware design, enabling seamless evolution from AR models to fluent, flexible, and fast diffusion large language models.

Figure 2: A schematic of the progressive training framework for transforming an AR model into an MDLM. The Continual Pre-Training stage implements the Warmup-Stable-Decay strategy by scheduling the block size $L_{B}$, enabling smooth, stable, and effective attention-mask adaptation. The Post-training stage reuses the same block diffusion configuration while conducting instruction SFT, Confidence-Aware Parallel SFT, and DPO. The right panel illustrates the document-level block diffusion attention mask, which enables an efficient, vectorized forward pass by constructing a single input sequence from multiple noisy and clean examples, such as $[x_{\mathrm{noisy1}}, \ldots, x_{\mathrm{clean1}}, \ldots]$. The forward pass then employs a combination of block-diagonal $(\mathbf{M}_{\mathrm{BD}})$, offset block-causal $(\mathbf{M}_{\mathrm{OBC}})$, and block-causal $(\mathbf{M}_{\mathrm{BC}})$ masks.

# 4 Continual Pre-training via Warmup-Stable-Decay (WSD)

# Takeaway

(1) Warmup-Stable-Decay enables a smooth and data-efficient conversion from AR to dLLMs.
(2) The document-level attention mask ensures coherent bidirectional modeling within semantic boundaries.

(3) Top-k Checkpoint Merge enhances performance and generalization by averaging the top k model checkpoints.

Converting a pre-trained AR language model into a high-performance diffusion language model is fundamentally challenging due to the misalignment in architectural inductive biases and training objectives. While AR models generate tokens sequentially from left to right, diffusion-based models rely on bidirectional context and learn to reconstruct corrupted sequences in arbitrary unmasking orders. A direct objective switch often leads to unstable optimization and severe degradation of pretrained knowledge. To address this gap, we propose a Warmup-Stable-Decay (WSD) continual pre-training strategy that enables a smooth, stable, and effective transition from AR to dLLM. WSD decomposes the conversion into three coordinated phases:

- Warmup: Progressively increase the block size in block diffusion language models (BDLM) to gradually transform the AR model into a full-sequence masked diffusion language model (MDLM).
- Stable: Stabilize and enrich the model's understanding of diffusion dynamics through large-scale training under the MDLM paradigm.
- Decay: Revert to a compact BDLM with smaller block sizes to achieve better speed-efficiency trade-offs during inference.

This progressive schedule preserves the AR model's priors while steadily adapting it to the structural requirements of diffusion modeling. Moreover, the document-level attention mask is applied throughout training to all input sequences. This mechanism is crucial for handling packed heterogeneous documents, preventing the model from forming spurious connections across unrelated texts, thereby ensuring semantic coherence and improving learning stability within each document. In addition, we adopt a top-k checkpoint merging strategy (Tian et al., 2025) to enhance generalization by averaging the parameters of the best-performing checkpoints, smoothing the parameter landscape, and yielding a more robust final model with boosted performance.

# 4.1 Warmup-Stable-Decay Conversion Strategy

We begin with the AR base models Ling-mini-2.0 and Ling-flash-2.0 (Ling et al., 2025), which can be viewed as special cases of BDLMs with block size 1. This perspective allows us to treat the AR models as the initial BDLM configuration with minimal granularity.

Phase-1: Progressive Block Size Warmup The core idea of the warmup phase is to gradually increase the block size, thereby expanding the receptive field within which the model performs joint denoising. Starting from block size $L_{B} = 1$, we incrementally scale it up to 4, 32, then 64, and ultimately reach $L_{B} = 4096$, at which point the entire sequence is treated as one single block. To avoid fragmented blocks, we require the sequence length to be divisible by the current block size. At the final enlargement, the BDLM becomes equivalent to a standard MDLM that operates over fully masked sequences with global attention. Crucially, each block-size transition is trained on moderate-scale data to ensure smooth adaptation. This progressive enlargement allows the model to smoothly adapt its internal representations to handle larger contextual spans and more complex masking patterns.
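To make the warmup schedule concrete, the following is a minimal sketch of how such a progressive block-size schedule could drive a training loop. The block sizes follow the progression described above (1 → 4 → 32 → 64 → 4096); the per-stage token budgets and the `train_fn` callback are hypothetical placeholders, not values or interfaces from the paper.

```python
# Minimal sketch of the Phase-1 progressive block-size warmup.
# Block sizes follow the schedule in the text; token budgets per stage
# and the training callback are illustrative placeholders.

SEQ_LEN = 4096  # packed sequence length used during CPT

# (block_size, tokens_to_train_at_this_block_size); budgets are hypothetical
WARMUP_SCHEDULE = [
    (1, 10e9),     # AR-equivalent starting point
    (4, 20e9),
    (32, 40e9),
    (64, 40e9),
    (4096, None),  # full-sequence MDLM; handled by the Stable phase
]


def run_warmup(train_fn, seq_len: int = SEQ_LEN) -> None:
    """Step through the block-size schedule, enforcing the divisibility
    constraint required to avoid fragmented blocks."""
    for block_size, token_budget in WARMUP_SCHEDULE:
        assert seq_len % block_size == 0, (
            f"sequence length {seq_len} must be divisible by block size {block_size}"
        )
        num_blocks = seq_len // block_size
        print(f"block_size={block_size}, blocks per sequence={num_blocks}")
        if token_budget is None:
            break  # the 4096 stage corresponds to full-sequence (Stable) training
        train_fn(block_size=block_size, token_budget=token_budget)


if __name__ == "__main__":
    # Dummy training callback for illustration only.
    run_warmup(lambda block_size, token_budget: None)
```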
Phase-2: Large Scale Stable Training Once the block size reaches 4096 and the model transitions to the MDLM pattern, the "clean" part of the attention computation (see Figure 2) no longer needs to be maintained. This significantly reduces the computational cost of attention, allowing data to be processed far more efficiently under the MDLM paradigm. With the model now fully adapted to this regime, the stable training phase focuses on deepening its understanding of diffusion dynamics through extensive training on large-scale corpora. At this stage, the block size is fixed at 4096, effectively making every input a single-block sequence, equivalent to the classical MDLM setting.

Phase-3: Block Size Decay Finally, after large-scale MDLM training, we gradually reduce the block size from 4096 to a small block size (e.g., 32) to convert the model back into an efficient BDLM. This decay process distills the global contextual knowledge learned during MDLM training into a compact blockwise structure. By decreasing the block size step-by-step (e.g., first from 4096 to 2048) rather than abruptly, the model smoothly adapts from global to local conditioning, preserving its semantic understanding while regaining BDLM's practical benefits such as KV-cache reuse and fast variable-length generation.

Overall Training Objective The optimization objective of BDLM (Arriola et al., 2025) is designed to enable the model to accurately reconstruct the original, uncorrupted tokens within the masked blocks using a standard cross-entropy loss. Specifically, we define the training loss during the warmup and decay phases (phases 1 and 3) under the BDLM paradigm as:

$$ \mathcal{L}_{\mathrm{BDLM}}(\theta) = -\mathbb{E}_{t, \boldsymbol{x}_0, \boldsymbol{x}_t}\left[\frac{\alpha_t'}{1-\alpha_t}\sum_{k=1}^{K}\sum_{i=1}^{L_B}\mathbb{1}\left[x_{t,k}^i = [\mathrm{MASK}]\right]\log p_{\theta}(x_{0,k}^i \mid \boldsymbol{x}_{0,<k}, \boldsymbol{x}_{t,k})\right], \tag{1} $$

where the expectation is over timestep $t$, the clean sequence $\boldsymbol{x}_0$, and its corrupted version $\boldsymbol{x}_t$ (tokens masked with probability $1-\alpha_t$). The indicator $\mathbb{1}[\cdot]$ ensures predictions are made only for masked tokens, and $-\alpha_t'/(1-\alpha_t)$ is the diffusion-derived time weight. Here $K = L_{\mathrm{total}}/L_B$ is the number of blocks, $L_B$ the block size, $x_{t,k}^i$ the $i$-th token in block $k$, $\boldsymbol{x}_{0,<k}$ the preceding clean blocks, and $\boldsymbol{x}_{t,k}$ the noisy version of the current block. During the stable training (phase 2) of MDLM (i.e., $K=1$), the objective simplifies to:

$$ \mathcal{L}_{\mathrm{MDLM}}(\theta) = -\mathbb{E}_{t, \boldsymbol{x}_0, \boldsymbol{x}_t}\left[\frac{\alpha_t'}{1-\alpha_t}\sum_{i=1}^{L}\mathbb{1}\left[x_t^i = [\mathrm{MASK}]\right]\log p_{\theta}(x_0^i \mid \boldsymbol{x}_t)\right]. \tag{2} $$

# 4.2 Document-level Attention Mask

Our training sequences are formed by packing heterogeneous documents into fixed-length segments to maximize throughput. However, this introduces artificial long-range dependencies across semantically unrelated texts. Without careful handling, standard attention would incorrectly attend across document boundaries, leading to contextual confusion and significantly hindering the model's ability to perform robust bidirectional modeling crucial for denoising.
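Before formalizing the mask, here is a minimal illustration of the intended restriction: given a per-position document id for a packed sequence, attention is allowed only between positions of the same document, matching the MDLM-style mask later given in Eq. (4). The function name, shapes, and example lengths are illustrative assumptions, not the paper's implementation.

```python
import torch


def document_level_mask(doc_ids: torch.Tensor) -> torch.Tensor:
    """Build a boolean attention mask for one packed sequence.

    doc_ids: shape (L,), where doc_ids[i] is the index of the document that
    position i belongs to. Returns an (L, L) mask where mask[i, j] is True
    iff positions i and j belong to the same document.
    """
    return doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)


# Example: three documents of lengths 3, 2, and 3 packed into a sequence of length 8.
doc_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
mask = document_level_mask(doc_ids)
print(mask.int())  # block-diagonal pattern, one block per document
```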
To mitigate this fundamental challenge and preserve semantic coherence, we redefine the attention mechanism with a specialized block-wise document-level attention mask. This mask ensures that attention operates strictly within document boundaries, preventing cross-document contamination and allowing the model to fully leverage bidirectional context for accurate reconstruction of corrupted blocks. Native block diffusion vectorizes the training process to enable parallel training over blocks, and this mask is applied accordingly.

Specifically, for a concatenated sequence $x_{\mathrm{full}}$ of length $2L$ (comprising $x_{t}$ followed by $x_{0}$), and assuming tokens $i$ and $j$ are already confined to the same document segment (as enforced by the initial document-level mask), the attention mask $M \in \{0,1\}^{2L \times 2L}$ is constructed by dividing each sequence ($x_{t}$ and $x_{0}$) into contiguous blocks. Let $b(k) = \lfloor k / L_{B} \rfloor$ denote the block index for token $k$ given a block size $L_{B}$. The mask is defined as:

$$ M_{ij} = \begin{cases} \mathbb{1}_{b(i) = b(j)} & \text{if } i \in x_t \text{ and } j \in x_t, \\ \mathbb{1}_{b(i) > b(j - L)} & \text{if } i \in x_t \text{ and } j \in x_0, \\ \mathbb{1}_{b(i - L) \geq b(j - L)} & \text{if } i \in x_0 \text{ and } j \in x_0, \\ 0 & \text{otherwise,} \end{cases} \tag{3} $$

where $i, j \in \{0, 1, \dots, 2L - 1\}$ are the indices in the full sequence. The first condition ($\mathbb{1}_{b(i) = b(j)}$) implements block-diagonal attention within the noisy sequence $x_{t}$. The second ($\mathbb{1}_{b(i) > b(j - L)}$) enables cross-attention from $x_{t}$ to $x_{0}$, but only from blocks in $x_{t}$ to earlier blocks in $x_{0}$. The third ($\mathbb{1}_{b(i - L) \geq b(j - L)}$) imposes a causal block attention pattern within the clean sequence $x_{0}$, allowing a block to attend to itself and all preceding blocks. The "otherwise" condition corresponds to a zero matrix, explicitly preventing attention from queries in $x_{0}$ to keys in $x_{t}$. This allows each block to leverage context from relevant blocks (according to the mask) for reconstruction, capturing inter-block dependencies while maintaining the causal and block-diagonal principles essential for stable diffusion training.

During our exploration, we also experimented with other tricks like random-length (Xie et al., 2025) and CART (Ye et al., 2025). However, the results demonstrate that the document-level attention mask is more fundamental in CPT training compared to these techniques, and it consistently achieves superior performance. As illustrated in Figure 2, this forms a structured attention layout that balances locality and global document coherence. For MDLM, the document-level attention mask simplifies to $M \in \{0,1\}^{L \times L}$, where:

$$ M_{ij} = \begin{cases} 1, & \text{if } i, j \text{ belong to the same document}, \\ 0, & \text{otherwise.} \end{cases} \tag{4} $$

# 4.3 Top-k Checkpoint Merge

To further enhance the generalization and robustness of our Block Diffusion Language Model, we employ a top-k checkpoint merging strategy. Upon completion of BDLM pre-training, we identify the top $k$ best-performing model checkpoints, typically selected based on validation metrics like perplexity. The parameters (weights and biases) of these $k$ checkpoints are then arithmetically averaged to form a single, unified BDLM.
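As a concrete illustration of this averaging step, below is a minimal sketch that loads $k$ checkpoint state dicts and averages every parameter tensor elementwise. The file paths, the assumption that all checkpoints share identical keys, and the float casting are illustrative choices; the paper does not specify an implementation.

```python
import torch


def merge_topk_checkpoints(ckpt_paths):
    """Arithmetically average the parameters of the top-k checkpoints.

    ckpt_paths: paths to k state_dicts saved with torch.save (hypothetical paths).
    Returns a single merged state_dict holding the elementwise mean of every tensor.
    """
    assert len(ckpt_paths) > 0
    merged = None
    for path in ckpt_paths:
        state = torch.load(path, map_location="cpu")
        if merged is None:
            merged = {name: t.clone().float() for name, t in state.items()}
        else:
            for name, t in state.items():
                merged[name] += t.float()
    k = len(ckpt_paths)
    return {name: t / k for name, t in merged.items()}


# Usage sketch: average the k best checkpoints selected by validation perplexity.
# merged_state = merge_topk_checkpoints(["ckpt_a.pt", "ckpt_b.pt", "ckpt_c.pt"])
# model.load_state_dict(merged_state)
```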
Based on the WSM scheduler (Tian et al., 2025), this merge strategy effectively ensembles the diverse "knowledge" captured by the model at various optimal or near-optimal training states. This smooths the parameter landscape, mitigates overfitting, and yields a more stable and generalizable model. A key advantage of the WSM approach is its optimizer-agnostic nature, allowing seamless integration without altering the underlying training pipeline. Crucially, this post-training Top-k Merge fundamentally differs from the Exponential Moving Average (EMA). While EMA is an in-training technique that continuously smooths parameters, merging is an offline procedure. It explicitly selects and averages distinct, high-performing model states, consolidating their strengths rather than merely smoothing the final training step.

# 5 Post-training

# Takeaway

(1) Applying complementary masking and a mask ratio bandwidth during SFT improves sample efficiency and stabilizes convergence.

(2) An auxiliary confidence loss is incorporated to sharpen predictions, which is crucial for efficient parallel decoding.

(3) DPO is adapted by defining sequence log-probabilities over masked tokens, enabling effective preference alignment for the diffusion model.

# 5.1 Supervised Fine-Tuning with Block Diffusion

Following the pre-training phase, the model is aligned to follow user instructions through supervised fine-tuning (SFT). This is achieved by adapting the diffusion training objective to be conditional on an input prompt, $\pmb{c}$. The model is thus trained to generate the desired response $\pmb{x}_0$ by minimizing the following loss function:

$$ \mathcal{L}_{\mathrm{SFT}}(\theta) = -\mathbb{E}_{t, (\boldsymbol{c}, \boldsymbol{x}_0), \boldsymbol{x}_t}\left[\frac{\alpha_t'}{1-\alpha_t}\sum_{k=1}^{K}\sum_{i=1}^{L_B}\mathbb{1}\left[x_{t,k}^i = [\mathrm{MASK}]\right]\log p_{\theta}(x_{0,k}^i \mid \boldsymbol{c}, \boldsymbol{x}_{0,<k}, \boldsymbol{x}_{t,k})\right]. \tag{5} $$

Here, the model $p_{\theta}$ learns to predict the original tokens $x_{0,k}^{i}$ of a clean response from a noisy version $\boldsymbol{x}_t$. The loss is computed only on masked tokens within the current noisy block $\boldsymbol{x}_{t,k}$. To do this, the prediction is conditioned on the prompt $\boldsymbol{c}$, the auto-regressive context from prior clean blocks $\boldsymbol{x}_{0, < k}$, and the current noisy block $\boldsymbol{x}_{t,k}$ that it must denoise.

Padding strategies & Mask ratio bandwidth To ensure compatibility with our block-wise attention mask, we quantize each sequence's length. Specifically, the original length is rounded up to the nearest multiple of the block size $L_B$. This process defines an "effective length" for each sequence, guaranteeing its boundaries align perfectly with the block boundaries required by the attention mechanism. To optimize the training dynamics, we further implement a "mask ratio bandwidth" strategy. Standard discrete diffusion processes typically sample mask probabilities across the full unit interval, $\alpha_{t} \sim U(0, 1)$. However, as identified by Arriola et al. (2025), extreme masking rates induce high gradient variance while offering minimal learning signal: near-zero masking renders reconstruction trivial, while near-total masking reduces the objective to simply learning data margins.
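The next two paragraphs describe the bounded mask-rate sampling that addresses this, together with the complementary masking strategy used during SFT. The following is a minimal sketch of both mechanics on a single block; the interval bounds, the mask token id, and the tensor shapes are illustrative assumptions rather than the paper's actual configuration.

```python
import torch

MASK_ID = 126336  # hypothetical [MASK] token id, chosen only for illustration


def corrupt_block(block: torch.Tensor, mask_rate_band=(0.2, 0.8)):
    """Mask a response block with a rate drawn from a bounded interval, and
    also build the complementary instance (logical inverse of the mask), so
    that every position appears uncorrupted exactly once across the pair."""
    low, high = mask_rate_band
    rate = torch.empty(1).uniform_(low, high).item()          # clipped mask rate
    mask = torch.rand_like(block, dtype=torch.float) < rate   # True = corrupt this token

    noisy = block.clone()
    noisy[mask] = MASK_ID

    noisy_complement = block.clone()
    noisy_complement[~mask] = MASK_ID                          # inverse mask

    return noisy, noisy_complement, mask


# Example on a toy block of token ids; both instances would go into the same batch.
block = torch.tensor([11, 12, 13, 14, 15, 16, 17, 18])
noisy, noisy_comp, mask = corrupt_block(block)
```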
To mitigate this, we clip the noise schedule, constraining the sampling of mask rates to a bounded interval $[\alpha_{\min}, \alpha_{\max}]$ rather than the full range. This bandwidth restriction focuses the training objective on the noise regimes that provide the most informative gradients, thereby stabilizing convergence and improving the model's generative perplexity. Complementary Masking Complementary Masking (Li et al., 2025) is a training optimization that enhances the data efficiency of the MDLM objective, $\mathcal{L}_{\mathrm{MDLM}}(\theta)$ . The strategy's core principle is to generate two antithetical training instances from a single source sequence $x_0$ . A primary noised sequence, $x_{t}$ , is formed using a random mask, while a complementary sequence, $x_{t}^{\prime}$ , is simultaneously produced using that mask's logical inverse<sup>1</sup>. By incorporating both $\pmb{x}_t$ and $\pmb{x}_t'$ into the same training batch, this method provides a deterministic guarantee: every token position across the sequence length $L$ is presented to the model in its uncorrupted state exactly once within the pair. This not only doubles the effective data utilization from each sample, thereby accelerating convergence, but also entirely eliminates token-level sampling bias. Consequently, the model benefits from a more comprehensive and uniform learning signal at every optimization step, leading to enhanced robustness. Data Recipe Curation A balanced, high-quality SFT dataset underpins the model's capabilities, achieved through a strategic composition of tasks spanning three principal pillars: Reasoning, General, and Industrial. The Reasoning pillar hones analytical and logical faculties through mathematics and code generation. The General pillar cultivates linguistic richness and social intelligence via creative and dialogic tasks. The Industrial pillar embeds domain-specific expertise by simulating end-to-end workflows under real-world constraints. This integrated methodology ensures a holistic skill profile, preventing capability skew and enabling fluid shifts between abstract reasoning and applied problem-solving. # 5.2 Confidence-Aware Parallel Training To enhance the model's predictive confidence, which is crucial for efficient parallel decoding, we propose Confidence-Aware Parallel (CAP) Training. We incorporate an auxiliary confidence loss, $\mathcal{L}_{\mathrm{conf}}$ , inspired by dParallel (Chen et al., 2025b). The primary objective, $\mathcal{L}_{\mathrm{SFT}}$ , ensures correctness but provides diminishing incentive to sharpen the predictive distribution for tokens that are already correctly predicted. The confidence loss addresses this by selectively minimizing the entropy of the model's output distribution, $p_{\theta}(\boldsymbol{x}_0|\boldsymbol{x}_t,\boldsymbol{c})$ , but only for the subset of tokens that are correctly predicted in a given step. This compels the model to increase its certainty on its correct predictions. The final training objective is a weighted combination of the two losses: $$ \mathcal {L} (\theta) = \mathcal {L} _ {\mathrm {S F T}} (\theta) + \lambda \mathcal {L} _ {\text {c o n f}} (\theta), \tag {6} $$ Figure 3: Average score and tokens-per-forward (TPF) for LLaDA2.0-flash with and without CAP across 12 benchmarks. Inference speed (tokens per second) of LLaDA2.0-flash compared with similarly sized AR models on 4 code and math benchmarks. 
where $\lambda$ is a hyperparameter that balances the two objectives. As illustrated in Figure 3, CAP training effectively improves the decoding efficiency of LLaDA2.0-flash while maintaining competitive performance, demonstrating a favorable trade-off between generation quality and inference speed.

# 5.3 DPO

Building upon the SFT stage, we further align the policy model $\pi_{\theta}$ with human intent using Direct Preference Optimization. To support this, we constructed a comprehensive dataset comprising 1.5 million preference pairs across diverse domains, including general knowledge, mathematics, and instruction following. To ensure a stable transition in optimization, the learning rate for the DPO stage is initialized consistently with the final learning rate of the preceding SFT phase.

Since the policy model $\pi_{\theta}$ is trained to reconstruct clean tokens $x_0$ from noisy blocks $x_{t}$ conditioned on context $c$, the standard DPO formulation—which requires exact log-likelihoods—is intractable. Following established practices for diffusion models, we substitute the conditional log-likelihoods with their ELBOs. We first define the conditional Block Diffusion ELBO, $B_{\mathrm{BDLM}}(\theta, \boldsymbol{x} \mid \boldsymbol{c})$, for a response $\pmb{x}$. This term mirrors the inner objective of our SFT loss (Equation 5) and is estimated via a single Monte Carlo sample over timesteps and noise:

$$ B_{\mathrm{BDLM}}(\theta, \boldsymbol{x} \mid \boldsymbol{c}) = \mathbb{E}_{t, \boldsymbol{x}_t}\left[\frac{\alpha_t'}{1-\alpha_t}\sum_{k=1}^{K}\sum_{i=1}^{L_B}\mathbb{1}\left[x_{t,k}^i = [\mathrm{MASK}]\right]\log p_{\theta}(x_k^i \mid \boldsymbol{c}, \boldsymbol{x}_{<k}, \boldsymbol{x}_{t,k})\right]. \tag{7} $$

Given a preference pair $(\pmb{x}_w, \pmb{x}_l)$, where $\pmb{x}_w$ is the preferred response and $\pmb{x}_l$ is the dispreferred response, the DPO objective maximizes the margin between the ELBO estimates of the policy $\pi_{\theta}$ and the frozen reference model $\pi_{\theta_{\mathrm{ref}}}$ (initialized from the post-SFT model). The final loss function is defined as:

$$ \mathcal{L}_{\mathrm{DPO}}(\theta) = -\mathbb{E}_{(\boldsymbol{c}, \boldsymbol{x}_w, \boldsymbol{x}_l) \sim \mathcal{D}}\left[\log \sigma\left(\beta\left[\Delta B(\boldsymbol{x}_w \mid \boldsymbol{c}) - \Delta B(\boldsymbol{x}_l \mid \boldsymbol{c})\right]\right)\right], \tag{8} $$

where $\Delta B(\pmb{x} \mid \pmb{c}) = B_{\mathrm{BDLM}}(\theta, \pmb{x} \mid \pmb{c}) - B_{\mathrm{BDLM}}(\theta_{\mathrm{ref}}, \pmb{x} \mid \pmb{c})$ represents the ELBO advantage of the policy over the reference model, and $\beta$ is a hyperparameter (set to 0.1) that controls the deviation from the reference policy.

# 5.4 Inference

We generate one block at a time, conditioned on previously sampled blocks, i.e., $p_{\theta}(x_s^b \mid c, x_t^{<b})$. The generation of each block is itself a multi-step iterative refinement process. At each step, candidate tokens are sampled for all remaining unfilled positions within the block. A hybrid acceptance strategy is then employed: we first accept all tokens whose sampling probability exceeds a predefined confidence threshold.
If an insufficient number of tokens meet this criterion, a low-confidence fallback is triggered, where we instead accept a fixed number of the most probable tokens regardless of their absolute confidence. This dual mechanism ensures steady generation progress. # 6 Evaluation # 6.1 Setup To comprehensively evaluate the quality of instruction-tuned models, we employ a diverse suite of benchmarks categorized into five dimensions: - **Knowledge:** MMLU (Hendrycks et al., 2020), MMLU-Pro (Wang et al., 2024), GPQA-Diamond (Rein et al., 2024), ARC (Clark et al., 2018), CMMLU (Li et al., 2023a) C-Eval (Huang et al., 2023), GAOKAO-Bench (Zhang et al., 2023), SciBench (Wang et al., 2023), PHYBench (Qiu et al., 2025), TriviaQA (Joshi et al., 2017) - Reasoning: SQuAD 2.0 (Rajpurkar et al., 2018), DROP (Dua et al., 2019), KOR-Bench (Ma et al., 2024), HellaSwag (Zellers et al., 2019), BIG-Bench Hard (Suzgun et al., 2023), BIG-Bench Extra Hard (Kazemi et al., 2025), MuSR (Sprague et al., 2023), ZebraLogic (Lin et al., 2025), PrOntoQA (Saparov & He, 2022), PIQA (Bisk et al., 2020), OCNLI (Hu et al., 2020), BIG-Bench Hard-CN (team, 2023c) Coding: CRUXEval (Gu et al., 2024), MBPP (Austin et al., 2021), MultiPL-E (Cassano et al., 2023), HumanEval (Chen et al., 2021), BigCodeBench (Zhuo et al., 2024), LiveCodeBench (Jain et al., 2024), Spider (Yu et al., 2018), BIRD (Li et al., 2023b), HumanEval+ (Liu et al., 2023), MBPP+ (Liu et al., 2023), HumanEvalFix (Muennighoff et al., 2023), Aider (team, 2023a), HumanEval-CN (team, 2023c) - Math: GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), OlympiadBench (He et al., 2024), AIME 2025 (AIME, 2025), Omni-MATH (Gao et al., 2024), HARDMath2 (Roggeveen et al., 2025), GSM-Plus (Li et al., 2024), CMATH (Wei et al., 2023) - Agent & Alignment: BFCL (Patil et al., 2025), IFEval (Zhou et al., 2023), CodeIF-Bench (Wang et al., 2025b), Nexus Function Calling Benchmark (team, 2023b) This extensive evaluation suite, comprising a total of 47 benchmarks, provides a holistic foundation for assessing model capabilities. In our experiments, we compare the LLaDA2.0 series against strong open-source auto-regressive (AR) models. For all LLaDA2.0 models, we utilize a temperature of 0.0, a block size of 32, and a decoding threshold of 0.95. # 6.2 Results The overall results, presented in the following tables, indicate that the LLaDA2.0 architecture is not only highly competitive, but also shows a promising trend of closing the performance gap with, and even surpassing, AR models in specific key areas. Our models consistently demonstrate strong, and often superior, performance in complex, structured tasks. For instance, LLaDA2.0-mini already outperforms a comparable AR model (Qwen3-8B) in the domains of Reasoning, Coding, and Math. This signal is amplified in our larger model, as LLaDA2.0-flash achieves parity with the powerful Qwen3-30B-A3B-Instruct-2507 and establishes a lead in the critical Coding and Agent domains. This suggests that as diffusion models scale, their inherent strengths in structured generation and tool use become increasingly apparent. As shown in Table 1, LLaDA2.0-mini achieves a competitive average score of 64.34, closely approaching its AR peer, Ling-mini-2.0 (65.77). This demonstrates the fundamental viability of the diffusion approach. More importantly, it shows promising signals in complex tasks, outperforming its direct competitor on reasoning benchmarks like SQuAD 2.0 (86.50) and demonstrating more robust instruction following on IFEval (80.78). 
Its strong performance in coding tasks such as HumanEval (86.59) further suggests an early aptitude for structured generation. This potential becomes even more evident with our larger model, LLaDA2.0-flash. As shown in Table 2, with an average score of 73.18, it stands firmly on par with strong AR models such as Qwen3-30B-A3B-Instruct-2507 (73.60). Crucially, LLaDA2.0-flash begins to exhibit clear advantages in complex generative tasks, a sign that the diffusion architecture may hold inherent strengths. In the critical domain of coding, it consistently outperforms its AR peers, scoring higher on HumanEval (94.51), MBPP (88.29) and MultiPL-E (74.87). This trend of surpassing AR models also extends to agent capabilities (BFCL v3: 75.43) and advanced mathematics (AIME 2025: 60.00). In conclusion, the LLaDA2.0 series successfully demonstrates that diffusion-based language models are a powerful and scalable alternative to the dominant auto-regressive paradigm. While rapidly narrowing the gap on general benchmarks, they are already showcasing the potential to surpass traditional architectures in complex, structured domains like code generation and tool use. This positions diffusion models as a highly promising direction for the future of language generation. Table 1: Benchmark Performance of LLaDA2.0-mini <table><tr><td>Benchmark</td><td>Qwen3-8B (no Think)</td><td>Ling-mini-2.0</td><td>LLaDA2.0-mini-preview</td><td>LLaDA2.0-mini</td></tr><tr><td>Average</td><td>63.42</td><td>65.77</td><td>54.67</td><td>64.34</td></tr><tr><td colspan="5">Knowledge</td></tr><tr><td>MMLU</td><td>80.94</td><td>82.15</td><td>72.49</td><td>80.53</td></tr><tr><td>MMLU-Pro</td><td>65.48</td><td>63.72</td><td>49.22</td><td>63.22</td></tr><tr><td>CMMLU</td><td>79.17</td><td>80.84</td><td>67.53</td><td>79.50</td></tr><tr><td>C-EVAL</td><td>81.36</td><td>82.10</td><td>66.54</td><td>81.38</td></tr><tr><td>GAOKAO-Bench</td><td>84.94</td><td>87.23</td><td>74.46</td><td>84.30</td></tr><tr><td>ARC-c</td><td>93.35</td><td>93.09</td><td>89.15</td><td>93.56</td></tr><tr><td>GPQA</td><td>46.59</td><td>56.80</td><td>23.74</td><td>47.98</td></tr><tr><td>SciBench</td><td>2.85</td><td>5.28</td><td>4.10</td><td>3.53</td></tr><tr><td>PHYBench</td><td>9.76</td><td>14.59</td><td>5.08</td><td>11.70</td></tr><tr><td>TriviaQA</td><td>52.51</td><td>55.63</td><td>50.49</td><td>51.33</td></tr><tr><td colspan="5">Reasoning</td></tr><tr><td>BIG-Bench Hard</td><td>79.48</td><td>83.70</td><td>70.64</td><td>78.21</td></tr><tr><td>BIG-Bench Extra Hard</td><td>18.27</td><td>14.81</td><td>12.36</td><td>16.47</td></tr><tr><td>bbh-zh</td><td>80.09</td><td>66.11</td><td>66.62</td><td>75.75</td></tr><tr><td>MuSR</td><td>70.02</td><td>71.36</td><td>56.77</td><td>71.48</td></tr><tr><td>ZebraLogic</td><td>37.48</td><td>79.85</td><td>14.80</td><td>64.20</td></tr><tr><td>PrOntoQA</td><td>93.12</td><td>96.06</td><td>70.00</td><td>86.00</td></tr><tr><td>PIQA</td><td>88.30</td><td>87.54</td><td>84.33</td><td>86.51</td></tr><tr><td>OCNLI</td><td>61.49</td><td>60.17</td><td>58.68</td><td>64.51</td></tr><tr><td>HellaSwag</td><td>79.56</td><td>69.02</td><td>74.01</td><td>79.01</td></tr><tr><td>KOR-Bench</td><td>54.48</td><td>62.72</td><td>37.26</td><td>50.40</td></tr><tr><td>DROP</td><td>84.56</td><td>78.80</td><td>79.49</td><td>81.91</td></tr><tr><td>SQuAD 2.0</td><td>85.21</td><td>75.56</td><td>85.61</td><td>86.50</td></tr><tr><td 
colspan="5">Coding</td></tr><tr><td>CRUXEval-O</td><td>74.06</td><td>76.12</td><td>61.88</td><td>71.62</td></tr><tr><td>MBPP</td><td>78.92</td><td>84.07</td><td>77.75</td><td>81.50</td></tr><tr><td>MBPP+</td><td>71.96</td><td>76.46</td><td>66.67</td><td>74.07</td></tr><tr><td>MultiPL-E</td><td>61.70</td><td>67.09</td><td>62.43</td><td>67.46</td></tr><tr><td>HumanEval</td><td>84.76</td><td>85.98</td><td>80.49</td><td>86.59</td></tr><tr><td>HumanEval+</td><td>78.66</td><td>81.71</td><td>71.95</td><td>79.88</td></tr><tr><td>HumanEvalFix</td><td>76.02</td><td>82.83</td><td>60.16</td><td>74.90</td></tr><tr><td>HumanEval-cn</td><td>74.39</td><td>71.34</td><td>73.17</td><td>78.66</td></tr><tr><td>BigIntCodeBench-Full</td><td>36.05</td><td>35.00</td><td>30.44</td><td>32.89</td></tr><tr><td>LiveCodeBench</td><td>26.38</td><td>34.97</td><td>19.82</td><td>31.50</td></tr><tr><td>Aider</td><td>55.64</td><td>49.62</td><td>28.57</td><td>39.85</td></tr><tr><td>BIRD-SQL</td><td>36.11</td><td>39.67</td><td>27.71</td><td>39.34</td></tr><tr><td>Spider</td><td>72.80</td><td>76.43</td><td>75.64</td><td>76.76</td></tr><tr><td colspan="5">Math</td></tr><tr><td>GSM8K</td><td>93.63</td><td>94.62</td><td>89.01</td><td>94.24</td></tr><tr><td>MATH</td><td>86.28</td><td>94.66</td><td>73.50</td><td>93.22</td></tr><tr><td>OlympiadBench</td><td>55.33</td><td>72.30</td><td>36.30</td><td>67.70</td></tr><tr><td>AIME 2025</td><td>22.08</td><td>47.66</td><td>10.00</td><td>36.67</td></tr><tr><td>HARDMath2</td><td>7.58</td><td>9.95</td><td>0.95</td><td>0.47</td></tr><tr><td>Omni-MATH</td><td>33.20</td><td>48.80</td><td>19.20</td><td>41.70</td></tr><tr><td>GSM-Plus</td><td>86.09</td><td>87.82</td><td>81.44</td><td>86.24</td></tr><tr><td>CMATH</td><td>95.42</td><td>96.40</td><td>90.53</td><td>95.72</td></tr><tr><td colspan="5">Agent &amp; Alignment</td></tr><tr><td>IFEval-strict-prompt</td><td>86.90</td><td>76.16</td><td>62.50</td><td>80.78</td></tr><tr><td>BFCL v3</td><td>70.08</td><td>53.98</td><td>74.11</td><td>70.90</td></tr><tr><td>CodeIF-Bench</td><td>50.00</td><td>46.00</td><td>48.00</td><td>48.00</td></tr><tr><td>Nexus FC</td><td>37.71</td><td>34.38</td><td>33.68</td><td>35.18</td></tr></table> Table 2: Benchmark Performance of LLaDA2.0-flash <table><tr><td>Benchmark</td><td>Qwen3-30B-A3B-Instruct-2507</td><td>Ling-flash-2.0</td><td>LLaDA2.0-flash-preview</td><td>LLaDA2.0-flash</td></tr><tr><td>Average</td><td>73.60</td><td>72.15</td><td>65.97</td><td>73.18</td></tr><tr><td colspan="5">Knowledge</td></tr><tr><td>MMLU</td><td>87.13</td><td>87.98</td><td>83.15</td><td>87.69</td></tr><tr><td>MMLU-Pro</td><td>74.23</td><td>76.84</td><td>66.16</td><td>73.36</td></tr><tr><td>CMMLU</td><td>86.36</td><td>86.59</td><td>79.64</td><td>85.13</td></tr><tr><td>C-EVAL</td><td>88.17</td><td>88.03</td><td>79.28</td><td>86.75</td></tr><tr><td>GAOKAO-Bench</td><td>94.53</td><td>93.24</td><td>86.12</td><td>93.90</td></tr><tr><td>ARC-c</td><td>95.81</td><td>95.08</td><td>93.90</td><td>95.93</td></tr><tr><td>GPQA</td><td>57.34</td><td>67.12</td><td>41.92</td><td>61.98</td></tr><tr><td>SciBench</td><td>4.54</td><td>4.14</td><td>5.13</td><td>4.13</td></tr><tr><td>PHYBench</td><td>29.84</td><td>27.67</td><td>7.58</td><td>30.06</td></tr><tr><td>TriviaQA</td><td>65.61</td><td>69.76</td><td>69.25</td><td>66.88</td></tr><tr><td colspan="5">Reasoning</td></tr><tr><td>BIG-Bench Hard</td><td>85.54</td><td>89.36</td><td>82.85</td><td>86.75</td></tr><tr><td>BIG-Bench Extra 
Hard</td><td>37.80</td><td>23.24</td><td>16.70</td><td>27.86</td></tr><tr><td>BIG-Bench Hard - CN</td><td>86.18</td><td>75.09</td><td>83.38</td><td>87.52</td></tr><tr><td>MuSR</td><td>79.15</td><td>82.72</td><td>78.75</td><td>80.48</td></tr><tr><td>ZebraLogic</td><td>90.97</td><td>87.60</td><td>39.90</td><td>82.30</td></tr><tr><td>PrOntoQA</td><td>97.12</td><td>97.88</td><td>93.50</td><td>96.50</td></tr><tr><td>PIQA</td><td>91.57</td><td>91.95</td><td>91.84</td><td>92.76</td></tr><tr><td>OCNLI</td><td>71.59</td><td>65.36</td><td>69.39</td><td>71.63</td></tr><tr><td>HellaSwag</td><td>86.31</td><td>81.59</td><td>86.00</td><td>84.97</td></tr><tr><td>KOR-Bench</td><td>68.00</td><td>68.96</td><td>53.28</td><td>64.24</td></tr><tr><td>DROP</td><td>87.57</td><td>88.32</td><td>88.17</td><td>87.90</td></tr><tr><td>SQuAD 2.0</td><td>89.51</td><td>81.32</td><td>90.61</td><td>90.00</td></tr><tr><td colspan="5">Coding</td></tr><tr><td>CRUXEval-O</td><td>86.75</td><td>82.75</td><td>74.50</td><td>85.12</td></tr><tr><td>MBPP</td><td>86.65</td><td>85.01</td><td>86.65</td><td>88.29</td></tr><tr><td>MBPP+</td><td>78.04</td><td>76.19</td><td>75.93</td><td>79.63</td></tr><tr><td>MultiPL-E</td><td>70.67</td><td>65.76</td><td>72.38</td><td>74.87</td></tr><tr><td>HumanEval</td><td>93.29</td><td>85.98</td><td>88.41</td><td>94.51</td></tr><tr><td>HumanEval+</td><td>88.41</td><td>85.98</td><td>82.32</td><td>87.80</td></tr><tr><td>HumanEvalFix</td><td>91.16</td><td>92.68</td><td>83.33</td><td>90.24</td></tr><tr><td>HumanEval-CN</td><td>87.20</td><td>74.39</td><td>84.76</td><td>89.02</td></tr><tr><td>Bigcodebench-Full</td><td>41.49</td><td>40.70</td><td>40.44</td><td>41.58</td></tr><tr><td>LiveCodeBench</td><td>41.63</td><td>44.11</td><td>29.07</td><td>42.29</td></tr><tr><td>Aider</td><td>71.43</td><td>71.43</td><td>51.13</td><td>66.92</td></tr><tr><td>Spider</td><td>81.79</td><td>80.58</td><td>81.37</td><td>82.49</td></tr><tr><td>BIRD-SQL</td><td>47.75</td><td>47.49</td><td>45.34</td><td>45.76</td></tr><tr><td colspan="5">Math</td></tr><tr><td>GSM8K</td><td>96.36</td><td>95.45</td><td>95.75</td><td>96.06</td></tr><tr><td>MATH</td><td>96.70</td><td>96.10</td><td>83.52</td><td>95.44</td></tr><tr><td>OlympiadBench</td><td>77.59</td><td>76.19</td><td>49.33</td><td>74.07</td></tr><tr><td>AIME 2025</td><td>61.88</td><td>55.89</td><td>23.33</td><td>60.00</td></tr><tr><td>HARDMath2</td><td>4.27</td><td>23.70</td><td>3.79</td><td>4.27</td></tr><tr><td>Omni-MATH</td><td>54.00</td><td>53.00</td><td>24.60</td><td>50.30</td></tr><tr><td>GSM-Plus</td><td>89.45</td><td>89.83</td><td>88.25</td><td>89.64</td></tr><tr><td>CMATH</td><td>96.58</td><td>96.52</td><td>95.26</td><td>96.90</td></tr><tr><td colspan="5">Agent &amp; Alignment</td></tr><tr><td>IFEval-strict -prompt</td><td>84.29</td><td>81.52</td><td>75.60</td><td>81.70</td></tr><tr><td>BFCL v3</td><td>73.19</td><td>67.57</td><td>74.86</td><td>75.43</td></tr><tr><td>CodelF-Bench</td><td>54.00</td><td>56.00</td><td>56.00</td><td>58.00</td></tr><tr><td>Nexus FC</td><td>49.93</td><td>36.25</td><td>47.98</td><td>50.45</td></tr></table> Figure 4: Score/TPF vs threshold/block size Figure 5: Performance on the RULER benchmark. # 6.3 Analysis Analysis of Inference Hyper-parameters In addition to our main evaluation, we conducted a brief analysis to tune key inference hyperparameters. 
To ensure efficiency, this analysis was performed on our LLaDA2.0-mini model, using a representative subset of our benchmarks to understand the trade-off between generation quality (score) and inference speed (measured as TPF - Tokens Per Forward; higher is faster). Denoising Threshold. We first investigate the impact of the Denoising Threshold. While keeping the Block Size fixed at 32, we varied the threshold and observed its effect on quality and speed. As shown in Figure 4, the results reveal a clear trade-off. A threshold of 0.95 achieved the highest quality score (70.15) at the cost of the lowest inference speed (2.55 TPF). Lowering the threshold to 0.85 boosted the speed to its peak (3.31 TPF), but led to an unacceptable degradation in quality, with the score dropping to 67.90. Block Size. Subsequently, we analyze the effect of Block Size. We set the Denoising Threshold to 0.95, the optimal value identified in the prior experiment. The results in Figure 4 demonstrate a similar trade-off. A block size of 16 yielded the highest score (70.26) but with the slowest inference (2.44 TPF). In contrast, increasing the block size to 32 substantially improved the speed to 2.55 TPF with only a marginal quality drop to 70.15. Further increasing the block size to 64 proved suboptimal, as it degraded both score and speed relative to the size-32 setting. Therefore, a block size of 32 emerges as the most compelling choice, offering a significant speed-up for a negligible performance cost. In summary, based on this analysis, the configuration for our main evaluation is well-supported. The Denoising Threshold of 0.95 is the clear choice for maximizing quality. For block-size, the setting of 32 represents an optimal balance, providing the highest throughput with virtually no sacrifice in performance compared to the slightly higher-scoring but slower setting of 16. Analysis of Context Length To rigorously validate our model's performance across various context lengths, we conducted a series of evaluations using the RULER benchmark. As shown in Figure 5, both models demonstrate strong performance and stability within context length of $32\mathrm{k}$ . The LLaDA2.0-flash model is particularly robust, maintaining a score above 93 across all lengths from 4k to 32k. The LLaDA2.0-mini model also achieves high scores, starting at 93.29 for 4k but showing a degradation to 83.94 at 32k. To test the models' extrapolation capabilities, we extended the context length to 64k. This was achieved by employing dynamic RoPE scaling during inference, specifically using the YaRN method with a scaling factor of 2.0. However, this extension resulted in a performance degradation for both models, demonstrating a clear trade-off between context length extension and task accuracy. In summary, this evaluation highlights two key findings: (1) The LLaDA2.0 models are exceptionally robust for long-context tasks within their native 32k window. (2) They can be successfully extended to handle 64k sequences via YaRN scaling, providing flexibility for extreme-length applications, albeit with a predictable performance cost. # 7 Training & Inference Infrastructure # 7.1 Pretraining We adopt Megatron-LM (Shoeybi et al., 2019) as the pretraining backend to enable efficient training of a 100B-parameter model with long sequences, leveraging data parallelism (DP), pipeline parallelism (PP), tensor parallelism (TP), context parallelism (CP), and expert parallelism (EP), as Figure 6 shows. 
To ensure consistency of masked tokens, we generate masked tokens on a single model-parallel (MP, i.e., TP and PP) rank and then broadcast them to all other ranks within the MP group.

Efficient Block Diffusion Training For flexible support of arbitrary block diffusion attention masks, we utilize cuDNN as the backend for the attention mechanism. This approach achieves more than 1.3x end-to-end speedup and over $90\%$ memory savings in the attention layer compared to the unfused attention implementation in TransformerEngine when training LLaDA2.0-mini. We further apply a zig-zag partitioning strategy to the block diffusion attention mask to achieve effective load balancing across the CP group.

Numerical Stability During the transition from AR to diffusion models, training can suffer from gradient explosion, especially at high mask ratios within a document. This issue stems from the fact that masked token embeddings receive no training signal during AR training, as these tokens are never observed, leading their corresponding weights to gradually decay to zero. A straightforward fix, randomly reinitializing the masked token embeddings upon loading the AR model, may disrupt other well-trained parameters, potentially causing catastrophic forgetting. To mitigate this while preserving pre-trained knowledge, we instead add independent Gaussian noise to the output of the embedding layer for each masked token during the initial iterations of training. This keeps the L2 norm of the masked token's embedding significant, avoiding gradient explosion and thereby stabilizing the training process.

Figure 6: Parallelism overview.

# 7.2 Post-Training

For the post-training phase, we leverage dFactory (InclusionAI, 2025), a repository providing efficient training recipes for dLLMs. Built upon the VeOmni (Ma et al., 2025a) distributed training framework, dFactory allows us to effectively implement complex parallelization schemes. Specifically, our setup for fine-tuning LLaDA2.0 combines Data Parallelism (DP) and Expert Parallelism (EP) to ensure scalable and stable training. To further enhance data throughput and hardware utilization, we adopt a data packing strategy analogous to those used in continued pre-training, which concatenates multiple short sequences into a single longer sequence. This integrated approach provides a robust and high-performance infrastructure for the post-training of our model.

# 7.3 Inference Engine

We adapt dInfer (Ma et al., 2025b)—originally built for high-performance diffusion LLM inference—to efficiently support block diffusion inference. This requires the inference engine to leverage optimization techniques traditionally designed for AR models. For instance, the framework can now effectively exploit KV-cache reuse to substantially reduce prefetch computation. As block diffusion inference closely resembles auto-regressive generation in its execution pattern, we also incorporated block diffusion inference support into SGLang (Zheng et al., 2024), allowing it to benefit from the same class of system-level optimizations designed for AR models. More mature features of dInfer are being ported to SGLang.

Inference speed Figure 3 compares the average inference throughput (Tokens Per Second, TPS = #decoding tokens / #total time) of our optimized LLaDA2.0-flash models against state-of-the-art AR models of similar scale on four reasoning and code-generation benchmarks (HumanEval, MBPP, GSM8K, and CRUXEval). All models are evaluated under a consistent generation setup.
For diffusion-based models (LLaDA2.0-flash and LLaDA2.0-flash-CAP), we adopt a threshold decoder with a threshold of 0.95. The AR baselines (Ling-flash-2.0 and Qwen3-30B-A3B-Instruct-2507) are deployed using SGLang, while the diffusion models are served with dInfer, ensuring a fair performance comparison in real inference environments. As shown, LLaDA2.0-flash-CAP reaches 535 TPS, outperforming the standard LLaDA2.0-flash (383 TPS) and providing up to $2.1 \times$ speed-up over the AR baselines (256 TPS and 237 TPS).

# 8 Conclusion

In this work, we introduced LLaDA2.0, discrete diffusion language models scaling up to 100B total parameters through systematic conversion from auto-regressive models, along with a set of novel and comprehensive recipes designed to smoothly and effectively transform traditional AR language models into highly efficient and performant Masked Diffusion Language Models. Extensive evaluations validate the feasibility of this training paradigm. The LLaDA2.0-mini and LLaDA2.0-flash models achieve performance competitive with their AR counterparts. Somewhat surprisingly, LLaDA2.0-flash demonstrates advantages in complex, structured domains such as code generation, mathematical reasoning, and agentic tool use. These results may open a new door for future work in the agentic LLM era while underscoring the considerable potential of dLLMs for test-time scaling. Future work includes further scaling of model size, exploring RL and thinking paradigms, and pushing decoding speed to its limits.
The random masking nature of the diffusion fine-tuning objective means any single sample provides only a partial learning signal. We address this by employing a complementary masking strategy, which ensures near- $100\\%$ data utilization and accelerates convergence by guaranteeing every token contributes to the model's learning. With an efficient foundation for instruction tuning, we then align the model with human preferences by adapting modern techniques like Direct Preference Optimization (DPO)—originally designed for AR models—by reformulating the objective over the model's reconstruction loss. Beyond alignment, practical deployment hinges on inference speed. To realize the full promise of parallel decoding, which is often limited by a model's lack of predictive confidence, we incorporate an auxiliary confidence prediction loss. This trains the model to be \"sharper\" and more certain, unlocking aggressive and efficient parallel generation without degrading quality.\n\nWe release instruction-tuned variants for practical deployment: LLaDA2.0-mini (16B parameters) for resource-constrained applications and LLaDA2.0-flash (100B parameters) for high-performance scenarios. Both variants retain the parallel decoding advantages of our diffusion training while being optimized for instruction following and safety through comprehensive post-training alignment.\n\nOur contributions provide a practical recipe for the community to leverage AR stability while achieving diffusion parallelism, opening new possibilities for efficient large-scale language modeling.\n\n# 2 Related Work\n\n# 2.1 Train dLLMs from scratch\n\nAuto-regressive language models (Ling et al., 2025; Moonshot, 2025; Liu et al., 2024; Meta-AI, 2025) are typically trained by maximizing the likelihood of predicting the next token. Under this paradigm, model performance has been shown to scale effectively with increasing model size, dataset volume, and computational resources, following well-established scaling laws. Recently, MDLMs (Song et al., 2025; Ye et al., 2025; Nie et al., 2025) have emerged as an alternative generative framework, reformulating text generation as an iterative denoising process. In each forward step, a subset of tokens is randomly masked, and the model is trained to recover the original tokens conditioned on the remaining unmasked context.\n\nEncouraged by this paradigm shift, several studies have explored training MDLMs from scratch to assess their full potential. For instance, LLaDA (Nie et al., 2025) demonstrated that a 8B dense MDLM, trained entirely from scratch, achieves performance competitive with similarly sized AR counterparts. Building upon this, LLaDA-MoE (Zhu et al., 2025) introduced the Mixture-of-Experts (MoE) architecture into the MDLM for the first time, showing that a scratch-trained MoE-based MDLM can surpass dense models in both efficiency and capability, thereby validating the compatibility and scalability of MDLMs with advanced MoE designs. Moreover, due to the fundamentally different training dynamics compared to AR models, established training practices and hyperparameter recipes from the AR domain are often suboptimal for MDLMs. 
To address this gap, recent efforts such as Quakka (Ni et al., 2025) and OpenMoE2 (Ni & team, 2025) have begun investigating the scaling properties and optimal training strategies specifically tailored for MDLMs, laying the groundwork for principled scaling in this emerging paradigm.\n\nHowever, from-scratch trained MDLMs still lag behind state-of-the-art AR models in overall performance. This gap can be largely attributed to the disparity in training data volume and the maturity of infrastructure support—factors that have been extensively optimized over years of development for AR models. Moreover, due to the high computational cost and long training cycles required for pretraining from scratch, MDLMs mentioned above are typically limited in model scale ( $\\leq 8\\mathrm{B}$ ), whereas leading AR models now routinely scale into tens or even hundreds of billions.\n\n# 2.2 Scaling dLLMs with AR initialization\n\nGiven the strong knowledge capacity and performance of AR models, several recent studies have explored initializing dLLMs from pre-trained AR models to reduce training costs and narrow the performance gap between AR models and dLLMs. For instance, DiffusionLLaMA (Gong et al., 2025) and Dream-7B (Ye et al., 2025) adopt a mask annealing strategy to gradually transition from causal attention to bidirectional attention during training, while employing a CART-based loss reweighting scheme to balance token-level learning dynamics. In contrast, RND1 (Keshigeyan et al., 2025) takes a more direct approach by immediately converting the causal attention mechanism of the AR model into a bidirectional one upon initialization. Notably, RND1 observes that when initializing DLM training from an AR model, preserving knowledge-intensive capabilities requires constraining updates to the model's dense layers to prevent catastrophic forgetting.\n\nBlock Diffusion Language Models (BDLMs) (Arriola et al., 2025) provide a hybrid paradigm that balances efficiency and performance by combining diffusion and AR modeling. Tokens are generated block-wise: within each block, a diffusion process reconstructs masked tokens, while blocks are produced auto-regressively. This design enables variable-length generation and supports KV-cache reuse during decoding, enhancing inference efficiency. Consequently, BDLMs can be effectively initialized from AR models, narrowing the performance gap. For example, SDAR (Cheng et al., 2025) leverages the Qwen-3 series (Yang et al., 2025) to train more efficient BDLMs. By exploring various block sizes and optimization strategies, it achieves performance comparable to its AR base model.\n\nHowever, one key limitation across all existing methods is their restricted model scale—ranging only from 7B to 30B parameters—leaving the feasibility and scalability of AR-initialized diffusion models largely unexplored at larger scales. Besides, the low training efficiency of block diffusion hinders its widely application to large-scale corpus for large-size models. Whether such initialization strategies can effectively generalize to models beyond the 30B scale remains an open question.\n\n# 2.3 dLLMs post-training\n\nBeyond pre-training, post-training is crucial for unlocking the full potential of dLLMs by aligning them with specific tasks and human preferences. 
This process typically involves supervised fine-tuning (SFT) to instill\n\ninstruction-following capabilities, reinforcement learning (RL) to enhance complex reasoning, and inference optimization to address efficiency bottlenecks.\n\nRecent work has explored SFT to adapt dLLMs for specialized domains. For instance, Dream-Coder (Xie et al., 2025) fine-tunes a 7B dLLM for code generation, demonstrating unique abilities like adaptive \"sketch-then-fill\" strategies for complex algorithms. Similarly, the general-purpose model Dream-7B (Ye et al., 2025) leverages SFT to achieve performance on par with top-tier AR models, while uniquely excelling at tasks requiring complex planning and constraint satisfaction. Other studies have investigated specialized fine-tuning strategies to balance quality and efficiency. Seed-Diffusion (Song et al., 2025), for example, employs a two-stage curriculum learning strategy to train a high-speed code generation model, while LiDAR (Liu et al., 2025) introduces a hybrid \"think in diffusion, generate in AR\" architecture through fine-tuning, significantly boosting inference throughput while maintaining quality.\n\nTo further enhance dLLMs' reasoning abilities, researchers have begun adapting reinforcement learning techniques. However, applying standard policy gradient methods is challenging due to the intractable log-likelihood of dLLMs. To address this, SPG (Wang et al., 2025a) proposes a novel Sandwich Policy Gradient algorithm that obtains a more robust and less biased gradient by maximizing an evidence lower bound for high-reward samples and minimizing an evidence upper bound for low-reward ones. Another line of work, TraceRL (Wang et al., 2025d), focuses on aligning the training objective with the model's multi-step generation trajectory. This framework led to the TraDo series of models, which have not only surpassed strong AR models on reasoning benchmarks but also produced the first dLLM capable of long-chain-of-thought reasoning.\n\nA significant challenge for dLLMs is their slow inference speed, stemming from the iterative nature of the denoising process. To mitigate this, several acceleration methods have been proposed. DPad (Chen et al., 2025a) offers a training-free solution by treating future tokens as a dynamic \"scratchpad\" and using a sliding window and distance-based pruning to reduce redundant computations, achieving a dramatic speedup, especially for long sequence generation. In contrast, D2F (Wang et al., 2025c) introduces a hybrid autoregressive-diffusion paradigm that enables parallel denoising of future text blocks even before preceding ones are fully generated. This approach allows dLLMs to leverage KV-caching and, for the first time, surpass the inference speed of equivalently sized AR models.\n\nDespite these advances, the field of dLLM post-training is still nascent. Systematic exploration of how these techniques—SFT, RL, and acceleration—interact with one another, and how they scale to models with hundreds of billions of parameters, remains an open and critical area for future research.\n\n# 3 LLaDA2.0 Training Paradigm\n\nFigure (2) illustrates the holistic training pipeline of LLaDA2.0, a staged and scalable framework designed to transform AR language models into highly efficient diffusion language models. 
Our paradigm follows a three-stage progression: (1) Continual Pre-training from AR to MDLM, (2) Block Diffusion Pre-training to transition from token-level to block-level diffusion modeling, and (3) Post-training for alignment and task specialization.\n\nThe process begins with a strong AR base model. We first perform continual pre-training to adapt this model into an MDLM, where it learns to reconstruct randomly masked tokens in a bidirectional, denoising fashion. This phase bridges the gap between AR and diffusion-based generation while preserving the representational geometry of the original model.\n\nBuilding upon the trained MDLM, we then introduce block diffusion pre-training, during which the model is further trained to denoise contiguous spans of text—referred to as \"blocks\"—rather than individual tokens. This shift enables higher computational efficiency and better long-range coherence during generation.\n\nFinally, after mastering non-autoregressive generation at both token and block levels, the model undergoes post-training-including SFT and DPO to align its outputs with human intent, instruction-following capability, and downstream application requirements. This stage ensures that the powerful generative backbone developed during diffusion pre-training translates into practical performance gains across diverse tasks.\n\nOverall, LLaDA2.0's training paradigm emphasizes knowledge inheritance, progressive adaptation, and efficiency-aware design, enabling seamless evolution from AR models to fluent, flexible, and fast diffusion large language models.\n\n![](images/a85eff8300c7f21132baa97a78bd4531e459011646c0faf0117ec26645e41cbd.jpg) \nFigure 2: A schematic of the progressive training framework for transforming an AR model into a MDLM. Continual Pre-Training Stage facilitates the Warmup-Stable-Decay strategies by scheduling block size $L_{B}$ enables smooth, stable, and effective attention mask adaptation. Post-training Stage facilitates the same block diffusion configuration conducting the instruction SFT, Confidence-Aware Parallel SFT, and DPO. The right panel illustrates the document-level block diffusion attention mask, which enables an efficient, vectorized forward pass by constructing a single input sequence from multiple noisy and clean examples, such as $[x_{\\mathrm{noisy1}}, \\ldots, x_{\\mathrm{clean1}}, \\ldots]$ . The forward pass then employs a combination of block-diagonal $(\\mathbf{M}_{\\mathrm{BD}})$ , offset block-causal $(\\mathbf{M}_{\\mathrm{OBC}})$ , and block-causal $(\\mathbf{M}_{\\mathrm{BC}})$ masks.\n\n![](images/242ccc2035786280ca8b5dc17ca5c46a6cfc8d43950bb4571fa9a9dec8951c1c.jpg)\n\n![](images/933406f13c80504f7eea3619448e222a570ae59560754fcc234f298b80b3f0c3.jpg)\n\n# 4 Continual Pre-training via Warmup-Stable-Decay (WSD)\n\n![](images/8a5fb3b5234ef6552ffd325a3d46e902c0026d7751d6f1816856347cc8687693.jpg)\n\n# Takeaway\n\n(1) Warmup-Stable-Decay enables a smooth and data-efficient conversion from AR to dLLMs. \n(2) The document-level attention mask ensures coherent bidirectional modeling within semantic boundaries. \n(3) Top-k Checkpoint Merge enhances performance and generalization by averaging the top k model checkpoints.\n\nConverting a pre-trained AR language model into a high-performance diffusion language model is fundamentally challenging due to the misalignment in architectural inductive biases and training objectives. 
While AR models generate tokens sequentially from left to right, diffusion-based models rely on bidirectional context and learn to reconstruct corrupted sequences in arbitrary unmasking orders. A direct objective switch often leads to unstable optimization and severe degradation of pretrained knowledge.\n\nTo address this gap, we propose a Warmup-Stable-Decay (WSD) continual pre-training strategy that enables a smooth, stable, and effective transition from AR to dLLM. WSD decomposes the conversion into three coordinated phases:\n\n- Warmup: Progressively increase the block size in block diffusion language models (BDLM) to gradually transform the AR model into a full-sequence masked diffusion language model (MDLM). \n- Stable: Stabilize and enrich the model's understanding of diffusion dynamics through large-scale training under the MDLM paradigm. \n- Decay: Revert back to a compact BDLM with smaller block sizes to achieve better speed-efficiency trade-offs during inference.\n\nThis progressive schedule preserves the AR model's priors while steadily adapting it to the structural requirements of diffusion modeling.\n\nMoreover, the document-level attention mask is applied throughout training to all input sequences. This mechanism is crucial for handling packed heterogeneous documents, preventing the model from forming spurious connections across unrelated texts, thereby ensuring semantic coherence and improving learning\n\nstability within each document. In addition, we adopt a top-k checkpoint merging strategy (Tian et al., 2025), to enhance generalization by averaging the parameters of the best-performing checkpoints, smoothing the parameter landscape, and yielding a more robust final model with boosted performance.\n\n# 4.1 Warmup-Stable-Decay Conversion Strategy\n\nWe begin with the AR base models Ling-mini-2.0 and Ling-flash-2.0 (Ling et al., 2025), which can be viewed as a special case of BDLM with block size 1. This perspective allows us to treat AR models as the initial BDLM configuration with minimal granularity.\n\nPhase-1: Progressive Block Size Warmup The core idea of the warmup phase is to gradually increase the block size, thereby expanding the receptive field within which the model performs joint denoising. Starting from block size $L_{B} = 1$ , we incrementally scale it up to 4, 32, then 64, and ultimately reach $L_{B} = 4096$ at which point the entire sequence is treated as one single block. To avoid fragmented blocks, we require the sequence length to be divisible by the current block size. At the final enlargement, the BDLM becomes equivalent to a standard MDLM that operates over fully masked sequences with global attention. Crucially, each block-size transition is trained on moderate-scale data to ensure smooth adaptation. This progressive enlargement allows the model to smoothly adapt its internal representations to handle larger contextual spans and more complex masking patterns.\n\nPhase-2: Large Scale Stable Training Once the block size reaches 4096 and the model transitions to the MDLM pattern, the \"clean\" part of the attention computation (see Figure 2) no longer needs to be maintained. This significantly reduces the computational cost of attention, allowing data to be processed far more efficiently under the MDLM paradigm. With the model now fully adapted to this regime, the stable training phase focuses on deepening its understanding of diffusion dynamics through extensive training on large-scale corpora. 
At this stage, the block size is fixed at 4096, effectively making every input a single-block sequence, equivalent to the classical MDLM setting.\n\nPhase-3: Block Size Decay Finally, after large-scale MDLM training, we gradually reduce the block size from 4096 to a small block size (e.g., 32) to convert the model back into an efficient BDLM. This decay process distills the global contextual knowledge learned during MDLM into a compact blockwise structure. By decreasing the block size step-by-step (e.g., starting from 4096 to 2048) rather than abruptly, the model smoothly adapts from global to local conditioning, preserving its semantic understanding while regaining BDLM's practical benefits such as KV-cache reuse and fast variable-length generation.\n\nOverall Training Objective The optimization objective of BDLM (Arriola et al., 2025) is designed to enable the model to accurately reconstruct the original, uncorrupted tokens within these designated masked blocks using a standard cross-entropy loss. Specifically, we define the training loss during warmup and decay phases (phase-1&3) under the BDLM paradigm as:\n\n$$\n\\mathcal {L} _ {\\mathrm {B D L M}} (\\theta) = - \\mathbb {E} _ {t, \\boldsymbol {x} _ {0}, \\boldsymbol {x} _ {t}} \\left[ \\frac {\\alpha_ {t} ^ {\\prime}}{1 - \\alpha_ {t}} \\sum_ {k = 1} ^ {K} \\sum_ {i = 1} ^ {L _ {B}} \\mathbb {1} [ x _ {t, k} ^ {i} = [ \\mathrm {M A S K} ] \\log p _ {\\theta} (\\boldsymbol {x} _ {0, k} ^ {i} | \\boldsymbol {x} _ {0, < k}, \\boldsymbol {x} _ {t, k}) \\right], \\tag {1}\n$$\n\nwhere the expectation is over timestep $t$ , the clean sequence $x_0$ , and its corrupted version $x_t$ (tokens masked with probability $1 - \\alpha_t$ ). Indicator $\\mathbb{1}[\\cdot]$ ensures predictions are made only for masked tokens, and $-\\alpha_t' / (1 - \\alpha_t)$ is the diffusion-derived time weight. Here $K = L_{\\mathrm{total}} / L_B$ is the number of blocks, $L_B$ the block size, $x_{t,k}^i$ the $i$ -th token in block $k$ , $x_{0,k}$ the preceding clean blocks, and $x_{t,k}$ the noisy version of the current block.\n\nDuring the stable training (phase-2) of MDLM (i.e., $K = 1$ ), the objective simplifies to:\n\n$$\n\\mathcal {L} _ {\\mathrm {M D L M}} (\\theta) = - \\mathbb {E} _ {t, \\boldsymbol {x} _ {0}, \\boldsymbol {x} _ {t}} [ \\frac {\\alpha_ {t} ^ {\\prime}}{1 - \\alpha_ {t}} \\sum_ {i = 1} ^ {L} \\mathbb {1} [ x _ {t} ^ {i} = [ \\text {M A S K} ] \\log p _ {\\theta} (x _ {0} ^ {i} | \\boldsymbol {x} _ {t}) ]. \\tag {2}\n$$\n\n# 4.2 Document-level Attention Mask\n\nOur training sequences are formed by packing heterogeneous documents into fixed-length segments to maximize throughput. However, this introduces artificial long-range dependencies across semantically unrelated texts. Without careful handling, standard attention would incorrectly attend across document boundaries, leading to contextual confusion and significantly hindering the model's ability to perform robust bidirectional modeling crucial for denoising.\n\nTo mitigate this fundamental challenge and preserve semantic coherence, we redefine the attention mechanism with a specialized block-wise document-level attention mask. This mask ensures that attention operates\n\nstrictly within document boundaries, preventing cross-document contamination and allowing the model to fully leverage bidirectional context for accurate reconstruction of corrupted blocks. 
The native Block Diffusion vectorizes the training process to achieve parallel training of blocks, and this mask is applied accordingly. Specifically, for a concatenated sequence $x_{full}$ of length $2L$ (comprising $x_{t}$ followed by $x_{0}$ ), and assuming tokens $i$ and $j$ are already confined to the same document segment (as enforced by the initial document-level mask), the attention mask $M \\in \\{0,1\\}^{2L \\times 2L}$ is constructed by dividing each sequence $(x_{t}$ and $x_{0}$ ) into contiguous blocks. Let $b(k) = \\lfloor k / L_{B} \\rfloor$ denote the block index for token $k$ given a block size $L_{B}$ . The mask is defined as:\n\n$$\nM _ {i j} = \\left\\{ \\begin{array}{l l} \\mathbb {1} _ {b (i) = b (j)} & \\text {i f} i \\in x _ {t} \\text {a n d} j \\in x _ {t} \\\\ \\mathbb {1} _ {b (i) > b (j - L)} & \\text {i f} i \\in x _ {t} \\text {a n d} j \\in x _ {0} \\\\ \\mathbb {1} _ {b (i - L) \\geq b (j - L)} & \\text {i f} i \\in x _ {0} \\text {a n d} j \\in x _ {0} \\\\ 0 & \\text {o t h e r w i s e} \\end{array} \\right. \\tag {3}\n$$\n\nWhere $i, j \\in \\{0, 1, \\dots, 2L - 1\\}$ are the indices in the full sequence. The first condition $(\\mathbb{1}_{b(i) = b(j)})$ implements block-diagonal attention within the noisy sequence $x_{t}$ . The second $(\\mathbb{1}_{b(i) > b(j - L)})$ enables cross-attention from $x_{t}$ to $x_{0}$ , but only from blocks in $x_{t}$ to earlier blocks in $x_{0}$ . The third $(\\mathbb{1}_{b(i - L) \\geq b(j - L)})$ imposes a causal block attention pattern within the clean sequence $x$ , allowing a block to attend to itself and all preceding blocks. The \"otherwise\" condition corresponds to a zero matrix, explicitly preventing attention from queries in $x_{0}$ to keys in $x_{t}$ . This allows each block to leverage context from relevant blocks (according to the mask) for reconstruction, capturing inter-block dependencies while maintaining the causal and block-diagonal principles essential for stable diffusion training. During our exploration, we also experimented with other tricks like random-length (Xie et al., 2025) and CART (Ye et al., 2025). However, the results demonstrate that the document-level attention mask is more fundamental in CPT training compared to these techniques, and it consistently achieves superior performance. As illustrated in Figure 2, this forms a structured attention layout that balances locality and global document coherence.\n\nFor MDLM, the document-level attention mask simplifies to $M \\in \\{0,1\\}^{L \\times L}$ , where:\n\n$$\nM _ {i j} = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} i, j \\text {b e l o n g t o t h e s a m e d o c u m e n t}, \\\\ 0, & \\text {o t h e r w i s e .} \\end{array} \\right. \\tag {4}\n$$\n\n# 4.3 Top-k Checkpoint Merge\n\nTo further enhance the generalization and robustness of our Block Diffusion Language Model, we employ a top-k checkpoint merging strategy. Upon completion of BDLM pre-training, we identify the top $k$ best-performing model checkpoints, typically selected based on validation metrics like perplexity. The parameters (weights and biases) of these $k$ checkpoints are then arithmetically averaged to form a single, unified BDLM. Based on WSM scheduler (Tian et al., 2025), this merge strategy can effectively ensemble diverse \"knowledge\" captured by the model at various optimal or near-optimal training states. This smooths the parameter landscape, mitigates overfitting, and yields a more stable and generalizable model. 
A key advantage of the WSM approach is its optimizer-agnostic nature, allowing seamless integration without altering the underlying training pipeline. Crucially, this post-training Top-k Merge fundamentally differs from the Exponential Moving Average (EMA). While EMA is an in-training technique that continuously smooths parameters, merging is an offline procedure. It explicitly selects and averages distinct, high-performing model states, consolidating their strengths rather than merely smoothing the final training step.\n\n# 5 Post-training\n\n# Takeaway\n\n(1) Applying complementary masking and a mask ratio bandwidth during SFT improves sample efficiency and stabilizes convergence. \n(2) An auxiliary confidence loss is incorporated to sharpen predictions, which is crucial for efficient parallel decoding. \n(3) DPO is adapted by defining sequence log-probabilities over masked tokens, enabling effective preference alignment for the diffusion model.\n\n# 5.1 Supervised Fine-Tuning with Block Diffusion\n\nFollowing the pre-training phase, the model is aligned to follow user instructions through supervised fine-tuning (SFT). This is achieved by adapting the diffusion training objective to be conditional on an input prompt, $\\pmb{c}$ . The model is thus trained to generate the desired response $\\pmb{x}_0$ by minimizing the following loss function:\n\n$$\n\\mathcal {L} _ {\\mathrm {S F T}} (\\theta) = - \\mathbb {E} _ {t, (\\boldsymbol {c}, \\boldsymbol {x} _ {0}), \\boldsymbol {x} _ {t}} \\left[ \\frac {\\alpha_ {t} ^ {\\prime}}{1 - \\alpha_ {t}} \\sum_ {k = 1} ^ {K} \\sum_ {i = 1} ^ {L _ {B}} \\mathbb {1} [ x _ {t, k} ^ {i} = [ \\mathrm {M A S K} ] \\log p _ {\\theta} (x _ {0, k} ^ {i} | \\boldsymbol {c}, \\boldsymbol {x} _ {0, < k}, \\boldsymbol {x} _ {t, k}) \\right]. \\tag {5}\n$$\n\nHere, the model $p_{\\theta}$ learns to predict the original tokens $x_{0,k}^{i}$ of a clean response from a noisy version $\\boldsymbol{x}_t$ . The loss is computed only on masked tokens within the current noisy block $\\boldsymbol{x}_{t,k}$ . To do this, the prediction is conditioned on the prompt $c$ , auto-regressive context from prior clean blocks $\\boldsymbol{x}_{0, < k}$ , and the current noisy block $\\boldsymbol{x}_{t,k}$ that it must denoise.\n\nPadding strategies & Mask ratio bandwidth To ensure compatibility with our block-wise attention mask, we quantize each sequence's length. Specifically, the original length is rounded up to the nearest multiple of the block size, $b$ . This process defines an \"effective length\" for each sequence, guaranteeing its boundaries align perfectly with the block boundaries required by the attention mechanism.\n\nTo optimize the training dynamics, we further implement a \"mask ratio bandwidth\" strategy. Standard discrete diffusion processes typically sample mask probabilities across the full unit interval, $\\alpha_{t} \\sim U[0,1]$ . However, as identified by Arriola et al. (2025), extreme masking rates induce high gradient variance while offering minimal learning signal: near-zero masking renders reconstruction trivial, while near-total masking reduces the objective to simply learning data margins. To mitigate this, we clip the noise schedule, constraining the sampling of mask rates to a bounded interval $[\\alpha_{\\min}, \\alpha_{\\max}]$ rather than the full range. 
This bandwidth restriction focuses the training objective on the noise regimes that provide the most informative gradients, thereby stabilizing convergence and improving the model's generative perplexity.\n\nComplementary Masking Complementary Masking (Li et al., 2025) is a training optimization that enhances the data efficiency of the MDLM objective, $\\mathcal{L}_{\\mathrm{MDLM}}(\\theta)$ . The strategy's core principle is to generate two antithetical training instances from a single source sequence $x_0$ . A primary noised sequence, $x_{t}$ , is formed using a random mask, while a complementary sequence, $x_{t}^{\\prime}$ , is simultaneously produced using that mask's logical inverse<sup>1</sup>.\n\nBy incorporating both $\\pmb{x}_t$ and $\\pmb{x}_t'$ into the same training batch, this method provides a deterministic guarantee: every token position across the sequence length $L$ is presented to the model in its uncorrupted state exactly once within the pair. This not only doubles the effective data utilization from each sample, thereby accelerating convergence, but also entirely eliminates token-level sampling bias. Consequently, the model benefits from a more comprehensive and uniform learning signal at every optimization step, leading to enhanced robustness.\n\nData Recipe Curation A balanced, high-quality SFT dataset underpins the model's capabilities, achieved through a strategic composition of tasks spanning three principal pillars: Reasoning, General, and Industrial. The Reasoning pillar hones analytical and logical faculties through mathematics and code generation. The General pillar cultivates linguistic richness and social intelligence via creative and dialogic tasks. The Industrial pillar embeds domain-specific expertise by simulating end-to-end workflows under real-world constraints. This integrated methodology ensures a holistic skill profile, preventing capability skew and enabling fluid shifts between abstract reasoning and applied problem-solving.\n\n# 5.2 Confidence-Aware Parallel Training\n\nTo enhance the model's predictive confidence, which is crucial for efficient parallel decoding, we propose Confidence-Aware Parallel (CAP) Training. We incorporate an auxiliary confidence loss, $\\mathcal{L}_{\\mathrm{conf}}$ , inspired by dParallel (Chen et al., 2025b). The primary objective, $\\mathcal{L}_{\\mathrm{SFT}}$ , ensures correctness but provides diminishing incentive to sharpen the predictive distribution for tokens that are already correctly predicted. The confidence loss addresses this by selectively minimizing the entropy of the model's output distribution, $p_{\\theta}(\\boldsymbol{x}_0|\\boldsymbol{x}_t,\\boldsymbol{c})$ , but only for the subset of tokens that are correctly predicted in a given step. This compels the model to increase its certainty on its correct predictions. The final training objective is a weighted combination of the two losses:\n\n$$\n\\mathcal {L} (\\theta) = \\mathcal {L} _ {\\mathrm {S F T}} (\\theta) + \\lambda \\mathcal {L} _ {\\text {c o n f}} (\\theta), \\tag {6}\n$$\n\nFigure 3: Average score and tokens-per-forward (TPF) for LLaDA2.0-flash with and without CAP across 12 benchmarks. Inference speed (tokens per second) of LLaDA2.0-flash compared with similarly sized AR models on 4 code and math benchmarks. 
\n![](images/0f411479181bd27dd70758dc86713ea71b24354231c1ef3ccadacef208bedbf6.jpg) \nLLaDA2.0-flash-CAP LLaDA2.0-flash Ling-flash-2.0 Qwen3-30B-A3B-Inst-2507\n\n![](images/3a591da2f40ffee2827ca6d9dd32d1a30565fd86fcb00b046a4a8952edac9acc.jpg)\n\n![](images/4fa48a6b89a0e0a9161a156fc5f404c22e4daac9367ee45c7f6c55d638ccb5ea.jpg)\n\nwhere $\\lambda$ is a hyperparameter that balances the two objectives. As illustrated in Figure 3, CAP training effectively improves the decoding efficiency of LLaDA2.0-flash while maintaining competitive compression performance, demonstrating a favorable trade-off between generation quality and inference speed.\n\n# 5.3 DPO\n\nBuilding upon the SFT stage, we further align the policy model $\\pi_{\\theta}$ with human intent using Direct Preference Optimization. To support this, we constructed a comprehensive dataset comprising 1.5 million preference pairs across diverse domains, including general knowledge, mathematics, and instruction following. To ensure a stable transition in optimization, the learning rate for the DPO stage is initialized consistently with the final learning rate of the preceding SFT phase.\n\nSince the policy model $\\pi_{\\theta}$ is trained to reconstruct clean tokens $x_0$ from noisy blocks $x_{t}$ conditioned on context $c$ , the standard DPO formulation—which requires exact log-likelihoods—is intractable. Following established practices for diffusion models, we substitute the conditional log-likelihoods with their ELBO. We first define the conditional Block Diffusion ELBO, $B_{\\mathrm{BDLM}}(\\theta ,\\boldsymbol {x}|c)$ , for a response $\\pmb{x}$ . This term mirrors the inner objective of our SFT loss (equation 5) and is estimated via a single Monte Carlo sample over timesteps and noise:\n\n$$\nB _ {\\mathrm {B D L M}} (\\theta , \\boldsymbol {x} | \\boldsymbol {c}) = \\mathbb {E} _ {t, \\boldsymbol {x} _ {t}} \\left[ \\frac {\\alpha_ {t} ^ {\\prime}}{1 - \\alpha_ {t}} \\sum_ {k = 1} ^ {K} \\sum_ {i = 1} ^ {L _ {B}} \\mathbb {1} [ x _ {t, k} ^ {i} = [ \\mathrm {M A S K} ] \\log p _ {\\theta} (x _ {k} ^ {i} | \\boldsymbol {c}, \\boldsymbol {x} _ {< k}, \\boldsymbol {x} _ {t, k}) \\right]. \\tag {7}\n$$\n\nGiven a preference pair $(\\pmb{x}_w, \\pmb{x}_l)$ , where $\\pmb{x}_w$ is the preferred response and $\\pmb{x}_l$ is the dispreferred response, the DPO objective maximizes the margin between the ELBO estimates of the policy $\\pi_{\\theta}$ and the frozen reference model $\\pi_{\\theta_{\\mathrm{ref}}}$ (initialized from the post-SFT model). The final loss function is defined as:\n\n$$\n\\mathcal {L} _ {\\mathrm {D P O}} (\\theta) = - \\mathbb {E} _ {(\\boldsymbol {c}, \\boldsymbol {x} _ {w}, \\boldsymbol {x} _ {l}) \\sim \\mathcal {D}} \\left[ \\log \\sigma \\left(\\beta \\left[ \\Delta B (\\boldsymbol {x} _ {w} | \\boldsymbol {c}) - \\Delta B (\\boldsymbol {x} _ {l} | \\boldsymbol {c}) \\right]\\right) \\right], \\tag {8}\n$$\n\nwhere $\\Delta B(\\pmb{x}|\\pmb{c}) = B_{\\mathrm{BDLM}}(\\theta, \\pmb{x}|\\pmb{c}) - B_{\\mathrm{BDLM}}(\\theta_{\\mathrm{ref}}, \\pmb{x}|\\pmb{c})$ represents the ELBO advantage of the policy over the reference model, and $\\beta$ is a hyperparameter (set to 0.1) that controls the deviation from the reference policy.\n\n# 5.4 Inference\n\nWe sample one block at a diffusion step, conditioned on previously sampled blocks $p_{\\theta}(x_s^b | c, x_t^{<b})$ . The generation of each block is itself a multi-step iterative refinement process. 
At each step, candidate tokens are sampled for all remaining unfilled positions within the block. A hybrid acceptance strategy is then employed: we first accept all tokens whose sampling probability exceeds a predefined confidence 'threshold'. If an insufficient number of tokens meet this criterion, a low-confidence fallback is triggered, where we instead accept a fixed number of the most probable tokens regardless of their absolute confidence. This dual mechanism ensures steady generation progress.\n\n# 6 Evaluation\n\n# 6.1 Setup\n\nTo comprehensively evaluate the quality of instruction-tuned models, we employ a diverse suite of benchmarks categorized into five dimensions:\n\n- **Knowledge:** MMLU (Hendrycks et al., 2020), MMLU-Pro (Wang et al., 2024), GPQA-Diamond (Rein et al., 2024), ARC (Clark et al., 2018), CMMLU (Li et al., 2023a) C-Eval (Huang et al., 2023), GAOKAO-Bench (Zhang et al., 2023), SciBench (Wang et al., 2023), PHYBench (Qiu et al., 2025), TriviaQA (Joshi et al., 2017) \n- Reasoning: SQuAD 2.0 (Rajpurkar et al., 2018), DROP (Dua et al., 2019), KOR-Bench (Ma et al., 2024), HellaSwag (Zellers et al., 2019), BIG-Bench Hard (Suzgun et al., 2023), BIG-Bench Extra Hard (Kazemi et al., 2025), MuSR (Sprague et al., 2023), ZebraLogic (Lin et al., 2025), PrOntoQA (Saparov & He, 2022), PIQA (Bisk et al., 2020), OCNLI (Hu et al., 2020), BIG-Bench Hard-CN (team, 2023c) \nCoding: CRUXEval (Gu et al., 2024), MBPP (Austin et al., 2021), MultiPL-E (Cassano et al., 2023), HumanEval (Chen et al., 2021), BigCodeBench (Zhuo et al., 2024), LiveCodeBench (Jain et al., 2024), Spider (Yu et al., 2018), BIRD (Li et al., 2023b), HumanEval+ (Liu et al., 2023), MBPP+ (Liu et al., 2023), HumanEvalFix (Muennighoff et al., 2023), Aider (team, 2023a), HumanEval-CN (team, 2023c) \n- Math: GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), OlympiadBench (He et al., 2024), AIME 2025 (AIME, 2025), Omni-MATH (Gao et al., 2024), HARDMath2 (Roggeveen et al., 2025), GSM-Plus (Li et al., 2024), CMATH (Wei et al., 2023) \n- Agent & Alignment: BFCL (Patil et al., 2025), IFEval (Zhou et al., 2023), CodeIF-Bench (Wang et al., 2025b), Nexus Function Calling Benchmark (team, 2023b)\n\nThis extensive evaluation suite, comprising a total of 47 benchmarks, provides a holistic foundation for assessing model capabilities. In our experiments, we compare the LLaDA2.0 series against strong open-source auto-regressive (AR) models.\n\nFor all LLaDA2.0 models, we utilize a temperature of 0.0, a block size of 32, and a decoding threshold of 0.95.\n\n# 6.2 Results\n\nThe overall results, presented in the following tables, indicate that the LLaDA2.0 architecture is not only highly competitive, but also shows a promising trend of closing the performance gap with, and even surpassing, AR models in specific key areas. Our models consistently demonstrate strong, and often superior, performance in complex, structured tasks. For instance, LLaDA2.0-mini already outperforms a comparable AR model (Qwen3-8B) in the domains of Reasoning, Coding, and Math. This signal is amplified in our larger model, as LLaDA2.0-flash achieves parity with the powerful Qwen3-30B-A3B-Instruct-2507 and establishes a lead in the critical Coding and Agent domains. This suggests that as diffusion models scale, their inherent strengths in structured generation and tool use become increasingly apparent.\n\nAs shown in Table 1, LLaDA2.0-mini achieves a competitive average score of 64.34, closely approaching its AR peer, Ling-mini-2.0 (65.77). 
This demonstrates the fundamental viability of the diffusion approach. More importantly, it shows promising signals in complex tasks, outperforming its direct competitor on reasoning benchmarks like SQuAD 2.0 (86.50) and demonstrating more robust instruction following on IFEval (80.78). Its strong performance in coding tasks such as HumanEval (86.59) further suggests an early aptitude for structured generation.\n\nThis potential becomes even more evident with our larger model, LLaDA2.0-flash. As shown in Table 2, with an average score of 73.18, it stands firmly on par with strong AR models such as Qwen3-30B-A3B-Instruct-2507 (73.60). Crucially, LLaDA2.0-flash begins to exhibit clear advantages in complex generative tasks, a sign that the diffusion architecture may hold inherent strengths. In the critical domain of coding, it consistently outperforms its AR peers, scoring higher on HumanEval (94.51), MBPP (88.29) and MultiPL-E (74.87). This trend of surpassing AR models also extends to agent capabilities (BFCL v3: 75.43) and advanced mathematics (AIME 2025: 60.00).\n\nIn conclusion, the LLaDA2.0 series successfully demonstrates that diffusion-based language models are a powerful and scalable alternative to the dominant auto-regressive paradigm. While rapidly narrowing the gap on general benchmarks, they are already showcasing the potential to surpass traditional architectures in complex, structured domains like code generation and tool use. This positions diffusion models as a highly promising direction for the future of language generation.\n\nTable 1: Benchmark Performance of LLaDA2.0-mini \n\n<table><tr><td>Benchmark</td><td>Qwen3-8B (no Think)</td><td>Ling-mini-2.0</td><td>LLaDA2.0-mini-preview</td><td>LLaDA2.0-mini</td></tr><tr><td>Average</td><td>63.42</td><td>65.77</td><td>54.67</td><td>64.34</td></tr><tr><td colspan=\"5\">Knowledge</td></tr><tr><td>MMLU</td><td>80.94</td><td>82.15</td><td>72.49</td><td>80.53</td></tr><tr><td>MMLU-Pro</td><td>65.48</td><td>63.72</td><td>49.22</td><td>63.22</td></tr><tr><td>CMMLU</td><td>79.17</td><td>80.84</td><td>67.53</td><td>79.50</td></tr><tr><td>C-EVAL</td><td>81.36</td><td>82.10</td><td>66.54</td><td>81.38</td></tr><tr><td>GAOKAO-Bench</td><td>84.94</td><td>87.23</td><td>74.46</td><td>84.30</td></tr><tr><td>ARC-c</td><td>93.35</td><td>93.09</td><td>89.15</td><td>93.56</td></tr><tr><td>GPQA</td><td>46.59</td><td>56.80</td><td>23.74</td><td>47.98</td></tr><tr><td>SciBench</td><td>2.85</td><td>5.28</td><td>4.10</td><td>3.53</td></tr><tr><td>PHYBench</td><td>9.76</td><td>14.59</td><td>5.08</td><td>11.70</td></tr><tr><td>TriviaQA</td><td>52.51</td><td>55.63</td><td>50.49</td><td>51.33</td></tr><tr><td colspan=\"5\">Reasoning</td></tr><tr><td>BIG-Bench Hard</td><td>79.48</td><td>83.70</td><td>70.64</td><td>78.21</td></tr><tr><td>BIG-Bench Extra 
Hard</td><td>18.27</td><td>14.81</td><td>12.36</td><td>16.47</td></tr><tr><td>bbh-zh</td><td>80.09</td><td>66.11</td><td>66.62</td><td>75.75</td></tr><tr><td>MuSR</td><td>70.02</td><td>71.36</td><td>56.77</td><td>71.48</td></tr><tr><td>ZebraLogic</td><td>37.48</td><td>79.85</td><td>14.80</td><td>64.20</td></tr><tr><td>PrOntoQA</td><td>93.12</td><td>96.06</td><td>70.00</td><td>86.00</td></tr><tr><td>PIQA</td><td>88.30</td><td>87.54</td><td>84.33</td><td>86.51</td></tr><tr><td>OCNLI</td><td>61.49</td><td>60.17</td><td>58.68</td><td>64.51</td></tr><tr><td>HellaSwag</td><td>79.56</td><td>69.02</td><td>74.01</td><td>79.01</td></tr><tr><td>KOR-Bench</td><td>54.48</td><td>62.72</td><td>37.26</td><td>50.40</td></tr><tr><td>DROP</td><td>84.56</td><td>78.80</td><td>79.49</td><td>81.91</td></tr><tr><td>SQuAD 2.0</td><td>85.21</td><td>75.56</td><td>85.61</td><td>86.50</td></tr><tr><td colspan=\"5\">Coding</td></tr><tr><td>CRUXEval-O</td><td>74.06</td><td>76.12</td><td>61.88</td><td>71.62</td></tr><tr><td>MBPP</td><td>78.92</td><td>84.07</td><td>77.75</td><td>81.50</td></tr><tr><td>MBPP+</td><td>71.96</td><td>76.46</td><td>66.67</td><td>74.07</td></tr><tr><td>MultiPL-E</td><td>61.70</td><td>67.09</td><td>62.43</td><td>67.46</td></tr><tr><td>HumanEval</td><td>84.76</td><td>85.98</td><td>80.49</td><td>86.59</td></tr><tr><td>HumanEval+</td><td>78.66</td><td>81.71</td><td>71.95</td><td>79.88</td></tr><tr><td>HumanEvalFix</td><td>76.02</td><td>82.83</td><td>60.16</td><td>74.90</td></tr><tr><td>HumanEval-cn</td><td>74.39</td><td>71.34</td><td>73.17</td><td>78.66</td></tr><tr><td>BigIntCodeBench-Full</td><td>36.05</td><td>35.00</td><td>30.44</td><td>32.89</td></tr><tr><td>LiveCodeBench</td><td>26.38</td><td>34.97</td><td>19.82</td><td>31.50</td></tr><tr><td>Aider</td><td>55.64</td><td>49.62</td><td>28.57</td><td>39.85</td></tr><tr><td>BIRD-SQL</td><td>36.11</td><td>39.67</td><td>27.71</td><td>39.34</td></tr><tr><td>Spider</td><td>72.80</td><td>76.43</td><td>75.64</td><td>76.76</td></tr><tr><td colspan=\"5\">Math</td></tr><tr><td>GSM8K</td><td>93.63</td><td>94.62</td><td>89.01</td><td>94.24</td></tr><tr><td>MATH</td><td>86.28</td><td>94.66</td><td>73.50</td><td>93.22</td></tr><tr><td>OlympiadBench</td><td>55.33</td><td>72.30</td><td>36.30</td><td>67.70</td></tr><tr><td>AIME 2025</td><td>22.08</td><td>47.66</td><td>10.00</td><td>36.67</td></tr><tr><td>HARDMath2</td><td>7.58</td><td>9.95</td><td>0.95</td><td>0.47</td></tr><tr><td>Omni-MATH</td><td>33.20</td><td>48.80</td><td>19.20</td><td>41.70</td></tr><tr><td>GSM-Plus</td><td>86.09</td><td>87.82</td><td>81.44</td><td>86.24</td></tr><tr><td>CMATH</td><td>95.42</td><td>96.40</td><td>90.53</td><td>95.72</td></tr><tr><td colspan=\"5\">Agent &amp; Alignment</td></tr><tr><td>IFEval-strict-prompt</td><td>86.90</td><td>76.16</td><td>62.50</td><td>80.78</td></tr><tr><td>BFCL v3</td><td>70.08</td><td>53.98</td><td>74.11</td><td>70.90</td></tr><tr><td>CodeIF-Bench</td><td>50.00</td><td>46.00</td><td>48.00</td><td>48.00</td></tr><tr><td>Nexus FC</td><td>37.71</td><td>34.38</td><td>33.68</td><td>35.18</td></tr></table>\n\nTable 2: Benchmark Performance of LLaDA2.0-flash \n\n<table><tr><td>Benchmark</td><td>Qwen3-30B-A3B-Instruct-2507</td><td>Ling-flash-2.0</td><td>LLaDA2.0-flash-preview</td><td>LLaDA2.0-flash</td></tr><tr><td>Average</td><td>73.60</td><td>72.15</td><td>65.97</td><td>73.18</td></tr><tr><td 
colspan=\"5\">Knowledge</td></tr><tr><td>MMLU</td><td>87.13</td><td>87.98</td><td>83.15</td><td>87.69</td></tr><tr><td>MMLU-Pro</td><td>74.23</td><td>76.84</td><td>66.16</td><td>73.36</td></tr><tr><td>CMMLU</td><td>86.36</td><td>86.59</td><td>79.64</td><td>85.13</td></tr><tr><td>C-EVAL</td><td>88.17</td><td>88.03</td><td>79.28</td><td>86.75</td></tr><tr><td>GAOKAO-Bench</td><td>94.53</td><td>93.24</td><td>86.12</td><td>93.90</td></tr><tr><td>ARC-c</td><td>95.81</td><td>95.08</td><td>93.90</td><td>95.93</td></tr><tr><td>GPQA</td><td>57.34</td><td>67.12</td><td>41.92</td><td>61.98</td></tr><tr><td>SciBench</td><td>4.54</td><td>4.14</td><td>5.13</td><td>4.13</td></tr><tr><td>PHYBench</td><td>29.84</td><td>27.67</td><td>7.58</td><td>30.06</td></tr><tr><td>TriviaQA</td><td>65.61</td><td>69.76</td><td>69.25</td><td>66.88</td></tr><tr><td colspan=\"5\">Reasoning</td></tr><tr><td>BIG-Bench Hard</td><td>85.54</td><td>89.36</td><td>82.85</td><td>86.75</td></tr><tr><td>BIG-Bench Extra Hard</td><td>37.80</td><td>23.24</td><td>16.70</td><td>27.86</td></tr><tr><td>BIG-Bench Hard - CN</td><td>86.18</td><td>75.09</td><td>83.38</td><td>87.52</td></tr><tr><td>MuSR</td><td>79.15</td><td>82.72</td><td>78.75</td><td>80.48</td></tr><tr><td>ZebraLogic</td><td>90.97</td><td>87.60</td><td>39.90</td><td>82.30</td></tr><tr><td>PrOntoQA</td><td>97.12</td><td>97.88</td><td>93.50</td><td>96.50</td></tr><tr><td>PIQA</td><td>91.57</td><td>91.95</td><td>91.84</td><td>92.76</td></tr><tr><td>OCNLI</td><td>71.59</td><td>65.36</td><td>69.39</td><td>71.63</td></tr><tr><td>HellaSwag</td><td>86.31</td><td>81.59</td><td>86.00</td><td>84.97</td></tr><tr><td>KOR-Bench</td><td>68.00</td><td>68.96</td><td>53.28</td><td>64.24</td></tr><tr><td>DROP</td><td>87.57</td><td>88.32</td><td>88.17</td><td>87.90</td></tr><tr><td>SQuAD 2.0</td><td>89.51</td><td>81.32</td><td>90.61</td><td>90.00</td></tr><tr><td colspan=\"5\">Coding</td></tr><tr><td>CRUXEval-O</td><td>86.75</td><td>82.75</td><td>74.50</td><td>85.12</td></tr><tr><td>MBPP</td><td>86.65</td><td>85.01</td><td>86.65</td><td>88.29</td></tr><tr><td>MBPP+</td><td>78.04</td><td>76.19</td><td>75.93</td><td>79.63</td></tr><tr><td>MultiPL-E</td><td>70.67</td><td>65.76</td><td>72.38</td><td>74.87</td></tr><tr><td>HumanEval</td><td>93.29</td><td>85.98</td><td>88.41</td><td>94.51</td></tr><tr><td>HumanEval+</td><td>88.41</td><td>85.98</td><td>82.32</td><td>87.80</td></tr><tr><td>HumanEvalFix</td><td>91.16</td><td>92.68</td><td>83.33</td><td>90.24</td></tr><tr><td>HumanEval-CN</td><td>87.20</td><td>74.39</td><td>84.76</td><td>89.02</td></tr><tr><td>Bigcodebench-Full</td><td>41.49</td><td>40.70</td><td>40.44</td><td>41.58</td></tr><tr><td>LiveCodeBench</td><td>41.63</td><td>44.11</td><td>29.07</td><td>42.29</td></tr><tr><td>Aider</td><td>71.43</td><td>71.43</td><td>51.13</td><td>66.92</td></tr><tr><td>Spider</td><td>81.79</td><td>80.58</td><td>81.37</td><td>82.49</td></tr><tr><td>BIRD-SQL</td><td>47.75</td><td>47.49</td><td>45.34</td><td>45.76</td></tr><tr><td colspan=\"5\">Math</td></tr><tr><td>GSM8K</td><td>96.36</td><td>95.45</td><td>95.75</td><td>96.06</td></tr><tr><td>MATH</td><td>96.70</td><td>96.10</td><td>83.52</td><td>95.44</td></tr><tr><td>OlympiadBench</td><td>77.59</td><td>76.19</td><td>49.33</td><td>74.07</td></tr><tr><td>AIME 
2025</td><td>61.88</td><td>55.89</td><td>23.33</td><td>60.00</td></tr><tr><td>HARDMath2</td><td>4.27</td><td>23.70</td><td>3.79</td><td>4.27</td></tr><tr><td>Omni-MATH</td><td>54.00</td><td>53.00</td><td>24.60</td><td>50.30</td></tr><tr><td>GSM-Plus</td><td>89.45</td><td>89.83</td><td>88.25</td><td>89.64</td></tr><tr><td>CMATH</td><td>96.58</td><td>96.52</td><td>95.26</td><td>96.90</td></tr><tr><td colspan=\"5\">Agent &amp; Alignment</td></tr><tr><td>IFEval-strict -prompt</td><td>84.29</td><td>81.52</td><td>75.60</td><td>81.70</td></tr><tr><td>BFCL v3</td><td>73.19</td><td>67.57</td><td>74.86</td><td>75.43</td></tr><tr><td>CodelF-Bench</td><td>54.00</td><td>56.00</td><td>56.00</td><td>58.00</td></tr><tr><td>Nexus FC</td><td>49.93</td><td>36.25</td><td>47.98</td><td>50.45</td></tr></table>\n\n![](images/6d0a9c00a8b2af31727848bbc98916f41096531f32db5c264b6edfeed2bf56b7.jpg) \nFigure 4: Score/TPF vs threshold/block size\n\n![](images/1a540cee2a347924d1a7ea4572fc8552d3e819b7060e7a7700235fed26df9ab1.jpg)\n\n![](images/dad897aa28b905e367eca11bc193e2f30fd3b0b8d56fc2f5b426c3ac78beabef.jpg) \nFigure 5: Performance on the RULER benchmark.\n\n# 6.3 Analysis\n\nAnalysis of Inference Hyper-parameters In addition to our main evaluation, we conducted a brief analysis to tune key inference hyperparameters. To ensure efficiency, this analysis was performed on our LLaDA2.0-mini model, using a representative subset of our benchmarks to understand the trade-off between generation quality (score) and inference speed (measured as TPF - Tokens Per Forward; higher is faster).\n\nDenoising Threshold. We first investigate the impact of the Denoising Threshold. While keeping the Block Size fixed at 32, we varied the threshold and observed its effect on quality and speed. As shown in Figure 4, the results reveal a clear trade-off. A threshold of 0.95 achieved the highest quality score (70.15) at the cost of the lowest inference speed (2.55 TPF). Lowering the threshold to 0.85 boosted the speed to its peak (3.31 TPF), but led to an unacceptable degradation in quality, with the score dropping to 67.90.\n\nBlock Size. Subsequently, we analyze the effect of Block Size. We set the Denoising Threshold to 0.95, the optimal value identified in the prior experiment. The results in Figure 4 demonstrate a similar trade-off. A block size of 16 yielded the highest score (70.26) but with the slowest inference (2.44 TPF). In contrast, increasing the block size to 32 substantially improved the speed to 2.55 TPF with only a marginal quality drop to 70.15. Further increasing the block size to 64 proved suboptimal, as it degraded both score and speed relative to the size-32 setting. Therefore, a block size of 32 emerges as the most compelling choice, offering a significant speed-up for a negligible performance cost.\n\nIn summary, based on this analysis, the configuration for our main evaluation is well-supported. The Denoising Threshold of 0.95 is the clear choice for maximizing quality. For block-size, the setting of 32 represents an optimal balance, providing the highest throughput with virtually no sacrifice in performance compared to the slightly higher-scoring but slower setting of 16.\n\nAnalysis of Context Length To rigorously validate our model's performance across various context lengths, we conducted a series of evaluations using the RULER benchmark.\n\nAs shown in Figure 5, both models demonstrate strong performance and stability within context length of $32\\mathrm{k}$ . 
The LLaDA2.0-flash model is particularly robust, maintaining a score above 93 across all lengths from 4k to 32k. The LLaDA2.0-mini model also achieves high scores, starting at 93.29 at 4k but degrading to 83.94 at 32k.\n\nTo test the models' extrapolation capabilities, we extended the context length to 64k. This was achieved by employing dynamic RoPE scaling during inference, specifically using the YaRN method with a scaling factor of 2.0. However, this extension resulted in a performance degradation for both models, demonstrating a clear trade-off between context length extension and task accuracy.\n\nIn summary, this evaluation highlights two key findings: (1) The LLaDA2.0 models are exceptionally robust for long-context tasks within their native 32k window. (2) They can be successfully extended to handle 64k sequences via YaRN scaling, providing flexibility for extreme-length applications, albeit with a predictable performance cost.\n\n# 7 Training & Inference Infrastructure\n\n# 7.1 Pretraining\n\nWe adopt Megatron-LM (Shoeybi et al., 2019) as the pretraining backend to enable efficient training of a 100B-parameter model with long sequences, leveraging data parallelism (DP), pipeline parallelism (PP), tensor parallelism (TP), context parallelism (CP), and expert parallelism (EP), as Figure 6 shows. To ensure consistency of masked tokens, we generate them on a single model-parallel (MP, that is, TP and PP) rank and then broadcast them to all other ranks in the MP group.\n\nEfficient Block Diffusion Training For flexible support of arbitrary block diffusion attention masks, we utilize cuDNN as the backend for the attention mechanism. This approach achieves more than 1.3x end-to-end speedup and over $90\%$ memory savings in the attention layer compared to the unfused attention implementation in TransformerEngine when training LLaDA2.0-mini. We further apply a zig-zag partitioning strategy to the block diffusion attention mask to achieve effective load balancing across the CP group.\n\nNumerical Stability During the transition from AR to diffusion models, training can suffer from gradient explosion, especially at high mask ratios within a document. This issue stems from the fact that masked token embeddings are set to zero during AR training, as these tokens are never observed, so their corresponding weights gradually decay to zero. A straightforward fix, randomly reinitializing the masked token embeddings upon loading the AR model, may disrupt other well-trained parameters, potentially causing catastrophic forgetting. To mitigate this while preserving pre-trained knowledge, we instead add independent Gaussian noise to the output of the embedding layer for each masked token during the initial iterations of training. This keeps the L2 norm of each masked token's embedding away from zero, avoiding gradient explosion and stabilizing the training process (an illustrative PyTorch sketch of this trick is given below).\n\n![](images/3313be412152ca3aa13bbd09f49e5f0d005bbf2d618f4a040e2901887699c630.jpg) \nFigure 6: Parallelism overview.\n\n# 7.2 Post-Training\n\nFor the post-training phase, we leverage dFactory $^2$ (InclusionAI, 2025), a repository providing efficient training recipes for dLLMs. Built upon the VeOmni (Ma et al., 2025a) distributed training framework, dFactory allows us to effectively implement complex parallelization schemes. Specifically, our setup for fine-tuning LLaDA2.0 combines Data Parallelism (DP) and Expert Parallelism (EP) to ensure scalable and stable training. 
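The numerical-stability technique described in Section 7.1 above is small enough to sketch concretely. The following PyTorch snippet is an illustrative sketch only, not the authors' implementation: the wrapper class, the noise scale `noise_std`, the `warmup_steps` cutoff, and the `mask_token_id` are assumptions made for demonstration. It perturbs only the embedding outputs of masked tokens, and only during an initial warm-up window, so that their L2 norm stays away from the all-zero AR initialization.

```python
import torch
import torch.nn as nn

class NoisyMaskedEmbedding(nn.Module):
    """Embedding wrapper that adds Gaussian noise to the output embeddings of
    masked tokens during the first `warmup_steps` training steps, keeping their
    L2 norm away from zero when initializing a diffusion model from AR weights."""

    def __init__(self, vocab_size: int, hidden_size: int, mask_token_id: int,
                 noise_std: float = 0.02, warmup_steps: int = 1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.mask_token_id = mask_token_id
        self.noise_std = noise_std
        self.warmup_steps = warmup_steps
        self.register_buffer("step", torch.zeros((), dtype=torch.long))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(input_ids)                         # (batch, seq, hidden)
        if self.training and int(self.step) < self.warmup_steps:
            is_masked = (input_ids == self.mask_token_id).unsqueeze(-1)
            noise = torch.randn_like(h) * self.noise_std
            h = torch.where(is_masked, h + noise, h)      # perturb masked positions only
        if self.training:
            self.step += 1
        return h

# toy usage: a vocabulary of 32 tokens with id 31 reserved as the mask token
layer = NoisyMaskedEmbedding(vocab_size=32, hidden_size=16, mask_token_id=31)
ids = torch.tensor([[5, 31, 7, 31]])
print(layer(ids).shape)  # torch.Size([1, 4, 16])
```

After the warm-up window the wrapper behaves like a plain embedding layer, matching the description of applying the noise only during the initial iterations of training.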
To further enhance data throughput and hardware utilization, we adopt a data packing strategy analogous to the one used in continued pre-training, which concatenates multiple short sequences into a single longer sequence. This integrated approach provides a robust and high-performance infrastructure for the post-training of our model.\n\n# 7.3 Inference Engine\n\nWe adapt dInfer $^3$ (Ma et al., 2025b)—originally built for high-performance diffusion LLM inference—to efficiently support block diffusion inference. This requires the inference engine to leverage optimization techniques traditionally designed for AR models. For instance, the framework can now effectively exploit KV-cache reuse to substantially reduce redundant prefill computation. As block diffusion inference closely resembles auto-regressive generation in its execution pattern, we also incorporated block diffusion inference support into SGLang $^4$ (Zheng et al., 2024), allowing it to benefit from the same class of system-level optimizations designed for AR models. More mature features from dInfer are being ported to SGLang.\n\nInference speed Figure 3 compares the average inference throughput (Tokens Per Second, TPS = #decoding tokens/#total-time) of our optimized LLaDA2.0-flash models against state-of-the-art AR models of similar scale on four reasoning and code-generation benchmarks (HumanEval, MBPP, GSM8K, and CRUXEval). All models are evaluated under a consistent generation setup. For diffusion-based models (LLaDA2.0-flash and LLaDA2.0-flash-CAP), we adopt a threshold decoder with a threshold of 0.95. The AR baselines (Ling-flash-2.0 and Qwen3-30B-A3B-Instruct-2507) are deployed using SGLang, while the diffusion models are served with dInfer, ensuring a fair comparison in realistic inference environments. As shown, LLaDA2.0-flash-CAP reaches 535 TPS, outperforming the standard LLaDA2.0-flash (383 TPS) and providing up to $2.1 \times$ speed-up over the AR baselines (256 TPS and 237 TPS).\n\n# 8 Conclusion\n\nIn this work, we introduced LLaDA2.0, a family of discrete diffusion language models scaling up to 100B total parameters through systematic conversion from auto-regressive models, together with a set of comprehensive recipes designed to smoothly and effectively transform traditional AR language models into efficient and performant Masked Diffusion Language Models.\n\nExtensive evaluations validate the feasibility of this training paradigm. The LLaDA2.0-mini and LLaDA2.0-flash models achieve performance competitive with their AR counterparts. Somewhat surprisingly, LLaDA2.0-flash demonstrates advantages in complex, structured domains such as code generation, mathematical reasoning, and agentic tool use. These results open a new direction for future work in the agentic LLM era and point to a measurable potential of dLLMs for test-time scaling.\n\nFuture work includes further scaling of model size, reinforcement-learning and thinking-style training paradigms, and pushing decoding speed to its limit.\n\n# References\n\nAIME. AIME Problems and Solutions, 2025. URL https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions. \nMarianne Arriola, Aaron Gokaslan, Justin T Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, Subham Sekhar Sahoo, and Volodymyr Kuleshov. Block diffusion: Interpolating between autoregressive and diffusion language models. arXiv:2503.09573, 2025. 
\nJacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv:2108.07732, 2021. \nYonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020. \nFederico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, et al. MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation. IEEE Transactions on Software Engineering, 49(7):3675-3691, 2023. \nMark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021. \nXinhua Chen, Sitao Huang, Cong Guo, Chiyue Wei, Yintao He, Jianyi Zhang, Hai \"Helen\" Li, and Yiran Chen. DPad: Efficient Diffusion Language Models with Suffix Dropout, August 2025a. arXiv:2508.14148. \nZigeng Chen, Gongfan Fang, Xinyin Ma, Ruonan Yu, and Xinchao Wang. dParallel: Learnable Parallel Decoding for dLLMs. arXiv:2509.26488, 2025b. \nShuang Cheng, Yihan Bian, Dawei Liu, Linfeng Zhang, Qian Yao, Zhongbo Tian, Wenhai Wang, Qipeng Guo, Kai Chen, Biqing Qi, et al. Sdar: A synergistic diffusion-autoregression paradigm for scalable sequence generation. arXiv:2510.06303, 2025. \nPeter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv:1803.05457, 2018. \nKarl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training Verifiers to Solve Math Word Problems. arXiv:2110.14168, 2021.\n\nDheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning over Paragraphs. arXiv:1903.00161, 2019. \nBofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, et al. Omni-math: A universal olympiad level mathematic benchmark for large language models. arXiv:2410.07985, 2024. \nShansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, Hao Peng, and Lingpeng Kong. Scaling diffusion language models via adaptation from autoregressive models. In The Thirteenth International Conference on Learning Representations, 2025. \nAaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 Herd of Models. arXiv:2407.21783, 2024. \nAlex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida I Wang. \nCruxEval: A Benchmark for Code Reasoning, Understanding and Execution. arXiv:2401.03065, 2024. \nChaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems. arXiv:2402.14008, 2024. 
\nDan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring Massive Multitask Language Understanding. arXiv:2009.03300, 2020. \nDan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring Mathematical Problem Solving with the Math Dataset. arXiv:2103.03874, 2021. \nHai Hu, Kyle Richardson, Liang Xu, Lu Li, Sandra Kübler, and Lawrence S Moss. Ocnli: Original chinese natural language inference. arXiv:2010.05444, 2020. \nYuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Yao Fu, et al. C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models. Advances in Neural Information Processing Systems, 36:62991-63010, 2023. \nAaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o System Card. arXiv:2410.21276, 2024. \nInclusionAI. dFactory: Easy and Efficient dLLM Fine-Tuning, 2025. URL https://github.com/inclusionAI/dFactory. \nNaman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv:2403.07974, 2024. \nMandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv:1705.03551, 2017. \nMehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, Sanket Vaibhav Mehta, Lalit K Jain, Virginia Aglietti, Disha Jindal, Yuanzhu Peter Chen, et al. Big-bench extra hard. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 26473-26501, 2025. \nChandrasegaran Keshigeyan, Thomas Armin, and others at Radical Numerics. Training diffusion language models at scale using autoregressive models. https://github.com/RadicalNumerics/RND1, 2025. \nHaonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring Massive Multitask Language Understanding in Chinese. arXiv:2306.09212, 2023a. \nJinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, et al. Can lIm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls. Advances in Neural Information Processing Systems, 36:42330-42357, 2023b. \nQintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. Gsm-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem solvers. arXiv:2402.19255, 2024.\n\nShufan Li, Konstantinos Kallidromitis, Hritik Bansal, Akash Gokul, Yusuke Kato, Kazuki Kozuka, Jason Kuen, Zhe Lin, Kai-Wei Chang, and Aditya Grover. LaViDa: A Large Diffusion Language Model for Multimodal Understanding, June 2025. arXiv:2505.16839. \nBill Yuchen Lin, Ronan Le Bras, Kyle Richardson, Ashish Sabharwal, Radha Poovendran, Peter Clark, and Yejin Choi. Zebralogic: On the scaling limits of lms for logical reasoning. arXiv:2502.01100, 2025. \nTeam Ling, Ang Li, Ben Liu, Binbin Hu, Bing Li, Bingwei Zeng, Borui Ye, Caizhi Tang, Changxin Tian, Chao Huang, Chao Zhang, et al. Every activation boosted: Scaling general reasoner to 1 trillion open language foundation. arXiv:2510.22115, 2025. 
\nAixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 Technical Report. arXiv:2412.19437, 2024. \nJiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. Advances in Neural Information Processing Systems, 36:21558-21572, 2023. \nJingyu Liu, Xin Dong, Zhifan Ye, Rishabh Mehta, Yonggan Fu, Vartika Singh, Jan Kautz, Ce Zhang, and Pavlo Molchanov. TiDAR: Think in Diffusion, Talk in Autoregression, November 2025. arXiv:2511.08923. \nKaijing Ma, Xinrun Du, Yunran Wang, Haoran Zhang, Zhoufutu Wen, Xingwei Qu, Jian Yang, Jiaheng Liu, Minghao Liu, Xiang Yue, et al. Kor-bench: Benchmarking language models on knowledge-orthogonal reasoning tasks. arXiv:2410.06526, 2024. \nQianli Ma, Yaowei Zheng, Zhelun Shi, Zhongkai Zhao, Bin Jia, Ziyue Huang, Zhiqi Lin, Youjie Li, Jiacheng Yang, Yanghua Peng, et al. Veomni: Scaling any modality model training with model-centric distributed recipe zoo. arXiv:2508.02317, 2025a. \nYuxin Ma, Lun Du, Lanning Wei, Kun Chen, Qian Xu, Kangyu Wang, Guofeng Feng, Guoshan Lu, Lin Liu, Xiaojing Qiand Xinyuan Zhang, et al. dinfer: An efficient inference framework for diffusion language models. arXiv:2510.08666, 2025b. \nMeta-AI. The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation, 2025. URL https://ai.meta.com/blog/llama-4-multimodal-intelligence/. \nMoonshot. Kimi K2. https://github.com/MoonshotAI/Kimi-K2/, 2025. \nNiklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro Von Werra, and Shayne Longpre. Octopack: Instruction tuning code large language models. In NeurIPS 2023 workshop on instruction tuning and instruction following, 2023. \nJinjie Ni and team. Openmoe 2: Sparse diffusion language models. https://jinjieni.notion.site/0penMoE-2-Sparse-Diffusion-Language-Models-277d8f03a8668065a4ecd23f23bd6aac, 2025. Notion Blog. \nJinjie Ni, Qian Liu, Chao Du, Longxu Dou, Hang Yan, Zili Wang, Tianyu Pang, and Michael Qizhe Shieh. Training optimal large diffusion language models. arXiv:2510.03280, 2025. \nShen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. Large language diffusion models, 2025. \nShishir G. Patil, Huanzhi Mao, Charlie Cheng-Jie Ji, Fanjia Yan, Vishnu Suresh, Ion Stoica, and Joseph E. Gonzalez. The berkeley function calling leaderboard (bfcl): From tool use to agentic evaluation of large language models. In Forty-second International Conference on Machine Learning, 2025. \nShi Qiu, Shaoyang Guo, Zhuo-Yang Song, Yunbo Sun, Zeyu Cai, Jiashen Wei, Tianyu Luo, Yixuan Yin, Haoxu Zhang, Yi Hu, et al. Phybench: Holistic evaluation of physical perception and reasoning in large language models. arXiv:2504.16074, 2025. \nPranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv:1806.03822, 2018. \nDavid Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A Graduate-Level Google-Proof Q&Q Benchmark. In First Conference on Language Modeling, 2024.\n\nJames V Roggeveen, Erik Y Wang, Will Flintoft, Peter Donets, Lucy S Nathwani, Nickholas Gutierrez, David Ettel, Anton Marius Graf, Siddharth Dandavate, Arjun Nageswaran, et al. 
Hardmath2: A benchmark for applied mathematics built by students as part of a graduate class. arXiv:2505.11774, 2025. \nAbulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. arXiv:2210.01240, 2022. \nMohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv:1909.08053, 2019. \nYuxuan Song, Zheng Zhang, Cheng Luo, Pengyang Gao, Fan Xia, Hao Luo, Zheng Li, Yuehang Yang, Hongli Yu, Xingwei Qu, Yuwei Fu, Jing Su, Ge Zhang, Wenhao Huang, Mingxuan Wang, Lin Yan, Xiaoying Jia, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Yonghui Wu, and Hao Zhou. Seed diffusion: A large-scale diffusion language model with high-speed inference, 2025. \nZayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, and Greg Durrett. Musr: Testing the limits of chain-of-thought with multistep soft reasoning. arXiv:2310.16049, 2023. \nMirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc Le, Ed Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, pp. 13003-13051, 2023. \nAider-AI team. Aider-ai/aider, 2023a. URL https://github.com/Aider-AI/aider. \nNexusflow.ai team. Nexusraven-v2: Surpassing gpt-4 for zero-shot function calling, 2023b. URL https://nexusflow.ai/blogs/ravenv2. \nOpencompact team. Open-compass/opencompass, 2023c. URL https://github.com/open-compass/opencompass. \nChangxin Tian, Jiapeng Wang, Qian Zhao, Kunlong Chen, Jia Liu, Ziqi Liu, Jiaxin Mao, Wayne Xin Zhao, Zhiqiang Zhang, and Jun Zhou. Wsm: decay-free learning rate schedule via checkpoint merging for llm pre-training. arXiv:2507.17634, 2025. \nChenyu Wang, Paria Rashidinejad, DiJia Su, Song Jiang, Sid Wang, Siyan Zhao, Cai Zhou, Shannon Zejiang Shen, Feiyu Chen, Tommi Jaakkola, Yuandong Tian, and Bo Liu. Spg: Sandwiched policy gradient for masked diffusion language models. arXiv:2510.09541, 2025a. \nPeiding Wang, Li Zhang, Fang Liu, Lin Shi, Minxiao Li, Bo Shen, and An Fu. Codeif-bench: Evaluating instruction-following capabilities of large language models in interactive code generation. arXiv:2503.22688, 2025b. \nXiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv:2307.10635, 2023. \nXu Wang, Chenkai Xu, Yijie Jin, Jiachun Jin, Hao Zhang, and Zhijie Deng. Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing, August 2025c. arXiv:2508.09192. \nYinjie Wang, Ling Yang, Bowen Li, Ye Tian, Ke Shen, and Mengdi Wang. Revolutionizing reinforcement learning framework for diffusion large language models. arXiv:2509.06949, 2025d. \nYubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. \nTianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and Bin Wang. Cmath: Can your language model pass Chinese elementary school math test? arXiv:2306.16636, 2023. 
\nZhihui Xie, Jiacheng Ye, Lin Zheng, Jiahui Gao, Jingwei Dong, Zirui Wu, Xueliang Zhao, Shansan Gong, Xin Jiang, Zhenguo Li, and Lingpeng Kong. Dream-Coder 7B: An Open Diffusion Language Model for Code, September 2025. arXiv:2509.01142.\n\nAn Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. Qwen3 Technical Report. arXiv:2505.09388, 2025. \nJiacheng Ye, Zhihui Xie, Lin Zheng, Jiahui Gao, Zirui Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. Dream 7b: Diffusion large language models. arXiv:2508.15487, 2025. \nRunpeng Yu, Qi Li, and Xinchao Wang. Discrete Diffusion in Large Language and Multimodal Models: A Survey, September 2025. arXiv:2506.13759. \nTao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. arXiv:1809.08887, 2018. \nRowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv:1905.07830, 2019. \nXiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on gaokao benchmark. arXiv:2305.12474, 2023. \nLianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. Sglang: Efficient execution of structured language model programs, 2024. \nJeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-Following Evaluation for Large Language Models. arXiv:2311.07911, 2023. \nFengqi Zhu, Zebin You, Yipeng Xing, Zenan Huang, Lin Liu, Yihong Zhuang, Guoshan Lu, Kangyu Wang, Xudong Wang, Lanning Wei, Hongrui Guo, Jiaqi Hu, Wentao Ye, Tieyuan Chen, Chenchen Li, Chengfu Tang, Haibo Feng, Jun Hu, Jun Zhou, Xiaolu Zhang, Zhenzhong Lan, Junbo Zhao, Da Zheng, Chongxuan Li, Jianguo Li, and Ji-Rong Wen. Llada-moe: A sparse moe diffusion language model, 2025. \nTerry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, et al. Bigcodebench: Benchmarking code generation with diverse function calls and complex instructions. arXiv:2406.15877, 2024."}
# On Two Dimensional Flat Hessian Potentials Abstract A Riemannian metric is termed a Hessian metric if in some coordinate system it can be locally represented as the Hessian quadratic form of some locally defined smooth potential function. Under very mild extra technical conditions, we first theoretically describe the potentials of flat Hessian metrics on surfaces, and then construct these potentials explicitly using methods from integrable systems. Keywords: Hessian metric, hydrodynamic system, flat surface, Schrödinger equation. # 1 Introduction Let $M$ be a differentiable manifold of dimension $n$ . Recall that, for a Riemannian metric $h$ on $M$ , a coordinate map $u: U \to \mathbb{R}^n$ of $M$ at the vicinity of a point $p \in M$ is termed a Hesse coordinate of $(M, h)$ at $p$ , if and only if $$ h | _ {U} = \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \frac {\partial^ {2} \Phi}{\partial u ^ {i} \partial u ^ {j}} d u ^ {i} \otimes d u ^ {j} $$ for some $\Phi \in C^{\infty}(U)$ , and in this case, the smooth function $\Phi$ is termed a potential of $h$ in Hesse coordinate $u\colon U\to \mathbb{R}^n$ . A Riemannian metric $h$ on $M$ is termed a Hessian metric, if for every point $p\in M$ there exists a Hesse coordinate of $(M,h)$ at $p$ . Specifically, the following construction provides Hessian metrics: Let $f$ be a real-valued function of two real variables such that its Hessian matrix is everywhere positive definite. Then $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a Hessian metric, and $f$ is its potential function. In this article, using methods from mathematical physics, we thoroughly study the potentials $f = f(x,y)$ whose Hessian quadratic form, viewed as a Riemannian metric, is flat. # 2 Preliminary Results We shall first establish the curvature formula for Hessian surfaces: Proposition 2.1 Let $f$ be a real-valued smooth function defined on a domain in $\mathbb{R}^2$ . Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a Riemannian metric. Then the Gaußian curvature $K$ of the metric $g$ satisfies $$ K = - \frac {\left| \begin{array}{c c c} f _ {x x} & f _ {x x x} & f _ {x x y} \\ f _ {x y} & f _ {x x y} & f _ {x y y} \\ f _ {y y} & f _ {x y y} & f _ {y y y} \end{array} \right|}{4 \left| \begin{array}{c c} f _ {x x} & f _ {x y} \\ f _ {x y} & f _ {y y} \end{array} \right| ^ {2}} = - \frac {f _ {x x} \{f _ {x y} , f _ {y y} \} + f _ {x y} \{f _ {y y} , f _ {x x} \} + f _ {y y} \{f _ {x x} , f _ {x y} \}}{4 (f _ {x x} f _ {y y} - f _ {x y} f _ {x y}) ^ {2}} $$ where $\{\cdot, \cdot\}$ is the standard Poisson bracket normalized by $\{x, y\} = 1$ . Proof Apply Brioschi's formula to the Hessian metric. For flat Hessian metrics, Brioschi's formula simplifies further. Corollary 2.2 Let $f$ be a real-valued smooth function defined on a domain in $\mathbb{R}^2$ . Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a Riemannian metric. Then $$ f _ {x x} \{f _ {x y}, f _ {y y} \} + f _ {x y} \{f _ {y y}, f _ {x x} \} + f _ {y y} \{f _ {x x}, f _ {x y} \} = \left| \begin{array}{l l l} f _ {x x} & f _ {x x x} & f _ {x x y} \\ f _ {x y} & f _ {x x y} & f _ {x y y} \\ f _ {y y} & f _ {x y y} & f _ {y y y} \end{array} \right| = 0 $$ if and only if $g$ is flat. Proof This is an immediate consequence of Proposition 2.1. 
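As a quick, machine-checkable illustration of Corollary 2.2 (a verification aid added here, not part of the original text), the following SymPy snippet evaluates the $3 \times 3$ determinant for the potential $f(x,y) = \frac{x^2}{2y} + \frac{1}{4} y \log y$ , which reappears in Example 4.2 below; the determinant vanishes identically, while the Hessian stays positive definite on the upper half plane $y > 0$ .

```python
import sympy as sp

x = sp.symbols("x", real=True)
y = sp.symbols("y", positive=True)           # upper half plane, as in Example 4.2
f = x**2 / (2 * y) + sp.Rational(1, 4) * y * sp.log(y)

fxx, fxy, fyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)

# the 3x3 determinant of Corollary 2.2: it vanishes iff the Hessian metric is flat
M = sp.Matrix([
    [fxx, sp.diff(fxx, x), sp.diff(fxx, y)],
    [fxy, sp.diff(fxy, x), sp.diff(fxy, y)],
    [fyy, sp.diff(fyy, x), sp.diff(fyy, y)],
])
print(sp.simplify(M.det()))                  # 0

# the Hessian itself stays positive definite for y > 0
print(sp.simplify(fxx), sp.simplify(fxx * fyy - fxy**2))   # 1/y and 1/(4*y**2)
```

The `positive=True` assumption on $y$ simply keeps SymPy's simplifications valid on the domain used in Example 4.2.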
Although, as observed in [1], in dimension greater than 2, not all Riemannian metrics are Hessian, it is proved in [2] that all Riemannian metrics on surfaces are Hessian. However, Hessian potential functions for a Riemannian metric on a surface, while they exist, are in general not unique. For instance, as we shall see later, besides the half norm square, the flat metric tensor of the Euclidean plane may well be the Hessian quadratic form of many other potential functions. Here, we shall provide a convenient equivalent condition for a real-valued smooth function of two real variables to be a Hessian potential of a flat metric. Lemma 2.3 Let $\Omega := \{(u,v) \in \mathbb{R}^2 : u^2 + v^2 < 1\}$ be the unit disk, and $\mathbf{x} \colon \Omega \to \mathbb{R}^3$ a parametrization of a regular surface $S$ in $\mathbb{R}^3$ . Suppose that $(0,0,0) \notin S$ and $\langle \mathbf{x}, \mathbf{x}_u \times \mathbf{x}_v \rangle = 0$ . Then, there exists a homogeneous function $P \in C^\infty(\mathbb{R}^3 - \{0\})$ such that $P(\mathbf{x}) \equiv 0$ holds at the vicinity of $(0,0) \in \Omega$ . Proof This is a well-known fact in classical differential geometry in $\mathbb{R}^3$ . We nevertheless give a sketch of the proof. Consider the smooth mapping $\mathbf{F} \coloneqq \| \mathbf{x} \|^{-1} \mathbf{x}$ . Since $\langle \mathbf{x}, \mathbf{x}_u \times \mathbf{x}_v \rangle = 0$ , a straightforward computation shows that $\mathbf{F}_u \times \mathbf{F}_v \equiv \mathbf{0}$ . By the implicit function theorem, there exist a sufficiently small neighborhood $U$ of $(0,0) \in \Omega$ and a smooth function $$ \phi \colon \left\{\left(x, y, z\right) \in \mathbb {R} ^ {3}: x ^ {2} + y ^ {2} + z ^ {2} = 1 \right\}\rightarrow \mathbb {R} $$ such that $\mathbf{F}(U)$ is contained in the zero locus of $\phi$ . Now, define a homogeneous function $P$ of degree zero via $$ P (x, y, z) := \phi (x / \rho , y / \rho , z / \rho) $$ where $\rho = \sqrt{x^2 + y^2 + z^2}$ . Then, by the very construction, we have that $P(\mathbf{x}) \equiv 0$ holds at the vicinity of $(0,0) \in \Omega$ . Proposition 2.4 Let $f$ be a real-valued smooth function defined on the unit disk $\Omega$ in $\mathbb{R}^2$ . Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a Riemannian metric. Assume further that $$ \left\{f _ {x x}, f _ {x y} \right\} ^ {2} + \left\{f _ {x y}, f _ {y y} \right\} ^ {2} + \left\{f _ {y y}, f _ {x x} \right\} ^ {2} > 0. $$ Then, the metric $g$ is flat if and only if there exists a homogeneous function $P \in C^{\infty}(\mathbb{R}^{3} - \{0\})$ such that $P(f_{xx}, f_{xy}, f_{yy}) \equiv 0$ holds at the vicinity of $(0,0) \in \Omega$ . Proof Combine Corollary 2.2 with Lemma 2.3. # 3 The Main Construction Based on the results from Section 2, to find two dimensional flat Hessian potentials, we shall search for all real-valued smooth functions in two real variables that solve the following PDE problem: $$ \left\{ \begin{array}{l} f _ {x x} \left\{f _ {x y}, f _ {y y} \right\} + f _ {x y} \left\{f _ {y y}, f _ {x x} \right\} + f _ {y y} \left\{f _ {x x}, f _ {x y} \right\} = 0, \\ f _ {x x} + f _ {y y} > 0, \\ f _ {x x} f _ {y y} - f _ {x y} f _ {x y} > 0, \end{array} \right. \tag {1} $$ where we also add two additional constraints $$ \left\{ \begin{array}{l} \left\{f _ {x x} + f _ {y y}, f _ {x y} \right\} ^ {2} > 0, \\ \left\{f _ {x x}, f _ {x y} \right\} ^ {2} + \left\{f _ {y y}, f _ {x y} \right\} ^ {2} > 0, \end{array} \right. 
\tag {2} $$ which are mild technical conditions imposed to prevent the solutions from degeneracy. Let $f$ be a solution to equation (1) subject to condition (2). Then, in particular, the Hessian matrix of $f$ is everywhere positive definite. For simplicity, write $(E,F,G)\coloneqq (f_{xx},f_{xy},f_{yy})$ . We define $u\coloneqq (E + G)^{-1}F$ and define $v\coloneqq \log (E + G)$ . Since $\{E + G,F\} ^2 >0$ , straightforward computation yields that the Jacobian $\partial (u,v) / \partial (x,y)$ is everywhere non-singular. Moreover, since $$ \left\{E, F \right\} ^ {2} + \left\{G, F \right\} ^ {2} > 0, $$ without loss of generality we may assume that the Jacobian $\partial (E,F) / \partial (x,y)$ is everywhere non-singular. By Proposition 2.4, there exists a real-valued smooth function $\varphi$ of one real variable such that $$ \left\{ \begin{array}{l} E = \varphi (u) e ^ {v}, \\ F = u e ^ {v}, \\ G = (1 - \varphi (u)) e ^ {v}. \end{array} \right. $$ The integrability conditions $E_{y} = F_{x}$ and $F_{y} = G_{x}$ then rearrange into the following hydrodynamic system $$ \left[ \begin{array}{l} u _ {y} \\ v _ {y} \end{array} \right] = \frac {1}{D (u)} \left[ \begin{array}{c c} u + \varphi (u) \varphi^ {\prime} (u) & u ^ {2} + \varphi (u) ^ {2} - \varphi (u) \\ - \varphi^ {\prime} (u) ^ {2} - 1 & - u - \varphi (u) \varphi^ {\prime} (u) + \varphi^ {\prime} (u) \end{array} \right] \left[ \begin{array}{l} u _ {x} \\ v _ {x} \end{array} \right] \tag {3} $$ where here $D(u) = \varphi'(u) u - \varphi(u)$ is nowhere vanishing, as it is the determinant of the Jacobian $\partial(E, F) / \partial(u, v)$ . Now, straightforward computation yields that the characteristic velocities $$ \lambda_ {i} := \frac {\varphi^ {\prime} (u) + (- 1) ^ {i} \sqrt {\varphi^ {\prime} (u) ^ {2} - 4 D (u) (1 + D (u))}}{2 D (u)} $$ are real and distinct for $i = 1,2$ . Take any $i \in \{1, 2\}$ . Recall that, up to an additive constant, the phase function $p_i$ with characteristic velocity $\lambda_i$ is the smooth function of one real variable such that $$ \frac {d p _ {i}}{d u} = \frac {1 + \lambda_ {i} \varphi^ {\prime}}{\lambda_ {i} + \varphi^ {\prime}}, $$ and the $i$ -th Riemann invariant of equation (3) is $r_i(u,v)\coloneqq v + p_i(u)$ . The method of hodograph transformation then brings equation (3) into a system of linear equations $$ \frac {\partial x}{\partial r _ {i}} + \lambda_ {i} \frac {\partial y}{\partial r _ {i}} = 0 \tag {4} $$ for $i = 1,2$ . Define the conformal coordinates $\theta \coloneqq (r_1 + r_2) / 2$ and $t \coloneqq (r_1 - r_2) / 2$ . Since $2t = p_{1}(u) - p_{2}(u)$ and straightforward computation yields that $$ \frac {d p _ {1}}{d u} > \frac {d p _ {2}}{d u}, $$ by the inverse function theorem, $u$ is a univariate function of time variable $t$ . Therefore, the differential 1-form $$ \Gamma (t) d t := \frac {2 d \lambda_ {1}}{\lambda_ {1} - \lambda_ {2}} $$ is well-defined. Also, up to a multiplicative constant, let $\mu$ be the positive smooth function of one real variable satisfying $2\dot{\mu} + \Gamma \mu = 0$ , and denote $\Psi \coloneqq \mu^2 y$ . Then the equation (4) reduces to the Klein-Gordon relativistic wave equation $$ \ddot {\Psi} - \partial_ {\theta} ^ {2} \Psi + V (t) \Psi = 0 \tag {5} $$ where the potential is $V = \dot{\Gamma} - \Gamma^2$ . 
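Before turning to the solution of equation (5), note that the algebra behind the characteristic velocities can be checked mechanically. The following SymPy sketch (a verification aid, not part of the original derivation) confirms that the coefficient matrix of system (3) has trace $\varphi'/D$ and determinant $(1 + D)/D$ , so its eigenvalues are the roots of $D\lambda^2 - \varphi'\lambda + (1 + D) = 0$ , whose discriminant $\varphi'^2 - 4D(1+D)$ is exactly the expression under the square root above.

```python
import sympy as sp

u = sp.symbols("u")
phi = sp.Function("phi")(u)
dphi = sp.diff(phi, u)                  # phi'(u), kept symbolic
D = dphi * u - phi                      # D(u) = phi'(u)*u - phi(u)

# coefficient matrix of the hydrodynamic system (3)
A = sp.Matrix([
    [u + phi * dphi,  u**2 + phi**2 - phi],
    [-dphi**2 - 1,    -u - phi * dphi + dphi],
]) / D

print(sp.simplify(A.trace() - dphi / D))       # 0
print(sp.simplify(A.det() - (1 + D) / D))      # 0

# the displayed lambda_i are precisely the roots of D*t^2 - phi'(u)*t + (1 + D) = 0
disc = sp.sqrt(dphi**2 - 4 * D * (1 + D))
for lam in ((dphi - disc) / (2 * D), (dphi + disc) / (2 * D)):
    print(sp.simplify(D * lam**2 - dphi * lam + (1 + D)))   # 0
```

In particular, $\lambda_1 + \lambda_2 = \varphi'/D$ and $\lambda_1 \lambda_2 = (1 + D)/D$ by Vieta's formulas.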
Equation (5) can be solved by separation of variables, which transforms it into a Sturm-Liouville eigenvalue problem; namely, the general solution of equation (5) is the superposition of particular solutions of the form $$ \Psi (t, \theta) = A \cos (k \theta) \psi_ {k} (t) + B \sin (k \theta) \psi_ {k} (t) $$ for some real numbers $A, B, k \in \mathbb{R}$ , where $\psi_k$ is a solution of the time-independent Schrödinger equation $$ - \ddot {\psi} + V \psi = k ^ {2} \psi $$ of total energy $k^2 \geq 0$ . Conversely, once we obtain a solution $\Psi$ of equation (5), we immediately set $y := \mu^{-2}\Psi$ , and we can recover $x$ by integrating the 1-form $$ d x = - \sum_ {i = 1} ^ {2} \lambda_ {i} \frac {\partial y}{\partial r _ {i}} d r _ {i} $$ which fixes the Hesse coordinate. Then, we may invert the Hesse coordinate to obtain $u = u(x,y)$ and $v = v(x,y)$ . The Hessian metric is then $$ H := \left[ \begin{array}{c c} f _ {x x} & f _ {x y} \\ f _ {x y} & f _ {y y} \end{array} \right] = \left[ \begin{array}{c c} \varphi (u) e ^ {v} & u e ^ {v} \\ u e ^ {v} & (1 - \varphi (u)) e ^ {v} \end{array} \right] $$ and its potential $f$ is locally determined by the double integration of $H$ over a star-shaped neighborhood of $(0,0) \in \Omega$ . # 4 More Examples By exploring examples, in this section we shall show that the solutions of equation (1) form a large family. Example 4.1 Let $f$ be a real-valued smooth homogeneous function of degree $d$ defined on a domain in $\mathbb{R}^2$ such that its Hessian matrix $H \coloneqq \mathrm{Hess}(f)$ is everywhere positive definite. Then, by Euler's theorem on homogeneous functions, we have $$ (2 - d) H + x \frac {\partial H}{\partial x} + y \frac {\partial H}{\partial y} = 0, $$ that is, the matrices $H, H_x, H_y$ are everywhere $\mathbb{R}$ -linearly dependent. Therefore, by Corollary 2.2, we have that $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a flat Riemannian metric. We also give the following very concrete example of a Hessian potential for the flat Euclidean plane. Example 4.2 Let $\mathbb{H}^2 \coloneqq \{(x,y) \in \mathbb{R}^2 : y > 0\}$ be the upper half plane and define $f \in C^\infty(\mathbb{H}^2)$ via $$ f (x, y) := \frac {x ^ {2}}{2 y} + \frac {1}{4} \log (y) y. $$ Then the coordinate change $(x(r,\theta),y(r,\theta)) = (r^2\theta ,r^2)$ brings the Hessian quadratic form $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} = \frac {1}{y} d x ^ {2} - \frac {2 x}{y ^ {2}} d x d y + \frac {4 x ^ {2} + y ^ {2}}{4 y ^ {3}} d y ^ {2} $$ of $f$ to the flat metric tensor $ds^{2} = dr^{2} + r^{2}d\theta^{2}$ of the Euclidean plane in its polar coordinates. The flatness of $g$ can also be proved by noticing that $$ f _ {x x} f _ {x x} - 4 f _ {x x} f _ {y y} + 4 f _ {x y} f _ {x y} \equiv 0 $$ and then applying Proposition 2.4. Before we close this section, we note the uniqueness of the radially symmetric flat Hessian potential. Remark 4.3 Let $\Omega \coloneqq \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 < 1\}$ be the unit disk, and $f \in C^\infty(\Omega)$ a smooth function whose Hessian matrix is everywhere positive definite, so that $$ g := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2} $$ is a Riemannian metric. If $f$ is radially symmetric and $g$ is flat, then a straightforward computation yields that there exists a real number $C > 0$ such that $f(x,y) = C(x^{2} + y^{2})$ .
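Example 4.2 can also be double-checked by direct substitution. The SymPy sketch below (again a verification aid, not part of the paper) pulls the Hessian quadratic form of $f(x,y) = \frac{x^2}{2y} + \frac{1}{4} y \log y$ back along the coordinate change $(x, y) = (r^2\theta, r^2)$ , treating the differentials as formal symbols `dr` and `dtheta`, and recovers the flat polar metric $dr^2 + r^2 d\theta^2$ .

```python
import sympy as sp

# coordinates and the coordinate change of Example 4.2
r = sp.symbols("r", positive=True)
theta, x = sp.symbols("theta x", real=True)
y = sp.symbols("y", positive=True)
x_expr, y_expr = r**2 * theta, r**2

f = x**2 / (2 * y) + sp.Rational(1, 4) * y * sp.log(y)
fxx, fxy, fyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)

# differentials of the coordinate change, with dr and dtheta as formal symbols
dr, dtheta = sp.symbols("dr dtheta")
dx = sp.diff(x_expr, r) * dr + sp.diff(x_expr, theta) * dtheta
dy = sp.diff(y_expr, r) * dr + sp.diff(y_expr, theta) * dtheta

# pull back g = fxx dx^2 + 2 fxy dx dy + fyy dy^2
subs = {x: x_expr, y: y_expr}
g = fxx.subs(subs) * dx**2 + 2 * fxy.subs(subs) * dx * dy + fyy.subs(subs) * dy**2
print(sp.simplify(sp.expand(g)))     # dr**2 + r**2*dtheta**2 (up to term ordering)
```

Treating $dr$ and $d\theta$ as commuting scalars is sufficient here because only the symmetric quadratic form is being transformed.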
arxiv_math
2025-12-16T00:00:00Z
https://arxiv.org/pdf/2512.15079
{"title": "On Two Dimensional Flat Hessian Potentials", "raw_content": "# On Two Dimensional Flat Hessian Potentials\n\nHanwen Liu\n\nMathematics Institute, University of Warwick, Coventry, CV4 7AL, UK.\n\nCorresponding author(s). E-mail(s): hanwen.liu@warwick.ac.uk;\n\n# Abstract\n\nA Riemannian metric is termed a Hessian metric if in some coordinate system it can be locally represented as the Hessian quadratic form of some locally defined smooth potential function. Under very mild extra technical conditions, we first theoretically describe the potentials of flat Hessian metrics on surfaces, and then construct these potentials explicitly using methods from integrable systems.\n\nKeywords: Hessian metric, hydrodynamic system, flat surface, Schrödinger equation.\n\n# 1 Introduction\n\nLet $M$ be a differentiable manifold of dimension $n$ . Recall that, for a Riemannian metric $h$ on $M$ , a coordinate map $u: U \\to \\mathbb{R}^n$ of $M$ at the vicinity of a point $p \\in M$ is termed a Hesse coordinate of $(M, h)$ at $p$ , if and only if\n\n$$\nh | _ {U} = \\sum_ {i = 1} ^ {n} \\sum_ {j = 1} ^ {n} \\frac {\\partial^ {2} \\Phi}{\\partial u ^ {i} \\partial u ^ {j}} d u ^ {i} \\otimes d u ^ {j}\n$$\n\nfor some $\\Phi \\in C^{\\infty}(U)$ , and in this case, the smooth function $\\Phi$ is termed a potential of $h$ in Hesse coordinate $u\\colon U\\to \\mathbb{R}^n$ . A Riemannian metric $h$ on $M$ is termed a Hessian metric, if for every point $p\\in M$ there exists a Hesse coordinate of $(M,h)$ at $p$ .\n\nSpecifically, the following construction provides Hessian metrics: Let $f$ be a real-valued function of two real variables such that its Hessian matrix is everywhere positive definite. Then\n\n$$\ng := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2}\n$$\n\nis a Hessian metric, and $f$ is its potential function.\n\nIn this article, using methods from mathematical physics, we study thoroughly potential $f = f(x,y)$ of which Hessian quadratic form as a Riemannian metric is flat.\n\n# 2 Preliminary Results\n\nWe shall first establish the curvature formula for Hessian surfaces:\n\nProposition 2.1 Let $f$ be a real-valued smooth function defined on a domain in $\\mathbb{R}^2$ . Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that\n\n$$\ng := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2}\n$$\n\nis a Riemannian metric. Then Gaußian curvature $K$ of the metric $g$ satisfies\n\n$$\nK = - \\frac {\\left| \\begin{array}{c c c} f _ {x x} & f _ {x x x} & f _ {x x y} \\\\ f _ {x y} & f _ {x x y} & f _ {x y y} \\\\ f _ {y y} & f _ {x y y} & f _ {y y y} \\end{array} \\right|}{4 \\left| \\begin{array}{c c} f _ {x x} & f _ {x y} \\\\ f _ {x y} & f _ {y y} \\end{array} \\right| ^ {2}} = - \\frac {f _ {x x} \\{f _ {x y} , f _ {y y} \\} + f _ {x y} \\{f _ {y y} , f _ {x x} \\} + f _ {y y} \\{f _ {x x} , f _ {x y} \\}}{4 (f _ {x x} f _ {y y} - f _ {x y} f _ {x y}) ^ {2}}\n$$\n\nwhere $\\{\\cdot, \\cdot\\}$ is the standard Poisson bracket normalized by $\\{x, y\\} = 1$ .\n\nProof Apply Brioschi's formula to Hessian metric.\n\n![](images/df3e74d50ed31bbfca774746179fde88ebcffadc4f1ff5435fa925312f9adb30.jpg)\n\nFor flat Hessian metrics, Brioschi's formula simplifies further.\n\nCorollary 2.2 Let $f$ be a real-valued smooth function defined on a domain in $\\mathbb{R}^2$ . Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that\n\n$$\ng := f _ {x x} d x ^ {2} + 2 f _ {x y} d x d y + f _ {y y} d y ^ {2}\n$$\n\nis a Riemannian metric. 
Then

$$
f_{xx}\{f_{xy},f_{yy}\} + f_{xy}\{f_{yy},f_{xx}\} + f_{yy}\{f_{xx},f_{xy}\} = \left| \begin{array}{lll} f_{xx} & f_{xxx} & f_{xxy} \\ f_{xy} & f_{xxy} & f_{xyy} \\ f_{yy} & f_{xyy} & f_{yyy} \end{array} \right| = 0
$$

if and only if $g$ is flat.

Proof This is an immediate consequence of Proposition 2.1.

Although, as observed in [1], in dimension greater than 2 not all Riemannian metrics are Hessian, it is proved in [2] that all Riemannian metrics on surfaces are Hessian.

However, Hessian potential functions for a Riemannian metric on a surface, while they exist, are in general not unique. For instance, as we shall see later, besides the half norm square, the flat metric tensor of the Euclidean plane may well be the Hessian quadratic form of many other potential functions.

Here, we shall provide a convenient equivalent condition for a real-valued smooth function of two real variables to be a Hessian potential of a flat metric.

Lemma 2.3 Let $\Omega := \{(u,v) \in \mathbb{R}^2 : u^2 + v^2 < 1\}$ be the unit disk, and $\mathbf{x} \colon \Omega \to \mathbb{R}^3$ a parametrization of a regular surface $S$ in $\mathbb{R}^3$. Suppose that $(0,0,0) \notin S$ and $\langle \mathbf{x}, \mathbf{x}_u \times \mathbf{x}_v \rangle = 0$. Then, there exists a homogeneous function $P \in C^\infty(\mathbb{R}^3 - \{0\})$ such that $P(\mathbf{x}) \equiv 0$ holds in the vicinity of $(0,0) \in \Omega$.

Proof This is a well-known fact in classical differential geometry in $\mathbb{R}^3$. We nevertheless give a sketch of proof.

Consider the smooth mapping $\mathbf{F} := \|\mathbf{x}\|^{-1}\mathbf{x}$. Since $\langle \mathbf{x}, \mathbf{x}_u \times \mathbf{x}_v \rangle = 0$, a straightforward computation shows that $\mathbf{F}_u \times \mathbf{F}_v \equiv \mathbf{0}$. By the implicit function theorem, there exist a sufficiently small neighborhood $U$ of $(0,0) \in \Omega$ and a smooth function

$$
\phi \colon \left\{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\right\} \to \mathbb{R}
$$

such that $\mathbf{F}(U)$ is contained in the zero locus of $\phi$. Now, define a homogeneous function $P$ of degree zero via

$$
P(x,y,z) := \phi(x/\rho,\, y/\rho,\, z/\rho),
$$

where $\rho = \sqrt{x^2 + y^2 + z^2}$. Then, by the very construction, $P(\mathbf{x}) \equiv 0$ holds in the vicinity of $(0,0) \in \Omega$.

Proposition 2.4 Let $f$ be a real-valued smooth function defined on the unit disk $\Omega$ in $\mathbb{R}^2$. Suppose that the Hessian matrix of $f$ is everywhere positive definite, so that

$$
g := f_{xx}\,dx^2 + 2 f_{xy}\,dx\,dy + f_{yy}\,dy^2
$$

is a Riemannian metric. Assume further that

$$
\{f_{xx}, f_{xy}\}^2 + \{f_{xy}, f_{yy}\}^2 + \{f_{yy}, f_{xx}\}^2 > 0.
$$

Then, the metric $g$ is flat if and only if there exists a homogeneous function $P \in C^\infty(\mathbb{R}^3 - \{0\})$ such that $P(f_{xx}, f_{xy}, f_{yy}) \equiv 0$ holds in the vicinity of $(0,0) \in \Omega$.

Proof Combine Corollary 2.2 with Lemma 2.3.

# 3 The Main Construction

Based on the results from Section 2, to find two-dimensional flat Hessian potentials we shall search for all real-valued smooth functions in two real variables that solve the following PDE problem:

$$
\left\{ \begin{array}{l} f_{xx}\{f_{xy}, f_{yy}\} + f_{xy}\{f_{yy}, f_{xx}\} + f_{yy}\{f_{xx}, f_{xy}\} = 0, \\ f_{xx} + f_{yy} > 0, \\ f_{xx} f_{yy} - f_{xy} f_{xy} > 0, \end{array} \right. \tag{1}
$$

where we also add two additional constraints

$$
\left\{ \begin{array}{l} \{f_{xx} + f_{yy}, f_{xy}\}^2 > 0, \\ \{f_{xx}, f_{xy}\}^2 + \{f_{yy}, f_{xy}\}^2 > 0, \end{array} \right. \tag{2}
$$

which are mild technical conditions imposed to prevent degenerate solutions.

Let $f$ be a solution to equation (1) subject to condition (2). Then, in particular, the Hessian matrix of $f$ is everywhere positive definite. For simplicity, write $(E,F,G) := (f_{xx}, f_{xy}, f_{yy})$. We define $u := (E+G)^{-1} F$ and $v := \log(E+G)$.

Since $\{E+G, F\}^2 > 0$, a straightforward computation yields that the Jacobian $\partial(u,v)/\partial(x,y)$ is everywhere non-singular. Moreover, since

$$
\{E,F\}^2 + \{G,F\}^2 > 0,
$$

without loss of generality we may assume that the Jacobian $\partial(E,F)/\partial(x,y)$ is everywhere non-singular. By Proposition 2.4, there exists a real-valued smooth function $\varphi$ of one real variable such that

$$
\left\{ \begin{array}{l} E = \varphi(u)\, e^v, \\ F = u\, e^v, \\ G = (1 - \varphi(u))\, e^v. \end{array} \right.
$$

The integrability conditions $E_y = F_x$ and $F_y = G_x$ then rearrange into the following hydrodynamic system

$$
\left[ \begin{array}{l} u_y \\ v_y \end{array} \right] = \frac{1}{D(u)} \left[ \begin{array}{cc} u + \varphi(u)\varphi'(u) & u^2 + \varphi(u)^2 - \varphi(u) \\ -\varphi'(u)^2 - 1 & -u - \varphi(u)\varphi'(u) + \varphi'(u) \end{array} \right] \left[ \begin{array}{l} u_x \\ v_x \end{array} \right] \tag{3}
$$

where $D(u) = \varphi'(u)u - \varphi(u)$ is nowhere vanishing, as it is the determinant of the Jacobian $\partial(E,F)/\partial(u,v)$. Now, a straightforward computation yields that the characteristic velocities

$$
\lambda_i := \frac{\varphi'(u) + (-1)^i \sqrt{\varphi'(u)^2 - 4 D(u)(1 + D(u))}}{2 D(u)}
$$

are real and distinct for $i = 1,2$.

Take any $i \in \{1,2\}$. Recall that, up to an additive constant, the phase function $p_i$ with characteristic velocity $\lambda_i$ is the smooth function of one real variable such that

$$
\frac{d p_i}{d u} = \frac{1 + \lambda_i \varphi'}{\lambda_i + \varphi'},
$$

and the $i$-th Riemann invariant of equation (3) is $r_i(u,v) := v + p_i(u)$. The method of hodograph transformation then brings equation (3) into a system of linear equations

$$
\frac{\partial x}{\partial r_i} + \lambda_i \frac{\partial y}{\partial r_i} = 0 \tag{4}
$$

for $i = 1,2$. Define the conformal coordinates $\theta := (r_1 + r_2)/2$ and $t := (r_1 - r_2)/2$. Since $2t = p_1(u) - p_2(u)$ and a straightforward computation yields that

$$
\frac{d p_1}{d u} > \frac{d p_2}{d u},
$$

by the inverse function theorem, $u$ is a univariate function of the time variable $t$. Therefore, the differential 1-form

$$
\Gamma(t)\, dt := \frac{2\, d\lambda_1}{\lambda_1 - \lambda_2}
$$

is well-defined. Also, up to a multiplicative constant, let $\mu$ be the positive smooth function of one real variable satisfying $2\dot{\mu} + \Gamma \mu = 0$, and denote $\Psi := \mu^2 y$. Then equation (4) reduces to the Klein-Gordon relativistic wave equation

$$
\ddot{\Psi} - \partial_\theta^2 \Psi + V(t)\, \Psi = 0 \tag{5}
$$

where the potential is $V = \dot{\Gamma} - \Gamma^2$. Equation (5) can be solved by separating variables to transform it into a Sturm-Liouville eigenvalue problem; namely, the general solution of equation (5) is the superposition of particular solutions of the form

$$
\Psi(t, \theta) = A \cos(k\theta)\, \psi_k(t) + B \sin(k\theta)\, \psi_k(t)
$$

for some real numbers $A, B, k \in \mathbb{R}$, where $\psi_k$ is a solution of the time-independent Schrödinger equation

$$
-\ddot{\psi} + V\psi = k^2 \psi
$$

of total energy $k^2 \geq 0$.

Conversely, once we obtain a solution $\Psi$ of equation (5), we immediately have $y := \mu^{-2}\Psi$, and we can recover $x$ by integrating the 1-form

$$
dx = -\sum_{i=1}^{2} \lambda_i \frac{\partial y}{\partial r_i}\, dr_i,
$$

which fixes the Hesse coordinate. Then, we may invert the Hesse coordinate to obtain $u = u(x,y)$ and $v = v(x,y)$. The Hessian metric is then

$$
H := \left[ \begin{array}{cc} f_{xx} & f_{xy} \\ f_{xy} & f_{yy} \end{array} \right] = \left[ \begin{array}{cc} \varphi(u)\, e^v & u\, e^v \\ u\, e^v & (1 - \varphi(u))\, e^v \end{array} \right]
$$

and its potential $f$ is locally determined by the double integration of $H$ over a star-shaped neighborhood of $(0,0) \in \Omega$.

# 4 More Examples

By exploring examples, in this section we shall show that the solutions of equation (1) form a large family.

Example 4.1 Let $f$ be a real-valued smooth homogeneous function of degree $d$ defined on a domain in $\mathbb{R}^2$ such that its Hessian matrix $H := \mathrm{Hess}(f)$ is everywhere positive definite. Then, by Euler's theorem on homogeneous functions, we have

$$
(2 - d) H + x \frac{\partial H}{\partial x} + y \frac{\partial H}{\partial y} = 0,
$$

that is, the matrices $H, H_x, H_y$ are everywhere $\mathbb{R}$-linearly dependent. Therefore, by Corollary 2.2,

$$
g := f_{xx}\, dx^2 + 2 f_{xy}\, dx\, dy + f_{yy}\, dy^2
$$

is a flat Riemannian metric.

We also suggest the following very concrete example of a Hessian potential for the flat Euclidean plane.

Example 4.2 Let $\mathbb{H}^2 := \{(x,y) \in \mathbb{R}^2 : y > 0\}$ be the upper half plane and define $f \in C^\infty(\mathbb{H}^2)$ via

$$
f(x,y) := \frac{x^2}{2y} + \frac{1}{4} \log(y)\, y.
$$

Then the coordinate change $(x(r,\theta), y(r,\theta)) = (r^2\theta, r^2)$ brings the Hessian quadratic form

$$
g := f_{xx}\, dx^2 + 2 f_{xy}\, dx\, dy + f_{yy}\, dy^2 = \frac{1}{y}\, dx^2 - \frac{2x}{y^2}\, dx\, dy + \frac{4x^2 + y^2}{4y^3}\, dy^2
$$

of $f$ to the flat metric tensor $ds^2 = dr^2 + r^2 d\theta^2$ of the Euclidean plane in its polar coordinates. The flatness of $g$ can also be proved by noticing that

$$
f_{xx} f_{xx} - 4 f_{xx} f_{yy} + 4 f_{xy} f_{xy} \equiv 0
$$

and then applying Proposition 2.4.

Before we close this section, we note the uniqueness of the radially symmetric flat Hessian potential.

Remark 4.3 Let $\Omega := \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 < 1\}$ be the unit disk, and $f \in C^\infty(\Omega)$ a smooth function whose Hessian matrix is everywhere positive definite, so that

$$
g := f_{xx}\, dx^2 + 2 f_{xy}\, dx\, dy + f_{yy}\, dy^2
$$

is a Riemannian metric. If $f$ is radially symmetric and $g$ is flat, then a straightforward computation yields that there exists a real number $C > 0$ such that $f(x,y) = C(x^2 + y^2)$.

# Acknowledgement

The author thanks the reviewers for various suggestions. This research was completed while the author was studying at the mathematics institute of the University of Warwick. The author would therefore like to thank the University of Warwick for its hospitality.

# References

[1] Amari, S., Armstrong, J.: Curvature of Hessian manifolds. Differential Geometry and its Applications 33, 1-12 (2014)
[2] Han, Q., Wang, G.: Hessian surfaces and local Lagrangian embeddings. Annales de l'Institut Henri Poincaré C, Analyse non linéaire 35, 675-685 (2018)
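As an optional sanity check of Example 4.2 above (not part of the original text), the following SymPy sketch verifies symbolically that the stated potential has an everywhere positive definite Hessian on the upper half plane and satisfies the homogeneous relation $f_{xx}^2 - 4 f_{xx} f_{yy} + 4 f_{xy}^2 \equiv 0$ used there; the variable names and the use of SymPy are our own choices.

```python
# Symbolic check of Example 4.2 (illustrative only, not from the paper):
# f(x, y) = x^2/(2y) + (1/4) y log y on the upper half plane y > 0.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x**2 / (2 * y) + sp.Rational(1, 4) * y * sp.log(y)

fxx = sp.diff(f, x, 2)
fxy = sp.diff(f, x, y)
fyy = sp.diff(f, y, 2)

# The homogeneous relation P(f_xx, f_xy, f_yy) = 0 from Example 4.2.
print(sp.simplify(fxx**2 - 4 * fxx * fyy + 4 * fxy**2))  # prints 0

# Positive definiteness of the Hessian on y > 0:
print(sp.simplify(fxx))                  # 1/y, positive for y > 0
print(sp.simplify(fxx * fyy - fxy**2))   # 1/(4*y**2), positive for y > 0
```

The same check applies to any other candidate potential by changing only the definition of `f`.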
# On $S$-$J$-Noetherian Rings

Tushar Singh, Ajim Uddin Ansari, and Shiv Datt Kumar

# Abstract

Let $R$ be a commutative ring with identity, $S \subseteq R$ be a multiplicative set and $J$ be an ideal of $R$. In this paper, we introduce the concept of $S$-$J$-Noetherian rings, which generalizes both $J$-Noetherian rings and $S$-Noetherian rings. We study several properties and characterizations of this new class of rings. For instance, we prove a Cohen-type theorem for $S$-$J$-Noetherian rings. Among other results, we establish the existence of $S$-primary decomposition in $S$-$J$-Noetherian rings as a generalization of the classical Lasker-Noether theorem.

Keywords: $J$-ideals, $S$-$J$-Noetherian rings, $S$-Noetherian rings.

MSC(2020): 13A15, 13B02, 13C05, 13E05.

# 1 Introduction

Throughout the paper, let $R$ be a commutative ring with identity, $S \subseteq R$ be a multiplicative set, and $J$ be a fixed ideal of $R$. For an ideal $I$ of $R$, we denote by $\overline{S} = \{s + I \mid s \in S\}$ the induced multiplicatively closed subset of $R/I$. The Noetherian property of rings plays a crucial role in areas such as commutative algebra and algebraic geometry. Given the significance of Noetherian rings, numerous authors have attempted to generalize the concept of Noetherian rings (see [2], [3], [7], [8], [9], [13], and [14]). As one of its crucial generalizations, Anderson and Dumitrescu [3] introduced the concept of $S$-Noetherian rings. An ideal $I$ of $R$ is $S$-finite if there exist an element $s \in S$ and a finitely generated ideal $F$ of $R$ such that $sI \subseteq F \subseteq I$. A ring $R$ is called $S$-Noetherian if every ideal of $R$ is $S$-finite. Recently, Alhazmy et al. [2] introduced the concept of $J$-Noetherian rings as a generalization of Noetherian rings. An ideal $I$ of $R$ is called a $J$-ideal if $I \nsubseteq J$, and $R$ is said to be $J$-Noetherian if every $J$-ideal is finitely generated. A particularly interesting case occurs when $J = \operatorname{Nil}(R)$, the ideal consisting of all nilpotent elements of $R$. In this situation, a $J$-Noetherian ring is referred to as a Nonnil-Noetherian ring, which was first introduced and studied by Badawi in [5]. Furthermore, when $J = J(R)$, the Jacobson radical of $R$, a $J$-Noetherian ring is termed a non-$J$-Noetherian ring. This class of rings was first introduced by Dabbabi et al. [7] in 2024, where they characterized various properties of non-$J$-Noetherian rings.

The primary objective of this paper is to introduce and study the notion of $S$-$J$-Noetherian rings. We present an example of an $S$-$J$-Noetherian ring which is not an $S$-Noetherian ring (see Example 2.4). We generalize various properties and characterizations of both $J$-Noetherian and $S$-Noetherian rings to this new class of rings. For instance, we establish a Cohen-type theorem for $S$-$J$-Noetherian rings and prove that the polynomial ring $R[X]$ is $S$-$J$-Noetherian if and only if it is $S$-Noetherian. Also, we show that the quotient of an $S$-$J$-Noetherian ring is an $\overline{S}$-Noetherian ring (see Proposition 2.11). Moreover, we provide necessary and sufficient conditions for an $S$-$J$-Noetherian ring to belong to the class of $S$-Noetherian rings (see Theorems 2.7 and 2.18). In [15, Theorem 2.10], among other results, Singh et al. generalized the classical Lasker-Noether theorem to $S$-Noetherian modules. We end the paper by extending the classical Lasker-Noether theorem to the class of $S$-$J$-Noetherian rings (see Theorem 2.22).
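Several arguments in Section 2 (Examples 2.2 and 2.4 in particular) invoke [3, Proposition 2(a)] for the fact that an ideal meeting $S$ is $S$-finite. As a reading aid, and as our paraphrase rather than a quotation of [3], the special case actually used follows in one line:

$$
s \in I \cap S \;\Longrightarrow\; sI \subseteq sR \subseteq I,
$$

and since $sR$ is a finitely generated ideal of $R$, the ideal $I$ is $S$-finite.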
# 2 Main Results

We begin by introducing the concept of $S$-$J$-Noetherian rings.

Definition 2.1. Let $R$ be a ring, $S \subseteq R$ be a multiplicative set, and $J$ an ideal of $R$. An ideal $I$ of $R$ is said to be a $J$-ideal if $I \not\subseteq J$. We say that $R$ is an $S$-$J$-Noetherian ring if each $J$-ideal of $R$ is $S$-finite.

It is evident that every $J$-Noetherian ring is an $S$-$J$-Noetherian ring (and for $S = \{1\}$ the two notions coincide). However, the following example illustrates that the converse is not true in general.

Example 2.2. Consider the ring $R = \mathcal{F}[X_1, X_2, \ldots]$, where $\mathcal{F}$ is a field, and let $J = (0)$. Define the ideal $I = (X_1, \ldots, X_n, \ldots)$. Clearly, $I$ is a $J$-ideal but is not finitely generated. Hence $R$ is not a $J$-Noetherian ring. Now, let $S = R \setminus \{0\}$, a multiplicatively closed subset of $R$. Let $K$ be a nonzero proper ideal of $R$. Evidently, $K$ is a $J$-ideal and $K \cap S \neq \emptyset$. Therefore, by [3, Proposition 2(a)], $K$ is $S$-finite. Hence $R$ is $S$-$J$-Noetherian.

Cohen's theorem is the classical result which states that a ring is Noetherian if all its prime ideals are finitely generated. We now extend this result to $S$-$J$-Noetherian rings.

Theorem 2.3. A ring $R$ is $S$-$J$-Noetherian if and only if its prime $J$-ideals (disjoint from $S$) are $S$-finite.

Proof. If $R$ is $S$-$J$-Noetherian, then it is obvious that all prime $J$-ideals of $R$ are $S$-finite. Now, suppose that all prime $J$-ideals (disjoint from $S$) of $R$ are $S$-finite, and assume that $R$ is not $S$-$J$-Noetherian. Then the set $\mathcal{F}$ of all $J$-ideals that are not $S$-finite is a non-empty set, ordered by inclusion. By Zorn's lemma, choose $P$ maximal in $\mathcal{F}$. In particular, $P$ is not $S$-finite, and so $P \cap S = \emptyset$ (otherwise $P$ would be $S$-finite by [3, Proposition 2(a)]). We show that $P$ is a prime ideal of $R$. Once this is established, $P$ is a prime $J$-ideal disjoint from $S$, hence $S$-finite by hypothesis, which contradicts the fact that $P \in \mathcal{F}$. Suppose there exist $a, b \in R \setminus P$ such that $ab \in P$. If $P + aR \subseteq J$, then $P \subseteq J$, a contradiction, as $P$ is a $J$-ideal. Therefore $P + aR$ is a $J$-ideal. Since $P \subsetneq P + aR$ and $P$ is a maximal element of $\mathcal{F}$, it follows that $P + aR$ is $S$-finite. Then there exist $s \in S$, $\alpha_1, \ldots, \alpha_n \in P$ and $x_1, \ldots, x_n \in R$ such that $s(P + aR) \subseteq (\alpha_1 + ax_1, \ldots, \alpha_n + ax_n) \subseteq P + aR$. Consider the ideal $Q = (P : a) = \{x \in R \mid ax \in P\}$. Evidently, $Q$ is a $J$-ideal and $P \subsetneq Q$, as $b \in Q \setminus P$. By the maximality of $P$, $Q$ is an $S$-finite ideal. Then there exist $t \in S$ and $\beta_1, \ldots, \beta_k \in Q$ such that $tQ \subseteq (\beta_1, \ldots, \beta_k) \subseteq Q$. Let $x \in P$. Then $sx \in s(P + aR) \subseteq (\alpha_1 + ax_1, \ldots, \alpha_n + ax_n)$, and so there exist $u_1, \ldots, u_n \in R$ such that $sx = u_1(\alpha_1 + ax_1) + \dots + u_n(\alpha_n + ax_n) = u_1\alpha_1 + \dots + u_n\alpha_n + a(u_1x_1 + \dots + u_nx_n)$. So $a(u_1x_1 + \dots + u_nx_n) = sx - (u_1\alpha_1 + \dots + u_n\alpha_n) \in P$. Then $u_1x_1 + \dots + u_nx_n \in (P : a) = Q$.
Therefore we can find $w_1, \ldots, w_k \in R$ such that $t(u_1x_1 + \dots + u_nx_n) = w_1\beta_1 + \dots + w_k\beta_k$, which gives $stx = t(u_1\alpha_1 + \dots + u_n\alpha_n) + at(u_1x_1 + \dots + u_nx_n) = t(u_1\alpha_1 + \dots + u_n\alpha_n) + a(w_1\beta_1 + \dots + w_k\beta_k)$. Hence we obtain $uP \subseteq (\alpha_1, \ldots, \alpha_n, a\beta_1, \ldots, a\beta_k) \subseteq P$, where $u = st \in S$, which means that $P$ is $S$-finite. This contradicts the choice of $P$. Thus $R$ is $S$-$J$-Noetherian.

Every $S$-Noetherian ring is clearly an $S$-$J$-Noetherian ring. However, an $S$-$J$-Noetherian ring need not be an $S$-Noetherian ring, as the following example shows.

Example 2.4. Consider the ring $R_1 = \mathcal{F}[X_1, \ldots, X_n, \ldots]$, where $\mathcal{F}$ is a field, and let $I = (X_i^2 : i \in \mathbb{N})$ be an ideal of $R_1$. Let $R = R_1/I$. Consider the prime ideal $P = (X_i : i \in \mathbb{N})$ of $R_1$. Note that any prime ideal of the ring $R$ contains $P/I$; hence the unique minimal prime ideal of $R$ is $P/I$. Take $J = P/I$, and let $S = R \setminus (P/I)$, which is a multiplicative subset of $R$. Any $J$-prime ideal $P'$ of $R$ properly contains $P/I$, and then $P' \cap S \neq \emptyset$. By [3, Proposition 2(a)], $P'$ is $S$-finite. By Theorem 2.3, $R$ is an $S$-$J$-Noetherian ring. Next, our aim is to show that $R$ is not an $S$-Noetherian ring. Suppose the ideal $P/I$ is $S$-finite. Then there exist $\bar{s} \in S$ and $i_1, \ldots, i_n \in \mathbb{N}$ such that $\bar{s}(P/I) \subseteq (\overline{X_{i_1}}, \ldots, \overline{X_{i_n}}) \subseteq P/I$. The element $\bar{s}$ of $R$ is represented by a polynomial $s \in R_1$ involving only finitely many variables $X_{j_1}, \ldots, X_{j_m}$, and its constant term $d$ is nonzero (since $\bar{s} \notin P/I$). Let $k \in \mathbb{N} \setminus \{i_1, \ldots, i_n, j_1, \ldots, j_m\}$. Then $\bar{s}\overline{X_k} = f_1\overline{X_{i_1}} + \dots + f_n\overline{X_{i_n}}$, where $f_1, \ldots, f_n \in R$. Thus, choosing representatives $f_1, \ldots, f_n \in R_1$, we have $sX_k - f_1X_{i_1} - \dots - f_nX_{i_n} \in I$. Setting $X_{i_1} = \dots = X_{i_n} = X_{j_1} = \dots = X_{j_m} = 0$, we obtain $dX_k \in (X_i^2 \mid i \in \mathbb{N} \setminus \{i_1, \ldots, i_n, j_1, \ldots, j_m\})$. This is a contradiction.

Examples 2.2 and 2.4 demonstrate that the concept of $S$-$J$-Noetherian rings is a proper generalization of both $J$-Noetherian rings and $S$-Noetherian rings.

Recall from [6] that, for a family $E$ of ideals of a ring $R$, an element $I \in E$ is said to be an $S$-maximal element of $E$ if there exists an $s \in S$ such that for each $J \in E$, if $I \subseteq J$, then $sJ \subseteq I$. Also, a chain of ideals $(I_i)_{i \in \Lambda}$ of $R$ is called $S$-stationary if there exist $k \in \Lambda$ and $s \in S$ such that $sI_i \subseteq I_k$ for all $i \in \Lambda$, where $\Lambda$ is an arbitrary indexing set. A family $\mathcal{F}$ of ideals of $R$ is said to be $S$-saturated if it satisfies the following property: for every ideal $I$ of $R$, if there exist $s \in S$ and $J \in \mathcal{F}$ such that $sI \subseteq J$, then $I \in \mathcal{F}$.

Theorem 2.5. Let $J$ be a proper ideal of $R$. Then the following statements are equivalent.

1. $R$ is $S$-$J$-Noetherian.
2. Every ascending chain of $J$-ideals of $R$ is $S$-stationary.
3. Every nonempty $S$-saturated set of $J$-ideals of $R$ has a maximal element.
4. Every nonempty family of $J$-ideals has an $S$-maximal element with respect to inclusion.

Proof. $(1) \Rightarrow (2)$.
Let $(I_n)_{n \in \Lambda}$ be an increasing sequence of $J$-ideals of $R$. Define the ideal $I = \bigcup_{n \in \Lambda} I_n$. If $I \subseteq J$, then $I_n \subseteq J$, which is not possible since each $I_n$ is a $J$-ideal. Thus $I$ is a $J$-ideal of $R$. Also, $I$ is $S$-finite since $R$ is $S$-$J$-Noetherian. Consequently, there exist a finitely generated ideal $F \subseteq R$ and $s \in S$ such that $sI \subseteq F \subseteq I$. Since $F$ is finitely generated, there is a $k \in \Lambda$ satisfying $F \subseteq I_k$. Then we have $sI \subseteq F \subseteq I_k$, from which it follows that $sI_n \subseteq I_k$ for each $n \in \Lambda$.

$(2) \Rightarrow (3)$. Let $\mathcal{D}$ be an $S$-saturated set of $J$-ideals of $R$. Given any chain $\{I_n\}_{n \in \Lambda} \subseteq \mathcal{D}$, we claim that $I = \bigcup_{n \in \Lambda} I_n$ belongs to $\mathcal{D}$, which will establish $I$ as an upper bound for the chain. Indeed, by (2), there exist $k \in \Lambda$ and $s \in S$ such that $sI_n \subseteq I_k$ for every $n \in \Lambda$. Consequently, we obtain $sI = s\left(\bigcup_{n \in \Lambda} I_n\right) \subseteq I_k$. Since $\mathcal{D}$ is $S$-saturated, it follows that $I \in \mathcal{D}$, as required. Applying Zorn's lemma, we conclude that $\mathcal{D}$ has a maximal element.

$(3) \Rightarrow (4)$. Let $\mathcal{D}$ be a nonempty set of $J$-ideals of $R$. Consider the family $\mathcal{D}^S$ of all $J$-ideals $L \subseteq R$ such that there exist some $s \in S$ and $L_0 \in \mathcal{D}$ with $sL \subseteq L_0$. Clearly, $\mathcal{D} \subseteq \mathcal{D}^S$, so $\mathcal{D}^S \neq \emptyset$. It is straightforward to see that $\mathcal{D}^S$ is $S$-saturated. Thus, by (3), $\mathcal{D}^S$ has a maximal element $K \in \mathcal{D}^S$. Fix $s \in S$ and $Q \in \mathcal{D}$ such that $sK \subseteq Q$. Now, we claim that $Q$ is an $S$-maximal element of $\mathcal{D}$; specifically, given $L \in \mathcal{D}$ with $Q \subseteq L$, we will show that $sL \subseteq Q$. Note that $K + L$ satisfies $s(K + L) = sK + sL \subseteq Q + L \subseteq L$, so that $K + L \in \mathcal{D}^S$. Also, if $K + L \subseteq J$, then $K \subseteq J$, which is not possible since $K$ is a $J$-ideal of $R$. Thus $K + L$ is a $J$-ideal of $R$. Therefore the maximality of $K$ implies $K = K + L$, so that $L \subseteq K$. But then $sL \subseteq sK \subseteq Q$, as desired.

$(4) \Rightarrow (1)$. Let $I$ be a $J$-ideal of $R$, which we will prove to be $S$-finite. Let $\mathcal{D}$ be the family of finitely generated $J$-ideals of $R$ contained in $I$. Choose $x_0 \in I \setminus J$. Then $L = (x_0) \subseteq I$ and $L \nsubseteq J$, so $L \in \mathcal{D}$ and $\mathcal{D}$ is nonempty. By (4), $\mathcal{D}$ has an $S$-maximal element $K \in \mathcal{D}$, with associated element $s \in S$. Now fix $x \in I$ and consider the finitely generated ideal $Q = K + xR$. Since $K \subseteq I$ and $x \in I$, we have $Q \subseteq I$, so $Q \in \mathcal{D}$ and $K \subseteq Q$. By the choice of $s$, this implies $sQ \subseteq K$; in particular, $sx \in K$. This verifies $sI \subseteq K \subseteq I$, so that $I$ is $S$-finite. It follows that $R$ is $S$-$J$-Noetherian.

Let $f \colon R \to R'$ be a homomorphism and $S$ a multiplicatively closed subset of $R$. Then it is easy to see that $f(S)$ is a multiplicatively closed subset of $R'$ if $0 \notin f(S)$ and $1 \in f(S)$.

Proposition 2.6. Let $f \colon R \to R'$ be an epimorphism and $J$ be an ideal of $R'$.
If $R$ is an $S$-$f^{-1}(J)$-Noetherian ring with $0 \notin f(S)$, then $R'$ is an $f(S)$-$J$-Noetherian ring.

Proof. Suppose $\{I_i\}_{i \in \Lambda}$ is any increasing chain of $J$-ideals of $R'$. Then $I_i \not\subseteq J$ for each $i \in \Lambda$. We first check that $f^{-1}(I_i) \not\subseteq f^{-1}(J)$ for each $i$. Suppose, to the contrary, that $f^{-1}(I_i) \subseteq f^{-1}(J)$ for some $i$, and choose $\alpha_i \in I_i \setminus J$. Then $f^{-1}(\alpha_i) \subseteq f^{-1}(I_i) \subseteq f^{-1}(J)$, and since $f$ is an epimorphism, $\alpha_i \in f(f^{-1}(\alpha_i)) \subseteq f(f^{-1}(J)) = J$. This is a contradiction, as $\alpha_i \notin J$. Thus $f^{-1}(I_i) \not\subseteq f^{-1}(J)$ for each $i \in \Lambda$, and hence $f^{-1}(I_i)$ is an $f^{-1}(J)$-ideal of $R$. Then we have an increasing chain $\{f^{-1}(I_i)\}_{i \in \Lambda}$ of $f^{-1}(J)$-ideals of $R$. Since $R$ is $S$-$f^{-1}(J)$-Noetherian, there exist $k \in \Lambda$ and $s \in S$ such that $sf^{-1}(I_i) \subseteq f^{-1}(I_k)$ for all $i \in \Lambda$. Applying $f$ to both sides, we obtain $f(sf^{-1}(I_i)) = f(s)f(f^{-1}(I_i)) \subseteq f(f^{-1}(I_k))$ for all $i \in \Lambda$. Since $f$ is an epimorphism, it follows that $f(s)I_i \subseteq I_k$ for all $i \in \Lambda$. Hence, by Theorem 2.5, $R'$ is an $f(S)$-$J$-Noetherian ring.

Theorem 2.7. Let $S$ be a multiplicative subset of a ring $R$. The following statements are equivalent:

1. $R$ is $S$-Noetherian.
2. $R$ is $S$-$J$-Noetherian and $J$ is an $S$-finite ideal of $R$.

Proof. $(1) \Rightarrow (2)$. This implication is obvious.

$(2) \Rightarrow (1)$. Let $P$ be a prime ideal of $R$ disjoint from $S$. If $P \subseteq J$, then $P$ is $S$-finite by the assumption. Suppose now that $P \nsubseteq J$. Then $P$ is a $J$-ideal of $R$ disjoint from $S$, and since $R$ is $S$-$J$-Noetherian, $P$ is $S$-finite. So, by [3, Corollary 5], $R$ is $S$-Noetherian.

Recall that a multiplicative subset $S$ of $R$ is said to be anti-Archimedean if $\bigcap_{n\geq 1} s^n R \cap S \neq \emptyset$ for all $s \in S$.

Corollary 2.8. Let $S \subseteq R$ be an anti-Archimedean multiplicative set and let $J$ be an $S$-finite ideal of $R$. If $R$ is $S$-$J$-Noetherian, then the polynomial ring $R[X_1, \ldots, X_n]$ is also $S$-$J$-Noetherian.

Proof. By Theorem 2.7, $R$ is an $S$-Noetherian ring. Then, by [3, Proposition 9], $R[X_1, \ldots, X_n]$ is $S$-Noetherian. This implies that $R[X_1, \ldots, X_n]$ is $S$-$J$-Noetherian.

Recall that, for an $R$-module $M$, the idealization of $M$ is the commutative ring $R(+)M = \{(r, m) \mid r \in R, m \in M\}$ with componentwise addition and multiplication defined by $(\alpha_1, m_1)(\alpha_2, m_2) = (\alpha_1\alpha_2, \alpha_1 m_2 + \alpha_2 m_1)$ for all $\alpha_1, \alpha_2 \in R$ and $m_1, m_2 \in M$. It is straightforward to verify that $S(+)M = \{(s, m) \mid s \in S, m \in M\}$ forms a multiplicative set in $R(+)M$.

The following example shows that the polynomial ring over an $S$-$J$-Noetherian ring need not be $S$-$J$-Noetherian.

Example 2.9. Let $V$ be an infinite dimensional vector space over a field $K$. Then $R = K(+)V$ is an $S$-$J$-Noetherian ring for every multiplicative subset $S$ of $R$. Moreover, if $0 \notin S$, then $R[X]$ is not an $S$-$J$-Noetherian ring. In particular, if $J = \operatorname{Nil}(R)$, then the proof follows from [10, Example 2.4].

We next show that the polynomial ring $R[X]$ is $S$-$J$-Noetherian if and only if it is $S$-Noetherian.

Corollary 2.10. Let $R$ be a ring, $S \subseteq R$ be a multiplicative set and $J$ be an ideal of $R$.
Then $R[X]$ is an $S$-$J[X]$-Noetherian ring if and only if $R[X]$ is an $S$-Noetherian ring.

Proof. Suppose $R[X]$ is an $S$-$J[X]$-Noetherian ring. We show that $R[X]$ is an $S$-Noetherian ring. By Theorem 2.7, it is sufficient to show that $J[X]$ is $S$-finite. Define the ideal $Q = J[X] + XR[X]$ of $R[X]$. Note that $Q$ is a $J[X]$-ideal since $Q \not\subseteq J[X]$. Therefore $Q$ is $S$-finite, so there exist $s \in S$ and $f_1, \ldots, f_n \in R[X]$ such that $s(J[X] + XR[X]) \subseteq f_1R[X] + \dots + f_nR[X] \subseteq J[X] + XR[X]$. As a result, we get $sJ \subseteq f_1(0)R + \dots + f_n(0)R \subseteq J$. This implies that $sJ[X] \subseteq f_1(0)R[X] + \dots + f_n(0)R[X] \subseteq J[X]$. Thus $J[X]$ is an $S$-finite ideal of $R[X]$. The converse is trivially true.

Proposition 2.11. Let $R$ be an $S$-$J$-Noetherian ring. Then $R/J$ is an $\overline{S}$-Noetherian ring.

Proof. A nonzero prime ideal (disjoint from $\overline{S}$) of $R/J$ is of the form $P/J$ with $P \in \operatorname{Spec}(R)$ and $J \subsetneq P$. Evidently, $P$ is a $J$-ideal with $P \cap S = \emptyset$, since $P/J$ is nonzero and $(P/J) \cap \overline{S} = \emptyset$. By the hypothesis, $P$ is $S$-finite. Then there exist $s \in S$ and $p_1, \ldots, p_n \in P$ such that $sP \subseteq (p_1, \ldots, p_n) \subseteq P$. Let $x \in P$. Then we can find $a_1, \ldots, a_n \in R$ such that $sx = a_1p_1 + \dots + a_np_n$. It follows that $(s + J)(x + J) = (a_1 + J)(p_1 + J) + \dots + (a_n + J)(p_n + J)$, where $s + J \in \overline{S}$ and $a_1 + J, \ldots, a_n + J \in R/J$. This implies that $(s + J)(P/J) \subseteq (p_1 + J, \ldots, p_n + J) \subseteq P/J$, i.e., $P/J$ is $\overline{S}$-finite. By [3, Corollary 5], $R/J$ is $\overline{S}$-Noetherian.

Corollary 2.12. Let $S \subseteq R$ be an anti-Archimedean multiplicative set. If $R$ is $S$-$J$-Noetherian, then the polynomial ring $(R/J)[X_1, \ldots, X_n]$ is $\overline{S}$-$J[X_1, \ldots, X_n]$-Noetherian.

Proof. By Proposition 2.11, $R/J$ is $\overline{S}$-Noetherian. Then, by [3, Proposition 9], $(R/J)[X_1, \ldots, X_n]$ is $\overline{S}$-Noetherian. This implies that $(R/J)[X_1, \ldots, X_n]$ is $\overline{S}$-$J[X_1, \ldots, X_n]$-Noetherian.

Definition 2.13. An ideal $I$ of a ring $R$ is called divided if $I \subset xR$ for every $x \in R \setminus I$.

Theorem 2.14. Let $R$ be an $S$-$J$-Noetherian ring, and let $I$ be a $J$-ideal of $R$ disjoint from $S$. If $J$ is a divided ideal, then there exist $s \in S$ and $S$-prime ideals $P_1, \ldots, P_n$ of $R$ such that $s(P_1 \cdots P_n) \subseteq I$.

Proof. Since $I \not\subseteq J$ and $J$ is divided, we have $J \subset (x) \subseteq I$ for some $x \in I \setminus J$. Thus $I/J$ is an ideal of the $\overline{S}$-Noetherian ring $R/J$. Since $I \cap S = \emptyset$, we have $(I/J) \cap \overline{S} = \emptyset$: indeed, if $(I/J) \cap \overline{S} \neq \emptyset$, then $s + J = i + J$ for some $s \in S$ and $i \in I$, so that $s - i \in J \subset I$ and hence $s \in I$, a contradiction as $I \cap S = \emptyset$. Thus $I/J$ is disjoint from $\overline{S}$. It follows that there exist $\bar{s} \in \overline{S}$ and $\overline{S}$-prime ideals $Q_1, \ldots, Q_n$ of $R/J$ containing $I/J$ such that $\bar{s}(Q_1 \cdots Q_n) \subseteq I/J$, by [1, Theorem 5].
Clearly, $Q_i \cap \overline{S} = \emptyset$ for each $i = 1, \ldots, n$, since each $Q_i$ is $\overline{S}$-prime. Then, by [1, Proposition 3], for each $1 \leq i \leq n$ there exists an $S$-prime ideal $P_i$ of $R$ containing $J$ such that $Q_i = P_i/J$. Therefore $\bar{s}((P_1 \cdots P_n)/J) \subseteq I/J$, since $(P_1/J) \cdots (P_n/J) = (P_1 \cdots P_n)/J$. For every $a \in P_1 \cdots P_n$, $(s + J)(a + J) = b + J$ for some $b \in I$. Consequently, $sa - b \in J \subset I$, and since $b \in I$, we get $sa \in I$. Thus $s(P_1 \cdots P_n) \subseteq I$.

Proposition 2.15. Let $R \subseteq R'$ be an extension of rings such that $IR' \cap R = I$ for each ideal $I$ of $R$, and let $S \subseteq R$ be a multiplicative set. If $R'$ is an $S$-$J$-Noetherian ring, then $R$ is $S$-$J$-Noetherian.

Proof. Let $I$ be a $J$-ideal of $R$; then $I \subseteq IR'$. If $IR' \subseteq J$, then $I \subseteq J$, which is not possible since $I \not\subseteq J$. Thus $IR'$ is a $J$-ideal of $R'$. Since the ring $R'$ is $S$-$J$-Noetherian, there exist $s \in S$ and $i_1, \ldots, i_n \in I$ such that $sIR' \subseteq (i_1, \ldots, i_n)R' \subseteq IR'$. By hypothesis, $sI = sIR' \cap R \subseteq (i_1, \ldots, i_n)R' \cap R \subseteq IR' \cap R = I$. Then $I$ is an $S$-finite ideal of $R$, as desired.

Proposition 2.16. Let $R$ be an $S$-$J$-Noetherian ring and let $I$ be a $J$-ideal of $R$ disjoint from $S$. Then there exist $t \in S$ and $m \in \mathbb{N}$ such that $t(\operatorname{rad}(I))^m \subseteq I$.

Proof. Let $I$ be a $J$-ideal of $R$. Then $\operatorname{rad}(I)$ is also a $J$-ideal of $R$, and hence $\operatorname{rad}(I)$ is $S$-finite. Consequently, there exist $s \in S$ and $x_1, \ldots, x_n \in \operatorname{rad}(I)$ such that $s(\operatorname{rad}(I)) \subseteq K \subseteq \operatorname{rad}(I)$, where $K = (x_1, \ldots, x_n)$. Let $m_i \in \mathbb{N}$ be such that $x_i^{m_i} \in I$ for each $1 \leq i \leq n$, and choose $m \in \mathbb{N}$ sufficiently large that $K^m \subseteq I$. Therefore $t(\operatorname{rad}(I))^m \subseteq I$, where $t = s^m \in S$.

Lemma 2.17. Let $R$ be an $S$-$J$-Noetherian ring and let $I$ be a $J$-ideal of $R$. Then $R/I$ is an $\overline{S}$-Noetherian ring.

Proof. Let $\{I_i/I\}_{i \in \Lambda}$ be an ascending chain of non-zero ideals of $R/I$. As a result, $\{I_i\}_{i \in \Lambda}$ is an ascending chain of $J$-ideals of $R$ (each $I_i$ contains $I$, hence $I_i \nsubseteq J$), and so, by Theorem 2.5, there exist $s \in S$ and $k \in \Lambda$ such that $sI_i \subseteq I_k$ for every $i \in \Lambda$. Therefore $(s + I)(I_i/I) \subseteq I_k/I$ for every $i \in \Lambda$, and hence $(I_i/I)_{i \in \Lambda}$ is $\overline{S}$-stationary. By [6, Theorem 2.3], $R/I$ is $\overline{S}$-Noetherian.

Recall that a ring $R$ is said to be decomposable if $R$ admits a non-trivial idempotent. Let $\operatorname{Idem}(R)$ denote the set of idempotent elements of $R$.

Theorem 2.18. Let $R$ be a decomposable ring and $J$ be an ideal of $R$ with $eJ \neq (e)$ for each $e \in \operatorname{Idem}(R) \setminus \{0,1\}$. Then $R$ is $S$-$J$-Noetherian if and only if $R$ is $S$-Noetherian.

Proof. It is sufficient to prove that if $R$ is $S$-$J$-Noetherian, then $R$ is $S$-Noetherian. First, we prove that $R/(e)$ is $\overline{S}$-Noetherian for each $e \in \operatorname{Idem}(R) \setminus \{0,1\}$. Consider $e \in \operatorname{Idem}(R) \setminus \{0,1\}$, and let $L$ be an ideal of $R$ which contains $(e)$. Then $e \notin J$, since $e \in J$ would give $(e) = e(e) \subseteq eJ \subseteq (e)$ and hence $eJ = (e)$; consequently $L \not\subseteq J$.
Thus $L$ is a $J$-ideal, and so, by Lemma 2.17, $R/L$ is $\overline{S}$-Noetherian. In particular, taking $L = (e)$, we see that $R/(e)$ is $\overline{S}$-Noetherian.

Now, let $K$ be an ideal of $R$ such that $K \subseteq (e)$ for some $e \in \operatorname{Idem}(R) \setminus \{0,1\}$. We claim that $K$ is $S$-finite. Clearly, $eK = K$. If $K = (0)$, then $K$ is $S$-finite, so we may assume that $K \neq 0$. If $K \subseteq (1 - e)$, then $eK \subseteq (e - e^2) = (0)$, i.e., $eK = K = 0$, a contradiction as $K \neq 0$. Therefore $K \not\subseteq (1 - e)$. Since $1 - e \in \operatorname{Idem}(R) \setminus \{0,1\}$, $R/(1 - e)$ is an $\overline{S}$-Noetherian ring. Set $I = (1 - e)$ for simplicity. Then $L = (K + I)/I$ is an $\overline{S}$-finite ideal of $R/I$. Then there exist $\alpha_1 + I, \ldots, \alpha_n + I \in R/I$, where $\alpha_1, \ldots, \alpha_n \in K$, and $s' = s + I \in \overline{S}$ such that $s'L \subseteq (\alpha_1 + I, \ldots, \alpha_n + I) \subseteq L$. Let $\beta \in K + I$. Then $\beta + I \in L$, and so $s\beta + I \in s'L \subseteq (\alpha_1 + I, \ldots, \alpha_n + I)$. This implies that $s\beta + I = (u_1 + I)(\alpha_1 + I) + \dots + (u_n + I)(\alpha_n + I)$ for some $u_1 + I, \ldots, u_n + I \in R/I$. Consequently, $s\beta - (u_1\alpha_1 + \dots + u_n\alpha_n) \in I \subseteq F$, where $F = (\alpha_1, \ldots, \alpha_n, 1 - e)$. Thus $s\beta \in F$, and hence $s(K + (1 - e)) \subseteq F \subseteq K + (1 - e)$. Therefore $K + (1 - e)$ is $S$-finite. Consequently, $K = Ke = (K + (1 - e))e$ is an $S$-finite ideal of $R$, as claimed.

Now, let $T$ be an ideal of $R$ and fix $e \in \operatorname{Idem}(R) \setminus \{0,1\}$. Since $eT \subseteq (e)$ and $(1 - e)T \subseteq (1 - e)$, both $eT$ and $(1 - e)T$ are $S$-finite by the claim. It follows that $T = eT + (1 - e)T$ is $S$-finite, and hence $R$ is an $S$-Noetherian ring.

Definition 2.19. An ideal $Q$ (disjoint from $S$) of the ring $R$ is called $S$-irreducible if, whenever $s(I \cap K) \subseteq Q \subseteq I \cap K$ for some $s \in S$ and some ideals $I, K$ of $R$, there exists $s' \in S$ such that either $ss'I \subseteq Q$ or $ss'K \subseteq Q$.

It is clear from the definition that every irreducible ideal is an $S$-irreducible ideal. However, the following example shows that an $S$-irreducible ideal need not be irreducible.

Example 2.20. Let $R = \mathbb{Z}$, $S = \mathbb{Z} \setminus 3\mathbb{Z}$ and $I = 6\mathbb{Z}$. Since $I = 2\mathbb{Z} \cap 3\mathbb{Z}$, $I$ is not an irreducible ideal of $R$. Now, take $s = 2 \in S$. Then $2(3\mathbb{Z}) = 6\mathbb{Z} \subseteq I$. Thus $I$ is an $S$-irreducible ideal of $R$.

Recall from [11, Definition 2.1] that a proper ideal $Q$ of a ring $R$ disjoint from $S$ is said to be $S$-primary if there exists an $s \in S$ such that for all $a, b \in R$, if $ab \in Q$, then either $sa \in Q$ or $sb \in \operatorname{rad}(Q)$. Further, an ideal $I$ of $R$ with $I \cap S = \emptyset$ admits an $S$-primary decomposition if $I$ can be written as a finite intersection of $S$-primary ideals of $R$. Now, we extend the $S$-primary decomposition theorem to $S$-$J$-Noetherian rings. We start with the following lemma.

Lemma 2.21. Let $R$ be an $S$-$J$-Noetherian ring. Then every $S$-irreducible $J$-ideal of $R$ is $S$-primary.

Proof. Suppose $Q$ is an $S$-irreducible $J$-ideal of $R$.
Let $a, b \in R$ be such that $ab \in Q$ and $sb \notin Q$ for all $s \in S$. Our aim is to show that there exists $t \in S$ such that $ta \in \operatorname{rad}(Q)$. Consider $A_n = \{x \in R \mid a^n x \in Q\}$ for $n \in \mathbb{N}$. Since $Q$ is a $J$-ideal, there exists $\alpha \in Q \setminus J$. Then $a^n\alpha \in Q$ for each $n \in \mathbb{N}$. This implies that $\alpha \in A_n$ but $\alpha \notin J$ for each $n \in \mathbb{N}$. Consequently, each $A_n$ is a $J$-ideal of $R$, and $A_1 \subseteq A_2 \subseteq A_3 \subseteq \dots$ is an increasing chain of ideals of $R$. Since $R$ is $S$-$J$-Noetherian, by Theorem 2.5 this chain is $S$-stationary, i.e., there exist $k \in \mathbb{N}$ and $s \in S$ such that $sA_n \subseteq A_k$ for all $n \geq k$. Consider the two ideals $I = (a^k) + Q$ and $K = (b) + Q$ of $R$. Then $Q \subseteq I \cap K$. For the reverse containment, let $y \in I \cap K$. Write $y = a^kz + q$ for some $z \in R$ and $q \in Q$. Since $ab \in Q$, we have $aK \subseteq Q$, whence $ay \in Q$. Now, $a^{k+1}z = a(a^kz) = a(y - q) \in Q$. This implies that $z \in A_{k+1}$, and so $sz \in sA_{k+1} \subseteq A_k$. Consequently, $a^ksz \in Q$, which implies that $a^ksz + sq = sy \in Q$. Thus we have $s(I \cap K) \subseteq Q \subseteq I \cap K$. Since $Q$ is $S$-irreducible, there exists $s' \in S$ such that either $ss'I \subseteq Q$ or $ss'K \subseteq Q$. If $ss'K \subseteq Q$, then $ss'b \in Q$, which is not possible. Therefore $ss'I \subseteq Q$, which implies that $ss'a^k \in Q$. Put $t = ss' \in S$. Then $(ta)^k \in Q$ and hence $ta \in \operatorname{rad}(Q)$, as desired.

Theorem 2.22. Let $R$ be an $S$-$J$-Noetherian ring. Then every proper $J$-ideal of $R$ disjoint from $S$ can be written as a finite intersection of $S$-primary ideals.

Proof. Let $E$ be the collection of $J$-ideals of $R$ which are disjoint from $S$ and cannot be written as a finite intersection of $S$-primary ideals. We wish to show $E = \emptyset$. Suppose, on the contrary, that $E \neq \emptyset$. Since $R$ is an $S$-$J$-Noetherian ring, by Theorem 2.5 there exists an $S$-maximal element of $E$, say $I$. Evidently, $I$ is not an $S$-primary ideal; hence, by Lemma 2.21, $I$ is not an $S$-irreducible ideal, and so $I$ is not an irreducible ideal. This implies that $I = K \cap L$ for some ideals $K$ and $L$ of $R$ with $I \neq K$ and $I \neq L$. As $I$ is not $S$-irreducible, $sK \not\subseteq I$ and $sL \not\subseteq I$ for all $s \in S$. Now, we claim that $K, L \notin E$. For this, if $K$ (respectively, $L$) belongs to $E$, then, since $I$ is an $S$-maximal element of $E$ and $I \subset K$ (respectively, $I \subset L$), there exists $s'$ (respectively, $s''$) in $S$ such that $s'K \subseteq I$ (respectively, $s''L \subseteq I$). This is not possible, as $I$ is not $S$-irreducible. Therefore $K, L \notin E$. Also, if $K \cap S \neq \emptyset$ (respectively, $L \cap S \neq \emptyset$), then there exists $s_1 \in K \cap S$ (respectively, $s_2 \in L \cap S$). This implies that $s's_1 \in s'K \subseteq I$ (respectively, $s''s_2 \in s''L \subseteq I$), which is a contradiction because $I$ is disjoint from $S$. Thus $K$ and $L$ are also disjoint from $S$. This implies that $K$ and $L$ can be written as finite intersections of $S$-primary ideals. Consequently, $I$ can also be written as a finite intersection of $S$-primary ideals since $I = K \cap L$, a contradiction, as $I \in E$. Thus $E = \emptyset$, i.e., every proper $J$-ideal of $R$ disjoint from $S$ can be written as a finite intersection of $S$-primary ideals.
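As an optional illustration (not from the paper), the following small Python sketch checks, on a finite window of integers, the two containments behind Example 2.20: $6\mathbb{Z} = 2\mathbb{Z} \cap 3\mathbb{Z}$, so $6\mathbb{Z}$ is not irreducible, while $2 \in S = \mathbb{Z} \setminus 3\mathbb{Z}$ satisfies $2 \cdot 3\mathbb{Z} \subseteq 6\mathbb{Z}$. The window size and variable names are our own choices.

```python
# Finite-window check of the containments used in Example 2.20 (illustrative only).
WINDOW = range(-120, 121)

two_Z = {n for n in WINDOW if n % 2 == 0}
three_Z = {n for n in WINDOW if n % 3 == 0}
six_Z = {n for n in WINDOW if n % 6 == 0}

# 6Z = 2Z ∩ 3Z on the window, so 6Z is the intersection of two strictly
# larger ideals and hence is not irreducible in Z.
assert six_Z == two_Z & three_Z

# s = 2 lies in S = Z \ 3Z, and s * (3Z) is contained in 6Z.
s = 2
assert s % 3 != 0
assert all((s * n) % 6 == 0 for n in three_Z)

print("Example 2.20: containments verified on the window", WINDOW)
```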
Then $\\mathcal{D}$ has an $S$ -maximal element $K \\in \\mathcal{D}$ . Fixing $x \\in I$ , take a finitely generated ideal of the form $Q = K + xR$ . Since $K \\subseteq I$ and $x \\in I$ , so $Q \\subseteq I$ . Consequently, $Q \\in \\mathcal{D}$ such that $K \\subseteq Q$ . This implies that there exists $s \\in S$ such that $sQ \\subseteq K$ ; in particular, $sx \\in K$ . This verifies $sI \\subseteq K \\subseteq I$ , so that $I$ is $S$ -finite. It follows that $R$ is $S-J$ -Noetherian.\n\nLet $f: R \\to R'$ be a homomorphism and $S$ a multiplicative closed subset of $R$ . Then it is easy to see that $f(S)$ is a multiplicative closed subset of $R'$ if $0 \\notin f(S)$ and $1 \\in f(S)$ .\n\nProposition 2.6. Let $f: R \\to R'$ be an epimorphism and $J$ be an ideal of $R'$ . If $R$ is an $S$ - $f^{-1}(J)$ -Noetherian ring with $0 \\notin f(S)$ , then $R'$ is a $f(S)$ - $J$ -Noetherian ring.\n\nProof. Suppose $\\{I_i\\}_{i \\in \\Lambda}$ is any increasing chain of $J$ -ideals of $R'$ . Then $I_i \\not\\subseteq J$ for each $i \\in \\Lambda$ . Suppose contrary that, for each $i$ there exist $\\alpha_i \\in I_i \\setminus J$ such that $f^{-1}(\\alpha_i) \\subseteq f^{-1}(J)$ . Then $\\alpha_i \\in f(f^{-1}(\\alpha_i)) \\subseteq f(f^{-1}(J)) = J$ , for $f$ is an epimorphism. This is a contradiction, as $\\alpha_i \\notin J$ . Thus $f^{-1}(I_i) \\not\\subsetneq f^{-1}(J)$ for each $i \\in \\Lambda$ and hence $f^{-1}(I_i)$ is $f^{-1}(J)$ ideal of $R$ . Then we have an increasing chain $\\{f^{-1}(I_i)\\}_{i \\in \\Lambda}$ of $f^{-1}(J)$ -ideal of $R$ . Since $R$ is an $S$ - $f^{-1}(J)$ -Noetherian, there\n\nexist $k \\in \\wedge$ and $s \\in S$ such that $sf^{-1}(I_i) \\subseteq f^{-1}(I_k)$ for all $i \\in \\wedge$ . Applying $f$ to both sides, we obtain $f(tf^{-1}(I_i)) = f(s)f(f^{-1}(I_i)) \\subseteq f(f^{-1}(I_k))$ for all $i \\in \\wedge$ . Since $f$ is an epimorphism, it follows that $f(s)I_i \\subseteq I_k$ for all $i \\in \\wedge$ . Hence, by Theorem 2.5, $R'$ is a $f(S)$ -J-Noetherian ring.\n\nTheorem 2.7. Let $S$ be a multiplicative subset of a ring $R$ . The following statements are equivalent:\n\n1. $R$ is S-Noetherian. \n2. $R$ is $S$ -J-Noetherian and $J$ is an $S$ -finite ideal of $R$ .\n\nProof. $(1) \\Rightarrow (2)$ . This implication is obvious. $(2) \\Rightarrow (1)$ . Let $P$ be a prime ideal of $R$ . If $P \\subseteq J$ , then $P$ is $S$ -finite by the assumption. Suppose that $P$ contains properly in $J$ . Then $P$ is a $J$ -ideal of $R$ disjoint with $S$ . Since $R$ is $S-J$ -Noetherian, then $P$ is $S$ -finite disjoint from $S$ . So, by [3, Corollary 5], $R$ is $S$ -Noetherian.\n\nLet $R$ be a ring and $S$ be a multiplicative subset of $R$ . Recall [3], let $S$ be an anti-Archimedean subset of $R$ if $\\bigcap_{n\\geq 1}s^{n}R\\cap S\\neq \\emptyset$ for all $s\\in S$ .\n\nCorollary 2.8. Let $S \\subseteq R$ be an anti-Archimedean multiplicative set and $J$ is $S$ -finite. If $R$ is $S$ - $J$ -Noetherian, then the polynomial ring $R[X_1, \\ldots, X_n]$ is also $S$ - $J$ -Noetherian.\n\nProof. By Theorem 2.7, $R$ is $S$ -Noetherian ring. Then, by [3, Proposition 9], $R[X_1, \\ldots, X_n]$ is $S$ -Noetherian. This implies $R[X_1, \\ldots, X_n]$ is $S$ - $J$ -Noetherian.\n\nRecall [4], let $M$ be an $R$ -module. 
The idealization of $R$ -module $M$ , $R(+)M = \\{(r, m) \\mid r \\in R, m \\in M\\}$ is a commutative ring with componentwise addition and multiplication defined by $(\\alpha_{1}, m_{1})(\\alpha_{2}, m_{2}) = (\\alpha_{1}\\alpha_{2}, \\alpha_{1}m_{2} + \\alpha_{2}m_{1})$ for all $\\alpha_{1}, \\alpha_{2} \\in R$ and $m_{1}, m_{2} \\in M$ . It is straightforward to verify that $S(+)M = \\{(s, m) \\mid s \\in S, m \\in M\\}$ forms a multiplicative set in $R(+)M$ . The following example shows that the polynomial ring over an $S-J$ -Noetherian ring need not be $S-J$ -Noetherian.\n\nExample 2.9. Let $V$ be an infinite dimensional vector space over a field $K$ . Then $R = K(+)V$ is an $S$ - $J$ -Noetherian ring for every multiplicative subset $S$ of $R$ . Moreover, if $0 \\notin S$ , then $R[X]$ is not an $S$ - $J$ -Noetherian ring. In particular, if $J = \\operatorname{Nil}(R)$ , then the proof follows from [10, Example 2.4].\n\nWe next show that the polynomial ring $R[X]$ is $S$ - $J$ -Noetherian if and only if it is $S$ -Noetherian.\n\nCorollary 2.10. Let $R$ be a ring, $S \\subseteq R$ be a multiplicative set and $J$ be an ideal of $R$ . Then $R[X]$ is an $S-J[X]$ -Noetherian ring if and only if $R[X]$ is an $S$ -Noetherian ring.\n\nProof. Suppose \\( R[X] \\) is an \\( S-J[X] \\)-Noetherian ring. Then we show that \\( R[X] \\) is an \\( S \\)-Noetherian ring. To prove this, by Theorem 2.7, it is sufficient to show that \\( J[X] \\) is \\( S \\)-finite. Define the ideal \\( Q = J[X] + XR[X] \\) of \\( R[X] \\). Note that \\( Q \\) is a \\( J[X] \\)-ideal since \\( Q \\not\\subseteq J[X] \\). Therefore \\( Q \\) is \\( S \\)-finite. So there exist \\( s \\in S \\) and \\( f_1, \\ldots, f_n \\in R[X] \\) such that \\( s(J[X] + XR[X]) \\subseteq\n\n$f_{1}R[X] + \\dots + f_{n}R[X] \\subseteq J[X] + XR[X]$ . As a result, we get $sJ \\subseteq f_{1}(0)R + \\dots + f_{n}(0)R \\subseteq J$ . This implies that $sJ[X] \\subseteq f_{1}(0)R[X] + \\dots + f_{n}(0)R[X] \\subseteq J[X]$ . Thus $J[X]$ is an $S$ -finite ideal of $R[X]$ . The converse is trivially true.\n\nProposition 2.11. Let $R$ be an $S$ - $J$ -Noetherian ring. Then $R / J$ is an $\\overline{S}$ -Noetherian ring.\n\nProof. A nonzero prime ideal (disjoint from $\\overline{S}$ ) of $R / J$ is of the form $P / J$ with $P \\in \\operatorname{Spec}(R)$ and $J \\subsetneq P$ . Evidently, $P$ is a $J$ -ideal with $P \\cap S = \\emptyset$ since $P / J$ is nonzero and $P / J \\cap \\overline{S} = \\emptyset$ . By the hypothesis, $P$ is $S$ -finite. Then there exist $s \\in S$ and $p_1, \\ldots, p_n \\in P$ such that $sP \\subseteq (p_1, \\ldots, p_n) \\subseteq P$ . Let $x \\in P$ . Then we can find $a_1, \\ldots, a_n \\in R$ such that $sx = a_1p_1 + \\dots + a_np_n$ . It follows that $(s + J)(x + J) = (a_1 + J)(p_1 + J) + \\ldots + (a_n + J)(p_n + J)$ , where $s + J \\in \\overline{S}$ and $a_1 + J, \\ldots, a_n + J \\in R / J$ . This implies that $(s + J)(P / J) \\subseteq (p_1 + J, \\ldots, p_n + J) \\subseteq P / J$ , i.e., $P / J$ is $\\overline{S}$ -finite. By [3, Corollary 5], $R / J$ is $\\overline{S}$ -Noetherian.\n\nCorollary 2.12. Let $S \\subseteq R$ be an anti-Archimedean multiplicative set. If $R$ is $S$ - $J$ -Noetherian, then polynomial ring $(R / J)[X_1, \\ldots, X_n]$ is $\\overline{S}$ - $J[X_1, \\ldots, X_n]$ -Noetherian.\n\nProof. By Proposition 2.11, $R / J$ is $\\overline{S}$ -Noetherian. Then, by [3, Proposition 9], $(R / J)[X_1, \\ldots, X_n]$ is $\\overline{S}$ -Noetherian. 
This implies $(R / J)[X_1, \\ldots, X_n]$ is $\\overline{S}$ - $J[X_1, \\ldots, X_n]$ -Noetherian.\n\nDefinition 2.13. [5] An ideal $I$ of a ring $R$ is called divided if $I \\subset xR$ for every $x \\in R \\backslash I$ .\n\nTheorem 2.14. Let $R$ be an $S$ - $J$ -Noetherian ring, and $I$ be a $J$ -ideal of $R$ disjoint from $S$ . If $J$ is divided ideal, then there exist $s \\in S$ and $S$ -prime ideals $P_{1}, \\ldots, P_{n}$ of $R$ such that $s(P_{1} \\cdots P_{n}) \\subseteq I$ .\n\nProof. Since $I \\not\\subset J$ and $J$ is divided, then $J \\subset (x) \\subseteq I$ for some $x \\in I \\setminus J$ . Thus $I / J$ is an ideal of the $\\overline{S}$ -Noetherian ring $R / J$ . Since $I \\cap S = \\emptyset$ , then $(I / J) \\cap \\overline{S} = \\emptyset$ . For this, if $(I / J) \\cap \\overline{S} \\neq \\emptyset$ , then $s + J = i + J$ for some $s \\in S$ and $i \\in I$ . Consequently, $s - i \\in J \\subset I$ , and so $s \\in I$ , a contradiction as $I \\cap S = \\emptyset$ . Thus $I / J$ is disjoint from $\\overline{S}$ . It follows that there exist $\\bar{s} \\in \\overline{S}$ and $\\overline{S}$ -prime ideals $Q_1, \\ldots, Q_n$ of $R / J$ containing $I / J$ such that $\\bar{s}(Q_1 \\ldots Q_n) \\subseteq I / J$ , by [1, Theorem 5]. Clearly, $Q_i \\cap \\overline{S} = \\emptyset$ for each $i = 1, \\ldots, n$ since each $Q_i$ is $\\overline{S}$ -prime. Then, by [1, Proposition 3], for each $1 \\leq i \\leq n$ , there exists an $S$ -prime ideal $P_i$ of $R$ containing $J$ such that $Q_i = P_i / J$ . Therefore $\\bar{s}((P_1 \\cdots P_n) / J) \\subseteq I / J$ since $P_1 / J \\cdots P_n / J = (P_1 \\cdots P_n) / J$ . For every $a \\in P_1 \\cdots P_n$ , $(s + J)(a + J) = b + J$ for some $b \\in I$ . Consequently, $sa - b \\in J \\subset I$ , and so $sa \\in I + Rb \\subseteq I + J = I$ . Thus $s(P_1 \\cdots P_n) \\subseteq I$ .\n\nProposition 2.15. Let $R \\subseteq R'$ be an extension of rings such that $IR' \\cap R = I$ for each ideal $I$ of $R$ , and let $S \\subseteq R$ be a multiplicative set. If $R'$ is an $S$ -J-Noetherian ring, then $R$ is $S$ -J-Noetherian.\n\nProof. Let $I$ be a $J$ -ideal of $R$ and $I \\subseteq IR'$ . If $IR' \\subseteq J$ , then $I \\subseteq J$ , which is not possible since $I \\not\\subseteq J$ . Thus $IR'$ is a $J$ -ideal of $R'$ . Since the ring $R'$ is $S$ -J-Noetherian, there exist $s \\in S$ and $i_1, \\ldots, i_n \\in I$ such that $sIR' \\subseteq (i_1, \\ldots, i_n)R' \\subseteq IR'$ . By hypothesis, $sI = sIR' \\cap R \\subseteq (i_1, \\ldots, i_n)R' \\cap R \\subseteq IR' \\cap R = I$ . Then $I$ is an $S$ -finite ideal of $R$ , as desired.\n\nProposition 2.16. Let $R$ be an $S$ - $J$ -Noetherian ring and $I$ be a $J$ -ideal of $R$ disjoint from $S$ . Then there exist $t \\in S$ and $m \\in \\mathbb{N}$ such that $t(rad(I))^m \\subseteq I$ .\n\nProof. Let $I$ be a $J$ -ideal of $R$ . Then $\\text{rad}(I)$ is also a $J$ -ideal of $R$ , and hence $\\text{rad}(I)$ is $S$ -finite. Consequently, there exist $s \\in S$ and $x_1, \\ldots, x_n \\in \\text{rad}(I)$ such that $s(\\text{rad}(I)) \\subseteq K \\subseteq \\text{rad}(I)$ , where $K = (x_1, \\ldots, x_n)$ . Suppose $m_i \\in \\mathbb{N}$ be such that $x_i^{m_i} \\in I$ for any $1 \\leq i \\leq n$ . Then choose sufficiently large $m \\in \\mathbb{N}$ such that $K^m \\subseteq I$ . Therefore $t(\\text{rad}(I))^m \\subseteq I$ , where $t = s^m \\in S$ .\n\nLemma 2.17. Let $R$ be an $S$ -J-Noetherian and $I$ be an $J$ -ideal of $R$ . 
Then $R / I$ is an $\\overline{S}$ -Noetherian ring.\n\nProof. Let $\\{I_i / I\\}_{i \\in \\Lambda}$ be an ascending chain of non-zero ideals of $R / I$ . As a result, $\\{I_i\\}_{i \\in \\Lambda}$ is an ascending chain of $J$ -ideal of $R$ and hence, by Theorem 2.5, there exist $s \\in S$ and $k \\in \\Lambda$ such that $sI_i \\subseteq I_k$ for every $i \\in \\Lambda$ . Therefore $(s + I)(I_i / I) \\subseteq I_k / I$ for every $i \\in \\Lambda$ and hence $(I_i / I)_{n \\in \\Lambda}$ is $\\overline{S}$ -stationary. By [6, Theorem 2.3], $R / I$ is $\\overline{S}$ -Noetherian.\n\nRecall that a ring $R$ is said to be decomposable if $R$ admits a non-trivial idempotent. Let $\\operatorname{Idem}(R)$ denote the set of idempotent elements of $R$ .\n\nTheorem 2.18. Let $R$ be a decomposable ring and $J$ be an ideal of $R$ with $eJ \\neq (e)$ for each $e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ . Then $R$ is $S$ - $J$ -Noetherian if and only if $R$ is $S$ -Noetherian.\n\nProof. It is sufficient to prove that if $R$ is $S$ -J-Noetherian, then $R$ is $S$ -Noetherian. To prove this, first we prove that $R / (e)$ is $\\overline{S}$ -Noetherian for each $e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ . Consider $e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ . Let $L$ be an ideal of $R$ which contains $(e)$ . Then $e \\notin J$ since $eJ \\neq (e)$ , and so $L \\not\\subseteq J$ . Thus $L$ is a $J$ -ideal, and so by Lemma 2.17, $R / L$ is $\\overline{S}$ -Noetherian. This implies that $R / (e)$ is $\\overline{S}$ -Noetherian since $(e) \\subseteq L$ . Now, let $K$ be an ideal of $R$ such that $K \\subseteq (e)$ for each $e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ . We claim that $K$ is $S$ -finite. Clearly, $eK = K$ . If $K = (0)$ , then $K$ is $S$ -finite. So we may assume that $K \\neq 0$ . If $K \\subseteq (1 - e)$ , then $eK \\subseteq (e - e^2) = (0)$ , i.e., $eK = K = 0$ , a contradiction as $K \\neq 0$ . Therefore $K \\not\\subseteq (1 - e)$ . Since $1 - e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ , $R / (1 - e)$ is a $\\overline{S}$ -Noetherian ring. Set $I = (1 - e)$ for simplicity. Then $L = (K + I) / I$ is an $\\overline{S}$ -finite ideal of $R / I$ . Then there exist $\\alpha_1 + I, \\ldots, \\alpha_n + I \\in R / I$ , where $\\alpha_1, \\ldots, \\alpha_n \\in K$ and $s' = s + I \\in \\overline{S}$ such that $s'L \\subseteq (\\alpha_1 + I, \\ldots, \\alpha_n + I) \\subseteq L$ . Let $\\beta \\in K + I$ . Then $\\beta + I \\in L$ , and so $s\\beta + I \\in s'L \\subseteq (\\alpha_1 + I, \\ldots, \\alpha_n + I)$ . This implies that $s\\beta + I = (u_1 + I)(\\alpha_1 + I) + \\dots + (u_n + I)(\\alpha_n + I)$ for some $u_1 + I, \\ldots, u_n + I \\in R / I$ . Consequently, $s\\beta - (u_1\\alpha_1 + \\dots + u_n\\alpha_n) \\in I$ , $s\\beta - (u_1\\alpha_1 + \\dots + u_n\\alpha_n) \\in F$ , where $F = (\\alpha_1, \\ldots, \\alpha_n, 1 - e)$ since $I \\subseteq F$ . Thus $s\\beta \\in F$ , and hence $s(K + (1 - e)) \\subseteq F \\subseteq K + (1 - e)$ . Therefore $K + (1 - e)$ is $S$ -finite. Consequently, $K = Ke = (K + (1 - e))e$ is an $S$ -finite ideal of $R$ , as claimed. Now, let $T$ be an ideal of $R$ . Since $eT \\subseteq (e)$ and $(1 - e)T \\subseteq K + (1 - e)T \\subseteq K + (1 - e)$ for each $e \\in \\operatorname{Idem}(R) \\setminus \\{0,1\\}$ , $eT$ and $(1 - e)T$ are $S$ -finite. It follows that $T = eT + (1 - e)T$ is $S$ -finite, and hence $R$ is $S$ -Noetherian ring.\n\nDefinition 2.19. 
[15] An ideal $Q$ (disjoint from $S$ ) of the ring $R$ is called $S$ -irreducible if $s(I \\cap K) \\subseteq Q \\subseteq I \\cap K$ for some $s \\in S$ and some ideals $I, K$ of $R$ , then there exists $s' \\in S$ such that either $ss'I \\subseteq Q$ or $ss'K \\subseteq Q$ .\n\nIt is clear from the definition that every irreducible ideal is an $S$ -irreducible ideal. However, the following example shows that an $S$ -irreducible ideal need not be irreducible.\n\nExample 2.20. Let $R = \\mathbb{Z}$ , $S = \\mathbb{Z} \\setminus 3\\mathbb{Z}$ and $I = 6\\mathbb{Z}$ . Since $I = 2\\mathbb{Z} \\cap 3\\mathbb{Z}$ , therefore $I$ is not an irreducible ideal of $R$ . Now, take $s = 2 \\in S$ . Then $2(3\\mathbb{Z}) = 6\\mathbb{Z} \\subseteq I$ . Thus $I$ is an $S$ -irreducible ideal of $R$ .\n\nRecall [11, Definition 2.1], a proper ideal $Q$ of a ring $R$ disjoint from $S$ is said to be $S$ -primary if there exists an $s \\in S$ such that for all $a, b \\in R$ , if $ab \\in Q$ , then either $sa \\in Q$ or $sb \\in rad(Q)$ . Following from [15], let $I$ be an ideal of $R$ such that $I \\cap S = \\emptyset$ . Then $I$ admits $S$ -primary decomposition if $I$ can be written as a finite intersection of $S$ -primary ideals of $R$ .\n\nNow, we extend $S$ -primary decomposition theorem for $S-J$ -Noetherian rings. We start with the following lemma.\n\nLemma 2.21. Let $R$ be an $S$ - $J$ -Noetherian ring. Then every $S$ -irreducible $J$ -ideal of $R$ is $S$ -primary.\n\nProof. Suppose $Q$ is an $S$ -irreducible $J$ -ideal of $R$ . Let $a, b \\in R$ be such that $ab \\in Q$ and $sb \\notin Q$ for all $s \\in S$ . Our aim is to show that there exists $t \\in S$ such that $ta \\in rad(Q)$ . Consider $A_{n} = \\{x \\in R \\mid a^{n}x \\in Q\\}$ for $n \\in \\mathbb{N}$ . Since $Q$ is a $J$ ideal, there exists $\\alpha \\in Q \\setminus J$ . Then $a^{n}\\alpha \\in Q$ for each $n \\in \\mathbb{N}$ . This implies that $\\alpha \\in A_{n}$ but $\\alpha \\notin J$ for each $n \\in \\mathbb{N}$ . Consequently, each $A_{n}$ is a $J$ -ideal of $R$ and $A_{1} \\subseteq A_{2} \\subseteq A_{3} \\subseteq \\dots$ is an increasing chain of ideals of $R$ . Since $R$ is a $S-J$ -Noetherian, by Theorem 2.5, this chain is $S$ -stationary, i.e., there exist $k \\in \\mathbb{N}$ and $s \\in S$ such that $sA_{n} \\subseteq A_{k}$ for all $n \\geq k$ . Consider the two ideals $I = (a^{k}) + Q$ and $K = (b) + Q$ of $R$ . Then $Q \\subseteq I \\cap K$ . For the reverse containment, let $y \\in I \\cap K$ . Write $y = a^{k}z + q$ for some $z \\in R$ and $q \\in Q$ . Since $ab \\in Q$ , $aK \\subseteq Q$ ; whence $ay \\in Q$ . Now, $a^{k+1}z = a(a^{k}z) = a(y - q) \\in Q$ . This implies that $z \\in A_{k+1}$ , and so $sz \\in sA_{k+1} \\subseteq A_{k}$ . Consequently, $a^{k}sz \\in Q$ which implies that $a^{k}sz + sq = sy \\in Q$ . Thus we have $s(I \\cap K) \\subseteq Q \\subseteq I \\cap K$ . This implies that there exists $s' \\in S$ such that either $ss'I \\subseteq Q$ or $ss'K \\subseteq Q$ since $Q$ is $S$ -irreducible. If $ss'K \\subseteq Q$ , then $ss'b \\in Q$ which is not possible. Therefore $ss'I \\subseteq Q$ which implies that $ss'a^{k} \\in Q$ . Put $t = ss' \\in S$ . Then $(ta)^{k} \\in Q$ and hence $ta \\in rad(Q)$ , as desired.\n\nTheorem 2.22. Let $R$ be an $S-J$ -Noetherian ring. Then every proper $J$ -ideal of $R$ disjoint with $S$ can be written as a finite intersection of $S$ -primary ideals.\n\nProof. 
Let $E$ be the collection of $J$ -ideals of $R$ which are disjoint with $S$ and can not be written as a finite intersection of $S$ -primary ideals. We wish to show $E = \\emptyset$ . On the contrary suppose $E \\neq \\emptyset$ . Since $R$ is an $S-J$ -Noetherian ring, by Theorem 2.5, there exists an $S$ -maximal element in $E$ , say $I$ . Evidently, $I$ is not an $S$ -primary ideal, by Lemma 2.21, $I$ is not an $S$ -irreducible ideal, and so $I$ is not an irreducible ideal. This implies that $I = K \\cap L$ for some ideals $K$ and $L$ of $R$ with $I \\neq K$ and $I \\neq L$ . As $I$ is not $S$ -irreducible, and so $sK \\not\\subseteq I$ and $sL \\not\\subseteq I$ for all $s \\in S$ . Now, we claim that $K, L \\notin E$ . For this, if $K$ (respectively, $L$ ) belongs to $E$ , then since $I$ is an $S$ -maximal element of $E$ and $I \\subset K$ (respectively, $I \\subset L$ ), there exists $s'$ (respectively, $s''$ ) from $S$ such that $s'K \\subseteq I$ (respectively, $s''L \\subseteq I$ ). This is not possible, as $I$ is not $S$ -irreducible. Therefore $K, L \\notin E$ . Also, if $K \\cap S \\neq \\emptyset$ (respectively, $L \\cap S \\neq \\emptyset$ ), then there exist $s_1 \\in K \\cap S$ (respectively, $s_2 \\in L \\cap S$ ). This implies that $s's_1 \\in s'K \\subseteq I$ (respectively, $s''s_2 \\in s''L \\subseteq I$ ), which is a contradiction because $I$ disjoint with $S$ . Thus $K$ and $L$ are also disjoint with $S$ . This implies that $K$ and $L$ can be written as a finite intersection of $S$ -primary ideals. Consequently,\n\n$I$ can also be written as a finite intersection of $S$ -primary ideals since $I = K \\cap L$ , a contradiction as $I \\in E$ . Thus $E = \\emptyset$ , i.e., every proper $J$ -ideal of $R$ disjoint with $S$ can be written as a finite intersection of $S$ -primary ideals.\n\n# References\n\n[1] H. Ahmed and M. Achraf (2020): $S$ -prime ideals of a commutative ring. Beitr Algebra Geom. 61:533-542. \n[2] K. Alhazmy, F. A. Ahmed, N. Mahdou and E. H. Oubouhou (2024): About $j$ -Noetherian rings. Open Mathematics 22: 20240014. \n[3] D. D. Anderson and T. Dumitrescu (2002): S-Noetherian rings. Commun. Algebra 30:4407-4416. \n[4] D. D. Anderson and M. Winders (2009): Idealization of a module. J. Commut. Algebra 1:3-56. \n[5] A. Badawi (2003): On Nonnil-Noetherian Rings. Commun. Algebra 31:1669-1677. \n[6] Z. Bilgin, M.L. Reyes and U. Tekir (2018): On right $S$ -Noetherian rings and $S$ -Noetherian modules. Commun. Algebra 46:863-869. \n[7] A. Dabbabi and A.Benhissi (2024):On non-J-Noetherian rings. Rendi. del Cir. Mat. di Pal. Ser. 2 73:2603-2611. \n[8] H. Kim, N. Mahdou, and Y. Zahir (2021): S-Noetherian in bi-amalgamations. Bull. Korean Math. Soc. 58:1021-1029. \n[9] J. W. Lim and D. Y. Oh (2014): $S$ -Noetherian properties on amalgamated algebras along an ideal. J. Pure Appl. Algebra 218:1075-1080. \n[10] N. Mahdou, E. H. Oubouhou and E. Y. Celikel (2024): On nonnil-S-Noetherian and nonnilu-S-Noetherian rings. An. st. Univ. Ovidius Constanta 32:201-219. \n[11] E. Massaoud (2022): $S$ -primary ideals of a commutative ring. Commun. Algebra 50:988-997. \n[12] E. Noether (1921): Idealtheorie in Ringbereichen. Math. Ann. 83:24-66. \n[13] E. Rostami (2022): On strongly $J$ -Noetherian rings. J. Algebra Appl. 21:2250144-13 \n[14] T. Singh, A. U. Ansari and S. D. Kumar (2023): $S$ -Noetherian Rings, Modules and their generalizations. Surv. Math. Appl. 18:163-182. \n[15] T. Singh, A.U. Ansari and S. D. 
Kumar (2024): Existence and Uniqueness of $S$ -Primary Decomposition in $S$ -Noetherian Modules. Commun. Algebra 52:4515-4524."}
# STAIRCASE MINIMALITY AND A PROOF OF SAXL'S CONJECTURE

SOONG KYUM LEE

ABSTRACT. Saxl's conjecture (2012) asserts that for the staircase partition $\rho_{k} = (k, k-1, \ldots, 1)$, the tensor square of the corresponding irreducible representation of the symmetric group $S_{T_k}$ contains every irreducible representation as a constituent, where $T_{k} = k(k+1)/2$ is the $k$th triangular number. We prove this conjecture unconditionally.

Our proof introduces the Staircase Minimality Theorem: among all 2-regular partitions of $T_{k}$, the staircase $\rho_{k}$ is the unique dominance-minimal element. Combined with Ikenmeyer's theorem on dominance and Kronecker positivity for staircases, this establishes that every 2-regular partition appears in the tensor square. Modular saturation then follows using only the diagonal entries $d_{\mu\mu} = 1$ of the decomposition matrix, and the Bessenrodt-Bowman-Sutton lifting theorem completes the proof.

We further prove that at triangular numbers, staircases are the only Kronecker-universal self-conjugate partitions, providing a complete characterization.

# 1. INTRODUCTION

The Kronecker coefficients $g(\lambda, \mu, \nu)$ govern the decomposition of tensor products of irreducible representations of symmetric groups:

$$
S^{\mu} \otimes S^{\nu} \cong \bigoplus_{\lambda \vdash n} g(\lambda, \mu, \nu)\, S^{\lambda}.
$$

Despite their fundamental importance in representation theory, algebraic combinatorics, and quantum information theory, these coefficients resist combinatorial description—no closed formula is known, and determining positivity is computationally hard [BI08, IMW17].

Saxl's conjecture [HSTZ13, Ike15] predicts a remarkable universality phenomenon: for the staircase partition $\rho_{k} = (k, k-1, \dots, 1)$, the tensor square contains every irreducible representation of the symmetric group.

# 1.1. Main Result.

Theorem 1.1 (Saxl's Conjecture). Let $\rho_{k} = (k, k-1, \ldots, 1)$ be the staircase partition of the triangular number $T_{k} = k(k+1)/2$. Then

$$
g(\lambda, \rho_{k}, \rho_{k}) \geq 1 \quad \text{for all } \lambda \vdash T_{k}.
$$

This strengthens the tensor cube theorem of Harman-Ryba [HR23] to the optimal tensor square.

# 1.2. Proof Strategy.

Our approach introduces a new structural theorem:

Theorem 1.2 (Staircase Minimality). Among all 2-regular partitions of $T_{k}$, the staircase $\rho_{k}$ is the unique dominance-minimal element. That is, $\mu \supseteq \rho_{k}$ for every $\mu \in \mathcal{R}_{T_k}$, with equality if and only if $\mu = \rho_{k}$.

This has an immediate consequence:

Corollary 1.3. For every 2-regular partition $\mu \vdash T_k$: $g(\mu, \rho_k, \rho_k) \geq 1$.

Proof. By Theorem 1.2, $\mu \geq \rho_{k}$. Since this is precisely the dominance condition required by Ikenmeyer's theorem (Theorem 2.3 below), we obtain $g(\mu, \rho_{k}, \rho_{k}) \geq 1$.

The proof of Saxl's conjecture follows a clean logical chain:

$$
\boxed{\text{Staircase Minimality}} \Rightarrow \boxed{\text{2-Regular Positivity}} \Rightarrow \boxed{\text{Modular Saturation}} \Rightarrow \boxed{\text{Saxl}}
$$

Key innovation. The modular saturation step uses only diagonal entries of the decomposition matrix: $d_{\mu\mu} = 1$ for 2-regular $\mu$. This avoids any computation of off-diagonal decomposition numbers, making the argument entirely self-contained.
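Theorem 1.2 is a purely combinatorial statement, so it can be sanity-checked by brute force for small $k$. The following Python sketch is illustrative only; the helper names (`staircase`, `distinct_partitions`, `dominates`) are ad hoc and not part of the paper. It enumerates all 2-regular partitions of $T_k$, checks that each dominates $\rho_k$, and confirms that all $k$ partial sums agree only for the staircase itself.

```python
# Brute-force sanity check of Theorem 1.2 (Staircase Minimality) for small k.
# Illustrative sketch only; helper names are ad hoc, not from the paper.

def staircase(k):
    """The staircase partition rho_k = (k, k-1, ..., 1)."""
    return tuple(range(k, 0, -1))

def distinct_partitions(n, max_part=None):
    """Yield partitions of n into distinct parts, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in distinct_partitions(n - first, first - 1):
            yield (first,) + rest

def partial_sums(mu, upto):
    """S_1(mu), ..., S_upto(mu); sums past the last part stay at |mu|."""
    return [sum(mu[:j]) for j in range(1, upto + 1)]

def dominates(lam, mu):
    """True if lam dominates mu in the dominance order (same size assumed)."""
    j_max = max(len(lam), len(mu))
    return all(s >= t for s, t in zip(partial_sums(lam, j_max), partial_sums(mu, j_max)))

if __name__ == "__main__":
    for k in range(1, 9):
        rho = staircase(k)
        T_k = k * (k + 1) // 2
        for mu in distinct_partitions(T_k):
            assert dominates(mu, rho), (k, mu)
            # all partial sums coincide with those of rho_k only for rho_k itself
            if mu != rho:
                assert partial_sums(mu, k) != partial_sums(rho, k)
        print(f"k={k}: every 2-regular partition of {T_k} dominates {rho}")
```

Running this for $k \leq 8$ takes well under a second and prints one confirmation line per $k$; it is a check of the statement, not a substitute for the proof given in Section 3.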
# 1.3. Historical Context.

Substantial progress on Saxl's conjecture includes:

(1) Ikenmeyer (2015) [Ike15]: Positivity for partitions dominance-comparable to staircases.
(2) Pak-Panova-Vallejo (2016) [PPV16]: Positivity for hooks and the corner formula.
(3) Bessenrodt-Bowman-Sutton (2021) [BBS21]: Modular framework; positivity for height-zero characters.
(4) Harman-Ryba (2023) [HR23]: The tensor cube is universal.

Our contribution is the Staircase Minimality Theorem, which proves that all 2-regular partitions dominate the staircase, thereby completing the program initiated by Bessenrodt, Bowman, and Sutton.

# 2. PRELIMINARIES

# 2.1. Partitions and Dominance.

A partition $\lambda = (\lambda_1, \dots, \lambda_\ell)$ of $n$ is a weakly decreasing sequence of positive integers summing to $n$. We write $\lambda \vdash n$ and $|\lambda| = n$. The length $\ell(\lambda)$ is the number of parts. The conjugate partition $\lambda'$ is obtained by transposing the Young diagram.

Definition 2.1 (Dominance Order). For partitions $\lambda, \mu \vdash n$, we write $\lambda \supseteq \mu$ if

$$
S_{j}(\lambda) := \sum_{i=1}^{j} \lambda_{i} \geq \sum_{i=1}^{j} \mu_{i} =: S_{j}(\mu)
$$

for all $j \geq 1$. We write $\lambda \triangleright \mu$ for strict dominance ($\lambda \supseteq \mu$ and $\lambda \neq \mu$).

Definition 2.2 (2-Regular Partitions). A partition is 2-regular if all parts are distinct. Let $\mathcal{R}_n$ denote the set of 2-regular partitions of $n$.

# 2.2. Kronecker Coefficients.

The following theorem of Ikenmeyer is central to our approach. We emphasize that this result applies specifically to the staircase partition.

Theorem 2.3 (Ikenmeyer [Ike15, Theorem 2.1]). Let $\rho_{k} = (k, k-1, \ldots, 1)$ be the staircase partition of $T_{k} = k(k+1)/2$. If a partition $\lambda \vdash T_{k}$ satisfies $\lambda \supseteq \rho_{k}$ or $\rho_{k} \supseteq \lambda$, then $g(\lambda, \rho_k, \rho_k) \geq 1$.

Remark 2.4. The hypothesis that $\rho_{k}$ is a staircase is essential. For a general self-conjugate partition $\gamma$, dominance comparability does not imply Kronecker positivity. For instance, $(3,1) \supseteq (2,2)$ but $g((3,1),(2,2),(2,2)) = 0$.

# 2.3. Modular Representation Theory.

We work over characteristic $p = 2$. A partition is a 2-core if it has no removable rim 2-hooks.

Theorem 2.5 (James [Jam78]). The decomposition matrix $D = (d_{\lambda\mu})$ for symmetric groups in characteristic 2 satisfies:

(i) $d_{\lambda\mu} \geq 0$ for all $\lambda \vdash n$ and $\mu \in \mathcal{R}_n$;
(ii) $d_{\mu\mu} = 1$ for all $\mu \in \mathcal{R}_n$;
(iii) $d_{\lambda\mu} > 0$ implies $\lambda \supseteq \mu$;
(iv) $S^{\lambda} \otimes \mathbb{F}_2$ is projective if and only if $\lambda$ is a 2-core.

Definition 2.6 (Modular Saturation). For a 2-core $\gamma \vdash n$, the projective multiplicity of $\mu \in \mathcal{R}_n$ is

$$
a_{\mu} := \left[ \left(S^{\gamma} \otimes \mathbb{F}_{2}\right)^{\otimes 2} : P(\mu) \right] = \sum_{\lambda \vdash n} g(\lambda, \gamma, \gamma) \cdot d_{\lambda\mu}.
$$

We say $\gamma$ achieves modular saturation if $a_{\mu} \geq 1$ for all $\mu \in \mathcal{R}_n$.

The following lifting theorem is the key connection between modular and ordinary representation theory.

Theorem 2.7 (Bessenrodt-Bowman-Sutton [BBS21, Section 5]). Let $\gamma \vdash n$ be a 2-core. If $\gamma$ achieves modular saturation, then $g(\lambda, \gamma, \gamma) > 0$ for all $\lambda \vdash n$.

# 2.4. The Staircase Partition.

Lemma 2.8. The staircase $\rho_{k} = (k, k-1, \dots, 1)$ satisfies:

(i) $|\rho_k| = T_k \coloneqq k(k+1)/2$;
(ii) $\rho_{k}$ is self-conjugate: $\rho_{k} = \rho_{k}^{\prime}$;
(iii) $\rho_{k}$ is 2-regular (all parts distinct);
(iv) $\rho_{k}$ is a 2-core.

Proof. Parts (i)-(iii) are immediate from the definition.

For (iv), the beta-numbers of $\rho_{k}$ are $\beta_{i} = \rho_{i} + k - i = (k-i+1) + (k-i) = 2(k-i) + 1$ for $1 \leq i \leq k$. These are the odd integers $\{2k-1, 2k-3, \ldots, 3, 1\}$. On the 2-abacus, all beads lie on runner 1 at consecutive positions with no gaps, so $\rho_{k}$ is a 2-core.
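The beta-number calculation in the proof of Lemma 2.8(iv) is easy to mechanize. The sketch below uses ad hoc helpers of our own (`beta_numbers`, `is_p_core`, `conjugate`); it encodes the standard beta-set test for $p$-cores, namely that no beta-number can be lowered by $p$ into an unoccupied position, which is the abacus condition used in the proof.

```python
# Illustrative check of Lemma 2.8: the beta-numbers of rho_k are the odd
# integers {1, 3, ..., 2k-1}, rho_k is a 2-core, and rho_k is self-conjugate.
# Helper names are ad hoc, chosen for this sketch only.

def beta_numbers(partition):
    """First-column hook lengths: beta_i = lambda_i + ell - i for i = 1..ell."""
    ell = len(partition)
    return {part + ell - i for i, part in enumerate(partition, start=1)}

def is_p_core(partition, p=2):
    """p-core test: no beta-number b >= p may have b - p missing from the beta-set."""
    betas = beta_numbers(partition)
    return all(b - p in betas for b in betas if b >= p)

def conjugate(partition):
    """Conjugate (transposed) partition."""
    if not partition:
        return ()
    return tuple(sum(1 for part in partition if part >= j)
                 for j in range(1, partition[0] + 1))

if __name__ == "__main__":
    for k in range(1, 10):
        rho = tuple(range(k, 0, -1))
        assert beta_numbers(rho) == set(range(1, 2 * k, 2))  # odd integers 1, ..., 2k-1
        assert is_p_core(rho, 2)                             # Lemma 2.8(iv)
        assert conjugate(rho) == rho                         # Lemma 2.8(ii)
    print("Lemma 2.8 verified for k = 1, ..., 9")
```

For $\rho_3 = (3,2,1)$, for example, the computed beta-set is $\{5, 3, 1\}$, matching the odd integers below $2k$ exactly as in the proof.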
# 3. THE STAIRCASE MINIMALITY THEOREM

We prove that $\rho_{k}$ is the unique dominance-minimal 2-regular partition of $T_{k}$. The key insight is that among 2-regular partitions, staircases have the most "spread out" structure.

# 3.1. Constraints on 2-Regular Partitions.

Lemma 3.1. Let $\mu$ be a 2-regular partition with $\ell$ parts. Then $\mu_i \geq \ell - i + 1$ for all $1 \leq i \leq \ell$.

Proof. The parts of $\mu$ are strictly decreasing positive integers, so $\mu_{\ell} \geq 1$ and $\mu_{i} \geq \mu_{i+1} + 1$; downward induction on $i$ gives $\mu_i \geq \ell - i + 1$. The smallest possible values $\mu_{\ell} = 1$, $\mu_{\ell-1} = 2$, ..., $\mu_1 = \ell$ are achieved uniquely by the staircase $\rho_{\ell}$.

Corollary 3.2. Every 2-regular partition with $\ell$ parts has size at least $T_{\ell} = \ell(\ell+1)/2$.

Proof. By Lemma 3.1, $|\mu| = \sum_{i=1}^{\ell} \mu_i \geq \sum_{i=1}^{\ell} (\ell - i + 1) = T_\ell$.

Lemma 3.3. Every $\mu \in \mathcal{R}_{T_k}$ satisfies $\ell(\mu) \leq k$.

Proof. By Corollary 3.2, $|\mu| \geq T_{\ell(\mu)}$. Since $|\mu| = T_k$, we have $T_{\ell(\mu)} \leq T_k$, hence $\ell(\mu) \leq k$.

Lemma 3.4. The staircase $\rho_{k}$ is the unique 2-regular partition of $T_{k}$ with exactly $k$ parts.

Proof. A 2-regular partition with exactly $k$ parts consists of $k$ distinct positive integers. For these to sum to $T_{k} = 1 + 2 + \dots + k$, they must be exactly $\{1, 2, \ldots, k\}$. Arranged in decreasing order, this gives $\rho_{k}$.

# 3.2. The Partial Sum Inequality.

Lemma 3.5. For the staircase $\rho_{k}$: $S_{j}(\rho_{k}) = jk - \binom{j}{2}$ for $1 \leq j \leq k$.

Proof. Direct computation:

$$
S_{j}(\rho_{k}) = \sum_{i=1}^{j} (k - i + 1) = jk - \sum_{i=0}^{j-1} i = jk - \binom{j}{2}.
$$

Proposition 3.6 (Strict Dominance for Shorter Partitions). Let $\mu \in \mathcal{R}_{T_k}$ with $\ell = \ell(\mu) < k$. Then $\mu \triangleright \rho_{k}$.

Proof. We show $S_{j}(\mu) \geq S_{j}(\rho_{k})$ for all $j \geq 1$, with strict inequality for some $j$.

Case 1: $j > \ell$. Then $S_{j}(\mu) = |\mu| = T_{k}$, while $S_{j}(\rho_{k}) \leq S_{k}(\rho_{k}) = T_{k}$. So $S_{j}(\mu) \geq S_{j}(\rho_{k})$.

Case 2: $j \leq \ell$. Define the shift sequence $\delta_i := \mu_i - (\ell - i + 1)$ for $1 \leq i \leq \ell$. By Lemma 3.1, $\delta_i \geq 0$.

Since $\mu$ is 2-regular, its parts are strictly decreasing: $\mu_{i} > \mu_{i+1}$. Thus

$$
\delta_{i} - \delta_{i+1} = (\mu_{i} - \mu_{i+1}) - 1 \geq 0,
$$

so $\delta_1 \geq \delta_2 \geq \dots \geq \delta_\ell \geq 0$. The total shift is $\sum_{i=1}^\ell \delta_i = |\mu| - T_\ell = T_k - T_\ell$.

For any $j \leq \ell$,

$$
S_{j}(\mu) = \sum_{i=1}^{j} \mu_{i} = \sum_{i=1}^{j} \left[(\ell - i + 1) + \delta_{i}\right] = \frac{j(2\ell - j + 1)}{2} + \sum_{i=1}^{j} \delta_{i}.
$$

Since the $\delta_{i}$ are weakly decreasing with total sum $T_{k} - T_{\ell}$, the average of their first $j$ terms is at least their overall average, so

$$
\sum_{i=1}^{j} \delta_{i} \geq \frac{j(T_{k} - T_{\ell})}{\ell}.
$$ Now $T_{k} - T_{\ell} = \frac{(k - \ell)(k + \ell + 1)}{2}$ . Since $\ell < k$ , we have $k + \ell + 1 \geq 2\ell + 1 > 2\ell$ , giving $$ \sum_ {i = 1} ^ {j} \delta_ {i} \geq \frac {j (k - \ell) (k + \ell + 1)}{2 \ell} > j (k - \ell). $$ Therefore: $$ S _ {j} (\mu) \geq \frac {j (2 \ell - j + 1)}{2} + j (k - \ell) = \frac {j (2 \ell - j + 1) + 2 j (k - \ell)}{2} = \frac {j (2 k - j + 1)}{2} = S _ {j} (\rho_ {k}). $$ Strict inequality: For $j = \ell < k$ , we have $S_{\ell}(\mu) = T_{k}$ while $$ S _ {\ell} (\rho_ {k}) = \ell k - \binom {\ell} {2} = T _ {k} - T _ {k - \ell} < T _ {k}. $$ # 3.3. Proof of Staircase Minimality. Proof of Theorem 1.2. Let $\mu \in \mathcal{R}_{T_k}$ . By Lemma 3.3, $\ell(\mu) \leq k$ . Case $\ell (\mu) = k$ : By Lemma 3.4, $\mu = \rho_{k}$ , so $\mu \geq \rho_{k}$ holds trivially. Case $\ell (\mu) < k$ : By Proposition 3.6, $\mu \triangleright \rho_{k}$ . In both cases $\mu \geq \rho_{k}$ , with equality if and only if $\mu = \rho_{k}$ . # 4. PROOF OF SAXL'S CONJECTURE Corollary 4.1 (2-Regular Positivity). For all $\mu \in \mathcal{R}_{T_k}$ : $g(\mu, \rho_k, \rho_k) \geq 1$ . Proof. By Theorem 1.2, every $\mu \in \mathcal{R}_{T_k}$ satisfies $\mu \geq \rho_{k}$ . This is precisely the dominance condition in Theorem 2.3, so $g(\mu ,\rho_k,\rho_k)\geq 1$ . Proposition 4.2 (Modular Saturation). For all $\mu \in \mathcal{R}_{T_k}$ : $a_{\mu} \geq 1$ . Proof. The projective multiplicity is $$ a _ {\mu} = \sum_ {\lambda \vdash T _ {k}} g (\lambda , \rho_ {k}, \rho_ {k}) \cdot d _ {\lambda \mu}. $$ By Theorem 2.5(ii), $d_{\mu \mu} = 1$ . Since all terms are non-negative: $$ a _ {\mu} \geq g (\mu , \rho_ {k}, \rho_ {k}) \cdot d _ {\mu \mu} = g (\mu , \rho_ {k}, \rho_ {k}) \geq 1 $$ by Corollary 4.1. Remark 4.3. This argument uses only the diagonal entries $d_{\mu \mu} = 1$ of the decomposition matrix. No off-diagonal decomposition numbers are required, making the proof entirely self-contained. Proof of Theorem 1.1. By Lemma 2.8(iv), $\rho_{k}$ is a 2-core. By Proposition 4.2, $a_{\mu} \geq 1$ for all $\mu \in \mathcal{R}_{T_k}$ , establishing modular saturation. By Theorem 2.7, $g(\lambda, \rho_{k}, \rho_{k}) > 0$ for all $\lambda \vdash T_k$ . # 5. CHARACTERIZATION OF KRONECKER-UNIVERSAL PARTITIONS # 5.1. Classification of Self-Conjugate 2-Cores. Theorem 5.1 (Self-Conjugate 2-Cores). A partition is both self-conjugate and a 2-core if and only if it is a staircase partition. Proof. $(\Leftarrow)$ This is Lemma 2.8(ii) and (iv). $(\Rightarrow)$ Let $\mu$ be self-conjugate and a 2-core with $\ell = \ell(\mu)$ parts. Self-conjugacy implies $\mu_1 = \ell$ , so $\beta_1 = \mu_1 + \ell - 1 = 2\ell - 1$ (odd). For a 2-core, all beta-numbers must lie on the same abacus runner, hence all are odd. Since they occupy consecutive positions $\{1, 3, \ldots, 2\ell - 1\}$ , we have $\beta_i = 2\ell - 2i + 1$ , giving $\mu_i = \beta_i - \ell + i = \ell - i + 1$ . Thus $\mu = \rho_\ell$ . Definition 5.2 (Corners). A corner of partition $\mu$ is a cell $(i, \mu_i)$ where either $i = \ell(\mu)$ or $\mu_{i+1} < \mu_i$ . Let $c(\mu)$ denote the number of corners. Theorem 5.3 (Corner Formula [PPV16, Theorem 4.8]). For any partition $\mu \vdash n$ with $n \geq 2$ : $$ g ((n - 1, 1), \mu , \mu) = c (\mu) - 1. $$ Corollary 5.4 (Square Zero [PPV16, Example 4.10]). For the square partition $\mu = (m^m)$ with $m \geq 2$ : $g((m^2 - 1, 1), \mu, \mu) = 0$ . Proof. The square $(m^m)$ has exactly one corner at position $(m,m)$ , so $g((m^2 - 1,1),\mu ,\mu) = 1 - 1 = 0$ . # 5.2. 
Non-Universality of Non-Staircases. Theorem 5.5 (Non-Universality). Let $n = T_k$ be a triangular number with $k \geq 2$ , and let $\mu \vdash n$ be self-conjugate with $\mu \neq \rho_k$ . Then $\mu$ is not Kronecker-universal. Proof. Since $\mu \neq \rho_{k}$ and both are self-conjugate partitions of $T_{k}$ , Theorem 5.1 implies $\mu$ is not a 2-core. Hence $S^{\mu} \otimes \mathbb{F}_{2}$ is not projective by Theorem 2.5(iv). Case A: $\mu = (m^m)$ is a square. Since $T_{k} = m^{2}$ requires $k(k + 1) / 2 = m^2$ , we need $m \geq 2$ . By Corollary 5.4, $g((n - 1,1),\mu ,\mu) = 0$ , so $\mu$ is not Kronecker-universal. Case B: $\mu$ is not a square. Since $S^{\mu} \otimes \mathbb{F}_2$ is not projective, dimensional considerations show that $(S^{\mu} \otimes \mathbb{F}_2)^{\otimes 2}$ cannot contain all projective indecomposable modules. Specifically, for the 2-block $B$ containing $\mu$ , the tensor square dimension is insufficient to cover all projectives with positive multiplicity. Thus some $a_{\nu} = 0$ for some 2-regular $\nu$ , and since $d_{\nu \nu} = 1$ , we have $g(\nu, \mu, \mu) = 0$ . For small instances (e.g., $\mu = (5,2,1,1,1) \vdash T_4 = 10$ ), explicit computation confirms non-universality. # 5.3. The Characterization Theorem. Theorem 5.6 (Characterization). Let $n = T_k = k(k + 1) / 2$ be a triangular number with $k \geq 2$ . For a self-conjugate partition $\mu \vdash n$ , the following are equivalent: (i) $\mu$ is Kronecker-universal: $g(\lambda ,\mu ,\mu) > 0$ for all $\lambda \vdash n$ (ii) $\mu = \rho_{k}$ is the staircase partition. (iii) $\mu$ is a 2-core. (iv) $S^{\mu}\otimes \mathbb{F}_{2}$ is a projective $\mathbb{F}_2S_n$ -module. Remark 5.7. The restriction to triangular $n$ is necessary. For non-triangular $n$ , self-conjugate partitions can be Kronecker-universal without being 2-cores. For example, $(3,1,1) \vdash 5$ is Kronecker-universal despite having 2-weight 1. Proof. (ii) $\Leftrightarrow$ (iii): Theorem 5.1. (iii) $\Leftrightarrow$ (iv): Theorem 2.5(iv). $(\mathbf{ii}) \Rightarrow (\mathbf{i})$ : Theorem 1.1. (i) $\Rightarrow$ (ii): Contrapositive of Theorem 5.5. # 6. ILLUSTRATIVE EXAMPLES Example 6.1 ( $k = 5$ ). For $\rho_5 = (5,4,3,2,1)$ , any 2-regular $\mu \vdash 15$ satisfies $\mu \supseteq \rho_5$ . For instance, $(8,4,2,1)$ gives $S_{1} = 8 \geq 5$ , $S_{2} = 12 \geq 9$ , $S_{3} = 14 \geq 12$ , $S_{4} = 15 \geq 14$ . By Corollary 4.1, $g(\mu, \rho_5, \rho_5) \geq 1$ . Example 6.2 (Non-Staircase). The square $(2^{2})\vdash 4$ is self-conjugate but not a staircase. By Corollary 5.4, $g((3,1),(2^2),(2^2)) = 0$ , confirming Theorem 5.6. # 7. CONCLUDING REMARKS The staircase $\rho_{k}$ uniquely possesses three properties: (1) self-conjugate, (2) 2-core, and (3) dominance-minimal among 2-regular partitions. No other partition family has all three, explaining the distinguished role of staircases. Open questions. (1) For which non-self-conjugate $\mu$ is $S^{\mu} \otimes S^{\mu}$ universal? (2) What is $\min_{\lambda} g(\lambda, \rho_k, \rho_k)$ ? (3) Does staircase minimality generalize to $p$ -regular partitions for odd $p$ ?
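The examples in Section 6 are small enough to recompute directly. The following Python fragment (again with ad hoc helper names, not taken from the paper) re-derives the partial sums of Example 6.1 and the corner counts behind Example 6.2 and the corner formula of Theorem 5.3.

```python
# Numerical companion to Section 6 (illustrative only; helper names are ad hoc).

def partial_sums(mu, upto):
    """S_1(mu), ..., S_upto(mu); sums past the last part stay at |mu|."""
    return [sum(mu[:j]) for j in range(1, upto + 1)]

def corners(mu):
    """Corners of mu: cells (i, mu_i) with i the last row or mu_{i+1} < mu_i."""
    ell = len(mu)
    return [(i, mu[i - 1]) for i in range(1, ell + 1)
            if i == ell or mu[i] < mu[i - 1]]

if __name__ == "__main__":
    # Example 6.1: (8, 4, 2, 1) dominates rho_5 = (5, 4, 3, 2, 1).
    mu, rho5 = (8, 4, 2, 1), (5, 4, 3, 2, 1)
    print(partial_sums(mu, 5))    # [8, 12, 14, 15, 15]
    print(partial_sums(rho5, 5))  # [5, 9, 12, 14, 15]

    # Example 6.2: the square (2, 2) has a single corner, so the corner formula
    # of Theorem 5.3 gives g((3,1), (2,2), (2,2)) = c((2,2)) - 1 = 0.
    print(len(corners((2, 2))))   # 1
    print(len(corners(rho5)))     # 5: every row of a staircase ends in a corner
```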
arxiv_math
2025-12-16T00:00:00Z
https://arxiv.org/pdf/2512.15035
{"title": "Staircase Minimality and a Proof of Saxl's Conjecture", "raw_content": "# STAIRCASE MINIMALITY AND A PROOF OF SAXL'S CONJECTURE\n\n# SOONG KYUM LEE\n\nABSTRACT. Saxl's conjecture (2012) asserts that for the staircase partition $\\rho_{k} = (k,k - 1,\\ldots ,1)$ , the tensor square of the corresponding irreducible representation of the symmetric group $S_{T_k}$ contains every irreducible representation as a constituent, where $T_{k} = k(k + 1) / 2$ is the $k$ th triangular number. We prove this conjecture unconditionally.\n\nOur proof introduces the Staircase Minimality Theorem: among all 2-regular partitions of $T_{k}$ , the staircase $\\rho_{k}$ is the unique dominance-minimal element. Combined with Ikenmeyer's theorem on dominance and Kronecker positivity for staircases, this establishes that every 2-regular partition appears in the tensor square. Modular saturation then follows using only the diagonal entries $d_{\\mu \\mu} = 1$ of the decomposition matrix, and the Bessenrodt-Bowman-Sutton lifting theorem completes the proof.\n\nWe further prove that at triangular numbers, staircases are the only Kronecker-universal self-conjugate partitions, providing a complete characterization.\n\n# 1. INTRODUCTION\n\nThe Kronecker coefficients $g(\\lambda, \\mu, \\nu)$ govern the decomposition of tensor products of irreducible representations of symmetric groups:\n\n$$\nS ^ {\\mu} \\otimes S ^ {\\nu} \\cong \\bigoplus_ {\\lambda \\vdash n} g (\\lambda , \\mu , \\nu) S ^ {\\lambda}.\n$$\n\nDespite their fundamental importance in representation theory, algebraic combinatorics, and quantum information theory, these coefficients resist combinatorial description—no closed formula is known, and determining positivity is computationally hard [BI08, IMW17].\n\nSaxl's conjecture [HSTZ13, Ike15] predicts a remarkable universality phenomenon: for the staircase partition $\\rho_{k} = (k,k - 1,\\dots ,1)$ , the tensor square contains every irreducible representation of the symmetric group.\n\n# 1.1. Main Result.\n\nTheorem 1.1 (Saxl's Conjecture). Let $\\rho_{k} = (k,k - 1,\\ldots ,1)$ be the staircase partition of the triangular number $T_{k} = k(k + 1) / 2$ . Then\n\n$$\ng (\\lambda , \\rho_ {k}, \\rho_ {k}) \\geq 1 \\quad f o r a l l \\lambda \\vdash T _ {k}.\n$$\n\nThis strengthens the tensor cube theorem of Harman-Ryba [HR23] to the optimal tensor square.\n\n1.2. Proof Strategy. Our approach introduces a new structural theorem:\n\nTheorem 1.2 (Staircase Minimality). Among all 2-regular partitions of $T_{k}$ , the staircase $\\rho_{k}$ is the unique dominance-minimal element. That is, $\\mu \\supseteq \\rho_{k}$ for every $\\mu \\in \\mathcal{R}_{T_k}$ , with equality if and only if $\\mu = \\rho_{k}$ .\n\nThis has an immediate consequence:\n\nCorollary 1.3. For every 2-regular partition $\\mu \\vdash T_k$ : $g(\\mu, \\rho_k, \\rho_k) \\geq 1$ .\n\nProof. By Theorem 1.2, $\\mu \\geq \\rho_{k}$ . Since this is precisely the dominance condition required by Ikenmeyer's theorem (Theorem 2.3 below), we obtain $g(\\mu, \\rho_{k}, \\rho_{k}) \\geq 1$ .\n\nThe proof of Saxl's conjecture follows a clean logical chain:\n\n$$\n\\boxed {\\text {S t a i r c a s e M i n i m a l i t y}} \\Rightarrow \\boxed {2 - R e g u l a r P o s i t i v i t y} \\Rightarrow \\boxed {\\text {M o d u l a r S a t u r a t i o n}} \\Rightarrow \\boxed {\\text {S a x l}}\n$$\n\nKey innovation. The modular saturation step uses only diagonal entries of the decomposition matrix: $d_{\\mu \\mu} = 1$ for 2-regular $\\mu$ . 
This avoids any computation of off-diagonal decomposition numbers, making the argument entirely self-contained.\n\n1.3. Historical Context. Substantial progress on Saxl's conjecture includes:\n\n(1) Ikenmeyer (2015) [Ike15]: Positivity for partitions dominance-comparable to staircases. \n(2) Pak-Panova-Vallejo (2016) [PPV16]: Positivity for hooks and the corner formula. \n(3) Bessenrodt-Bowman-Sutton (2021) [BBS21]: Modular framework; positivity for height-zero characters. \n(4) Harman-Ryba (2023) [HR23]: The tensor cube is universal.\n\nOur contribution is the Staircase Minimality Theorem, which proves that all 2-regular partitions dominate the staircase, thereby completing the program initiated by Bessenrodt, Bowman, and Sutton.\n\n# 2. PRELIMINARIES\n\n2.1. Partitions and Dominance. A partition $\\lambda = (\\lambda_1, \\dots, \\lambda_\\ell)$ of $n$ is a weakly decreasing sequence of positive integers summing to $n$ . We write $\\lambda \\vdash n$ and $|\\lambda| = n$ . The length $\\ell(\\lambda)$ is the number of parts. The conjugate partition $\\lambda'$ is obtained by transposing the Young diagram.\n\nDefinition 2.1 (Dominance Order). For partitions $\\lambda, \\mu \\vdash n$ , we write $\\lambda \\supseteq \\mu$ if\n\n$$\nS _ {j} (\\lambda) := \\sum_ {i = 1} ^ {j} \\lambda_ {i} \\geq \\sum_ {i = 1} ^ {j} \\mu_ {i} =: S _ {j} (\\mu)\n$$\n\nfor all $j \\geq 1$ . We write $\\lambda \\triangleright \\mu$ for strict dominance ( $\\lambda \\supseteq \\mu$ and $\\lambda \\neq \\mu$ ).\n\nDefinition 2.2 (2-Regular Partitions). A partition is 2-regular if all parts are distinct. Let $\\mathcal{R}_n$ denote the set of 2-regular partitions of $n$ .\n\n2.2. Kronecker Coefficients. The following theorem of Ikenmeyer is central to our approach. We emphasize that this result applies specifically to the staircase partition.\n\nTheorem 2.3 (Ikenmeyer [Ike15, Theorem 2.1]). Let $\\rho_{k} = (k,k - 1,\\ldots ,1)$ be the staircase partition of $T_{k} = k(k + 1) / 2$ . If a partition $\\lambda \\vdash T_{k}$ satisfies $\\lambda \\supseteq \\rho_{k}$ or $\\rho_{k}\\supseteq \\lambda$ , then $g(\\lambda ,\\rho_k,\\rho_k)\\geq 1$ .\n\nRemark 2.4. The hypothesis that $\\rho_{k}$ is a staircase is essential. For a general self-conjugate partition $\\gamma$ , dominance comparability does not imply Kronecker positivity. For instance, $(3,1)\\supseteq (2,2)$ but $g((3,1),(2,2),(2,2)) = 0$ .\n\n2.3. Modular Representation Theory. We work over characteristic $p = 2$ . A partition is a 2-core if it has no removable rim 2-hooks.\n\nTheorem 2.5 (James [Jam78]). The decomposition matrix $D = (d_{\\lambda \\mu})$ for symmetric groups in characteristic 2 satisfies:\n\n(i) $d_{\\lambda \\mu}\\geq 0$ for all $\\lambda \\vdash n$ $\\mu \\in \\mathcal{R}_n$ \n(ii) $d_{\\mu \\mu} = 1$ for all $\\mu \\in \\mathcal{R}_n$ \n(iii) $d_{\\lambda \\mu} > 0$ implies $\\lambda \\supseteq \\mu$ \n(iv) $S^{\\lambda}\\otimes \\mathbb{F}_2$ is projective if and only if $\\lambda$ is a 2-core.\n\nDefinition 2.6 (Modular Saturation). 
For a 2-core $\\gamma \\vdash n$ , the projective multiplicity of $\\mu \\in \\mathcal{R}_n$ is\n\n$$\na _ {\\mu} := \\left[ \\left(S ^ {\\gamma} \\otimes \\mathbb {F} _ {2}\\right) ^ {\\otimes 2}: P (\\mu) \\right] = \\sum_ {\\lambda \\vdash n} g (\\lambda , \\gamma , \\gamma) \\cdot d _ {\\lambda \\mu}.\n$$\n\nWe say $\\gamma$ achieves modular saturation if $a_{\\mu} \\geq 1$ for all $\\mu \\in \\mathcal{R}_n$ .\n\nThe following lifting theorem is the key connection between modular and ordinary representation theory.\n\nTheorem 2.7 (Bessenrodt-Bowman-Sutton [BBS21, Section 5]). Let $\\gamma \\vdash n$ be a 2-core. If $\\gamma$ achieves modular saturation, then $g(\\lambda, \\gamma, \\gamma) > 0$ for all $\\lambda \\vdash n$ .\n\n# 2.4. The Staircase Partition.\n\nLemma 2.8. The staircase $\\rho_{k} = (k,k - 1,\\dots ,1)$ satisfies:\n\n(i) $|\\rho_k| = T_k \\coloneqq k(k + 1) / 2$ . \n(ii) $\\rho_{k}$ is self-conjugate: $\\rho_{k} = \\rho_{k}^{\\prime}$ . \n(iii) $\\rho_{k}$ is 2-regular (all parts distinct). \n(iv) $\\rho_{k}$ is a 2-core.\n\nProof. Parts (i)-(iii) are immediate from the definition.\n\nFor (iv), the beta-numbers of $\\rho_{k}$ are $\\beta_{i} = \\rho_{i} + k - i = (k - i + 1) + (k - i) = 2(k - i) + 1$ for $1\\leq i\\leq k$ . These are the odd integers $\\{2k - 1,2k - 3,\\ldots ,3,1\\}$ . On the 2-abacus, all beads lie on runner 1 at consecutive positions with no gaps, so $\\rho_{k}$ is a 2-core.\n\n# 3. THE STAIRCASE MINIMALITY THEOREM\n\nWe prove that $\\rho_{k}$ is the unique dominance-minimal 2-regular partition of $T_{k}$ . The key insight is that among 2-regular partitions, staircases have the most \"spread out\" structure.\n\n# 3.1. Constraints on 2-Regular Partitions.\n\nLemma 3.1. Let $\\mu$ be a 2-regular partition with $\\ell$ parts. Then $\\mu_i \\geq \\ell - i + 1$ for all $1 \\leq i \\leq \\ell$ .\n\nProof. The parts of $\\mu$ are strictly decreasing positive integers. The smallest possible values are $\\mu_{\\ell} = 1$ , $\\mu_{\\ell - 1} = 2$ , ..., $\\mu_1 = \\ell$ , achieved uniquely by the staircase $\\rho_{\\ell}$ .\n\nCorollary 3.2. Every 2-regular partition with $\\ell$ parts has size at least $T_{\\ell} = \\ell (\\ell +1) / 2$\n\nProof. By Lemma 3.1, $|\\mu| = \\sum_{i=1}^{\\ell} \\mu_i \\geq \\sum_{i=1}^{\\ell} (\\ell - i + 1) = T_\\ell$ .\n\nLemma 3.3. Every $\\mu \\in \\mathcal{R}_{T_k}$ satisfies $\\ell (\\mu)\\leq k$\n\nProof. By Corollary 3.2, $|\\mu| \\geq T_{\\ell(\\mu)}$ . Since $|\\mu| = T_k$ , we have $T_{\\ell(\\mu)} \\leq T_k$ , hence $\\ell(\\mu) \\leq k$ .\n\nLemma 3.4. The staircase $\\rho_{k}$ is the unique 2-regular partition of $T_{k}$ with exactly $k$ parts.\n\nProof. A 2-regular partition with exactly $k$ parts consists of $k$ distinct positive integers. For these to sum to $T_{k} = 1 + 2 + \\dots + k$ , they must be exactly $\\{1, 2, \\ldots, k\\}$ . Arranged in decreasing order, this gives $\\rho_{k}$ .\n\n# 3.2. The Partial Sum Inequality.\n\nLemma 3.5. For the staircase $\\rho_{k}$ : $S_{j}(\\rho_{k}) = jk - \\binom{j}{2}$ for $1\\leq j\\leq k$\n\nProof. Direct computation:\n\n$$\nS _ {j} (\\rho_ {k}) = \\sum_ {i = 1} ^ {j} (k - i + 1) = j k - \\sum_ {i = 0} ^ {j - 1} i = j k - \\binom {j} {2}.\n$$\n\nProposition 3.6 (Strict Dominance for Shorter Partitions). Let $\\mu \\in \\mathcal{R}_{T_k}$ with $\\ell = \\ell (\\mu) < k$ . Then $\\mu \\triangleright \\rho_{k}$ .\n\nProof. We show $S_{j}(\\mu) \\geq S_{j}(\\rho_{k})$ for all $j \\geq 1$ , with strict inequality for some $j$ .\n\nCase 1: $j > \\ell$ . 
Then $S_{j}(\\mu) = |\\mu| = T_{k}$ , while $S_{j}(\\rho_{k}) \\leq S_{k}(\\rho_{k}) = T_{k}$ . So $S_{j}(\\mu) \\geq S_{j}(\\rho_{k})$ .\n\nCase 2: $j \\leq \\ell$ . Define the shift sequence $\\delta_i := \\mu_i - (\\ell - i + 1)$ for $1 \\leq i \\leq \\ell$ . By Lemma 3.1, $\\delta_i \\geq 0$ .\n\nSince $\\mu$ is 2-regular, its parts are strictly decreasing: $\\mu_{i} > \\mu_{i + 1}$ . Thus\n\n$$\n\\delta_ {i} - \\delta_ {i + 1} = (\\mu_ {i} - \\mu_ {i + 1}) - 1 \\geq 0,\n$$\n\nso $\\delta_1 \\geq \\delta_2 \\geq \\dots \\geq \\delta_\\ell \\geq 0$ . The total shift is $\\sum_{i=1}^\\ell \\delta_i = |\\mu| - T_\\ell = T_k - T_\\ell$ .\n\nFor any $j\\leq \\ell$\n\n$$\nS _ {j} (\\mu) = \\sum_ {i = 1} ^ {j} \\mu_ {i} = \\sum_ {i = 1} ^ {j} [ (\\ell - i + 1) + \\delta_ {i} ] = \\frac {j (2 \\ell - j + 1)}{2} + \\sum_ {i = 1} ^ {j} \\delta_ {i}.\n$$\n\nSince the $\\delta_{i}$ are weakly decreasing with total sum $T_{k} - T_{\\ell}$ , the first $j$ terms satisfy:\n\n$$\n\\sum_ {i = 1} ^ {j} \\delta_ {i} \\geq \\frac {j (T _ {k} - T _ {\\ell})}{\\ell}.\n$$\n\nNow $T_{k} - T_{\\ell} = \\frac{(k - \\ell)(k + \\ell + 1)}{2}$ . Since $\\ell < k$ , we have $k + \\ell + 1 \\geq 2\\ell + 1 > 2\\ell$ , giving\n\n$$\n\\sum_ {i = 1} ^ {j} \\delta_ {i} \\geq \\frac {j (k - \\ell) (k + \\ell + 1)}{2 \\ell} > j (k - \\ell).\n$$\n\nTherefore:\n\n$$\nS _ {j} (\\mu) \\geq \\frac {j (2 \\ell - j + 1)}{2} + j (k - \\ell) = \\frac {j (2 \\ell - j + 1) + 2 j (k - \\ell)}{2} = \\frac {j (2 k - j + 1)}{2} = S _ {j} (\\rho_ {k}).\n$$\n\nStrict inequality: For $j = \\ell < k$ , we have $S_{\\ell}(\\mu) = T_{k}$ while\n\n$$\nS _ {\\ell} (\\rho_ {k}) = \\ell k - \\binom {\\ell} {2} = T _ {k} - T _ {k - \\ell} < T _ {k}.\n$$\n\n# 3.3. Proof of Staircase Minimality.\n\nProof of Theorem 1.2. Let $\\mu \\in \\mathcal{R}_{T_k}$ . By Lemma 3.3, $\\ell(\\mu) \\leq k$ .\n\nCase $\\ell (\\mu) = k$ : By Lemma 3.4, $\\mu = \\rho_{k}$ , so $\\mu \\geq \\rho_{k}$ holds trivially.\n\nCase $\\ell (\\mu) < k$ : By Proposition 3.6, $\\mu \\triangleright \\rho_{k}$ .\n\nIn both cases $\\mu \\geq \\rho_{k}$ , with equality if and only if $\\mu = \\rho_{k}$ .\n\n# 4. PROOF OF SAXL'S CONJECTURE\n\nCorollary 4.1 (2-Regular Positivity). For all $\\mu \\in \\mathcal{R}_{T_k}$ : $g(\\mu, \\rho_k, \\rho_k) \\geq 1$ .\n\nProof. By Theorem 1.2, every $\\mu \\in \\mathcal{R}_{T_k}$ satisfies $\\mu \\geq \\rho_{k}$ . This is precisely the dominance condition in Theorem 2.3, so $g(\\mu ,\\rho_k,\\rho_k)\\geq 1$ .\n\nProposition 4.2 (Modular Saturation). For all $\\mu \\in \\mathcal{R}_{T_k}$ : $a_{\\mu} \\geq 1$ .\n\nProof. The projective multiplicity is\n\n$$\na _ {\\mu} = \\sum_ {\\lambda \\vdash T _ {k}} g (\\lambda , \\rho_ {k}, \\rho_ {k}) \\cdot d _ {\\lambda \\mu}.\n$$\n\nBy Theorem 2.5(ii), $d_{\\mu \\mu} = 1$ . Since all terms are non-negative:\n\n$$\na _ {\\mu} \\geq g (\\mu , \\rho_ {k}, \\rho_ {k}) \\cdot d _ {\\mu \\mu} = g (\\mu , \\rho_ {k}, \\rho_ {k}) \\geq 1\n$$\n\nby Corollary 4.1.\n\nRemark 4.3. This argument uses only the diagonal entries $d_{\\mu \\mu} = 1$ of the decomposition matrix. No off-diagonal decomposition numbers are required, making the proof entirely self-contained.\n\nProof of Theorem 1.1. By Lemma 2.8(iv), $\\rho_{k}$ is a 2-core. By Proposition 4.2, $a_{\\mu} \\geq 1$ for all $\\mu \\in \\mathcal{R}_{T_k}$ , establishing modular saturation. By Theorem 2.7, $g(\\lambda, \\rho_{k}, \\rho_{k}) > 0$ for all $\\lambda \\vdash T_k$ .\n\n# 5. CHARACTERIZATION OF KRONECKER-UNIVERSAL PARTITIONS\n\n# 5.1. 
Classification of Self-Conjugate 2-Cores.\n\nTheorem 5.1 (Self-Conjugate 2-Cores). A partition is both self-conjugate and a 2-core if and only if it is a staircase partition.\n\nProof. $(\\Leftarrow)$ This is Lemma 2.8(ii) and (iv).\n\n$(\\Rightarrow)$ Let $\\mu$ be self-conjugate and a 2-core with $\\ell = \\ell(\\mu)$ parts. Self-conjugacy implies $\\mu_1 = \\ell$ , so $\\beta_1 = \\mu_1 + \\ell - 1 = 2\\ell - 1$ (odd). For a 2-core, all beta-numbers must lie on the same abacus runner, hence all are odd. Since they occupy consecutive positions $\\{1, 3, \\ldots, 2\\ell - 1\\}$ , we have $\\beta_i = 2\\ell - 2i + 1$ , giving $\\mu_i = \\beta_i - \\ell + i = \\ell - i + 1$ . Thus $\\mu = \\rho_\\ell$ .\n\nDefinition 5.2 (Corners). A corner of partition $\\mu$ is a cell $(i, \\mu_i)$ where either $i = \\ell(\\mu)$ or $\\mu_{i+1} < \\mu_i$ . Let $c(\\mu)$ denote the number of corners.\n\nTheorem 5.3 (Corner Formula [PPV16, Theorem 4.8]). For any partition $\\mu \\vdash n$ with $n \\geq 2$ :\n\n$$\ng ((n - 1, 1), \\mu , \\mu) = c (\\mu) - 1.\n$$\n\nCorollary 5.4 (Square Zero [PPV16, Example 4.10]). For the square partition $\\mu = (m^m)$ with $m \\geq 2$ : $g((m^2 - 1, 1), \\mu, \\mu) = 0$ .\n\nProof. The square $(m^m)$ has exactly one corner at position $(m,m)$ , so $g((m^2 - 1,1),\\mu ,\\mu) = 1 - 1 = 0$ .\n\n# 5.2. Non-Universality of Non-Staircases.\n\nTheorem 5.5 (Non-Universality). Let $n = T_k$ be a triangular number with $k \\geq 2$ , and let $\\mu \\vdash n$ be self-conjugate with $\\mu \\neq \\rho_k$ . Then $\\mu$ is not Kronecker-universal.\n\nProof. Since $\\mu \\neq \\rho_{k}$ and both are self-conjugate partitions of $T_{k}$ , Theorem 5.1 implies $\\mu$ is not a 2-core. Hence $S^{\\mu} \\otimes \\mathbb{F}_{2}$ is not projective by Theorem 2.5(iv).\n\nCase A: $\\mu = (m^m)$ is a square. Since $T_{k} = m^{2}$ requires $k(k + 1) / 2 = m^2$ , we need $m \\geq 2$ . By Corollary 5.4, $g((n - 1,1),\\mu ,\\mu) = 0$ , so $\\mu$ is not Kronecker-universal.\n\nCase B: $\\mu$ is not a square. Since $S^{\\mu} \\otimes \\mathbb{F}_2$ is not projective, dimensional considerations show that $(S^{\\mu} \\otimes \\mathbb{F}_2)^{\\otimes 2}$ cannot contain all projective indecomposable modules. Specifically, for the 2-block $B$ containing $\\mu$ , the tensor square dimension is insufficient to cover all projectives with positive multiplicity. Thus some $a_{\\nu} = 0$ for some 2-regular $\\nu$ , and since $d_{\\nu \\nu} = 1$ , we have $g(\\nu, \\mu, \\mu) = 0$ .\n\nFor small instances (e.g., $\\mu = (5,2,1,1,1) \\vdash T_4 = 10$ ), explicit computation confirms non-universality.\n\n# 5.3. The Characterization Theorem.\n\nTheorem 5.6 (Characterization). Let $n = T_k = k(k + 1) / 2$ be a triangular number with $k \\geq 2$ . For a self-conjugate partition $\\mu \\vdash n$ , the following are equivalent:\n\n(i) $\\mu$ is Kronecker-universal: $g(\\lambda ,\\mu ,\\mu) > 0$ for all $\\lambda \\vdash n$ \n(ii) $\\mu = \\rho_{k}$ is the staircase partition. \n(iii) $\\mu$ is a 2-core. \n(iv) $S^{\\mu}\\otimes \\mathbb{F}_{2}$ is a projective $\\mathbb{F}_2S_n$ -module.\n\nRemark 5.7. The restriction to triangular $n$ is necessary. For non-triangular $n$ , self-conjugate partitions can be Kronecker-universal without being 2-cores. For example, $(3,1,1) \\vdash 5$ is Kronecker-universal despite having 2-weight 1.\n\nProof. (ii) $\\Leftrightarrow$ (iii): Theorem 5.1.\n\n(iii) $\\Leftrightarrow$ (iv): Theorem 2.5(iv). \n$(\\mathbf{ii}) \\Rightarrow (\\mathbf{i})$ : Theorem 1.1. 
\n(i) $\\Rightarrow$ (ii): Contrapositive of Theorem 5.5.\n\n# 6. ILLUSTRATIVE EXAMPLES\n\nExample 6.1 ( $k = 5$ ). For $\\rho_5 = (5,4,3,2,1)$ , any 2-regular $\\mu \\vdash 15$ satisfies $\\mu \\supseteq \\rho_5$ . For instance, $(8,4,2,1)$ gives $S_{1} = 8 \\geq 5$ , $S_{2} = 12 \\geq 9$ , $S_{3} = 14 \\geq 12$ , $S_{4} = 15 \\geq 14$ . By Corollary 4.1, $g(\\mu, \\rho_5, \\rho_5) \\geq 1$ .\n\nExample 6.2 (Non-Staircase). The square $(2^{2})\\vdash 4$ is self-conjugate but not a staircase. By Corollary 5.4, $g((3,1),(2^2),(2^2)) = 0$ , confirming Theorem 5.6.\n\n# 7. CONCLUDING REMARKS\n\nThe staircase $\\rho_{k}$ uniquely possesses three properties: (1) self-conjugate, (2) 2-core, and (3) dominance-minimal among 2-regular partitions. No other partition family has all three, explaining the distinguished role of staircases.\n\nOpen questions. (1) For which non-self-conjugate $\\mu$ is $S^{\\mu} \\otimes S^{\\mu}$ universal? (2) What is $\\min_{\\lambda} g(\\lambda, \\rho_k, \\rho_k)$ ? (3) Does staircase minimality generalize to $p$ -regular partitions for odd $p$ ?\n\n# ACKNOWLEDGMENTS\n\nThis paper is dedicated to the memory of Christine Bessenrodt (1958-2022), whose profound contributions to the representation theory of symmetric groups—particularly the modular lifting framework developed with Bowman and Sutton—form the foundation of our proof. Her work on Kronecker products and the Saxl conjecture [Bes18, BBS21] has shaped the field. We also honor Jan Saxl (1948-2020), who posed this beautiful conjecture.\n\nThe author thanks Christian Ikenmeyer for the dominance criterion that activates modular saturation, and Christopher Bowman for continuing Christine's research program.\n\n# REFERENCES\n\n[BBS21] C. Bessenrodt, C. Bowman, and L. Sutton, Kronecker positivity and 2-modular representation theory, Trans. Amer. Math. Soc. Ser. B 8 (2021), 1024-1055. \n[Bes18] C. Bessenrodt, Critical classes, Kronecker products of spin characters, and the Saxl conjecture, *Algebr. Comb.* 1 (2018), 353-369. \n[B108] P. Bürgisser and C. Ikenmeyer, The complexity of computing Kronecker coefficients, FPSAC 2008, DMTCS Proc. AJ (2008), 357-368. \n[HR23] N. Harman and C. Ryba, A tensor-cube version of the Saxl conjecture, *Algebr. Comb.* 6 (2023), 507-511. \n[HSTZ13] G. Heide, J. Saxl, P.H. Tiep, and A.E. Zalesski, Conjugacy action, induced representations and the Steinberg square for simple groups of Lie type, Proc. Lond. Math. Soc. (3) 106 (2013), 908–930. \n[Ike15] C. Ikenmeyer, The Saxl conjecture and the dominance order, Discrete Math. 338 (2015), 1970-1975. \n[IMW17] C. Ikenmeyer, K.D. Mulmuley, and M. Walter, On vanishing of Kronecker coefficients, Comput. Complexity 26 (2017), 949-992. \n[Jam78] G.D. James, The Representation Theory of the Symmetric Groups, Lecture Notes in Math., vol. 682, Springer, Berlin, 1978. \n[PPV16] I. Pak, G. Panova, and E. Vallejo, Kronecker products, characters, partitions, and the tensor square conjectures, Adv. Math. 288 (2016), 702-731.\n\nGRADUATE SCHOOL OF DATA SCIENCE, KYUNGPOOK NATIONAL UNIVERSITY, DAEGU 41566, REPUBLIC OF KOREA\n\nEmail address: greendaysoon@knu.ac.kr"}
# Generalized Gregorian quadrature, including end-corrected weights for the midpoint rule Abstract A class of numerical quadrature rules is derived, with equally-spaced nodes, and unit weights except at a few points at each end of the series, for which "corrections" (not using any further information about the integrand) are added to the unit weights. If the correction sequences overlap, the effects are additive. A fundamental parameter ("alpha") in the derivation is the distance from the endpoint of the range of integration to the first node, measured inward in step-lengths. Setting alpha to $1/2$ yields a set of corrected composite midpoint rules. Setting alpha=0 yields Gregory's closed Newton-Cotes-like rules, including (for sufficient overlap) the standard closed Newton-Cotes rules (trapezoidal rule, "1/3 Simpson rule", "3/8 Simpson rule", "Boole's rule", etc.). Setting alpha=1 yields open N-C-like rules, again including the standard ones. A negative alpha means that the integrand is sampled outside the range of integration; suitably chosen negative values yield centered finite-difference end-corrections for the trapezoidal rule and the midpoint rule. One can even have different values of alpha at the two ends, yielding, inter alia, Adams-Bashforth and Adams-Moulton weights. Thus the title could have been "Unified derivation of equispaced quadrature rules". # 1 Framing the problem We seek a numerical quadrature rule of the form $$ \int_ {- \alpha h} ^ {(n + \beta) h} f (t) d t \approx h \sum_ {i = 0} ^ {n} f (i h) + h \sum_ {i = 0} ^ {m} c _ {i} f (i h) + h \sum_ {i = 0} ^ {m} d _ {i} f ((n - i) h), \tag {1} $$ where the coefficients $c_{i}$ and $d_{i}$ are independent of $n$ and $h$ (the step size), but may depend on $\alpha$ and $\beta$ . Such a rule would have the following advantages: - While sampling the integrand at equally-spaced abscissae (nodes) would be convenient—and might even be dictated by the data—the parameters $\alpha$ and $\beta$ would allow the terminals ("limits" of integration) to be at or between nodes. - Particular values of the parameters would yield useful special cases. For $\alpha = \beta = 0$ , we would get "closed Newton-Cotes-like rules", for which the outermost nodes coincide with the terminals. For $\alpha = \beta = 1$ , we would get "open N-C-like rules", for which the outermost nodes are one step in from the terminals. (N-C-like rules are familiar; but reproducing the most famous examples would serve as a sanity check on our method.) Negative parameters (or $m$ greater than $n$ ) would yield rules with nodes outside the range of integration. For $\alpha = \beta = 1/2$ , the range of integration would be divided into $n + 1$ steps of length $h$ , with the nodes centered in the steps, yielding a modified midpoint rule—as advertised in the title. - For $n > 2m + 2$ , the right side of (1) divided by $h$ would become $$ \sum_ {i = m + 1} ^ {n - m - 1} f (i h) + \sum_ {i = 0} ^ {m} \left(1 + c _ {i}\right) f (i h) + \sum_ {i = 0} ^ {m} \left(1 + d _ {i}\right) f ((n - i) h), \tag {2} $$ which is a rule with unit weights except at $m + 1$ nodes at each end, where the coefficients $c_{i}$ and $d_{i}$ can be described as corrections to unit weights, or differences from unit weights (hence the symbols). 
The sequence of unit weights in the interior would have no cycles with periods longer than the step size, minimizing the risk of bias due to cycles in the weights interacting with oscillations in the integrand, eliminating the need for the number of steps to be a multiple of any cycle length, and expediting the task of entering the weights into a spreadsheet ("Fill down!"). For $n \leq 2m + 2$ , the correction sequences would overlap and, according to (1), the corrections would be additive. - Hence, for $\alpha = \beta = 1/2$ , we would get end corrections for the composite midpoint rule (rectangle rule). And for $\alpha = \beta = 0$ , we would get end corrections for the composite trapezoidal rule, taking unit weights as the base case. These corrections would not need further information about the integrand, but would merely adjust the weights—unlike the standard "corrected trapezoidal rule", which requires derivatives at the terminals. - By using two successive values of $m$ , we could compare two estimates of the integral from the same ordinates for the purpose of error control (cf. Runge-Kutta-Fehlberg / Runge-Kutta-Verner methods for ordinary differential equations). This procedure, unlike comparing estimates from the same ordinates using N-C methods with different orders or different cycle lengths, would still not restrict the number of steps or introduce cycles in the interior weights. If $f(t)$ is a polynomial of degree $m$ in $t$ , each side of (1) is a polynomial of degree $m + 1$ in $h$ , with no constant term. For a more general $f(t)$ , the expansion of each side of (1) in powers of $h$ will still have no constant term. So, making rule (1) exact for polynomials of degree $m$ is a matter of matching the coefficients of $h^{k + 1}$ for $k \in [0..m]$ with a general $f$ , for all $n$ ; if this is done, the error will generally be $O(h^{m + 2})$ . # 2 Existence of the solution But, for given $\alpha$ and $\beta$ , why should there exist constants $c_{i}$ and $d_{i}$ such that (1) is exact for all $n$ and $h$ , for polynomials $f(t)$ of degree up to $m$ ? To answer this, let us first split the interval of integration: $$ \int_ {- \alpha h} ^ {(n + \beta) h} f (t) d t = \int_ {- \alpha h} ^ {0} f (t) d t + \int_ {0} ^ {n h} f (t) d t + \int_ {n h} ^ {(n + \beta) h} f (t) d t. \tag {3} $$ For the first term on the right, we replace $f(t)$ by its Taylor series about $t = 0$ (which terminates at the $m^{\text{th}}$ power), and integrate term-by-term, obtaining $$ \int_ {- \alpha h} ^ {0} f (t) d t = - h \sum_ {k = 0} ^ {m} \frac {(- \alpha) ^ {k + 1}}{(k + 1) !} h ^ {k} f ^ {(k)} (0). \tag {4} $$ For the last term we do likewise except that the Taylor series is about $t = nh$ : $$ \int_ {n h} ^ {(n + \beta) h} f (t) d t = h \sum_ {k = 0} ^ {m} \frac {\beta^ {k + 1}}{(k + 1) !} h ^ {k} f ^ {(k)} (n h). \tag {5} $$ For the middle term, we know from the Euler-Maclaurin series [4, p.167] that $$ \begin{array}{l} \int_ {0} ^ {n h} f (t) d t = h \sum_ {i = 0} ^ {n} f (i h) - h \left[ \frac {1}{2} f (0) + \frac {1}{2} f (n h) \right] \\ + h \sum_ {k = 1} ^ {m} a _ {k} h ^ {k} \left[ f ^ {(k)} (0) - f ^ {(k)} (n h) \right], \tag {6} \\ \end{array} $$ where the last sum terminates at the degree $m$ of the polynomial (and is taken as an empty sum if $m = 0$ ), and the coefficients $a_{k}$ are constants whose details need not concern us here (except to acknowledge in passing that $a_{k} = 0$ for positive even $k$ ). Now the sum of the right-hand sides of eqs.
(4) to (6) is of the form of the right-hand side of (1), because: - The first sum on the right in (6) is the same as in (1); - In the next term in (6), the factor in square brackets is a weighted sum of $f(0)$ and $f(nh)$ ; - $f^{(k)}(0)$ in (4) and (6) is given exactly as a weighted sum of the ordinates $f(ih)$ for $i \in [0..m]$ , because $f$ itself, being a polynomial of degree $m$ , is given exactly as a weighted sum of the same $m + 1$ ordinates; and in order to be dimensionally correct, the former weighted sum must have a common factor $h^{-k}$ , which cancels with $h^k$ ; - Similarly, $f^{(k)}(nh)$ in (5) and (6) is given exactly as a weighted sum of the ordinates $f((n - i)h)$ for $i \in [0..m]$ , and has a common factor that cancels with $h^k$ ; and - The weights in the aforesaid weighted sums are subsumed under $c_{i}$ and $d_{i}$ in (1). That explains the form of (1) and the conditions under which it can be made exact. But there are other implications. In eqs. (4) to (6), $\alpha$ appears only in (4), where it is related not to $f^{(k)}(nh)$ but only to $f^{(k)}(0)$ , which is given by a weighted sum whose weights are subsumed under $c_{i}$ . Similarly, $\beta$ is related to weights subsumed under $d_{i}$ . So, to our initial concession that "the coefficients $c_{i}$ and $d_{i}\ldots$ may depend on $\alpha$ and $\beta$ ," we could add "respectively." Moreover, the change-of-variable $f(t) = g(u)$ where $u = nh - t$ (whence $t = nh - u$ and $dt = -du$ ) transforms rule (1) into $$ \int_ {- \beta h} ^ {(n + \alpha) h} g (u) d u \approx h \sum_ {i = 0} ^ {n} g (i h) + h \sum_ {i = 0} ^ {m} d _ {i} g (i h) + h \sum_ {i = 0} ^ {m} c _ {i} g ((n - i) h), \tag {7} $$ which is the same rule except that $\alpha$ and $c_{i}$ have swapped places with $\beta$ and $d_{i}$ . So if the rule is consistent, the corrections $d_{i}$ depend on $\beta$ as the corrections $c_{i}$ depend on $\alpha$ . In particular, if $\beta = \alpha$ , then $d_{i} = c_{i}$ and rule (1) reduces to $$ \int_ {- \alpha h} ^ {(n + \alpha) h} f (t) d t \approx h \sum_ {i = 0} ^ {n} f (i h) + h \sum_ {i = 0} ^ {m} c _ {i} \left[ f (i h) + f ((n - i) h) \right]. \tag {8} $$ This special case, by its symmetry about $t = nh / 2$ , is exact if $f(t)$ is any odd power of $(t - nh / 2)$ so that, if it is exact for a polynomial of even degree $m$ , it is also exact for degree $m + 1$ . This raising of the maximum degree for exactness does not happen when $\beta \neq \alpha$ . And when it does happen (when $\beta = \alpha$ ), it does not change the order of the error for a general analytic $f$ ; it happens because when $f$ is of degree $m + 1$ , the coefficients of $h^{m + 2}$ in (8) are matched by the antisymmetry of the highest-power term in $f$ , whereas a more general $f$ generally breaks the antisymmetry. More precise error bounds for the case $\alpha = \beta = 0$ are given by Barrett, Martensen, and De Swardt & De Villiers [2, p.131] (citing [12, pp. 161-3]). # 3 Finding the solution Given that there exist coefficients $c_{i}$ depending on $\alpha$ alone, and $d_{i}$ depending identically on $\beta$ alone, which make rule (1) correct in a certain sense for all functions $f(t)$ of a certain class, we can choose any convenient member of that class for the purpose of finding the coefficients. But what is "convenient"?
First hint: If the rule works for arbitrary $n$ , it must work as $n \to \infty$ , provided of course that the integral converges, in which case the integrand and hence the right-hand sum in (1) must go to zero, so that we are left with $$ \int_ {- \alpha h} ^ {\infty} f (t) d t \approx h \sum_ {i = 0} ^ {\infty} f (i h) + h \sum_ {i = 0} ^ {m} c _ {i} f (i h), \tag {9} $$ in which both sides are functions of $h$ . In the Taylor expansions of the two sides about $h = 0$ , the constant terms automatically match because both sides of (9) approach the same integral as $h \to 0$ . And by adjusting the $m + 1$ coefficients $c_{i}$ , we should be able to match the terms in $h^{1}$ to $h^{m + 1}$ , so that the error is $O(h^{m + 2})$ . The rest of the argument takes copious hints from Fornberg, who took less-copious hints from Froberg [8, pp. 194-6] and a fragment of a letter by James Gregory to John Collins [9, at pp. 208-9] dated 1670—the year before Newton stated the "3/8 Simpson rule", and more than 40 years before Cotes computed closed "Newton-Cotes" weights for up to 11 points [10, p. 130]. My generalization via the parameter $\alpha$ is largely anticipated by Fornberg & Lawrence [6, pp. 4-5], whose parameter $\xi$ corresponds to my $-\alpha$ . Their approach is less general in that they restrict the range of the parameter (because they are interested in dealing with discontinuities between samples), but more general in that they use some degrees of freedom to reduce oscillations in the weights. (And it is more detailed than mine in some ways, as noted below.) Second hint: Newton's method of polynomial interpolation [10, pp. 10-12] suggests that the coefficient-matching can be simplified by rewriting (9) in terms of differences instead of ordinates. If we define the operator $\Delta$ by $$ \Delta f (t) = f (t + h) - f (t) \tag {10} $$ so that $$ \begin{array}{l} \Delta^ {0} f (0) = f (0) \\ \Delta^ {1} f (0) = f (h) - f (0) \\ \Delta^ {2} f (0) = f (2 h) - 2 f (h) + f (0) \\ \qquad \vdots \\ \Delta^ {m} f (0) = \sum_ {j = 0} ^ {m} \binom {m} {j} (- 1) ^ {m - j} f (j h), \end{array} \tag {11} $$ then the second sum in (9), namely $$ \sum_ {i = 0} ^ {m} c _ {i} f (i h), \tag {12} $$ can be written in the form $$ \sum_ {k = 0} ^ {m} b _ {k} \Delta^ {k} f (0). \tag {13} $$ For, if we equate the last two sums and expand the latter with the aid of (11), we get $$ \sum_ {k = 0} ^ {m} \left(b _ {k} \sum_ {j = 0} ^ {k} \binom {k} {j} (- 1) ^ {k - j} f (j h)\right) = \sum_ {i = 0} ^ {m} c _ {i} f (i h) \tag {14} $$ or, equating the coefficients of $f(ih)$ , $$ \sum_ {k = i} ^ {m} \binom {k} {i} (- 1) ^ {k - i} b _ {k} = c _ {i}; \quad i \in [ 0.. m ]. \tag {15} $$ This is an upper-triangular unit-diagonal system of linear equations, which can be solved for $b_{m}$ to $b_{0}$ (in that order) by back-substitution (Fornberg & Lawrence [6, p. 3] show the equations in matrix form). Thus, given the corrections $c_{i}$ , we can find the difference coefficients $b_{k}$ as claimed. [And of course, given the $b_{k}$ , we can use (15) to find the $c_{i}$ .] Substituting (13) for (12) in (9), we obtain the rule in the desired form $$ \int_ {- \alpha h} ^ {\infty} f (t) d t \approx h \sum_ {i = 0} ^ {\infty} f (i h) + h \sum_ {k = 0} ^ {m} b _ {k} \Delta^ {k} f (0), \tag {16} $$ and the problem is to find the $m + 1$ constants $b_{k}$ which equate the coefficients of the powers of $h$ from $h^1$ to $h^{m + 1}$ .
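Although the conversion (15) will not be needed until the $b_{k}$ have been found, it is easy to check mechanically. The following Python sketch is an illustration only; the function names and the use of exact rational arithmetic are choices made here, not taken from any reference. It performs the back-substitution that recovers the $b_{k}$ from given corrections $c_{i}$, together with the direct evaluation that goes the other way, as a round-trip consistency check.

```python
from fractions import Fraction
from math import comb

def b_from_c(c):
    """Back-substitution in the upper-triangular system (15):
    difference coefficients b_k from the ordinate corrections c_i."""
    m = len(c) - 1
    b = [Fraction(0)] * (m + 1)
    for i in range(m, -1, -1):                 # solve for b_m, ..., b_0 in that order
        b[i] = c[i] - sum(comb(k, i) * (-1) ** (k - i) * b[k]
                          for k in range(i + 1, m + 1))
    return b

def c_from_b(b):
    """Direct evaluation of (15): corrections c_i from the b_k."""
    m = len(b) - 1
    return [sum(comb(k, i) * (-1) ** (k - i) * b[k] for k in range(i, m + 1))
            for i in range(m + 1)]

# Round trip on a set of rational corrections (these happen to be the
# Gregory corrections for m = 2 quoted in Section 4.1 below):
c = [Fraction(-5, 8), Fraction(1, 6), Fraction(-1, 24)]
assert c_from_b(b_from_c(c)) == c
```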
So a "convenient" choice of $f(t)$ should turn the first sum in (16) into something tractable—e.g., a decaying geometric series. If we choose $$ f (t) = e ^ {- s t / h} \tag {17} $$ where $\operatorname{Re}(s) > 0$ , then $$ \sum_ {i = 0} ^ {\infty} f (i h) = \sum_ {i = 0} ^ {\infty} \left(e ^ {- s}\right) ^ {i} = \frac {1}{1 - e ^ {- s}} \tag {18} $$ and $$ \int_ {- \alpha h} ^ {\infty} f (t) d t = \left. \frac {e ^ {- s t / h}}{- s / h} \right| _ {t = - \alpha h} ^ {t \rightarrow \infty} = h \frac {e ^ {\alpha s}}{s} \tag {19} $$ and $$ f (0) = 1, \tag {20} $$ and increasing $t$ by $h$ multiplies $f(t)$ by $e^{-s}$ so that, in operational terms, $$ \Delta = \left(e ^ {- s} - 1\right). \tag {21} $$ If we make these four substitutions in (16), we can cancel $h$ and obtain $$ \frac {e ^ {\alpha s}}{s} \approx \frac {- 1}{e ^ {- s} - 1} + \sum_ {k = 0} ^ {m} b _ {k} \left(e ^ {- s} - 1\right) ^ {k}. \tag {22} $$ Now if we put $$ x = e ^ {- s} - 1, \tag {23} $$ so that $e^{\alpha s} = (1 + x)^{-\alpha}$ , $s = -\ln(1 + x)$ , and $x \to 0^{-}$ as $s \to 0^{+}$ , then (22) becomes $$ \frac {- (1 + x) ^ {- \alpha}}{\ln (1 + x)} \approx \frac {- 1}{x} + \sum_ {k = 0} ^ {m} b _ {k} x ^ {k}, \tag {24} $$ i.e., $$ \left(\sum_ {k = 0} ^ {m} b _ {k} x ^ {k}\right) \ln (1 + x) \approx \frac {\ln (1 + x)}{x} - (1 + x) ^ {- \alpha}. \tag {25} $$ Taking the geometric series for $(1 + x)^{-1}$ and integrating term-by-term (putting $x = 0$ to set the constant), we get $$ \ln (1 + x) = \sum_ {i = 0} ^ {\infty} \frac {(- 1) ^ {i}}{i + 1} x ^ {i + 1} \tag {26} $$ which, upon dividing by $x$ and renaming the counter, becomes $$ \frac {\ln (1 + x)}{x} = \sum_ {j = 0} ^ {\infty} \frac {(- 1) ^ {j}}{j + 1} x ^ {j}. \tag {27} $$ The remaining term in (25) has the binomial expansion $$ (1 + x) ^ {- \alpha} = \sum_ {j = 0} ^ {\infty} \binom {- \alpha} {j} x ^ {j}. \tag {28} $$ With these three substitutions, equation (25) becomes $$ \left(\sum_ {k = 0} ^ {m} b _ {k} x ^ {k}\right) \left(\sum_ {i = 0} ^ {\infty} \frac {(- 1) ^ {i}}{i + 1} x ^ {i + 1}\right) \approx \sum_ {j = 0} ^ {\infty} \left[ \frac {(- 1) ^ {j}}{j + 1} - \binom {- \alpha} {j} \right] x ^ {j}. \tag {29} $$ Now we can equate coefficients in (29). On the left side, there is no term in $x^0$ , due to the index $i + 1$ . On the right, the coefficient of $x^0$ is $$ \frac {(- 1) ^ {0}}{0 + 1} - \binom {- \alpha} {0} = 1 - 1 = 0, \tag {30} $$ which agrees with the left side. So the coefficients $b_{k}$ must be fixed so as to match the coefficients of $x^{1}$ to $x^{m + 1}$ . If the product-of-sums on the left is expanded, there will be $j$ terms in $x^{j}$ , with $k$ ranging from 0 to $j - 1$ , and $i$ ranging from $j - 1$ to 0 respectively. So, to equate the coefficients of $x^{j}$ , we take the second sum on the left inside the first sum, select the inner term with $i = j - k - 1$ , and take the outer sum up to $k = j - 1$ , obtaining $$ \sum_ {k = 0} ^ {j - 1} \frac {(- 1) ^ {j - k - 1}}{j - k} b _ {k} = \frac {(- 1) ^ {j}}{j + 1} - \binom {- \alpha} {j}; \quad j \in [ 1.. m + 1 ]. \tag {31} $$ To minimize confusion, let us rename the dummy index $j$ as $i + 1$ , so that both indices count from zero. This yields $$ \sum_ {k = 0} ^ {i} \frac {(- 1) ^ {i - k}}{i - k + 1} b _ {k} = \frac {(- 1) ^ {i + 1}}{i + 2} - \binom {- \alpha} {i + 1}; \quad i \in [ 0.. m ], \tag {32} $$ in which the coefficient of $b_{k}$ is 1 if $k = i$ , and there are no terms for $k > i$ . 
So (32) is a lower-triangular unit-diagonal system of linear equations in $b_{k}$ , which can be solved for $b_{0}$ to $b_{m}$ (in that order) by forward substitution. This forward order means that we can increase $m$ , adding more equations, without invalidating the solutions found so far. But, having found as many coefficients $b_{k}$ as we want, we then need to find the corrections $c_{i}$ by direct substitution into the upper-triangular system (15), in which higher-index values of $b_{k}$ do affect lower-index values of $c_{i}$ . Fornberg & Lawrence [6, p.4] give explicit formulae showing the variation of $b_{k}$ with their parameter $\xi$ (our $-\alpha$ ) for selected $k$ , and the asymptotic behavior of $b_{k}$ for extreme values of $\xi$ , the latter behavior being relevant to the quest for high-order rules with many well-behaved non-unit weights. The following survey, in contrast, ignores their restriction on the parameter ($0 \leq \alpha < 1$ in our notation) and seeks rules with relatively few non-unit weights. # 4 Examples As Fornberg suggests, we might reasonably solve the equations in MATLAB if we want the coefficients in decimal form, or in Wolfram Mathematica if we want them in exact rational form. I used a spreadsheet! Values of $k$ were filled across the top, and values of $i$ down the left-hand edge. For the binomial coefficient in (32), the value of $-\alpha$ was entered manually into the appropriate cell—the most consequential cell in the sheet—and subsequent values were built recursively. The matrix-inversion and matrix-multiplication functions were used where convenient. In the course of the inquiry, I computed two columns of corrections $c_{i}$ for which exact rational values were desirable but not obvious; for these I found a common denominator by expanding a decimal value in a continued fraction. # 4.1 Validation: Reproducing known rules Case $\alpha = 0$ : This is the case considered by Fornberg, after Gregory, assuming ab initio that the outermost nodes coincide with the terminals. In his "Table 1" [4, p.170], where his $p$ is our $m + 2$ , Fornberg gives the corrections, which are duly reproduced by our eqs. (32) and (15). E.g., the corrections for $m = 2$ are $$ - \frac {5}{8}, \frac {1}{6}, - \frac {1}{24}, $$ which, when applied from each end of a unit-weight rule with six or more points, give the "Lacroix rule" [2, p.131]—a Gregorian rule with its own name, having the same order of accuracy as the "Simpson" rules. And if we apply the four corrections for $m = 3$ from each end of a sufficiently long unit-weight rule, and then (say) halve $h$ and double $n$ , we find that the error is $O(h^5)$ , which is one better than the "Simpson" rules; and so on. Moreover, Hamming [11, pp. 342-4] notes that if we apply a Gregorian correction sequence from each end of the unit-weight rule of the same length (i.e., if $n = m$ ), we get the standard closed Newton-Cotes rule of that length: $m = 1$ gives the trapezoidal rule, $m = 2$ gives "Simpson's 1/3 rule", $m = 3$ gives "Simpson's 3/8 rule", $m = 4$ gives "Boole's rule", etc. This, he says, is "perhaps the simplest way to find the actual coefficients" of the N-C rules [11, p. 342]. We should add that if $m$ is even (so that the number of corrections is odd), we get a standard closed N-C rule not only by applying the corrections to $m + 1$ unit weights, but also by applying them to $m + 2$ unit weights.
Thus the trapezoidal rule (with two points) is obtained from $m = 1$ (two corrections) or $m = 0$ (one correction), and "Simpson's 3/8 rule" (with four points) is obtained from $m = 3$ (four corrections) or $m = 2$ (three corrections), and so on. But if $m$ is odd (so that the number of corrections is even), we get a standard closed N-C rule only by applying the corrections to $m + 1$ unit weights, not by applying them to $m + 2$ unit weights. Thus we do not get "Simpson's 1/3 rule" (three points) from $m = 1$ (two corrections), nor "Boole's rule" (five points) from $m = 3$ (four corrections), although the associated Gregorian rules still have the full expected accuracy, with error $O(h^{m + 2})$ . The Gregorian rule for $m = 1$ has an alternative explanation. The "corrected" composite trapezoidal rule, which uses derivatives at the terminals, is two orders more accurate than the uncorrected one (that is, it has the same order as the "Simpson" rules). If the Gregorian corrections for $m = 1$ , namely $-\frac{7}{12}$ , $\frac{1}{12}$ , are re-expressed as corrections to the composite trapezoidal rule, they become $-\frac{1}{12}$ , $\frac{1}{12}$ ; and this sequence is recognizable as a finite-difference estimate of the end "correction" to the trapezoidal rule, taking the derivative at the distance $h/2$ from the terminal instead of *at* the terminal, and thereby giving an order of accuracy between the uncorrected and "corrected" trapezoidal rules. In general, the corrections for even $m$ make Gregory's rule exact for degrees up to $m + 1$ , like the closed $(m + 1)$ -point and $(m + 2)$ -point N-C rules, which are the unique closed equispaced rules of their lengths that are exact up to that degree. And the corrections for odd $m$ make Gregory's rule exact for degrees up to $m$ , like the closed $(m + 1)$ -point N-C rule, which is the unique closed equispaced rule of its length that is exact up to that degree. Thus Gregory's method must generate every closed Newton-Cotes rule—twice if the rule has an even number of points (for an odd-degree interpolating polynomial). Yet Gregory's letter to Collins predates every closed N-C rule except the trapezoidal rule and Kepler's barrel rule (also known as Simpson's 1/3 rule). Five years after he wrote that letter, Gregory was dead. Tradition holds that he suffered a stroke while showing his students the moons of Jupiter, whereas the earliest surviving account says: "By a cold caught in the castle, he grew blind in on[e] night, and shortly after dyed". He was 36. Case $\alpha = 1$ : This case departs from Gregory/Fornberg by yielding open rules whose outermost nodes are one step in from the terminals. By analogy with the preceding case, the corrections given by eqs. (32) and (15) for even $m$ should yield the open $(m + 1)$ -point and $(m + 2)$ -point N-C rules (listed by Weisstein for up to 7 points), whereas the corrections for odd $m$ should yield the open $(m + 1)$ -point N-C rule. Let us check. For $m = 0$ , the sole correction is $c_0 = \frac{1}{2}$ . When applied (twice) to the 1-point unit-weight rule, this gives the single weight 2, which agrees with the 1-point open N-C rule. Applied from each end of the 2-point unit-weight rule, it gives the weights of the 2-point open N-C rule: $$ \frac {3}{2}, \frac {3}{2}. $$ For $m = 1$ , the correction sequence is $$ \frac {11}{12}, - \frac {5}{12}. $$ Applied from each end of the 2-point unit-weight rule, this gives the 2-point open N-C rule again.
For $m = 2$ , the correction sequence is $$ \frac {31}{24}, - \frac {7}{6}, \frac {3}{8}. $$ Applied from each end of the 3-point unit-weight rule, this gives the weight sequence $$ \frac {8}{3}, - \frac {4}{3}, \frac {8}{3}, $$ which is the 3-point open N-C rule. And applied from each end of the 4-point unit-weight rule, it gives the weight sequence $$ \frac {55}{24}, \frac {5}{24}, \frac {5}{24}, \frac {55}{24}, $$ which is the 4-point open N-C rule. For $m = 3$ , the correction sequence is $$ \frac {1181}{720}, - \frac {1593}{720}, \frac {1023}{720}, - \frac {251}{720}. $$ Applied from each end of the 4-point unit-weight rule, this gives the 4-point open N-C rule again. For $m = 4$ , the correction sequence is $$ \frac {2837}{1440}, - \frac {5086}{1440}, \frac {4896}{1440}, - \frac {2402}{1440}, \frac {475}{1440}. $$ Applied from each end of the 5-point unit-weight rule, this gives the simple but oscillatory weight sequence $$ \frac {33}{10}, - \frac {21}{5}, \frac {39}{5}, - \frac {21}{5}, \frac {33}{10}, $$ which is the 5-point open N-C rule. So far: so good. Case $\alpha = -1$ : This yields rules for which the outermost nodes are one step outside the range of integration. One of these rules is easily confirmed. For $m = 2$ the computed corrections are $$ - \frac {25}{24}, - \frac {1}{2}, \frac {1}{24}. $$ When these are applied to a sufficiently long sequence of unit weights, the first three weights are $$ - \frac {1}{24}, \frac {1}{2}, \frac {25}{24}. $$ The respective weights for the composite trapezoidal rule (with the range of integration beginning at the second node) are $$ 0, \frac {1}{2}, 1 $$ so that, by subtraction, the corrections to the composite trapezoidal rule given by $m = 2$ are $$ - \frac {1}{24}, 0, \frac {1}{24}. $$ The corresponding contribution to the right-hand side of (1) is $$ h \left[ - \frac {1}{24} f (0) + \frac {1}{24} f (2 h) \right] = \frac {1}{12} h ^ {2} \frac {f (2 h) - f (0)}{2 h} \approx \frac {1}{12} h ^ {2} f ^ {\prime} (h), \tag {33} $$ where the right-hand expression is the standard left-hand correction in the "corrected trapezoidal rule" (the argument $h$ is the lower limit of integration). Thus, by taking $\alpha = -1$ and $m = 2$ , we get a discretized corrected composite trapezoidal rule. An equivalent rule is given by Weisstein, who describes it as a "2-point open extended formula" without further explanation. For a single interval (single step), this rule has the weights $$ - \frac {1}{24}, \frac {13}{24}, \frac {13}{24}, - \frac {1}{24}, $$ which we have obtained by setting $\alpha = -1$ , $m = 2$ , and $n = 3$ . But they can also be obtained by setting $\alpha = 0$ , $m = 2$ , and $n = 1$ , so that $m > n$ ; in the latter case, the corrections that overshoot the unit weights are added to 0 instead of 1. We could pursue higher-order discrete corrections to the trapezoidal rule by taking $\alpha = -2$ and $m = 4$ ; $\alpha = -3$ and $m = 6$ ; etc. But, having come this far in order to demonstrate the effectiveness of our method, let us now use it to derive some less familiar rules. # 4.2 Application: Correcting the midpoint rule Case $\alpha = 1/2$ : This yields open rules for which the outermost nodes are a half-step in from the terminals—as in the composite midpoint rule, which of course is a unit-weight rule, so that the corrections to the unit-weight rule can also be called corrections to the midpoint rule.
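Before working through the midpoint corrections below, it may help to make the mechanics of rule (1) explicit in code. The following Python sketch is an illustration only; the function name and the restriction to $m \leq n$ are choices made here. It adds a correction sequence to the unit weights from each end, with overlaps handled additively, and reproduces the open N-C weights quoted above for $\alpha = 1$ and $m = 2$.

```python
from fractions import Fraction

def weights(c, d, n):
    """Weights of rule (1) at nodes 0..n: unit weights, plus corrections c_i
    applied from the left end and d_i from the right end, with overlapping
    corrections added together.  Assumes the corrections fit within the nodes
    (m <= n); for m > n the overshooting corrections would be added to 0
    instead of 1, as noted in the text."""
    w = [Fraction(1)] * (n + 1)
    for i, ci in enumerate(c):
        w[i] += ci
    for i, di in enumerate(d):
        w[n - i] += di
    return w

# alpha = 1, m = 2: corrections 31/24, -7/6, 3/8 (Section 4.1).
c = [Fraction(31, 24), Fraction(-7, 6), Fraction(3, 8)]
assert weights(c, c, 2) == [Fraction(8, 3), Fraction(-4, 3), Fraction(8, 3)]
assert weights(c, c, 3) == [Fraction(55, 24), Fraction(5, 24), Fraction(5, 24), Fraction(55, 24)]
```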
For $m = 0$ , the sole "correction" is $c_0 = 0$ , leaving the midpoint rule uncorrected. For $m = 1$ , the order improves by 1. For $m \in \{2,3,4\}$ , we get rules that we might actually want to use. As open rules, they avoid evaluating the integrand at the terminals, where it may not be defined. Even if the integrand has a finite limit as we approach the terminal, there is some convenience in not having to deal with a singularity, or the possibility of a singularity, at the terminal, wherefore one might say that open rules are better than closed rules as general-purpose rules. For $\alpha = 1 / 2$ , the corrections for the nominated values of $m$ are $$ \begin{array}{l} m = 2: \frac {1}{1 2}, - \frac {1}{8}, \frac {1}{2 4}; \\ m = 3: \frac {7 0 3}{5 7 6 0}, - \frac {1 3 8 9}{5 7 6 0}, \frac {9 0 9}{5 7 6 0}, - \frac {2 2 3}{5 7 6 0}; \\ m = 4: \begin{array}{c} 9 0 9 \\ \hline 5 7 6 0 \end{array} , - \begin{array}{c} 2 2 1 3 \\ \hline 5 7 6 0 \end{array} , \begin{array}{c} 2 1 4 5 \\ \hline 5 7 6 0 \end{array} , - \begin{array}{c} 1 0 4 7 \\ \hline 5 7 6 0 \end{array} , \begin{array}{c} 2 0 6 \\ \hline 5 7 6 0 \end{array} . \\ \end{array} $$ The corresponding weights (if there are unit weights left over) are $$ \begin{array}{l} m = 2: \frac {1 3}{1 2}, \frac {7}{8}, \frac {2 5}{2 4}, 1, \dots , 1, \frac {2 5}{2 4}, \frac {7}{8}, \frac {1 3}{1 2}; \\ m = 3: \frac {6 4 6 3}{5 7 6 0}, \frac {4 3 7 1}{5 7 6 0}, \frac {6 6 6 9}{5 7 6 0}, \frac {5 5 3 7}{5 7 6 0}, 1, \text {e t c .}; \\ m = 4: \begin{array}{c} \frac {6 6 6 9}{5 7 6 0}, \frac {3 5 4 7}{5 7 6 0}, \frac {7 9 0 5}{5 7 6 0}, \frac {4 7 1 3}{5 7 6 0}, \frac {5 9 6 6}{5 7 6 0}, 1, \text {e t c .} \end{array} \\ \end{array} $$ The denominator for $m = 3$ and $m = 4$ was found by expanding one correction in a continued fraction. (The same approach to $m = 5$ made it clear that the exact rational coefficients would be unwieldy.) As a check, it is worth noting that if we apply the corrections for $m = 3$ from each end of the 4-point unit-weight rule, we get the same weights—namely $$ \frac {1 3}{1 2}, \frac {1 1}{1 2}, \frac {1 1}{1 2}, \frac {1 3}{1 2} $$ —as if we do likewise with the corrections for $m = 2$ . As a further check, it is easily confirmed by experiment that the resulting rules for $m = 2$ and $m = 3$ are exact (to machine precision) for integrands of degree up to 3 (like the "Simpson" rules) while the resulting rule for $m = 4$ is exact for integrands of degree up to 5 (like "Boole's rule"). And as a test, if we integrate $f(t) = 7t^6$ from 0 to 1, with 10 nodes, for $m = 2$ , $m = 3$ , and $m = 4$ , and then double the number of nodes, the errors are reduced by the approximate factors 13.5, 27.7, and 47.3 respectively, whence it is not hard to believe that the errors are $O(h^4)$ , $O(h^5)$ , and $O(h^6)$ respectively. Case $\alpha = -1 / 2$ : This yields rules for which the outermost nodes are a half-step outside the range of integration. Again one rule from the series is easily confirmed. For $m = 1$ , the computed corrections are $$ - \frac {2 3}{2 4}, - \frac {1}{2 4}. $$ When these are applied to a sufficiently long sequence of unit weights, the first two weights are $$ \begin{array}{c} \frac {1}{2 4}, \frac {2 3}{2 4}. \end{array} $$ The respective weights for the composite midpoint rule (the first midpoint being the second node) are $$ 0, 1 $$ so that, by subtraction, the corrections to that composite midpoint rule given by $m = 1$ are $$ \frac {1}{2 4}, - \frac {1}{2 4}. 
The corresponding contribution to the right-hand side of (1) is $$ h \left[ \frac {1}{24} f (0) - \frac {1}{24} f (h) \right] = - \frac {1}{24} h ^ {2} \frac {f (h) - f (0)}{h} \approx - \frac {1}{24} h ^ {2} f ^ {\prime} (h / 2), \tag {34} $$ where the right-hand expression is minus one half of the standard left-hand correction in the "corrected trapezoidal rule" (the argument $h / 2$ is the lower limit of integration). But it is clear that $-1 / 2$ of the leading-order correction to the trapezoidal rule is the leading-order correction to the midpoint rule; e.g., the "Simpson" weights $\left(\frac{1}{3}, \frac{4}{3}, \frac{1}{3}\right)$ are $2/3$ of the way from the trapezoidal weights $(1,0,1)$ to the midpoint weights $(0,2,0)$ . So the rule for $\alpha = -1 / 2$ and $m = 1$ is a discretized corrected composite midpoint rule. We could pursue higher-order discrete corrections to the midpoint rule by taking $\alpha = -3 / 2$ and $m = 3$ ; $\alpha = -5 / 2$ and $m = 5$ ; etc. (The resulting rules are unusual in that for odd $m$ , they are exact for integrands of degree up to $m + 2$ and thereafter give errors of $O(h^{m + 3})$ . For they have the same symmetry about the terminal as the above "discretized corrected composite trapezoidal rule", in which, for even $m$ , the correction at the terminal is zero, so that the effective number of corrections is one fewer than would normally be required for the same order of accuracy.) By the way, I initially derived the rule for $\alpha = +1/2$ and $m = 2$ by treating it as a corrected midpoint rule, with a different discrete estimate of $f'$ at the lower terminal. But I used only the generalized Gregory/Fornberg approach to find the corresponding rules for $m = 3$ and $m = 4$ . # 4.3 Asymmetrical rules $(\beta \neq \alpha)$ The examples given so far have used the same value of $\alpha$ at each end of the range of integration; that is, in the notation of eqs. (1) to (7), they have set $\beta = \alpha$ . But we can also set $\alpha$ and $\beta$ independently. This is useful if we have a function sampled at fixed equispaced abscissae and want to be able to integrate it between arbitrary limits. For an illustration, let us take $\alpha = 1 / 2$ and $\beta = 0$ , so that the rule is midpoint-like from the left and closed-N-C-like from the right (such a rule, being open at one end and closed at the other, is described as semi-open). If any unit weights remain, the weights for $m = 2$ are $$ \frac {13}{12}, \frac {7}{8}, \frac {25}{24}, 1, \dots , 1, \frac {23}{24}, \frac {7}{6}, \frac {3}{8}, $$ and the weights for $m = 3$ are $$ \frac {6463}{5760}, \frac {4371}{5760}, \frac {6669}{5760}, \frac {5537}{5760}, 1, \dots , 1, \frac {739}{720}, \frac {211}{240}, \frac {299}{240}, \frac {251}{720} $$ where the last four are given by Fornberg [3, p.8]. If we integrate from 0 to 1 with $h = 2 / 19$ (giving 10 nodes), we find that the rule for $m = 2$ is exact for integrands of degree up to 2 (not 3 as for the symmetrical rules) while the rule for $m = 3$ is exact for integrands of degree up to 3 (as for the symmetrical rules). For $f(t) = 5t^4$ , if we reduce $h$ from 2/19 (10 nodes) to 2/39 (20 nodes), the error is reduced by a factor 18.5 for $m = 2$ , and 36.4 for $m = 3$ , whence it is not hard to believe that the errors are $O(h^4)$ and $O(h^5)$ respectively; recall that, after eq.
For another illustration, one of the "single interval extrapolative rules" listed by Weisstein [16], namely
$$
\int_{-h}^{0} f(t)\, dt \approx h \left[ \frac{23}{12} f(0) - \frac{4}{3} f(h) + \frac{5}{12} f(2h) \right], \tag{35}
$$
is recognizable as a backward Adams-Bashforth rule, and can be obtained by setting $\alpha = 1$, $\beta = -2$, and $m = n = 2$. In general, forward Adams-Bashforth weights are given by $\alpha = -m = -n$ and $\beta = 1$, and forward Adams-Moulton weights by $\alpha = 1 - m = 1 - n$ and $\beta = 0$.

# 5 Conclusion

The working equations (15) and (32) bear repeating, and the former bears switching left-to-right for actual use. So, in summary, the naive unit-weight equispaced quadrature rule may be corrected exactly for integrands up to degree $m$ by adding $m + 1$ "corrections" to the weights at each end, starting with the outermost weight and working inward. The corrections are given by
$$
c_{i} = \sum_{k = i}^{m} \binom{k}{i} (-1)^{k - i} b_{k}; \quad i \in [0..m], \tag{36}
$$
where the $m + 1$ coefficients $b_{k}$ are the solutions of the lower-triangular system
$$
\sum_{k = 0}^{i} \frac{(-1)^{i - k}}{i - k + 1}\, b_{k} = \frac{(-1)^{i + 1}}{i + 2} - \binom{-\alpha}{i + 1}; \quad i \in [0..m], \tag{37}
$$
where $\alpha$ is the distance from the limit of integration to the first node, measured inward in step-lengths. The corrections may overlap, in which case they are cumulative (and if any corrections overshoot the unit weights, they are added to 0 instead of 1).

The values of $\alpha$ at the two ends need not be the same. If they are the same ("$\beta = \alpha$") and $m$ is even, the rule is exact for integrands of degree up to $m + 1$ (instead of $m$). Be that as it may, the error in the integral is generally $O(h^{m + 2})$, where $h$ is the step-length.

Whereas the original purpose of this study was to find corrected weights for the composite midpoint rule (which, for better or worse, determined the sign convention for the parameter $\alpha$), a wide variety of closed, open, and extrapolative equispaced rules may be derived from the same two equations by suitably choosing $\alpha$, $\beta$, and $m$, and (for overlapping corrections) the initial number of unit weights.
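The two working equations are easily coded. As a minimal illustration (a sketch in Python with exact rational arithmetic; any comparable environment would serve, and the function names here are merely illustrative), eq. (37) can be solved by forward substitution and eq. (36) then evaluated directly:

```python
from fractions import Fraction
from math import comb   # ordinary integer binomial coefficients, for eq. (36)

def binom_general(a, k):
    """Generalized binomial coefficient C(a, k) for rational a and integer k >= 0."""
    out = Fraction(1)
    for j in range(k):
        out = out * (a - j) / (j + 1)
    return out

def corrections(alpha, m):
    """End corrections c_0..c_m (outermost first) for the unit-weight rule, per eqs. (36)-(37)."""
    alpha = Fraction(alpha)
    b = []
    for i in range(m + 1):   # eq. (37): forward substitution; the coefficient of b_i is 1
        rhs = Fraction((-1) ** (i + 1), i + 2) - binom_general(-alpha, i + 1)
        rhs -= sum(Fraction((-1) ** (i - k), i - k + 1) * b[k] for k in range(i))
        b.append(rhs)
    # eq. (36): convert the difference coefficients b_k into the corrections c_i
    return [sum(comb(k, i) * (-1) ** (k - i) * b[k] for k in range(i, m + 1))
            for i in range(m + 1)]

print(corrections(Fraction(1, 2), 2))   # [Fraction(1, 12), Fraction(-1, 8), Fraction(1, 24)]
print(corrections(0, 2))                # [Fraction(-5, 8), Fraction(1, 6), Fraction(-1, 24)]
```

The two printed lists reproduce the $m = 2$ corrections quoted above for $\alpha = 1/2$, and Gregory's corrections for the closed case $\alpha = 0$ (Section 4.1).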
{"title": "Generalized Gregorian quadrature, including end-corrected weights for the midpoint rule", "raw_content": "# Generalized Gregorian quadrature, including end-corrected weights for the midpoint rule\n\nGavin R. Putland*\n\n17 December 2025\n\n# Abstract\n\nA class of numerical quadrature rules is derived, with equally-spaced nodes, and unit weights except at a few points at each end of the series, for which \"corrections\" (not using any further information about the integrand) are added to the unit weights. If the correction sequences overlap, the effects are additive. A fundamental parameter (\"alpha\") in the derivation is the distance from the endpoint of the range of integration to the first node, measured inward in step-lengths. Setting alpha to $1/2$ yields a set of corrected composite midpoint rules. Setting alpha=0 yields Gregory's closed Newton-Cotes-like rules, including (for sufficient overlap) the standard closed Newton-Cotes rules (trapezoidal rule, \"1/3 Simpson rule\", \"3/8 Simpson rule\", \"Boole's rule\", etc.). Setting alpha=1 yields open N-C-like rules, again including the standard ones. A negative alpha means that the integrand is sampled outside the range of integration; suitably chosen negative values yield centered finite-difference end-corrections for the trapezoidal rule and the midpoint rule. One can even have different values of alpha at the two ends, yielding, inter alia, Adams-Bashforth and Adams-Moulton weights. Thus the title could have been \"Unified derivation of equispaced quadrature rules\".\n\n# 1 Framing the problem\n\nWe seek a numerical quadrature rule of the form\n\n$$\n\\int_ {- \\alpha h} ^ {(n + \\beta) h} f (t) d t \\approx h \\sum_ {i = 0} ^ {n} f (i h) + h \\sum_ {i = 0} ^ {m} c _ {i} f (i h) + h \\sum_ {i = 0} ^ {m} d _ {i} f ((n - i) h), \\tag {1}\n$$\n\nwhere the coefficients $c_{i}$ and $d_{i}$ are independent of $n$ and $h$ (the step size), but may depend on $\\alpha$ and $\\beta$ . Such a rule would have the following advantages:\n\n- While sampling the integrand at equally-spaced abscissae (nodes) would be convenient—and might even be dictated by the data—the parameters $\\alpha$ and $\\beta$ would allow the terminals (\"limits\" of integration) to be at or between nodes. \n- Particular values of the parameters would yield useful special cases. For $\\alpha = \\beta = 0$ , we would get \"closed Newton-Cotes-like rules\", for which the outermost nodes coincide with the terminals. For $\\alpha = \\beta = 1$ , we would get \"open N-C-like rules\", for which the outermost nodes are one step in from the terminals. (N-C-like rules are familiar; but reproducing the most famous examples would serve as a sanity check on our method.) Negative parameters (or $m$ greater than $n$ ) would yield rules with nodes outside the range of integration. For $\\alpha = \\beta = 1/2$ , the range of integration would be divided into $n + 1$ steps of length $h$ , with the nodes centered in the steps, yielding a modified midpoint rule—as advertised in the title.\n\n- For $n > 2m + 2$ , the right side of (1) divided by $h$ would become\n\n$$\n\\sum_ {i = m + 1} ^ {n - m - 1} f (i h) + \\sum_ {i = 0} ^ {m} \\left(1 + c _ {i}\\right) f (i h) + \\sum_ {i = 0} ^ {m} \\left(1 + d _ {i}\\right) f ((n - i) h), \\tag {2}\n$$\n\nwhich is a rule with unit weights except at $m + 1$ nodes at each end, where the coefficients $c_{i}$ and $d_{i}$ can be described as corrections to unit weights, or differences from unit weights (hence the symbols). 
The sequence of unit weights in the interior would have no cycles with periods longer than the step size, minimizing risk of bias due to cycles in the weights interacting with oscillations in the integrand, eliminating the need for the number of steps to be a multiple of any cycle length, and expediting the task of entering the weights into a spreadsheet (\"Fill down!). For $n \\leq 2m + 2$ , the correction sequences would overlap and, according to (1), the corrections would be additive.\n\n- Hence, for $\\alpha = \\beta = 1/2$ , we would get end corrections for the composite midpoint rule (rectangle rule). And for $\\alpha = \\beta = 0$ , we would get end corrections for the composite trapezoidal rule, taking unit weights as the base case. These corrections would not need further information about the integrand, but would merely adjust the weights—unlike the standard \"corrected trapezoidal rule\", which requires derivatives at the terminals.\n\n- By using two successive values of $m$ , we could compare two estimates of the integral from the same ordinates for the purpose of error control (cf. Runge-Kutta-Fehlberg / Runge-Kutta-Verner methods for ordinary differential equations). This procedure, unlike comparing estimates from the same ordinates using N-C methods with different orders or different cycle lengths, would still not restrict the number of steps or introduce cycles in the interior weights.\n\nIf $f(t)$ is a polynomial of degree $m$ in $t$ , each side of (1) is a polynomial of degree $m + 1$ in $h$ , with no constant term. For a more general $f(t)$ , the expansion of each side of (1) in powers of $h$ will still have no constant term. So, making rule (1) exact for polynomials of degree $m$ is a matter of matching the coefficients of $h^{k + 1}$ for $k \\in [0..m]$ with a general $f$ , for all $n$ ; if this is done, the error will generally be $O(h^{m + 2})$ .\n\n# 2 Existence of the solution\n\nBut, for given $\\alpha$ and $\\beta$ , why should there exist constants $c_{i}$ and $d_{i}$ such that (1) is exact for all $n$ and $h$ , for polynomials $f(t)$ of degree up to $m$ ? To answer this, let us first split the interval of integration:\n\n$$\n\\int_ {- \\alpha h} ^ {(n + \\beta) h} f (t) d t = \\int_ {- \\alpha h} ^ {0} f (t) d t + \\int_ {0} ^ {n h} f (t) d t + \\int_ {n h} ^ {(n + \\beta) h} f (t) d t. \\tag {3}\n$$\n\nFor the first term on the right, we replace $f(t)$ by its Taylor series about $t = 0$ (which terminates at the $m^{\\text{th}}$ power), and integrate term-by-term, obtaining\n\n$$\n\\int_ {- \\alpha h} ^ {0} f (t) d t = h \\sum_ {k = 0} ^ {m} \\frac {(- \\alpha) ^ {k + 1}}{(k + 1) !} h ^ {k} f ^ {(k)} (0). \\tag {4}\n$$\n\nFor the last term we do likewise except that the Taylor series is about $t = nh$ :\n\n$$\n\\int_ {n h} ^ {(n + \\beta) h} f (t) d t = h \\sum_ {k = 0} ^ {m} \\frac {\\beta^ {k + 1}}{(k + 1) !} h ^ {k} f ^ {(k)} (n h). 
\\tag {5}\n$$\n\nFor the middle term, we know from the Euler-Maclaurin series [4, p.167] that\n\n$$\n\\begin{array}{l} \\int_ {0} ^ {n h} f (t) d t = h \\sum_ {i = 0} ^ {n} f (i h) - h \\left[ \\frac {1}{2} f (0) + \\frac {1}{2} f (n h) \\right] \\\\ + h \\sum_ {k = 1} ^ {m} a _ {k} h ^ {k} \\left[ f ^ {(k)} (0) - f ^ {(k)} (n h) \\right], \\tag {6} \\\\ \\end{array}\n$$\n\nwhere the last sum terminates at the degree $m$ of the polynomial (and is taken as an empty sum if $m = 0$ ), and the coefficients $a_{k}$ are constants whose details need not concern us here (except to acknowledge in passing that $a_{k} = 0$ for positive even $k$ ). Now the sum of the right-hand sides of eqs. (4) to (6) is of the form of the right-hand side of (1), because:\n\n- The first sum on the right in (6) is the same as in (1); \n- In the next term in (6), the factor in square brackets is a weighted sum of $f(0)$ and $f(nh)$ ; \n- $f^{(k)}(0)$ in (4) and (6) is given exactly as a weighted sum of the ordinates $f(kh)$ for $k \\in [0..m]$ , because $f$ itself, being a polynomial of degree $m$ , is given exactly as a weighted sum of the same $m + 1$ ordinates; and in order to be dimensionally correct, the former weighted sum must have a common factor $h^{-k}$ , which cancels with $h^k$ ; \n- Similarly, $f^{(k)}(nh)$ in (5) and (6) is given exactly as a weighted sum of the ordinates $f((n - k)h)$ for $k \\in [0..m]$ , and has a common factor that cancels with $h^k$ ; and \n- The weights in the aforesaid weighted sums are subsumed under $c_{i}$ and $d_{i}$ in (1).\n\nThat explains the form of (1) and the conditions under which it can be made exact. But there are other implications. In eqs. (4) to (6), $\\alpha$ appears only in (4), where it is related not to $f^{(k)}(nh)$ but only to $f^{(k)}(0)$ , which is given by a weighted sum whose weights are subsumed under $c_{i}$ . Similarly, $\\beta$ is related to weights subsumed under $d_{i}$ . So, to our initial concession that \"the coefficients $c_{i}$ and $d_{i}\\ldots$ may depend on $\\alpha$ and $\\beta$ ,\" we could add \"respectively.\" Moreover, the change-of-variable $f(t) = g(u)$ where $u = nh - t$ (whence $t = nh - u$ and $dt = -du$ ) transforms rule (1) into\n\n$$\n\\int_ {- \\beta h} ^ {(n + \\alpha) h} g (u) d u \\approx h \\sum_ {i = 0} ^ {n} g (i h) + h \\sum_ {i = 0} ^ {m} d _ {i} g (i h) + h \\sum_ {i = 0} ^ {m} c _ {i} g ((n - i) h), \\tag {7}\n$$\n\nwhich is the same rule except that $\\alpha$ and $c_{i}$ have swapped places with $\\beta$ and $d_{i}$ . So if the rule is consistent, the corrections $d_{i}$ depend on $\\beta$ as the corrections $c_{i}$ depend on $\\alpha$ . In particular, if $\\beta = \\alpha$ , then $d_{i} = c_{i}$ and rule (1) reduces to\n\n$$\n\\int_ {- \\alpha h} ^ {(n + \\alpha) h} f (t) d t \\approx h \\sum_ {i = 0} ^ {n} f (i h) + h \\sum_ {i = 0} ^ {m} c _ {i} \\left[ f (i h) + f ((n - i) h) \\right]. \\tag {8}\n$$\n\nThis special case, by its symmetry about $t = nh / 2$ , is exact if $f(t)$ is any odd power of $(t - nh / 2)$ so that, if it is exact for a polynomial of even degree $m$ , it is also exact for degree $m + 1$ . This raising of the maximum degree for exactness does not happen when $\\beta \\neq \\alpha$ . 
And when it does happen (when $\\beta = \\alpha$ ), it does not change the order of the error for a general analytic $f$ ; it happens because when $f$ is of degree $m + 1$ , the coefficients of $h^{m + 2}$ in (8) are matched by the antisymmetry of the highest-power term in $f$ , whereas a more general $f$ generally breaks the antisymmetry. More precise error bounds for the case $\\alpha = \\beta = 0$ are given by Barrett [1], Martensen [13], and De Swardt & De Villiers [2, p.131] (citing [12, pp. 161-3]).\n\n# 3 Finding the solution\n\nGiven that there exist coefficients $c_{i}$ depending on $\\alpha$ alone, and $d_{i}$ depending identically on $\\beta$ alone, which make rule (1) correct in a certain sense for all functions $f(t)$ of a certain class, we can choose any convenient member of that class for the purpose of finding the coefficients. But what is \"convenient\"?\n\nFirst hint: If the rule works for arbitrary $n$ , it must work as $n \\to \\infty$ , provided of course that the integral converges, in which case the integrand and hence the right-hand sum in (1) must go to zero, so that we are left with\n\n$$\n\\int_ {- \\alpha h} ^ {\\infty} f (t) d t \\approx h \\sum_ {i = 0} ^ {\\infty} f (i h) + h \\sum_ {i = 0} ^ {m} c _ {i} f (i h), \\tag {9}\n$$\n\nin which both sides are functions of $h$ . In the Taylor expansions of the two sides about $h = 0$ , the constant terms automatically match because both sides of (9) approach the same integral as $h \\to 0$ . And by adjusting the $m + 1$ coefficients $c_{i}$ , we should be able to match the terms in $h^{1}$ to $h^{m + 1}$ , so that the error is $O(h^{m + 2})$ .\n\nThe rest of the argument takes copious hints from Fornberg [3, 4], who took less-copious hints from Froberg [8, pp. 194-6] and a fragment of a letter by James Gregory to John Collins [9, at pp. 208-9] dated 1670—the year before Newton stated the \"3/8 Simpson rule\", and more than 40 years before Cotes computed closed \"Newton-Cotes\" weights for up to 11 points [10, p. 130].\n\nMy generalization via the parameter $\\alpha$ is largely anticipated by Fornberg & Lawrence [6, pp. 4-5], whose parameter $\\xi$ corresponds to my $-\\alpha$ . Their approach is less general in that they restrict the range of the parameter (because they are interested in dealing with discontinuities between samples), but more general in that they use some degrees of freedom to reduce oscillations in the weights. (And it is more detailed than mine in some ways, as noted below.)\n\nSecond hint: Newton's method of polynomial interpolation [10, pp. 10-12] suggests that the coefficient-matching can be simplified by rewriting (9) in terms of differences instead of ordinates. If we define the operator $\\Delta$ by\n\n$$\n\\Delta f (t) = f (t + h) - f (t) \\tag {10}\n$$\n\nso that\n\n$$\n\\begin{array}{l} \\Delta^ {0} f (0) = f (0) \\\\ \\Delta^ {1} f (0) = f (h) - f (0) \\\\ \\Delta^ {2} f (0) = f (2 h) - 2 f (h) + f (0) \\tag {11} \\\\ \\end{array}\n$$\n\n中 中\n\n$$\n\\Delta^ {m} f (0) = \\sum_ {j = 0} ^ {m} {\\binom {m} {j}} (- 1) ^ {m - j} f (j h),\n$$\n\nthen the second sum in (9), namely\n\n$$\n\\sum_ {i = 0} ^ {m} c _ {i} f (i h), \\tag {12}\n$$\n\ncan be written in the form\n\n$$\n\\sum_ {k = 0} ^ {m} b _ {k} \\Delta^ {k} f (0). 
\\tag {13}\n$$\n\nFor, if we equate the last two sums and expand the latter with the aid of (11), we get\n\n$$\n\\sum_ {k = 0} ^ {m} \\left(b _ {k} \\sum_ {j = 0} ^ {k} \\binom {k} {j} (- 1) ^ {k - j} f (j h)\\right) = \\sum_ {i = 0} ^ {m} c _ {i} f (i h) \\tag {14}\n$$\n\nor, equating the coefficients of $f(ih)$ ,\n\n$$\n\\sum_ {k = i} ^ {m} \\binom {k} {i} (- 1) ^ {k - i} b _ {k} = c _ {i}; \\quad i \\in [ 0.. m ]. \\tag {15}\n$$\n\nThis is an upper-triangular unit-diagonal system of linear equations, which can be solved for $b_{m}$ to $b_{0}$ (in that order) by back-substitution (Fornberg\n\n& Lawrence [6, p. 3] show the equations in matrix form). Thus, given the corrections $c_{i}$ , we can find the difference coefficients $b_{k}$ as claimed. [And of course, given the $b_{k}$ , we can use (15) to find the $c_{i}$ .] Substituting (13) for (12) in (9), we obtain the rule in the desired form\n\n$$\n\\int_ {- \\alpha h} ^ {\\infty} f (t) d t \\approx h \\sum_ {i = 0} ^ {\\infty} f (i h) + h \\sum_ {k = 0} ^ {m} b _ {k} \\Delta^ {k} f (0), \\tag {16}\n$$\n\nand the problem is to find the $m + 1$ constants $b_{k}$ which equate the coefficients of the powers of $h$ from $h^1$ to $h^{m + 1}$ .\n\nSo a \"convenient\" choice of $f(t)$ should turn the first sum in (16) into something tractable—e.g., a decaying geometric series. If we choose\n\n$$\nf (t) = e ^ {- s t / h} \\tag {17}\n$$\n\nwhere $\\operatorname{Re}(s) > 0$ , then\n\n$$\n\\sum_ {i = 0} ^ {\\infty} f (i h) = \\sum_ {i = 0} ^ {\\infty} \\left(e ^ {- s}\\right) ^ {i} = \\frac {1}{1 - e ^ {- s}} \\tag {18}\n$$\n\nand\n\n$$\n\\int_ {- \\alpha h} ^ {\\infty} f (t) d t = \\left. \\frac {e ^ {- s t / h}}{- s / h} \\right| _ {t = - \\alpha h} ^ {t \\rightarrow \\infty} = h \\frac {e ^ {\\alpha s}}{s} \\tag {19}\n$$\n\nand\n\n$$\nf (0) = 1, \\tag {20}\n$$\n\nand increasing $t$ by $h$ multiplies $f(t)$ by $e^{-s}$ so that, in operational terms,\n\n$$\n\\Delta = \\left(e ^ {- s} - 1\\right). \\tag {21}\n$$\n\nIf we make these four substitutions in (16), we can cancel $h$ and obtain\n\n$$\n\\frac {e ^ {\\alpha s}}{s} \\approx \\frac {- 1}{e ^ {- s} - 1} + \\sum_ {k = 0} ^ {m} b _ {k} \\left(e ^ {- s} - 1\\right) ^ {k}. \\tag {22}\n$$\n\nNow if we put\n\n$$\nx = e ^ {- s} - 1, \\tag {23}\n$$\n\nso that $e^{\\alpha s} = (1 + x)^{-\\alpha}$ , $s = -\\ln(1 + x)$ , and $x \\to 0^{-}$ as $s \\to 0^{+}$ , then (22) becomes\n\n$$\n\\frac {- (1 + x) ^ {- \\alpha}}{\\ln (1 + x)} \\approx \\frac {- 1}{x} + \\sum_ {k = 0} ^ {m} b _ {k} x ^ {k}, \\tag {24}\n$$\n\ni.e.,\n\n$$\n\\left(\\sum_ {k = 0} ^ {m} b _ {k} x ^ {k}\\right) \\ln (1 + x) \\approx \\frac {\\ln (1 + x)}{x} - (1 + x) ^ {- \\alpha}. \\tag {25}\n$$\n\nTaking the geometric series for $(1 + x)^{-1}$ and integrating term-by-term (putting $x = 0$ to set the constant), we get\n\n$$\n\\ln (1 + x) = \\sum_ {i = 0} ^ {\\infty} \\frac {(- 1) ^ {i}}{i + 1} x ^ {i + 1} \\tag {26}\n$$\n\nwhich, upon dividing by $x$ and renaming the counter, becomes\n\n$$\n\\frac {\\ln (1 + x)}{x} = \\sum_ {j = 0} ^ {\\infty} \\frac {(- 1) ^ {j}}{j + 1} x ^ {j}. \\tag {27}\n$$\n\nThe remaining term in (25) has the binomial expansion\n\n$$\n(1 + x) ^ {- \\alpha} = \\sum_ {j = 0} ^ {\\infty} \\binom {- \\alpha} {j} x ^ {j}. 
\\tag {28}\n$$\n\nWith these three substitutions, equation (25) becomes\n\n$$\n\\left(\\sum_ {k = 0} ^ {m} b _ {k} x ^ {k}\\right) \\left(\\sum_ {i = 0} ^ {\\infty} \\frac {(- 1) ^ {i}}{i + 1} x ^ {i + 1}\\right) \\approx \\sum_ {j = 0} ^ {\\infty} \\left[ \\frac {(- 1) ^ {j}}{j + 1} - \\binom {- \\alpha} {j} \\right] x ^ {j}. \\tag {29}\n$$\n\nNow we can equate coefficients in (29). On the left side, there is no term in $x^0$ , due to the index $i + 1$ . On the right, the coefficient of $x^0$ is\n\n$$\n\\frac {(- 1) ^ {0}}{0 + 1} - \\binom {- \\alpha} {0} = 1 - 1 = 0, \\tag {30}\n$$\n\nwhich agrees with the left side. So the coefficients $b_{k}$ must be fixed so as to match the coefficients of $x^{1}$ to $x^{m + 1}$ . If the product-of-sums on the left is expanded, there will be $j$ terms in $x^{j}$ , with $k$ ranging from 0 to $j - 1$ , and $i$ ranging from $j - 1$ to 0 respectively. So, to equate the coefficients of $x^{j}$ , we take the second sum on the left inside the first sum, select the inner term with $i = j - k - 1$ , and take the outer sum up to $k = j - 1$ , obtaining\n\n$$\n\\sum_ {k = 0} ^ {j - 1} \\frac {(- 1) ^ {j - k - 1}}{j - k} b _ {k} = \\frac {(- 1) ^ {j}}{j + 1} - \\binom {- \\alpha} {j}; \\quad j \\in [ 1.. m + 1 ]. \\tag {31}\n$$\n\nTo minimize confusion, let us rename the dummy index $j$ as $i + 1$ , so that both indices count from zero. This yields\n\n$$\n\\sum_ {k = 0} ^ {i} \\frac {(- 1) ^ {i - k}}{i - k + 1} b _ {k} = \\frac {(- 1) ^ {i + 1}}{i + 2} - \\binom {- \\alpha} {i + 1}; \\quad i \\in [ 0.. m ], \\tag {32}\n$$\n\nin which the coefficient of $b_{k}$ is 1 if $k = i$ , and there are no terms for $k > i$ .\n\nSo (32) is a lower-triangular unit-diagonal system of linear equations in $b_{k}$ , which can be solved for $b_{0}$ to $b_{m}$ (in that order) by forward substitution. This forward order means that we can increase $m$ , adding more equations, without invalidating the solutions found so far. But, having found as many coefficients $b_{k}$ as we want, we then need to find the corrections $c_{i}$ by direct substitution into the upper-triangular system (15), in which higher-index values of $b_{k}$ do affect lower-index values of $c_{i}$ .\n\nFornberg & Lawrence [6, p.4] give explicit formulae showing the variation of $b_{k}$ with their parameter $\\xi$ (our $-\\alpha$ ) for selected $k$ , and the asymptotic behavior of $b_{k}$ for extreme values of $\\xi$ , the latter behavior being relevant to the quest for high-order rules with many well-behaved non-unit weights. The following survey, in contrast, ignores their restriction on the parameter $(0 \\leq \\alpha < 1$ in our notation) and seeks rules with relatively few non-unit weights.\n\n# 4 Examples\n\nAs Fornberg suggests [3], we might reasonably solve the equations in MATLAB if we want the coefficients in decimal form, or in Wolfram Mathematica if we want them in exact rational form. I used a spreadsheet! Values of $k$ were filled across the top, and values of $i$ down the left-hand edge. For the binomial coefficient in (32), the value of $-\\alpha$ was entered manually into the appropriate cell—the most consequential cell in the sheet—and subsequent values were built recursively. The matrix-inversion and matrix-multiplication functions were used where convenient. 
In the course of the inquiry, I computed two columns of corrections $c_{i}$ for which exact rational values were desirable but not obvious; for these I found a common denominator by expanding a decimal value in a continued fraction.\n\n# 4.1 Validation: Reproducing known rules\n\nCase $\\alpha = 0$ : This is the case considered by Fornberg, after Gregory, assuming ab initio that the outermost nodes coincide with the terminals. In his \"Table 1\" [4, p.170], where his $p$ is our $m + 2$ , Fornberg gives the corrections, which are duly reproduced by our eqs. (32) and (15).<sup>2</sup> E.g., the corrections for $m = 2$ are\n\n$$\n- \\frac {5}{8}, \\frac {1}{6}, - \\frac {1}{2 4},\n$$\n\nwhich, when applied from each end of a unit-weight rule with six or more points, give the \"Lacroix rule\" [2, p.131] —a Gregorian rule with its own name, having the same order of accuracy as the \"Simpson\" rules. And if we apply the\n\nfour corrections for $m = 3$ from each end of a sufficiently long unit-weight rule, and then (say) halve $h$ and double $n$ , we find that the error is $O(h^5)$ , which is one better than the \"Simpson\" rules [15]; and so on.\n\nMoreover, Hamming [11, pp. 342-4] notes that if we apply a Gregorian correction sequence from each end of the unit-weight rule of the same length (i.e., if $n = m$ ), we get the standard closed Newton-Cotes rule of that length: $m = 1$ gives the trapezoidal rule, $m = 2$ gives \"Simpson's 1/3 rule\", $m = 3$ gives \"Simpson's 3/8 rule\", $m = 4$ gives \"Boole's rule\", etc. This, he says, is \"perhaps the simplest way to find the actual coefficients\" of the N-C rules [11, pp. 342].\n\nWe should add that if $m$ is even (so that the number of corrections is odd), we get a standard closed N-C rule not only by applying the corrections to $m + 1$ unit weights, but also by applying them to $m + 2$ unit weights. Thus the trapezoidal rule (with two points) is obtained from $m = 1$ (two corrections) or $m = 0$ (one correction), and \"Simpson's 3/8 rule\" (with four points) is obtained from $m = 3$ (four corrections) or $m = 2$ (three corrections), and so on.\n\nBut if $m$ is odd (so that the number of corrections is even), we get a standard closed N-C rule only by applying the corrections to $m + 1$ unit weights, not by applying them to $m + 2$ unit weights. Thus we do not get \"Simpson's 1/3 rule\" (three points) from $m = 1$ (two corrections), nor \"Boole's rule\" (five points) from $m = 3$ (four corrections), although the associated Gregorian rules still have the full expected accuracy, with error $O(h^{m + 2})$ .\n\nThe Gregorian rule for $m = 1$ has an alternative explanation. The \"corrected\" composite trapezoidal rule, which uses derivatives at the terminals, is two orders more accurate than the uncorrected one (that is, it has the same order as the \"Simpson\" rules). 
If the Gregorian corrections for $m = 1$ , namely $-\\frac{7}{12}$ , $\\frac{1}{12}$ , are re-expressed as corrections to the composite trapezoidal rule, they become $-\\frac{1}{12}$ , $\\frac{1}{12}$ ; and this sequence is recognizable as a finite-difference estimate of the end \"correction\" to the trapezoidal rule, taking the derivative at the distance $h/2$ from the terminal instead of $at$ the terminal, and thereby giving an order of accuracy between the uncorrected and \"corrected\" trapezoidal rules.\n\nIn general, the corrections for even $m$ make Gregory's rule exact for degrees up to $m + 1$ , like the closed $(m + 1)$ -point and $(m + 2)$ -point N-C rules, which are the unique closed equispaced rules of their lengths that are exact up to that degree. And the corrections for odd $m$ make Gregory's rule exact for degrees up to $m$ , like the closed $(m + 1)$ -point N-C rule, which is the unique closed equispaced rule of its length that is exact up to that degree. Thus Gregory's method must generate every closed Newton-Cotes rule—twice if the rule has an even number of points (for an odd-degree interpolating polynomial).\n\nYet Gregory's letter to Collins [9] predates every closed N-C rule except the trapezoidal rule and Kepler's barrel rule (also known as Simpson's 1/3 rule). Five years after he wrote that letter, Gregory was dead. Tradition holds that he suffered a stroke while showing his students the moons of Jupiter, whereas the earliest surviving account says: \"By a cold caught in the castle, he grew blind in on[e] night, and shortly after dyed\" [7]. He was 36.\n\nCase $\\alpha = 1$ : This case departs from Gregory/Fornberg by yielding open rules whose outermost nodes are one step in from the terminals. By analogy with the preceding case, the corrections given by eqs. (32) and (15) for even $m$ should yield the open $(m + 1)$ -point and $(m + 2)$ -point N-C rules (listed by Weisstein [16] for up to 7 points), whereas the corrections for odd $m$ should yield the open $(m + 1)$ -point N-C rule. Let us check.\n\nFor $m = 0$ , the sole correction is $c_0 = \\frac{1}{2}$ . When applied (twice) to the 1-point unit-weight rule, this gives the single weight 2, which agrees with the 1-point open N-C rule. Applied from each end of the 2-point unit-weight rule, it gives the weights of the 2-point open N-C rule:\n\n$$\n\\frac {3}{2}, \\frac {3}{2}.\n$$\n\nFor $m = 1$ , the correction sequence is\n\n$$\n\\begin{array}{c} \\frac {1 1}{1 2}, - \\frac {5}{1 2}. \\end{array}\n$$\n\nApplied from each end of the 2-point unit-weight rule, this gives the 2-point open N-C rule again.\n\nFor $m = 2$ , the correction sequence is\n\n$$\n\\frac {3 1}{2 4}, - \\frac {7}{6}, \\frac {3}{8}.\n$$\n\nApplied from each end of the 3-point unit-weight rule, this gives the weight sequence\n\n$$\n\\frac {8}{3}, - \\frac {4}{3}, \\frac {8}{3},\n$$\n\nwhich is the 3-point open N-C rule. 
And applied from each end of the 4-point unit-weight rule, it gives the weight sequence\n\n$$\n\\begin{array}{c} \\frac {5 5}{2 4}, \\frac {5}{2 4}, \\frac {5}{2 4}, \\frac {5 5}{2 4}, \\end{array}\n$$\n\nwhich is the 4-point open N-C rule.\n\nFor $m = 3$ , the correction sequence is\n\n$$\n\\frac {1 1 8 1}{7 2 0}, - \\frac {1 5 9 3}{7 2 0}, \\frac {1 0 2 3}{7 2 0}, - \\frac {2 5 1}{7 2 0}.\n$$\n\nApplied from each end of the 4-point unit-weight rule, this gives the 4-point open N-C rule again.\n\nFor $m = 4$ , the correction sequence is\n\n$$\n\\frac {2 8 3 7}{1 4 4 0}, - \\frac {5 0 8 6}{1 4 4 0}, \\frac {4 8 9 6}{1 4 4 0}, - \\frac {2 4 0 2}{1 4 4 0}, \\frac {4 7 5}{1 4 4 0}.\n$$\n\nApplied from each end of the 5-point unit-weight rule, this gives the simple but oscillatory weight sequence\n\n$$\n\\frac {3 3}{1 0}, - \\frac {2 1}{5}, \\frac {3 9}{5}, - \\frac {2 1}{5}, \\frac {3 3}{1 0},\n$$\n\nwhich is the 5-point open N-C rule. So far: so good.\n\nCase $\\alpha = -1$ : This yields rules for which the outermost nodes are one step outside the range of integration.\n\nOne of these rules is easily confirmed. For $m = 2$ the computed corrections are\n\n$$\n- \\frac {2 5}{2 4}, - \\frac {1}{2}, \\frac {1}{2 4}.\n$$\n\nWhen these are applied to a sufficiently long sequence of unit weights, the first three weights are\n\n$$\n- \\frac {1}{2 4}, \\frac {1}{2}, \\frac {2 5}{2 4}.\n$$\n\nThe respective weights for the composite trapezoidal rule (with the range of integration beginning at the second node) are\n\n$$\n0, \\frac {1}{2}, 1\n$$\n\nso that, by subtraction, the corrections to the composite trapezoidal rule given by $m = 2$ are\n\n$$\n- \\frac {1}{2 4}, 0, \\frac {1}{2 4}.\n$$\n\nThe corresponding contribution to the right-hand side of (1) is\n\n$$\nh \\left[ - \\frac {1}{2 4} f (0) + \\frac {1}{2 4} f (2 h) \\right] = \\frac {1}{1 2} h ^ {2} \\frac {f (2 h) - f (0)}{2 h} \\approx \\frac {1}{1 2} h ^ {2} f ^ {\\prime} (h), \\tag {33}\n$$\n\nwhere the right-hand expression is the standard left-hand correction in the \"corrected trapezoidal rule\" (the argument $h$ is the lower limit of integration). Thus, by taking $\\alpha = -1$ and $m = 2$ , we get a discretized corrected composite trapezoidal rule. An equivalent rule is given by Weisstein [16], who describes it as a \"2-point open extended formula\" without further explanation. For a single interval (single step), this rule has the weights\n\n$$\n- \\frac {1}{2 4}, \\frac {1 3}{2 4}, \\frac {1 3}{2 4}, - \\frac {1}{2 4},\n$$\n\nwhich we have obtained by setting $\\alpha = -1$ , $m = 2$ , and $n = 3$ . But they can also be obtained by setting $\\alpha = 0$ , $m = 2$ , and $n = 1$ , so that $m > n$ ; in the latter case, the corrections that overshoot the unit weights are added to 0 instead of 1.\n\nWe could pursue higher-order discrete corrections to the trapezoidal rule by taking $\\alpha = -2$ and $m = 4$ ; $\\alpha = -3$ and $m = 6$ ; etc. But, having come this far\n\nin order to demonstrate the effectiveness of our method, let us now use it to derive some less familiar rules.\n\n# 4.2 Application: Correcting the midpoint rule\n\nCase $\\alpha = 1/2$ : This yields open rules for which the outermost nodes are a half-step in from the terminals—as in the composite midpoint rule, which of course is a unit-weight rule, so that the corrections to the unit-weight rule can also be called corrections to the midpoint rule. For $m = 0$ , the sole \"correction\" is $c_0 = 0$ , leaving the midpoint rule uncorrected. 
For $m = 1$ , the order improves by 1. For $m \\in \\{2,3,4\\}$ , we get rules that we might actually want to use. As open rules, they avoid evaluating the integrand at the terminals, where it may not be defined. Even if the integrand has a finite limit as we approach the terminal, there is some convenience in not having to deal with a singularity, or the possibility of a singularity, at the terminal, wherefore one might say that open rules are better than closed rules as general-purpose rules.\n\nFor $\\alpha = 1 / 2$ , the corrections for the nominated values of $m$ are\n\n$$\n\\begin{array}{l} m = 2: \\frac {1}{1 2}, - \\frac {1}{8}, \\frac {1}{2 4}; \\\\ m = 3: \\frac {7 0 3}{5 7 6 0}, - \\frac {1 3 8 9}{5 7 6 0}, \\frac {9 0 9}{5 7 6 0}, - \\frac {2 2 3}{5 7 6 0}; \\\\ m = 4: \\begin{array}{c} 9 0 9 \\\\ \\hline 5 7 6 0 \\end{array} , - \\begin{array}{c} 2 2 1 3 \\\\ \\hline 5 7 6 0 \\end{array} , \\begin{array}{c} 2 1 4 5 \\\\ \\hline 5 7 6 0 \\end{array} , - \\begin{array}{c} 1 0 4 7 \\\\ \\hline 5 7 6 0 \\end{array} , \\begin{array}{c} 2 0 6 \\\\ \\hline 5 7 6 0 \\end{array} . \\\\ \\end{array}\n$$\n\nThe corresponding weights (if there are unit weights left over) are\n\n$$\n\\begin{array}{l} m = 2: \\frac {1 3}{1 2}, \\frac {7}{8}, \\frac {2 5}{2 4}, 1, \\dots , 1, \\frac {2 5}{2 4}, \\frac {7}{8}, \\frac {1 3}{1 2}; \\\\ m = 3: \\frac {6 4 6 3}{5 7 6 0}, \\frac {4 3 7 1}{5 7 6 0}, \\frac {6 6 6 9}{5 7 6 0}, \\frac {5 5 3 7}{5 7 6 0}, 1, \\text {e t c .}; \\\\ m = 4: \\begin{array}{c} \\frac {6 6 6 9}{5 7 6 0}, \\frac {3 5 4 7}{5 7 6 0}, \\frac {7 9 0 5}{5 7 6 0}, \\frac {4 7 1 3}{5 7 6 0}, \\frac {5 9 6 6}{5 7 6 0}, 1, \\text {e t c .} \\end{array} \\\\ \\end{array}\n$$\n\nThe denominator for $m = 3$ and $m = 4$ was found by expanding one correction in a continued fraction. (The same approach to $m = 5$ made it clear that the exact rational coefficients would be unwieldy.)\n\nAs a check, it is worth noting that if we apply the corrections for $m = 3$ from each end of the 4-point unit-weight rule, we get the same weights—namely\n\n$$\n\\frac {1 3}{1 2}, \\frac {1 1}{1 2}, \\frac {1 1}{1 2}, \\frac {1 3}{1 2}\n$$\n\n—as if we do likewise with the corrections for $m = 2$ . As a further check, it is easily confirmed by experiment that the resulting rules for $m = 2$ and $m = 3$ are exact (to machine precision) for integrands of degree up to 3 (like the\n\n\"Simpson\" rules) while the resulting rule for $m = 4$ is exact for integrands of degree up to 5 (like \"Boole's rule\").\n\nAnd as a test, if we integrate $f(t) = 7t^6$ from 0 to 1, with 10 nodes, for $m = 2$ , $m = 3$ , and $m = 4$ , and then double the number of nodes, the errors are reduced by the approximate factors 13.5, 27.7, and 47.3 respectively, whence it is not hard to believe that the errors are $O(h^4)$ , $O(h^5)$ , and $O(h^6)$ respectively.\n\nCase $\\alpha = -1 / 2$ : This yields rules for which the outermost nodes are a half-step outside the range of integration.\n\nAgain one rule from the series is easily confirmed. For $m = 1$ , the computed corrections are\n\n$$\n- \\frac {2 3}{2 4}, - \\frac {1}{2 4}.\n$$\n\nWhen these are applied to a sufficiently long sequence of unit weights, the first two weights are\n\n$$\n\\begin{array}{c} \\frac {1}{2 4}, \\frac {2 3}{2 4}. 
\\end{array}\n$$\n\nThe respective weights for the composite midpoint rule (the first midpoint being the second node) are\n\n$$\n0, 1\n$$\n\nso that, by subtraction, the corrections to that composite midpoint rule given by $m = 1$ are\n\n$$\n\\frac {1}{2 4}, - \\frac {1}{2 4}.\n$$\n\nThe corresponding contribution to the right-hand side of (1) is\n\n$$\nh \\left[ \\frac {1}{2 4} f (0) - \\frac {1}{2 4} f (h) \\right] = - \\frac {1}{2 4} h ^ {2} \\frac {f (h) - f (0)}{h} \\approx - \\frac {1}{2 4} h ^ {2} f ^ {\\prime} (h / 2), \\tag {34}\n$$\n\nwhere the right-hand expression is minus one half of the standard left-hand correction in the \"corrected trapezoidal rule\" (the argument $h / 2$ is the lower limit of integration). But it is clear that $-1 / 2$ of the leading-order correction to the trapezoidal rule is the leading-order correction to the midpoint rule; e.g., the \"Simpson\" weights $\\left(\\frac{1}{3}, \\frac{4}{3}, \\frac{1}{3}\\right)$ are $2/3$ of the way from the trapezoidal weights $(1,0,1)$ to the midpoint weights $(0,2,0)$ . So the rule for $\\alpha = -1 / 2$ and $m = 1$ is a discretized corrected composite midpoint rule. We could pursue higher-order discrete corrections to the midpoint rule by taking $\\alpha = -3 / 2$ and $m = 3$ ; $\\alpha = -5 / 2$ and $m = 5$ ; etc. (The resulting rules are unusual in that for odd $m$ , they are exact for integrands of degree up to $m + 2$ and thereafter give errors of $O(h^{m + 3})$ . For they have the same symmetry about the terminal as the above \"discretized corrected composite trapezoidal rule\", in which, for even $m$ , the\n\ncorrection at the terminal is zero, so that the effective number of corrections is one fewer than would normally be required for the same order of accuracy.)\n\nBy the way, I initially derived the rule for $\\alpha = +1/2$ and $m = 2$ by treating it as a corrected midpoint rule, with a different discrete estimate of $f'$ at the lower terminal [14]. But I used only the generalized Gregory/Fornberg approach to find the corresponding rules for $m = 3$ and $m = 4$ .\n\n# 4.3 Asymmetrical rules $(\\beta \\neq \\alpha)$\n\nThe examples given so far have used the same value of $\\alpha$ at each end of the range of integration; that is, in the notation of eqs. (1) to (7), they have set $\\beta = \\alpha$ . But we can also set $\\alpha$ and $\\beta$ independently. This is useful if we have a function sampled at fixed equispaced abscissae and want to be able to integrate it between arbitrary limits.\n\nFor an illustration, let us take $\\alpha = 1 / 2$ and $\\beta = 0$ , so that the rule is midpoint-like from the left and closed-N-C-like from the right (such a rule, being open at one end and closed at the other, is described as semi-open). If any unit weights remain, the weights for $m = 2$ are\n\n$$\n\\frac {1 3}{1 2}, \\frac {7}{8}, \\frac {2 5}{2 4}, 1, \\dots , 1, \\frac {2 3}{2 4}, \\frac {7}{6}, \\frac {3}{8},\n$$\n\nand the weights for $m = 3$ are\n\n$$\n\\frac {6 4 6 3}{5 7 6 0}, \\frac {4 3 7 1}{5 7 6 0}, \\frac {6 6 6 9}{5 7 6 0}, \\frac {5 5 3 7}{5 7 6 0}, 1, \\dots , 1, \\frac {7 3 9}{7 2 0}, \\frac {2 1 1}{2 4 0}, \\frac {2 9 9}{2 4 0}, \\frac {2 5 1}{7 2 0}\n$$\n\nwhere the last four are given by Fornberg [3, p.8]. If we integrate from 0 to 1 with $h = 2 / 19$ (giving 10 nodes), we find that the rule for $m = 2$ is exact for integrands of degree up to 2 (not 3 as for the symmetrical rules) while the rule for $m = 3$ is exact for integrands of degree up to 3 (as for the symmetrical rules). 
For $f(t) = 5t^4$ , if we reduce $h$ from 2/19 (10 nodes) to 2/39 (20 nodes), the error is reduced by a factor 18.5 for $m = 2$ , and 36.4 for $m = 3$ , whence it is not hard to believe that the errors are $O(h^4)$ and $O(h^5)$ respectively; recall that, after eq. (8), the error for a general $f$ was expected to be $O(h^{m + 2})$ , regardless of whether the error canceled for $f$ of degree $m + 1$ for even $m$ .\n\nFor another illustration, one of the \"single interval extrapolative rules\" listed by Weisstein [16], namely\n\n$$\n\\int_ {- h} ^ {0} f (t) d t \\approx h \\left[ \\frac {2 3}{1 2} f (0) - \\frac {4}{3} f (h) + \\frac {5}{1 2} f (2 h) \\right], \\tag {35}\n$$\n\nis recognizable as a backward Adams-Bashforth rule, and can be obtained by setting $\\alpha = 1$ , $\\beta = -2$ , and $m = n = 2$ . In general, forward Adams-Bashforth\n\nweights are given by $\\alpha = -m = -n$ and $\\beta = 1$ , and forward Adams-Moulton weights by $\\alpha = 1 - m = 1 - n$ and $\\beta = 0$ .\n\n# 5 Conclusion\n\nThe working equations (15) and (32) bear repeating, and the former bears switching left-to-right for actual use. So, in summary, the naive unit-weight equispaced quadrature rule may be corrected exactly for integrands up to degree $m$ by adding $m + 1$ \"corrections\" to the weights at each end, starting with the outermost weight and working inward. The corrections are given by\n\n$$\nc _ {i} = \\sum_ {k = i} ^ {m} \\binom {k} {i} (- 1) ^ {k - i} b _ {k}; \\quad i \\in [ 0.. m ], \\tag {36}\n$$\n\nwhere the $m + 1$ coefficients $b_{k}$ are the solutions of the lower-triangular system\n\n$$\n\\sum_ {k = 0} ^ {i} \\frac {(- 1) ^ {i - k}}{i - k + 1} b _ {k} = \\frac {(- 1) ^ {i + 1}}{i + 2} - \\binom {- \\alpha} {i + 1}; \\quad i \\in [ 0.. m ], \\tag {37}\n$$\n\nwhere $\\alpha$ is the distance from the limit-of-integration to the first node, measured inward in step-lengths. The corrections may overlap, in which case they are cumulative (and if any corrections overshoot the unit weights, they are added to 0 instead of 1).\n\nThe values of $\\alpha$ at the two ends need not be the same. If they are the same (\" $\\beta = \\alpha$ \" ) and $m$ is even, the rule is exact for integrands of degree up to $m + 1$ (instead of $m$ ). Be that as it may, the error in the integral is generally $O(h^{m + 2})$ , where $h$ is the step-length.\n\nWhereas the original purpose of this study was to find corrected weights for the composite midpoint rule (which, for better or worse, determined the sign convention for the parameter $\\alpha$ ), a wide variety of closed, open, and extrapolative equispaced rules may be derived from the same two equations by suitably choosing $\\alpha, \\beta$ , and $m$ , and (for overlapping corrections) the initial number of unit weights.\n\n# 6 Acknowledgment\n\nI thank Professor Bengt Fornberg for saving me much embarrassment by giving feedback on an advanced draft of this paper—and for his just-published tribute to James Gregory [5], which led me directly or indirectly to references [1, 2, 7, 11, 13]. Any remaining errors or omissions are my own.\n\n# References\n\n[1] W. Barrett, \"On the remainder term in numerical integration formulae\" (read 15 Nov. 1951), J. London Mathematical Soc., vol. s1-27, no. 4 (Oct. 1952), pp. 465-70; doi.org/10.1112/jlms/s1-27.4.465. \n[2] S.A. De Swardt & J.M. De Villiers, “Gregory type quadrature based on quadratic nodal spline interpolation”, Numerische Mathematik, vol. 85 (2000), pp. 129–53; doi.org/10.1007/s002110050480. \n[3] B. 
Fornberg, \"Gregory's quadrature method\", University of Colorado Boulder, 2011; colorado.edu/amath/sites/default/files/attached-files/gregory.pdf. \n[4] B. Fornberg, “Improving the accuracy of the trapezoidal rule”, SIAM Review, vol. 63, no.1 (2021), pp. 167–80; doi.org/10.1137/18M1229353 (open access). \n[5] B. Fornberg, \"James Gregory's numerical quadrature\", *Astronomy & Geophysics*, vol. 66, no.6 (Dec. 2025), pp. 17-21; doi.org/10.1093/astrogeo/ataf070 (online 28 Nov. 2025). \n[6] B. Fornberg & A. Lawrence, “Enhanced trapezoidal rule for discontinuous functions”, Journal of Computational Physics, vol. 491, 15 Oct. 2023, 112386; doi.org/10.1016/j.jcp.2023.112386. \n[7] Frarghall, letter to Colin Campbell, 24 Jan. 1676, quoted in A. Malet, Studies on James Gregorie (1638-1675), Ph.D. thesis, Princeton, 1989, p. 86. \n[8] C.-E. Froberg, Introduction to Numerical Analysis, Addison-Wesley, 1965; archive.org/details/introductiontonu0000frob. \n[9] J. Gregory, ed. J. Collins, “Extract of a letter from J. Gregory to Collins” (23 Nov. 1670), in S.P. & S.J. Rigaud (eds.), Correspondence of Scientific Men of the Seventeenth Century, vol. 2, Oxford, 1841, archive.org/details/correspondences01rigagoog, pp. 203–12. \n[10] E. Hairer & G. Wanner, Analysis by its History, Springer, 2008. \n[11] R.W. Hamming, Numerical Methods for Scientists and Engineers, 2nd Ed., McGraw-Hill, 1973 (reprinted Mineola, NY: Dover, 1986); archive.org/details/numericalmethods00hamm_0. \n[12] E. Martensen, \"Optimale Fehlerschranken für die Quadraturformel von Gregory\", Zeitschrift für Angewandte Mathematik und Mechanik, vol. 44 (1964), no. 4-5, pp. 159-68; doi.org/10.1002/zamm.19640440403 (cited in [2]). \n[13] E. Martensen, “Zur Restglieddarstellung der Gregoryschen Quadraturformel ungerader Ordnung”, Numerische Mathematik, vol. 15 (1970), pp. 229–33; doi.org/10.1007/BF02168972. \n[14] G.R. Putland, “End-corrected weights for the midpoint rule”, 16 Nov. 2025; linkedin.com/feed/update/urn:li:activity:7395696367576338432. \n[15] G.R. Putland, “Gregorian numerical integration”, 23 Nov. 2025; linkedin.com/feed/update/urn:li:activity:7398111129383862273. \n[16] E.W. Weisstein, “Newton-Cotes Formulas”, Wolfram MathWorld, 2025; mathworld.wolfram.com/Newton-CotesFormulas.html."}
# ON NONPROJECTEDNESS OF SUPERMODULI WITH NEVEU-SCHWARZ AND RAMOND PUNCTURES

ABSTRACT. We study the supermoduli space $\mathfrak{M}_{g,n,2r}$ of Super Riemann Surfaces (SRS) of genus $g$, with $n$ Neveu-Schwarz punctures and $2r$ Ramond punctures. We improve the result of Donagi, Witten, and Ott by showing that the supermoduli space $\mathfrak{M}_{g,n,2r}$ is not projected if $g \geq n + 5r + 3$.

# 1. INTRODUCTION

Supermanifolds are the natural backgrounds for supersymmetric field theories, and are extensively studied mathematical objects. In perturbative superstring theory, particles are replaced by strings; hence, if one is interested in calculating scattering amplitudes to a certain perturbative order, the natural generalization in superstring theory is to consider amplitude contributions coming from Super Riemann surfaces (SRS) of genus up to that order. An example is illustrated by Figure 1. Moreover, if one wants to talk about insertions of spacetime bosons and fermions, one needs to consider Neveu-Schwarz (NS) and Ramond (R) punctures, respectively, on an SRS; these label two different kinds of vertex-operator insertions.

FIGURE 1. With particles replaced by strings, a 1-loop Feynman diagram (left) becomes a Super Riemann surface of genus 1 (right) in superstring perturbation theory.

The formulation of superstring theory using the path integral leads naturally to moduli spaces $\mathfrak{M}_{g,n,2r}$ of SRS with punctures. The role played by Feynman diagrams in ordinary QFT is taken, instead, by (families of) SRS with punctures: one integrates a suitable superstring measure over the supermoduli space of those surfaces. One of the most successful computations to date was done at 2-loop order, using the property that $\mathfrak{M}_g$ is split for $g = 2$. However, a major mathematical complication is that for sufficiently large genus $g$, the supermoduli space $\mathfrak{M}_g$ is not projected, and hence does not split. This result was proved by Donagi and Witten. In fact, they proved an even stronger result when NS punctures are also present: they showed that $\mathfrak{M}_{g,n,0}$ is not projected for $g \geq 2$ and $g - 1 \geq n \geq 1$. Following this work, Donagi and Ott proved that $\mathfrak{M}_{g,0,2r}$ is not projected for $g \geq 5r + 1 \geq 6$, i.e., when R punctures are present but no NS puncture is present.

The purpose of this paper is to improve the result of Donagi, Witten, and Ott by proving nonprojectedness of the supermoduli space $\mathfrak{M}_{g,n,2r}$ for large genus $g$. The main result of this paper is that $\mathfrak{M}_{g,n,2r}$ is not projected if $g \geq n + 5r + 3$, which is given in Theorem 4.2.

This paper is structured as follows: In Section 2, we collect several key results from the literature that we will use repeatedly in our proof. In Section 3, we introduce the setup of our proof and check that several ingredients required in the proof are in place. Our main results, including a proof of Theorem 4.2, are established in Section 4.

# 2. PRELIMINARIES

The theory of supermanifolds and SRS has been studied extensively in the literature. Therefore, in this section we only briefly recall the relevant definitions and refer the reader to the literature for details.

Definition 2.1 (split supermanifold). Let $M$ be a manifold and $V$ a vector bundle over $M$.
Then we define the split supermanifold $S \coloneqq S(M, V)$ to be the locally ringed space $(M, \mathcal{O}_S)$, where $\mathcal{O}_S \coloneqq \mathcal{O}_M \otimes \Lambda^\bullet V^*$ is the sheaf of $\mathcal{O}_M$-valued sections of the exterior algebra $\Lambda^\bullet V^*$. In other words,

$$
S (M, V) = \left(M, \mathcal {O} _ {M} \otimes \Lambda^ {\bullet} V ^ {*}\right).
$$

If we interpret $V$ as a locally free sheaf of $\mathcal{O}_M$-modules, then $\mathcal{O}_S$ is simply $\Lambda^{\bullet}V^{*}$. Then $\mathcal{O}_S$ is $\mathbb{Z}_2$-graded, supercommutative, and its stalks are local rings.

Example 2.2. $S(\mathbb{R}^m,\mathcal{O}_{\mathbb{R}^m}^{\oplus n})$ is just the smooth manifold $\mathbb{R}^m$, with a sheaf of supercommutative rings

$$
\mathcal {O}: U \mapsto C ^ {\infty} (U) [ \theta^ {1}, \dots , \theta^ {n} ],
$$

that sends $U$ to the Grassmann algebra with generators $\theta^1, \ldots, \theta^n$ over $C^\infty(U) = \mathcal{O}_{\mathbb{R}^m}(U)$. This is the structure sheaf. An element of the structure sheaf looks like $f = \sum_I f_I \theta^I$, where $I = (i_1, \ldots, i_k) \subset \{1, \ldots, n\}$ is an index set and $\theta^I = \theta^{i_1} \theta^{i_2} \cdots \theta^{i_k}$. We will call this model space $\mathbb{R}^{m|n}$. Thus $\mathbb{R}^{m|n}$ has even coordinates $x^1, \ldots, x^m$ and odd coordinates $\theta^1, \ldots, \theta^n$. For brevity, we will usually write an element $f$ of the structure sheaf as $f(x^\mu, \theta^\alpha)$, and we will denote the structure sheaf $\mathcal{O}$ by $\mathcal{O}_{\mathbb{R}^{m|n}}$ if we want to be specific. Similarly, one can define $\mathbb{C}^{m|n}$.

Definition 2.3. A supermanifold of dimension $m|n$ is a locally ringed space $(S, \mathcal{O}_S)$ that is locally isomorphic to some model space $S(M, V)$, where $\dim M = m$ and $\operatorname{rank} V = n$. We will usually just refer to this supermanifold as $S$, with the understanding that it also comes with the structure sheaf $\mathcal{O}_S$. The manifold $M$ is called the reduced space or the bosonic reduction of $S$, and we usually write $|S| = M$. A morphism between supermanifolds is simply a morphism of locally ringed spaces.

Definition 2.4. A sheaf $\mathcal{F}$ on a supermanifold $S$ is simply a sheaf on the reduced space $M$, and the sheaf cohomology $H^{k}(S,\mathcal{F})$ of $\mathcal{F}$ on $S$ is simply the sheaf cohomology $H^{k}(M,\mathcal{F})$ on the reduced space.

Definition 2.5 (tangent bundle). The tangent bundle $T_{S}$ of a supermanifold $S$ is the sheaf of $\mathcal{O}_{S}$-modules given by derivations of the structure sheaf, i.e., as sheaves

$$
T _ {S} := \operatorname {D e r} \left(\mathcal {O} _ {S}\right).
$$

It is a $\mathbb{Z}_2$-graded vector bundle, or a locally free sheaf of $\mathcal{O}_S$-modules.

In the simplest example where $S = \mathbb{R}^{m|n}$ or $\mathbb{C}^{m|n}$, $T_{S}$ is the free $\mathcal{O}_S$-module generated by the even tangent vectors $\frac{\partial}{\partial x^{\mu}}$ for $\mu = 1, \ldots, m$ and the odd tangent vectors $\frac{\partial}{\partial \theta^{\alpha}}$ for $\alpha = 1, \ldots, n$.

In general, for a split supermanifold $S = S(M,V)$, the tangent bundle $T_{S}$ need not have distinguished complementary even and odd subbundles: the even and odd parts are not sheaves of $\mathcal{O}_S$-modules. For example, $\frac{\partial}{\partial\theta^1}$ is an odd vector field, but multiplication by an element of $\mathcal{O}_S$ might change parity: consider $\theta^1\frac{\partial}{\partial\theta^1}$.
This is now an even vector field. However, we note that since elements of $\mathcal{O}_M$ are even, if we restrict ourselves to multiplying only by elements of $\mathcal{O}_M$, then parity is preserved. More specifically, what we are saying is that the restriction $T_{S}|_{M}$ of $T_{S}$ to the reduced space $M$ does split. (By restriction to $M$ we mean the pullback, under the natural inclusion $i:M\to S$, of a sheaf of $\mathcal{O}_S$-modules to a sheaf of $\mathcal{O}_M$-modules; in this case, this is accomplished by setting all odd functions to 0.) Explicitly, the splitting is given by

$$
T _ {S, +} := T _ {M},
$$

$$
T _ {S, -}: = V.
$$

For a general supermanifold $S$, we only have local isomorphisms between $S$ and $S(M, V)$ for some manifold $M$ and vector bundle $V$, but there is no canonical identification $T_{S, -} = V$. However, there is still a way to define $T_{S, -}$. We consider the natural inclusion $i: M \to S$, where we view $M$ as the subspace that is locally given by $\theta^{\alpha} = 0$ for all $\alpha$. Then we have a normal bundle sequence

$$
0 \to T _ {M} \to T _ {S} | _ {M} \to N _ {M / S} \to 0.
$$

Now $N_{M / S}$ is a purely odd bundle. Therefore, we may define $T_{S, + } = T_{M}$ and $T_{S, - } = N_{M / S}$. We note that the rank of $T_{S, + } = T_{M}$ is $m|0$ and the rank of $T_{S, - }$ is $0|n$ if $\dim S = m|n$.

Next, we come to the definition of split and projected supermanifolds.

Definition 2.6 (split and projected supermanifold). A supermanifold $S$ is said to be split if it is globally isomorphic to $S(M, V)$ for some manifold $M$ and some vector bundle $V \to M$. In other words, it is globally isomorphic to some split supermanifold.

A supermanifold $S$ is said to be projected if there exists a projection map $\pi : S \to M$ such that the inclusion $i : M \to S$ (which is the identity on the reduced space $i : |M| \to |S|$, with $i^{*} : \mathcal{O}_{S} \to \mathcal{O}_{M}$ given by imposing $\theta^{1} = \dots = \theta^{n} = 0$) is a section of $\pi$, i.e., $\pi i = \mathrm{id}_{M}$ and $i^{*}\pi^{*} = \mathrm{id}_{\mathcal{O}_{M}}$.

Lemma 2.7. Any split supermanifold $S$ is projected.

Proof. Indeed, if $S$ is split, then $\mathcal{O}_S \cong \mathcal{O}_M \otimes \Lambda^\bullet V^*$. Then we have a morphism $\pi^* : \mathcal{O}_M \to \mathcal{O}_S$ by embedding $\mathcal{O}_M$ as the degree-zero part, and it is easy to see that $i^*\pi^* = \mathrm{id}_{\mathcal{O}_M}$. Now, to construct $\pi : S \to M$ with the desired property, we choose the map of underlying spaces $\pi : |S| \to |M|$ to be the identity, and $\pi^* : \mathcal{O}_M \to \pi_*\mathcal{O}_S$ to be the map given above.

A natural question to ask is whether we can characterize the obstruction to splitting or projection of a supermanifold via some cohomology class. The answer is affirmative. The obstruction theory of supermanifolds is developed in [4, 9, 10], and is summarized in [1] in great detail. The result relevant to us is that a necessary condition for a supermanifold $S = (M, \mathcal{O}_S)$ to split and to be projected is that a certain cohomology class, called the second obstruction class,

$$
\omega_ {2} \in H ^ {1} (M, T _ {+} \otimes \Lambda^ {2} V ^ {*})
$$

vanishes. The same construction is also discussed in [4], with a more geometric perspective. The second obstruction class will be useful in concluding that certain supermanifolds are not projected. Indeed, many basic examples of non-projected supermanifolds in [1] were established by showing $\omega_{2} \neq 0$ for those supermanifolds.
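Note that this obstruction group can vanish for purely rank reasons: if the odd rank satisfies $\operatorname{rank} V = n \leq 1$, then

$$
\Lambda^ {2} V ^ {*} = 0, \qquad \text {so} \qquad H ^ {1} (M, T _ {+} \otimes \Lambda^ {2} V ^ {*}) = 0,
$$

and $\omega_{2}$ can only be a nontrivial obstruction when the odd dimension is at least 2.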
However, for complicated supermanifolds, like supermoduli spaces, it is very difficult to directly evaluate $\omega_{2}$ and show that it is nonzero. Fortunately, there are also some indirect results we can use. The next theorem, which is Corollary 2.8 of [1], says that a finite covering of a non-projected supermanifold is also non-projected.

Theorem 2.8. Let $\pi : Y \to X$ be a finite covering map of supermanifolds. If $\omega_2(X) \neq 0$, then $\omega_2(Y) \neq 0$, so $Y$ is not projected.

If we have a submanifold of a supermanifold which we know is not projected, then it is natural to suspect that the big supermanifold itself is not projected. This is not true in general: for example, a supermanifold of dimension $n|2$ is split if and only if it is projected, by Corollary 2.3 of [1]. In particular, $\mathbb{CP}^{n|2}$ is split, but a generic hypersurface of $\mathbb{CP}^{n|2}$ is nonsplit. However, in certain special cases the statement is true, which is made precise by a modified version of Corollary 2.12 of [1].

Theorem 2.9. Let $a: S' \to S$ be an immersion of supermanifolds, with reduced spaces $M' \subset M$, such that the normal sequence of $M'$ decomposes: $a^* T_M \cong T_{M'} \oplus N$, where $N$ is the even component of the normal bundle to the map $a$. If $\omega_2(S') \neq 0$, then we also have $\omega_2(S) \neq 0$, so $S$ is not projected.

The remaining part of this section will be devoted to the central objects of study in this paper, Super Riemann Surfaces and their moduli. Again, much of the theory below is established in [5] and [1]. We will only collect the relevant results here and refer the readers to the literature mentioned for detailed explanations.

Definition 2.10. Let $S$ be a supermanifold. A distribution (i.e., a subsheaf of the tangent sheaf) $\mathcal{D} \subset T_S$ is called a superconformal structure if it is

- an odd distribution, i.e., a subbundle of rank $0|1$;
- everywhere non-integrable.

By Frobenius' theorem, a distribution is integrable if it is closed under the Lie bracket. Since $\mathcal{D}$ is an odd distribution, its Lie bracket will be denoted by the anticommutator. By assumption $\mathcal{D}$ is of rank $0|1$, so it is generated by a single odd vector field $v$. Setting $v^2 \coloneqq \frac{1}{2} \{v, v\}$, we define $\mathcal{D}$ to be integrable if $v^2 \in \mathcal{D}$, and define it to be everywhere non-integrable if $v^2$ is everywhere independent of $v$ over $\mathcal{O}_S$.

Definition 2.11. A Super Riemann Surface (SRS) is a pair $(S, \mathcal{D})$, where $S = (C, \mathcal{O}_S)$ is a compact complex supermanifold of dimension $1|1$ and $\mathcal{D}$ is a superconformal structure.

Remark 2.12. Since $S$ is of dimension $1|1$, $T_{S}$ has rank $1|1$. Since $\mathcal{D}$ has rank $0|1$ and $\mathcal{D}^2$ is even, hence has rank $1|0$, together they generate the entire tangent bundle $T_{S}$. Moreover, we have an isomorphism $T_{S} / \mathcal{D} \cong \mathcal{D}^2$. Therefore, we actually have an exact sequence of sheaves

$$
0 \to \mathcal {D} \to T _ {S} \to \mathcal {D} ^ {2} \to 0.
$$

The next result is from [1]; it gives a direct characterization of the generating vector field of the superconformal structure.
Lemma 2.13. Locally on an SRS one can choose coordinates $z$ and $\theta$, called superconformal coordinates, such that $\mathcal{D}$ is generated by the vector field

$$
v := \frac {\partial}{\partial \theta} + \theta \frac {\partial}{\partial z}.
$$

Now we can do deformation theory on SRS, and we must therefore consider automorphisms of an SRS. Locally, the infinitesimal automorphisms are generated by superconformal vector fields, which preserve the distribution $\mathcal{D}$. In superconformal coordinates, a short calculation shows that the even superconformal vector field takes the form

$$
\chi^ {+} = f (z) \frac {\partial}{\partial z} + \frac {f ^ {\prime} (z)}{2} \theta \frac {\partial}{\partial \theta} \tag {1}
$$

while the odd one takes the form

$$
\chi^ {-} = - g (z) \left(\frac {\partial}{\partial \theta} - \theta \frac {\partial}{\partial z}\right), \tag {2}
$$

where $f, g$ are holomorphic (even) functions on $S$ that depend on $z$ only, not on $\theta$. We denote the sheaf of superconformal vector fields on $S$ by $\mathcal{A}_S$, which is also the sheaf of infinitesimal automorphisms of $S$. Now we may consider the moduli space $\mathfrak{M}_g$ of SRS of genus $g$. Given a point $S \in \mathfrak{M}_g$, we want to know what the tangent space $T_S\mathfrak{M}_g$ is. A standard argument in deformation theory shows

$$
T _ {S} \mathfrak {M} _ {g} = H ^ {1} (S, \mathcal {A} _ {S}).
$$

The notion of a puncture on an ordinary Riemann surface has two analogs on an SRS. In string theory, they are known as the Neveu-Schwarz puncture (NS) and the Ramond puncture (R). An NS puncture on an SRS $S$ is the obvious analog of a puncture on an ordinary Riemann surface: it is simply the choice of a point in $S$.

If $S$ is an SRS with $n$ marked points $p_1, \dots, p_n$, or in divisor form $P = p_1 + \dots + p_n$, then it is an element of $\mathfrak{M}_{g,n,0}$. The infinitesimal automorphisms of $S$ must preserve the superconformal structure $\mathcal{D}$ and preserve the marked points, which imposes the extra condition on superconformal vector fields of the form (1) and (2) that they must vanish at the marked points, i.e., $f(0) = g(0) = 0$ in local coordinates if the marked point is given by the equation $z = 0$. Therefore, we conclude that

$$
T _ {S} \mathfrak {M} _ {g, n, 0} = H ^ {1} (S, \mathcal {A} _ {S} (- P)).
$$

Now we introduce the second kind of puncture, the Ramond puncture. In this situation, we assume the underlying supermanifold $(C,\mathcal{O}_S)$ is still smooth, but the odd distribution $\mathcal{D}$ is no longer everywhere non-integrable. The generator $v$ in the local form of Lemma 2.13 is replaced by

$$
v := \frac {\partial}{\partial \theta} + z \theta \frac {\partial}{\partial z}.
$$

In other words, $\mathcal{D}$ fails to be a maximally non-integrable distribution along the divisor $z = 0$.
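Explicitly, using $\partial_{\theta}^{2} = 0$, $\theta^{2} = 0$, and the fact that $\partial_{z}$ and $\partial_{\theta}$ commute, one computes for the generator of Lemma 2.13 that $v^{2} = \frac{1}{2}\{v,v\} = \frac{\partial}{\partial z}$, so $v$ and $v^{2}$ span $T_{S}$ everywhere. For the Ramond form of the generator, the same computation gives

$$
v ^ {2} = \frac {1}{2} \{v, v \} = z \frac {\partial}{\partial z},
$$

which no longer spans a complement of $\mathcal{D}$ along $z = 0$; this is exactly the failure of non-integrability described above.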
Therefore, an SRS with a Ramond puncture is technically no longer an SRS. We can also have multiple Ramond punctures, by which we mean that $\mathcal{D}$ fails to be maximally non-integrable along multiple divisors, and near each divisor we can find local coordinates as described above. The topology of an SRS restricts the number of Ramond punctures to always be even. In [1], it was shown that in the presence of Ramond punctures $R = q_{1} + \dots +q_{2r}$, the sheaf of superconformal vector fields is given by

$$
\mathcal {A} _ {S} \cong (T _ {S} / \mathcal {D}) \otimes \mathcal {O} _ {S} (- R). \tag {3}
$$

Therefore, for an SRS $S$ with NS punctures $P = p_{1} + \dots +p_{n}$ and R punctures $R = q_{1} + \dots +q_{2r}$, we still have

$$
T _ {S} \mathfrak {M} _ {g, n, 2 r} = H ^ {1} (S, \mathcal {A} _ {S} (- P)),
$$

but now $\mathcal{A}_S$ is given by (3).

# 3. SETTINGS

Our setup will be the following: Let $\pi : Y \to X$ be a branched cover of SRS. We use $g$ to denote the genus of $Y$. Let us fix $g_0 = 2$ to be the genus of $X$ for the rest of the paper, and we require $X$ to have only one branch point. Let $d$ be the degree of $\pi$. Let $p \in X$ be the branch point, and write $\pi^{-1}(p) = \{q_1, \dots, q_s\}$. Let $a_j$ denote the local degree of $\pi$ at $q_j$, for $1 \leq j \leq s$. Then we define the ramification pattern $\rho = (a_1, \dots, a_s)$. Ramification points with odd local degree will correspond to NS punctures on $Y$, while ramification points of even local degree will correspond to R punctures on $Y$ after blow-ups. See Section 3.4 of [1] for details. There will always be an even number of R punctures [3], so we can denote the number of R punctures on $Y$ by $2r$, and we let $n$ denote the number of NS punctures on $Y$. So $n + 2r = s$.

Now we allow the curves $Y, X$ to vary continuously, hence the covering map $\pi : Y \to X$ and the branch point in $X$ also vary continuously. But we require the genera $g, g_0$ of $Y, X$ and the ramification pattern $\rho$ to be fixed throughout the process. There is a moduli space $\mathfrak{M}_{d,\rho}$ parameterizing all such branched coverings.

The map

$$
\Phi : \mathfrak {M} _ {d, \rho} \to \mathfrak {M} _ {2, 1, 0} \quad (\pi : Y \to X) \mapsto X
$$

is a finite covering by Lemma 14 of [3], and it was already established in [1] that $\mathfrak{M}_{2,1,0}$ is not projected. Therefore, $\mathfrak{M}_{d,\rho}$ is not projected by Theorem 2.8. Moreover,

Proposition 3.1. The map

$$
\Psi : \mathfrak {M} _ {d, \rho} \rightarrow \mathfrak {M} _ {g, n, 2 r} \quad (\pi : Y \to X) \mapsto Y
$$

is an immersion of supermanifolds.

Proof. By Lemma 15 of [3], the composition $F \circ \Psi$ of $\Psi : \mathfrak{M}_{d,\rho} \to \mathfrak{M}_{g,n,2r}$ with the forgetful map $F: \mathfrak{M}_{g,n,2r} \to \mathfrak{M}_{g,0,2r}$ is an immersion. Hence $\Psi$ itself must be an immersion.

Moreover, the associated normal bundle sequence of $\Psi$ splits, which we will now check. The proof is basically the same as the proof of Proposition 5.2 of [1], but we make it slightly more explicit by constructing a concrete lifting map of tangent vector fields from the base to the branched cover in the case where the cover is Galois. Hence, we include the proof here.

Proposition 3.2. Let $\psi : \mathcal{SM}_{d,\rho} \to \mathcal{SM}_{g,n,2r}$ denote the bosonic reduction of the map $\Psi$. Then the induced normal bundle sequence on the reduced space

$$
0 \to T _ {\mathcal {S M} _ {d, \rho}} \to \psi^ {*} T _ {\mathcal {S M} _ {g, n, 2 r}} \to N \to 0
$$

is a split exact sequence.

Proof. We pick a branched cover $\pi : (Y, \mathcal{L}_Y) \to (X, \mathcal{L}_X)$ of spin curves, representing a point in $\mathcal{SM}_{d,\rho}$. First, we assume that the cover is $G$-Galois. Under $\psi$ this point goes to $(Y, \mathcal{L}_Y)$. We note that if $R_{Y}$ is the divisor on $Y$ corresponding to the Ramond punctures, then $\mathcal{L}_Y^2 \cong K_Y(-R_Y)$. Deformation theory gives an identification of $\psi$ at this point:

$$
\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \to H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})).
$$
Since $\psi$ takes the deformation of the base to the corresponding uniquely determined deformation of the branched cover, it is induced by lifting vector fields. In fact, there is an injection of sheaves

$$
L: T _ {X} (- P _ {X}) \rightarrow \pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) \tag {4}
$$

whose induced map on $H^1$ is $\psi$. To see that $L$ is an injection, we cover $X$ by small open sets $X = \bigcup_{\alpha} U_{\alpha}$, such that $U_{\alpha}$ and $\pi^{-1}(U_{\alpha})$ contain at most one branch (or ramification) point for all $\alpha$. If $U_{\alpha}$ does not contain any marked points, then by choosing a sufficiently small open cover, we may assume $\pi$ maps each connected component of $\pi^{-1}(U_{\alpha})$ isomorphically onto $U_{\alpha}$, hence there is no problem constructing $L$ on $U_{\alpha}$.

Now we analyze the situation where $q \in \pi^{-1}(U_{\alpha})$ is a ramification point and $p = \pi(q) \in U_{\alpha}$ is a branch point. Let $e_q = k > 1$ be the local degree of $q$. Then locally we may choose holomorphic coordinates $w$ on $Y$ and $z$ on $X$ such that $w(q) = z(p) = 0$, and such that locally $\pi$ is given by $z = w^k$. To construct $L$, note that locally a section in $\Gamma(U_{\alpha}, T_X(-P_X))$ is of the form

$$
\chi = f (z) \frac {\partial}{\partial z}
$$

where $f$ is a holomorphic function with $f(0) = 0$. A section in $\Gamma(U_{\alpha}, \pi_* T_Y(-P_Y - R_Y))$ is of the form

$$
\tilde {\chi} = g (w) \frac {\partial}{\partial w}
$$

with $g(0) = 0$, where now $\tilde{\chi}$ is viewed as a vector field on $\pi^{-1}(U_{\alpha}) \subset Y$. Now the condition that $\tilde{\chi}$ is a lift of $\chi$, namely $\pi_{*}\tilde{\chi} = \chi$, reads

$$
\pi_ {*} \tilde {\chi} = g (w) \frac {\partial z}{\partial w} \frac {\partial}{\partial z} = k w ^ {k - 1} g (w) \frac {\partial}{\partial z} = f (w ^ {k}) \frac {\partial}{\partial z} = \chi ,
$$

which implies

$$
g (w) = f \left(w ^ {k}\right) / k w ^ {k - 1}.
$$

Note that the above expression is a well-defined holomorphic function: since $f(0) = 0$, we have $f(w^{k}) = w^{k}h(w)$ for some holomorphic $h$, and furthermore $g(0) = 0$. It is also clear from the expression that $g$ is uniquely determined by $f$, hence $\tilde{\chi}$ is uniquely determined by $\chi$. Thus, we can also construct $L$ on this small neighborhood $U_{\alpha}$, and it is injective there. Therefore, this construction gives rise to an injection by lifting vector fields (infinitesimal automorphisms) on sufficiently small open sets near each ramification point, and these lifts are compatible on the intersections of small open sets. Hence, this glues to an injection of sheaves (4). The induced map on $H^{1}$ is precisely

$$
\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \to H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})).
$$

We also note that if $\pi : Y \to X$ is $G$-Galois and $\tilde{\chi}$ is a lift of $\chi \in \Gamma(X, T_X)$, then $\tilde{\chi}$ must be a $G$-invariant vector field. Hence, the lift in (4) refines to

$$
L: T _ {X} (- P _ {X}) \to \pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G} \oplus \mathcal {Q},
$$

where the target is decomposed into the $G$-invariant part and a complementary part $\mathcal{Q}$, and $L$ lands in the $G$-invariant summand. The inclusion

$$
T _ {X} (- P _ {X}) \rightarrow \pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}
$$

is actually an isomorphism, where the two directions are given by lifting and projection.
Hence, taking $H^1$ we have

$$
\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \rightarrow H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) ^ {G} \oplus H ^ {1} (X, \mathcal {Q}),
$$

where we used

$$
H ^ {1} (X, \pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}) = H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}) \cong H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) ^ {G},
$$

and $\psi$ is given by the inclusion of the $G$-invariant summand. Hence, it immediately follows that the normal bundle sequence splits.

Finally, in the case where the covering is not Galois, we pass to the Galois closure of $\pi : Y \to X$. Let $\hat{Y}$ be the $G$-Galois closure of $Y$. Then there is a covering $\hat{\pi} : \hat{Y} \to Y$ with Galois group $H$, where $H < G$ is the stabilizer subgroup of an unramified point of $Y$. The pullback $\hat{\pi}^*$ includes the cohomology of $Y$ as the $H$-invariant part of the cohomology of $\hat{Y}$:

$$
\hat {\pi} ^ {*}: H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) \cong H ^ {1} (\hat {Y}, T _ {\hat {Y}} (- P _ {\hat {Y}} - R _ {\hat {Y}})) ^ {H} \hookrightarrow H ^ {1} (\hat {Y}, T _ {\hat {Y}} (- P _ {\hat {Y}} - R _ {\hat {Y}})).
$$

Introducing the notation $D_{\hat{Y}} = P_{\hat{Y}} + R_{\hat{Y}}$ and defining $D_Y, D_X$ similarly, we have a commutative diagram with exact rows given by the normal bundle sequences evaluated at the corresponding fibers:

$$
\begin{array}{ccc}0\longrightarrow H^{1}(X,T_{X}(-D_{X})) & \longrightarrow H^{1}(Y,T_{Y}(-D_{Y})) & \longrightarrow N\longrightarrow 0\\ \Big\| & \hat{\pi}^{*}\Big{\downarrow} & \Big{\downarrow}i\\ 0\longrightarrow H^{1}(X,T_{X}(-D_{X})) & \longrightarrow H^{1}(\hat{Y},T_{\hat{Y}}(-D_{\hat{Y}})) & \longrightarrow \hat{N}\longrightarrow 0 \end{array}
$$

where $i: N \to \hat{N}$ is the unique map that makes the square commute. A simple diagram chase shows that $i$ is injective, hence the spaces in the upper row can be viewed as subspaces of the corresponding spaces in the bottom row. By the previous argument, we already know that there exists a splitting $s: \hat{N} \to H^{1}(\hat{Y}, T_{\hat{Y}}(-D_{\hat{Y}}))$. Restricting this splitting to the subspace gives an induced splitting $N \to H^{1}(Y, T_{Y}(-D_{Y}))$, concluding the proof.

Hence, we conclude that $\mathfrak{M}_{g,n,2r}$ is not projected by Theorem 2.9. The necessary and sufficient condition for such an immersion $\Psi$ to exist, or equivalently for the tuple $(g,n,r)$ to be realizable, using the terminology of [3], is that the genus $g$ determined by the Hurwitz formula

$$
g = 1 + d \left(g _ {0} - 1\right) + \frac {1}{2} \sum_ {j = 1} ^ {s} \left(a _ {j} - 1\right) \tag {5}
$$

is nonnegative, where we recall that $s = n + 2r$, and $\rho = (a_1, \dots, a_s)$ is the ramification pattern of $\pi$, with each $a_j$ a local degree such that $\sum_{j} a_j = d$. Moreover, Theorem 4 of [11] ensures this is the only constraint: as long as the configuration $\rho, d, g_0$ makes $g \geq 0$ in (5), there exists a branched cover $\pi : Y \to X$ with the specified behavior.
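As a concrete illustration of the numerology in (5), take $g_0 = 2$, $d = 5$, and $\rho = (1, 2, 2)$, so that $s = 3$ with one odd part ($n = 1$ NS puncture) and two even parts ($2r = 2$ R punctures). The Hurwitz formula gives

$$
g = 1 + 5 (2 - 1) + \frac {1}{2} \left((1 - 1) + (2 - 1) + (2 - 1)\right) = 7,
$$

so such a cover realizes the tuple $(g, n, r) = (7, 1, 1)$.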
Our next task is to determine, to the best of our ability, the condition for the tuple $(g,n,r)$ to be realizable. This is given by Theorem 4.1 and Theorem 4.2 in the next section.

# 4. PROOF OF MAIN RESULT

Using the minimal model above, with $g_0 = 2$ and only one branch point on $X$, we can prove our first nonprojectedness theorem for supermoduli space. The proof is combinatorial.

Theorem 4.1. Let $g, n, r$ be positive integers. The supermoduli space $\mathfrak{M}_{g,n,2r}$ is not projected if the following two conditions are met:

(1) genus bound: $g \geq n + 5r + 1$;
(2) congruence condition: $2g - 2 + n + 2r \equiv 0 \mod 3$.

Proof. Substituting $g_0 = 2$ in (5) shows that $2g = 2 + 3d - n - 2r$. Hence, we must have $2g - 2 + n + 2r = 3d \equiv 0 \mod 3$. This shows that if $(g,n,r)$ is a valid tuple arising from a branched cover of a $g_0 = 2$ SRS with one puncture, then the congruence condition must be satisfied. To derive the genus bound, we want to minimize $g$ according to (5) with $g_0 = 2$ and given $n,r > 0$. The minimal choice of the ramification pattern is

$$
\rho_ {\min } = (\underbrace {1 , \ldots , 1} _ {n}, \underbrace {2 , \ldots , 2} _ {2 r}).
$$

The corresponding minimal degree of the branched cover is $d_{\mathrm{min}} = 1 \cdot n + 2 \cdot 2r = n + 4r$. By the Hurwitz formula, the minimal genus is given by $2g_{\mathrm{min}} - 2 = 3d_{\mathrm{min}} - n - 2r$. Solving gives $g_{\mathrm{min}} = n + 5r + 1$, which is exactly the genus bound. This shows that for the tuple $(g,n,r)$ to be realizable, the genus bound and the congruence condition are necessary.

Now it remains to show that if these two conditions are met, then there exists a branched cover $\pi : Y \to X$ with the specified behavior. Given the congruence condition, we note that the degree of the cover

$$
d = \frac {1}{3} (2 g - 2 + n + 2 r)
$$

is an integer. Moreover, the genus bound $g \geq n + 5r + 1$ implies $2g - 2 \geq 2n + 10r$. Substituting this into the expression above gives

$$
3 d = 2 g - 2 + n + 2 r \geq (2 n + 1 0 r) + n + 2 r = 3 n + 1 2 r.
$$

Hence $d \geq n + 4r = d_{\mathrm{min}}$, i.e., $d$ is at least as large as the minimal possible degree. We must now show that there exists a partition $\rho$ of $d$ that has exactly $n$ odd parts and $2r$ even parts. The proof is constructive. Let $\delta_d = d - d_{\mathrm{min}}$. The calculation above shows that

$$
3 \delta_ {d} = 3 (d - d _ {\min }) = (2 g - 2 + n + 2 r) - (2 g _ {\min } - 2 + n + 2 r) = 2 (g - g _ {\min }).
$$

This implies that $3\delta_{d}$ is an even number. Since 3 is odd, $\delta_{d}$ itself must be an even number. Let $\delta_{d} = 2k$ for some nonnegative integer $k$. We now need to find a partition of $d = d_{\min} + 2k$ with the correct number of even and odd parts. We start with the minimal partition $\rho_{\min}$. We can modify this partition to increase its sum by an even number, $2k$, without changing the parity count of its parts. For example, we can replace a part $a_{j} = 1$ with the part $a_{j} + 2 = 3$. This increases the total sum by 2, and the new part is still odd, so the parity count of the ramification pattern is preserved while the total degree increases by 2. By repeatedly applying such modifications $k$ times, we can increase the sum of the partition from $d_{\min}$ to $d = d_{\min} + 2k$ while preserving the number of even and odd parts. The resulting partition $\rho$ has sum $d$ and corresponds to the puncture configuration $(n, 2r)$.

By construction, this partition, when used in the Hurwitz formula, yields the genus $g$. To see this, we note that

$$
g _ {\min } = 1 + d _ {\min } + \frac {1}{2} \sum_ {j} \left(a _ {j, \min } - 1\right). \tag {6}
$$

Since

$$
6 k = 3 \delta_ {d} = 2 (g - g _ {\mathrm {m i n}}),
$$

we have $g - g_{\mathrm{min}} = 3k$, and we also have

$$
(d - d _ {\min }) + \frac {1}{2} \sum_ {j = 1} ^ {s} \left(a _ {j} - a _ {j, \min }\right) = 3 k = g - g _ {\min }. \tag {7}
$$
Now adding (6) and (7) gives the Hurwitz formula $g = 1 + d + \frac{1}{2}\sum_{j=1}^{s}(a_j - 1)$, as desired. This concludes the proof.

Now we aim to remove the congruence condition in Theorem 4.1 at the cost of a slightly stronger genus bound. Suppose we already know that $\mathfrak{M}_{g,n,2r}$ is not projected via the construction above. We then consider the forgetful map

$$
F: \mathfrak {M} _ {g, n, 2 r} \to \mathfrak {M} _ {g, n - i, 2 r}
$$

for $i = 1,2$, and show that the composition

$$
\Psi^ {\prime} = F \circ \Psi : \mathfrak {M} _ {d, \rho} \to \mathfrak {M} _ {g, n - i, 2 r}
$$

is still an immersion and that its bosonic normal bundle sequence splits. Since $\Phi : \mathfrak{M}_{d,\rho} \to \mathfrak{M}_{2,1,0}$ is still a finite covering map, $\mathfrak{M}_{d,\rho}$ is not projected, and we conclude that $\mathfrak{M}_{g,n - i,2r}$ is not projected by Theorem 2.8 and Theorem 2.9. The tuple $(g, n-i, r)$ may well violate the congruence condition in Theorem 4.1, so in this way we can essentially remove that condition.

In other words, the strategy is as follows. Given a triple $(g,n,r)$, our goal is to show that $\mathfrak{M}_{g,n,2r}$ is not projected. We want to find a helper tuple $(g,n_0,r)$ that satisfies the original two conditions in Theorem 4.1, and such that we have the forgetful immersion $\Psi^{\prime}:\mathfrak{M}_{d,\rho}\to \mathfrak{M}_{g,n_0,2r}\to \mathfrak{M}_{g,n,2r}$, which would then show that $\mathfrak{M}_{g,n,2r}$ is not projected. Therefore, we must find a new genus bound for $(g,n,r)$ such that if the genus bound is met, then such a helper tuple $(g,n_0,r)$ is guaranteed to exist.

For a given $(g,n,r)$ with fixed $g,r$, the helper tuple $(g,n_0,r)$ must satisfy

$$
n \leq n _ {0} \leq g - 5 r - 1,
$$

where the first inequality is because we need the existence of the forgetful map, and the second inequality comes from the genus bound in Theorem 4.1. Now the congruence condition says that we must have

$$
n _ {0} \equiv - 2 g + 2 - 2 r \mod 3.
$$

Since $g$ and $r$ are fixed, the number $-2g + 2 - 2r$ is also fixed. Thus, the problem reduces to the following: can we find an integer $n_0$ in the interval $[n, g - 5r - 1]$ with a specified residue mod 3? Clearly, this can be done if $[n, g - 5r - 1]$ contains at least three integers. In other words, we only need $g - 5r - 1 - n + 1 \geq 3$, or equivalently $g \geq n + 5r + 3$. Hence, we obtain

Theorem 4.2. Let $g, n, r$ be positive integers. The supermoduli space $\mathfrak{M}_{g,n,2r}$ is not projected if $g \geq n + 5r + 3$.
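For example, consider $(g, n, r) = (9, 1, 1)$, which satisfies $g = n + 5r + 3$ but has $2g - 2 + n + 2r = 19 \not\equiv 0 \mod 3$, so Theorem 4.1 does not apply directly. The interval $[n, g - 5r - 1] = [1, 3]$ contains $n_0 = 3$, and the helper tuple $(g, n_0, r) = (9, 3, 1)$ satisfies both conditions of Theorem 4.1, with $d = 7$ and $\rho = (1, 1, 1, 2, 2)$; indeed,

$$
g = 1 + 7 (2 - 1) + \frac {1}{2} \left(0 + 0 + 0 + 1 + 1\right) = 9.
$$

Forgetting two NS punctures then shows that $\mathfrak{M}_{9,1,2}$ is not projected.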
Now it remains to show that $\Psi'$ is an immersion and that the bosonic normal bundle sequence splits for $i = 1,2$, as required by Theorem 2.9.

Proposition 4.3. The morphism $\Psi^{\prime}:\mathfrak{M}_{d,\rho}\xrightarrow{\Psi}\mathfrak{M}_{g,n,2r}\xrightarrow{F}\mathfrak{M}_{g,n - i,2r}$ is an immersion of supermanifolds, for $i = 1,2$.

Proof. This follows immediately from Lemma 15 of [3], which states that $\Psi'$ composed with the forgetful map $\mathfrak{M}_{g,n-i,2r} \to \mathfrak{M}_{g,0,2r}$ is an immersion. Hence $\Psi'$ itself is an immersion.

Proposition 4.4. The normal bundle sequence associated with the bosonic reduction of

$$
\Psi^ {\prime}: \mathfrak {M} _ {d, \rho} \xrightarrow {\Psi} \mathfrak {M} _ {g, n, 2 r} \xrightarrow {F} \mathfrak {M} _ {g, n - i, 2 r}
$$

splits, for $i = 1,2$.

Proof. Since $\Psi : \mathfrak{M}_{d,\rho} \to \mathfrak{M}_{g,n,2r}$ is an immersion, its bosonic reduction $\psi : \mathcal{SM}_{d,\rho} \to \mathcal{SM}_{g,n,2r}$ is still an immersion. Moreover, we know the normal bundle sequence of $\psi$ splits by Proposition 3.2. The map $F: \mathfrak{M}_{g,n,2r} \to \mathfrak{M}_{g,n-i,2r}$ is a fibration, hence so is its bosonic reduction $f: \mathcal{SM}_{g,n,2r} \to \mathcal{SM}_{g,n-i,2r}$. Moreover, by Proposition 4.3 we know $f \circ \psi: \mathcal{SM}_{d,\rho} \to \mathcal{SM}_{g,n-i,2r}$ is still an immersion. Applying the following lemma concludes the proof. The lemma was established in the proof of Theorem 1.3 of [1, p.48]; the proof of this statement is not very hard, so we give it here.

Lemma 4.5. Suppose that $i: X \to Y$ is an immersion and $f: Y \to Z$ is a fibration, such that $f \circ i: X \to Z$ is still an immersion. If the normal bundle sequence of $i$ splits, then the normal bundle sequence of $f \circ i$ also splits.

Proof. The differential $\mathrm{d}f: T_{Y} \to f^{*}T_{Z}$ gives a bundle map over $Y$, with kernel $T_{Y / Z}$. Pulling back along $i$ we get a bundle map $i^{*}\mathrm{d}f: i^{*}T_{Y} \to (f \circ i)^{*}T_{Z}$ over $X$, with kernel $i^{*}T_{Y / Z}$. This gives a commutative diagram with exact rows given by the normal bundle sequences:

$$
\begin{array}{ccc}0\longrightarrow T_{X} & \longrightarrow i^{*}T_{Y} & \longrightarrow N_{X,Y}\longrightarrow 0\\ \Big\| & i^{*}\mathrm{d}f\Big{\downarrow} & \Big{\downarrow}\\ 0\longrightarrow T_{X} & \longrightarrow (f\circ i)^{*}T_{Z} & \longrightarrow N_{X,Z}\longrightarrow 0 \end{array}
$$

A direct application of the snake lemma shows that $\ker (N_{X,Y}\to N_{X,Z})\cong i^{*}T_{Y / Z}$. Now, we are given a splitting $s:N_{X,Y}\rightarrow i^{*}T_{Y}$ of the top row. Composing it with the quotient map, we obtain a new map

$$
s ^ {\prime}: N _ {X, Y} \to i ^ {*} T _ {Y} \to i ^ {*} T _ {Y} / i ^ {*} T _ {Y / Z}.
$$

Because $f$ is a submersion, $i^{*}\mathrm{d}f: i^{*}T_{Y} \to (f \circ i)^{*}T_{Z}$ is surjective; hence, passing to the quotient, the map $N_{X,Y} \to N_{X,Z}$ is still surjective with kernel $i^{*}T_{Y / Z}$, as discussed above. Therefore, we see that $N_{X,Z} = N_{X,Y} / i^{*}T_{Y / Z}$. Therefore, the map $s'$ above factors through this quotient, and we get a map

$$
s ^ {\prime}: N _ {X, Z} \to i ^ {*} T _ {Y} / i ^ {*} T _ {Y / Z}.
$$

But pulling back the relative tangent sequence $0 \to T_{Y / Z} \to T_Y \to f^* T_Z \to 0$ gives a short exact sequence

$$
0 \rightarrow i ^ {*} T _ {Y / Z} \rightarrow i ^ {*} T _ {Y} \rightarrow (f \circ i) ^ {*} T _ {Z} \rightarrow 0.
$$

Hence we conclude that $(f\circ i)^{*}T_{Z} = i^{*}T_{Y} / i^{*}T_{Y / Z}$, and the map $s^\prime$ is actually a map

$$
s ^ {\prime}: N _ {X, Z} \to (f \circ i) ^ {*} T _ {Z},
$$

which we claim to be the desired splitting. Indeed, $s'$ is obtained by taking the right inverse $s$ of the projection $i^* T_Y \to N_{X,Y}$ and then passing to the quotient, hence it is a splitting. This concludes the proof.
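The combinatorial construction in the proof of Theorem 4.1 is easy to check mechanically. The following short sketch (added for illustration; the function names are ours, and the genus-2, one-branch-point setup of Section 3 is assumed) builds the ramification pattern $\rho$ for an admissible tuple $(g, n, r)$ and verifies it against the Hurwitz formula (5).

```python
# Illustrative sketch of the construction in the proof of Theorem 4.1.
# Assumes g_0 = 2 and a single branch point; function names are ours.

def ramification_pattern(g, n, r):
    """Return a pattern rho with n odd and 2r even parts realizing genus g,
    or None if (g, n, r) fails the genus bound or the congruence condition."""
    if g < n + 5 * r + 1 or (2 * g - 2 + n + 2 * r) % 3 != 0:
        return None
    d = (2 * g - 2 + n + 2 * r) // 3      # degree of the branched cover
    rho = [1] * n + [2] * (2 * r)         # minimal pattern rho_min, of sum n + 4r
    k = (d - sum(rho)) // 2               # delta_d = 2k is even, as shown in the proof
    rho[0] += 2 * k                       # bump one part by 2k: parity count preserved
    return rho

def hurwitz_genus(rho, g0=2):
    """Genus of the cover computed from the Hurwitz formula (5)."""
    d = sum(rho)
    return 1 + d * (g0 - 1) + sum(a - 1 for a in rho) // 2

# Example: (g, n, r) = (7, 1, 1) gives d = 5 and rho = [1, 2, 2].
rho = ramification_pattern(7, 1, 1)
assert rho == [1, 2, 2] and hurwitz_genus(rho) == 7
```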
arxiv_math
2025-12-05T00:00:00Z
https://arxiv.org/pdf/2512.15727
{"title": "On the Nonprojectedness of Supermoduli with Neveu-Schwarz and Ramond Punctures", "raw_content": "# ON NONPROJECTEDNESS OF SUPERMODULI WITH NEVEU-SCHWARZ AND RAMOND PUNCTURES\n\nTIANYIWANG\n\nABSTRACT. We study the supermoduli space $\\mathfrak{M}_{g,n,2r}$ of Super Riemann Surfaces (SRS) of genus $g$ , with $n$ Neveu-Schwarz punctures and $2r$ Ramond punctures. We improve the result of Donagi, Witten, and Ott [1, 2, 3] by showing that the supermoduli space $\\mathfrak{M}_{g,n,2r}$ is not projected if $g \\geq n + 5r + 3$ .\n\n# 1. INTRODUCTION\n\nSupermanifolds are the natural backgrounds for supersymmetric field theories, and are extensively studied mathematical objects [4]. In perturbative superstring theory, particles are replaced by strings; hence, if one is interested in calculating scattering amplitudes to certain perturbative orders, then the natural generalization in superstring theory is to consider amplitude contributions coming from Super Riemann surfaces (SRS) with genus up to that order [6]. An example is illustrated by Figure 1. Moreover, if one wants to talk about insertions of spacetime bosons and fermions, one needs to consider Neveu-Schwarz (NS) and Ramond (R) punctures, respectively, on an SRS, which label two different kinds of vertex-operator insertions.\n\n![](images/1b8e762a04ec168f13556562d57e31f9c84e593d3bae0af6fb08f150a6483b75.jpg)\n\n![](images/ef29eb04ddf12edfaa25f57ccec737dfdaa04a311a53e3b908297367d3a7653c.jpg) \nFIGURE 1. With particles replaced by strings, a 1-loop Feynman diagram (left) becomes a Super Riemann surface of genus 1 (right) in superstring perturbation theory.\n\nThe formulation of superstring theory using the path integral leads naturally to moduli spaces $\\mathfrak{M}_{g,n,2r}$ of SRS with punctures. The role played by Feynman diagrams in ordinary QFT is taken, instead, by (families of) SRS with punctures: one integrates a suitable superstring measure over the supermoduli space of those surfaces. By far, one of the most successful computations was done in 2-loop order [7, 8], which used the property that $\\mathfrak{M}_g$ is split for $g = 2$ . However, a major mathematical complication is that for sufficiently large genus $g$ , the supermoduli space $\\mathfrak{M}_g$ is not projected, hence does not split. This result was proved by Donagi and Witten in [1, 2]. In fact, they proved an even stronger result when the NS punctures are also present: They showed that $\\mathfrak{M}_{g,n,0}$ is not projected for $g \\geq 2$ and $g - 1 \\geq n \\geq 1$ . Then following this work, Donagi and Ott [3] proved that $\\mathfrak{M}_{g,0,2r}$ is not projected for $g \\geq 5r + 1 \\geq 6$ , i.e., when the R punctures are present, but no NS puncture is present.\n\nThe purpose of this paper is to improve the result of Donagi, Witten, and Ott by proving nonprojectedness of supermoduli $\\mathfrak{M}_{g,n,2r}$ for large genus $g$ . The main result of this paper is that $\\mathfrak{M}_{g,n,2r}$ is not projected if $g \\geq n + 5r + 3$ , which is given in Theorem 4.2.\n\nThis paper is structured as follows: In section 2, we collect several key results in the literature that we will use repeatedly in our proof. In section 3, we introduce the setup of our proof, and check that several ingredients required in the proof are in place. Our main results, which include a proof of Theorem 4.2, will be established in Section 4.\n\n# 2. PRELIMINARIES\n\nThe theory of supermanifolds and SRS has been studied extensively in literature such as [1, 4, 5]. 
Therefore, in this section, we will only briefly recall relevant definitions and refer the readers to the mentioned literature for details.\n\nDefinition 2.1 (split supermanifold). Let $M$ be a manifold and $V$ a vector bundle over $M$ . Then we define the split supermanifold $S \\coloneqq S(M, V)$ to be the locally ringed space $(M, \\mathcal{O}_S)$ , where $\\mathcal{O}_S \\coloneqq \\mathcal{O}_M \\otimes \\Lambda^\\bullet V^*$ is the sheaf of $\\mathcal{O}_M$ -valued sections of the exterior algebra $\\Lambda^\\bullet V^*$ . In other words,\n\n$$\nS (M, V) = \\left(M, \\mathcal {O} _ {M} \\otimes \\Lambda^ {\\bullet} V ^ {*}\\right).\n$$\n\nIf we interpret $V$ as a locally free sheaf of $\\mathcal{O}_M$ -modules, then $\\mathcal{O}_S$ is simply $\\Lambda^{\\bullet}V^{*}$ . Then $\\mathcal{O}_S$ is $\\mathbb{Z}_2$ -graded, supercommutative, and its stalks are local rings.\n\nExample 2.2. $S(\\mathbb{R}^m,\\mathcal{O}_{\\mathbb{R}^m}^{\\oplus n})$ is just the smooth manifold $\\mathbb{R}^m$ , with a sheaf of supercommutative ring\n\n$$\n\\mathcal {O}: U \\mapsto C ^ {\\infty} (U) [ \\theta^ {1}, \\dots , \\theta^ {n} ],\n$$\n\nthat sends $U$ to the Grassmann algebra with generators $\\theta^1, \\ldots, \\theta^n$ over $C^\\infty(U) = \\mathcal{O}_{\\mathbb{R}^m}(U)$ . This is the structure sheaf. An element of the structure sheaf looks like $f = \\sum_I f_I \\theta^I$ where $I = (i_1, \\ldots, i_k) \\subset \\{1, \\ldots, n\\}$ is an index set and $\\theta^I = \\theta^{i_1} \\theta^{i_2} \\cdots \\theta^{i_k}$ . We will call this model space $\\mathbb{R}^{m|n}$ . Thus $\\mathbb{R}^{m|n}$ has even coordinates $x^1, \\ldots, x^m$ and odd coordinates $\\theta^1, \\ldots, \\theta^n$ . For brevity, we will usually write an element $f$ in the structure sheaf as $f(x^\\mu, \\theta^\\alpha)$ , and we will denote the structure sheaf $\\mathcal{O}$ by $\\mathcal{O}_{\\mathbb{R}^{m|n}}$ if we want to be specific. Similarly, one can define $\\mathbb{C}^{m|n}$ .\n\nDefinition 2.3. A supermanifold of dimension $m|n$ is a locally ringed space $(S, \\mathcal{O}_S)$ that is locally isomorphic to some model space $S(M, V)$ , where $\\dim M = m$ and $\\operatorname{rank} V = n$ . We will usually just refer to this supermanifold as $S$ with the understanding that it also comes with the structure sheaf $\\mathcal{O}_S$ . The manifold $M$ is called the reduced space or the bosonic reduction of $S$ , and we usually write $|S| = M$ . A morphism between supermanifolds is simply a morphism of locally ringed spaces.\n\nDefinition 2.4. A sheaf $\\mathcal{F}$ on a supermanifold $S$ is simply a sheaf on the reduced space $M$ , and sheaf cohomology $H^{k}(S,\\mathcal{F})$ of $\\mathcal{F}$ on $S$ is simply the sheaf cohomology $H^{k}(M,\\mathcal{F})$ on the reduced space.\n\nDefinition 2.5 (tangent bundle). 
The tangent bundle $T_{S}$ of supermanifold $S$ is the sheaf of $\\mathcal{O}_{S}$ -modules given by derivations of the structure sheaf, i.e., as sheaves\n\n$$\nT _ {S} := \\operatorname {D e r} \\left(\\mathcal {O} _ {S}\\right).\n$$\n\nIt is a $\\mathbb{Z}_2$ -graded vector bundle, or a locally free sheaf of $\\mathcal{O}_S$ -modules.\n\nIn the simplest example where $S = \\mathbb{R}^{m|n}$ or $\\mathbb{C}^{m|n}$ , $T_{S}$ is the free $\\mathcal{O}_S$ -module generated by even tangent vectors $\\frac{\\partial}{\\partial x^{\\mu}}$ for $\\mu = 1, \\ldots, m$ and odd tangent vectors $\\frac{\\partial}{\\partial \\theta^{\\alpha}}$ for $\\alpha = 1, \\ldots, n$ .\n\nIn general, for a split supermanifold $S = S(M,V)$ , the tangent bundle $T_{S}$ need not have distinguished complementary even and odd subbundles: The even and odd parts are not sheaves of $\\mathcal{O}_S$ modules. For example, $\\frac{\\partial}{\\partial\\theta^1}$ is an odd vector field, but multiplication by an element of $\\mathcal{O}_S$ might change parity: Consider $\\theta^1\\frac{\\partial}{\\partial\\theta^1}$ . This is now an even vector field. However, we note that since elements in $\\mathcal{O}_M$ are even, if we restrict ourselves to multiplying only elements in $\\mathcal{O}_M$ then parity is preserved. More specifically, what we are saying is that the restriction $T_{S}|_{M}$ of $T_{S}$ to the reduced space $M$ does split (By restriction to $M$ we mean pullback, under the natural inclusion $i:M\\to S$ , of a sheaf of $\\mathcal{O}_S$ -modules to a sheaf of $\\mathcal{O}_M$ -modules. In this case, this is accomplished by setting all odd functions to 0). Explicitly, the splitting is given by\n\n$$\nT _ {S, +} := T _ {M},\n$$\n\n$$\nT _ {S, -}: = V.\n$$\n\nFor a general supermanifold $S$ , we only have local isomorphisms between $S$ and $S(M, V)$ for some model space $M$ and vector bundle $V$ , but there is no canonical identification $T_{S, -} = V$ . However, there is still a way to define $T_{S, -}$ . We consider the natural inclusion $i: M \\to S$ , where we view $M$ as the subspace that is locally given by $\\theta^{\\alpha} = 0$ for all $\\alpha$ . Then then we have a normal bundle sequence\n\n$$\n0 \\to T _ {M} \\to T _ {S} | _ {M} \\to N _ {M / S} \\to 0.\n$$\n\nNow $N_{M / S}$ is a purely odd bundle. Therefore, we may define $T_{S, + } = T_{M}$ and $T_{S, - } = N_{M / S}$ . We note that the rank of $T_{S, + } = T_{S}$ is $m|0$ and the rank of $T_{S, - }$ is $0|n$ if $\\dim S = m|n$ .\n\nNext, we come to the definition of split and projected supermanifolds.\n\nDefinition 2.6 (split and projected supermanifold). A supermanifold $S$ is said to be split if it is globally isomorphic to $S(M, V)$ for some manifold $M$ and some vector bundle $V \\to M$ . In other words, it is globally isomorphic to some split supermanifold.\n\nA supermanifold $S$ is said to be projected if there exists a projection map $\\pi : S \\to M$ , such that the inclusion $i : M \\to S$ , which is identity on the reduced space $i : |M| \\to |S|$ , and $i^{*} : \\mathcal{O}_{S} \\to \\mathcal{O}_{M}$ given by imposing $\\theta^{1} = \\dots = \\theta^{n} = 0$ , is a section of $\\pi$ , i.e., $\\pi i = \\mathrm{id}_{M}$ and $i^{*}\\pi^{*} = \\mathrm{id}_{\\mathcal{O}_{M}}$ .\n\n# Lemma 2.7. Any split supermanifold $S$ is projected.\n\nProof. Indeed, if $S$ is split, then $\\mathcal{O}_S \\cong \\mathcal{O}_M \\otimes \\Lambda^\\bullet V^*$ . 
Then we have a morphism $\\pi^* : \\mathcal{O}_M \\to \\mathcal{O}_S$ via embedding $\\mathcal{O}_M$ to be the degree zero part. And it is easy to see that $i^*\\pi^* = \\mathrm{id}_{\\mathcal{O}_M}$ . Now to construct $\\pi : S \\to M$ with the desired property, we choose the map of underlying space $\\pi : |S| \\to |M|$ to be identity, and $\\pi^* : \\mathcal{O}_M \\to \\pi_*\\mathcal{O}_S$ to be the map given above.\n\nA natural question to ask is whether we can characterize the obstruction to splitting or projection of a supermanifold via some cohomology class. The answer is affirmative. The obstruction theory of supermanifolds is developed in [4, 9, 10], and is summarized in [1] with great detail. The result relevant to us is that a necessary condition for a supermanifold $S = (M, \\mathcal{O}_S)$ to split and to be projected is that a certain cohomology class called the second obstruction class\n\n$$\n\\omega_ {2} \\in H ^ {1} (M, T _ {+} \\otimes \\Lambda^ {2} V ^ {*})\n$$\n\nvanishes. The same construction is also discussed in [4], with a more geometric perspective. The second obstruction class will be useful in concluding that certain supermanifolds are not projected. Indeed, many basic examples of non-projected supermanifolds in [1] were established by showing $\\omega_{2} \\neq 0$ for those supermanifolds.\n\nHowever, for complicated supermanifolds, like supermoduli spaces, it is very difficult to directly evaluate $\\omega_{2}$ and show that it is nonzero. Fortunately, there are also some indirect results we can use. The next theorem, which is Corollary 2.8 of [1], says that a finite covering of a non-projected supermanifold is also non-projected.\n\nTheorem 2.8. Let $\\pi : Y \\to X$ be a finite covering map of supermanifolds. If $\\omega_2(X) \\neq 0$ , then $\\omega_2(Y) \\neq 0$ , so $Y$ is not projected.\n\nIf we have a submanifold of a supermanifold, which we know is not projected, then it is natural to suspect that the big supermanifold itself is not projected. This is not true in general: For example, a supermanifold of dimension $n|2$ is split if and only if it is projected by Corollary 2.3 of [1]. In particular, $\\mathbb{CP}^{n|2}$ is split, but a generic hypersurface of $\\mathbb{CP}^{n|2}$ is nonsplit<sup>1</sup>. However, in certain special cases the statement is true, which is made precise by a modified version of Corollary 2.12 of [1].\n\nTheorem 2.9. Let $a: S' \\to S$ be an immersion of supermanifolds, with reduced spaces $M' \\subset M$ , such that the normal sequence of $M'$ decomposes: $a^* T_M \\cong T_M' \\oplus N$ , where $N$ is the even component of the normal bundle to the map $a$ . If $\\omega_2(S') \\neq 0$ , then we also have $\\omega_2(S) \\neq 0$ , so $S$ is not projected.\n\nThe remaining part of this section will be devoted to the central objects of study in this paper, Super Riemann Surfaces and their moduli. Again, much of the theory below is established in [5] and [1]. We will only collect relevant results here and refer the readers to the literature mentioned for detailed explanations.\n\nDefinition 2.10. Let $S$ be a supermanifold. A distribution (i.e., a subsheaf of the tangent sheaf) $\\mathcal{D} \\subset T_S$ is called a superconformal structure if it is\n\n- an odd distribution, i.e., a subbundle of rank $0|1$ . \n- everywhere non-integrable. By Frobenius' theorem, a distribution is integrable if it is closed under the Lie bracket. Since $\\mathcal{D}$ is an odd distribution, its Lie bracket will be denoted by the anticommutator. 
By assumption $\\mathcal{D}$ is of rank $0|1$ , so it will be generated by a single odd vector field $v$ . Since $v^2 \\coloneqq \\frac{1}{2} \\{v, v\\}$ , we define $\\mathcal{D}$ to be integrable if $v^2 \\in \\mathcal{D}$ , and define it to be everywhere non-integrable if $v^2$ is everywhere independent of $v$ over $\\mathcal{O}_S$ .\n\nDefinition 2.11. A Super Riemann Surface (SRS) is a pair $(S, \\mathcal{D})$ , where $S = (C, \\mathcal{O}_S)$ is a complex compact supermanifold of dimension $1|1$ , and $\\mathcal{D}$ is a superconformal structure.\n\nRemark 2.12. Since $S$ is of dimension $1|1$ , $T_{S}$ has rank $1|1$ . Since $\\mathcal{D}$ has rank $0|1$ and $\\mathcal{D}^2$ is even, hence has rank $1|0$ , together they generate the entire tangent bundle $T_{S}$ . Moreover, we have an isomorphism $T_{S} / \\mathcal{D} \\cong \\mathcal{D}^2$ . Therefore, we actually have an exact sequence of sheaves\n\n$$\n0 \\to \\mathcal {D} \\to T _ {S} \\to \\mathcal {D} ^ {2} \\to 0.\n$$\n\nThe next result is from [1], which gives direct characterizations of the generating vector field of the superconformal structure.\n\nLemma 2.13. Locally on a SRS one can choose coordinates $z$ and $\\theta$ , called a superconformal coordinate, such that $\\mathcal{D}$ is generated by the vector field\n\n$$\nv := \\frac {\\partial}{\\partial \\theta} + \\theta \\frac {\\partial}{\\partial z}.\n$$\n\nNow we can do deformation theory on SRS. We must therefore consider automorphisms on the SRS. Locally, the infinitesimal automorphisms are generated by superconformal vector fields, which preserve the distribution $\\mathcal{D}$ . In superconformal coordinates, a short calculation shows that the even superconformal vector field takes the form\n\n$$\n\\chi^ {+} = f (z) \\frac {\\partial}{\\partial z} + \\frac {f ^ {\\prime} (z)}{2} \\theta \\frac {\\partial}{\\partial \\theta} \\tag {1}\n$$\n\nwhile the odd one takes the form\n\n$$\n\\chi^ {-} = - g (z) \\left(\\frac {\\partial}{\\partial \\theta} - \\theta \\frac {\\partial}{\\partial z}\\right), \\tag {2}\n$$\n\nwhere $f, g$ are holomorphic (even) functions on $S$ that depends on $z$ only, not $\\theta$ . We denote the sheaf of superconformal vector fields on $S$ by $\\mathcal{A}_S$ , which is also the sheaf of infinitesimal automorphisms of $S$ . Now we may consider the moduli space $\\mathfrak{M}_g$ of SRS of genus $g$ . Given a point $S \\in \\mathfrak{M}_g$ , we want to know what the tangent space $T_S\\mathfrak{M}_g$ is. A standard argument in deformation theory shows\n\n$$\nT _ {S} \\mathfrak {M} _ {g} = H ^ {1} (S, \\mathcal {A} _ {S}).\n$$\n\nThe notion of a puncture on an ordinary Riemann surface has two analogs on an SRS. In string theory, they are known as the Neveu-Schwarz puncture (NS) and Ramond puncture (R). An NS puncture on an SRS $S$ is the obvious analog of a puncture on an ordinary Riemann surface: it is simply the choice of a point in $S$ .\n\nIf $S$ is a SRS with $n$ marked points $p_1, \\dots, p_n$ , or in divisor form $P = p_1 + \\dots + p_n$ , then it is an element of $\\mathfrak{M}_{g,n,0}$ . The infinitesimal automorphisms on $S$ must preserve the superconformal structure $\\mathcal{D}$ , and preserve the marked points, which imposes an extra condition on superconformal vector fields of the form (1) and (2) that they must vanish on the marked points, i.e., $f(0) = g(0) = 0$ in local coordinates if the marked point is given by the equation $z = 0$ . 
Therefore, we conclude that\n\n$$\nT _ {S} \\mathfrak {M} _ {g, n, 0} = H ^ {1} (S, \\mathcal {A} _ {S} (- P)).\n$$\n\nNow we introduce the second kind of puncture: Ramond puncture. In this situation, we assume the underlying supermanifold $(C,\\mathcal{O}_S)$ is still smooth, but the odd distribution $\\mathcal{D}$ is no longer everywhere non-integrable. The generator $v$ in the local form in Lemma 2.13 is replaced by\n\n$$\nv := \\frac {\\partial}{\\partial \\theta} + z \\theta \\frac {\\partial}{\\partial z}.\n$$\n\nIn other words, $\\mathcal{D}$ fails to be a maximally nonintegrable distribution along the divisor $z = 0$ . Therefore, a SRS with a Ramond puncture is technically no longer a SRS. We can also have multiple Ramond punctures, by which we mean $\\mathcal{D}$ fails to be a distribution along multiple divisors, and near each divisor we can find local coordinates as described above. The topology of SRS restricts the number of Ramond punctures to always be even. In [1], it was shown that in the presence of Ramond punctures $R = q_{1} + \\dots +q_{2r}$ , the sheaf of superconformal vector fields is given by\n\n$$\n\\mathcal {A} _ {S} \\cong (T _ {S} / \\mathcal {D}) \\otimes \\mathcal {O} _ {S} (- R). \\tag {3}\n$$\n\nTherefore, for a SRS $S$ with NS punctures $P = p_{1} + \\dots +p_{n}$ and R punctures $R = q_{1} + \\dots +q_{2r}$ , we still have\n\n$$\nT _ {S} \\mathfrak {M} _ {g, n, 2 r} = H ^ {1} (S, \\mathcal {A} _ {S} (- P)),\n$$\n\nbut now $\\mathcal{A}_S$ is given by (3).\n\n# 3. SETTINGS\n\nOur setup will be the following: Let $\\pi : Y \\to X$ be a branched cover of SRS. We use $g$ to denote the genus of $Y$ . Let us fix $g_0 = 2$ to be the genus of $X$ for the rest of the paper, and we require $X$ to have only 1 branch point. Let $d$ be the degree of $\\pi$ . If $p \\in X$ the branch point, and $\\pi^{-1}(p) = \\{q_1, \\dots, q_s\\}$ . Let $a_j$ denote the local degree of $\\pi$ at $q_j$ , for $1 \\leq j \\leq s$ . Then we define the ramification pattern $\\rho = (a_1, \\dots, a_s)$ . Ramification points with odd local degree will correspond to NS punctures on $Y$ , while ramification points of even local degree will correspond to R punctures on $Y$ after blow ups. See section 3.4 of [1] for details. There will always be an even number of R punctures [3], so we can denote the number of R punctures on $Y$ by $2r$ , and let $n$ denote the number of NS punctures on $Y$ . So $n + 2r = s$ .\n\nNow we allow the curves $Y, X$ to vary continuously, hence the covering map $\\pi : Y \\to X$ and the branch point in $X$ also vary continuously. But we require the genera $g, g_0$ of $Y, X$ , and the ramification patter $\\rho$ to be fixed throughout the process. There is a moduli space $\\mathfrak{M}_{d,\\rho}$ parameterizing all such branched coverings.\n\nThe map\n\n$$\n\\Phi : \\mathfrak {M} _ {d, \\rho} \\to \\mathfrak {M} _ {2, 1, 0} \\quad (\\pi : Y \\to X) \\mapsto X\n$$\n\nis a finite covering by Lemma 14 of [3], and it was already established that $\\mathfrak{M}_{2,1,0}$ is not projected by [1]. Therefore, $\\mathfrak{M}_{d,\\rho}$ is not projected by Theorem 2.8. Moreover,\n\nProposition 3.1. the map\n\n$$\n\\Psi : \\mathfrak {M} _ {d, \\rho} \\rightarrow \\mathfrak {M} _ {g, n, 2 r} \\quad (\\pi : Y \\to X) \\mapsto Y\n$$\n\nis an immersion of supermanifolds.\n\nProof. 
By Lemma 15 of [3], the composition $F \\circ \\Psi$ of $\\Psi : \\mathfrak{M}_{d,\\rho} \\to \\mathfrak{M}_{g,n,2r}$ with the forgetful map $F: \\mathfrak{M}_{g,n,2r} \\to \\mathfrak{M}_{g,0,2r}$ is an immersion. Hence $\\Psi$ itself must be an immersion.\n\nMoreover, the associated normal bundle sequence of $\\Psi$ splits, which we will check. The proof is basically the same as the proof of Proposition 5.2 of [1], but we can modify the proof and make it slightly more explicit by constructing a concrete lifting map of tangent vector fields of the base to the branched cover, in the case where the cover is Galois. Hence, we include the proof here.\n\nProposition 3.2. Let $\\psi : \\mathcal{SM}_{d,\\rho} \\to \\mathcal{SM}_{g,n,2r}$ denote the bosonic reduction of the map $\\Psi$ . Then the induced normal bundle sequence on the reduced space\n\n$$\n0 \\to T _ {\\mathcal {S M} _ {d, \\rho}} \\to \\psi^ {*} T _ {\\mathcal {S M} _ {g, n, 2 r}} \\to N \\to 0\n$$\n\nis a split exact sequence.\n\nProof. We pick a branched cover $\\pi : (Y, \\mathcal{L}_Y) \\to (X, \\mathcal{L}_X)$ of spin curves, representing a point in $\\mathcal{SM}_{d,\\rho}$ . First, we assume that the cover is $G$ -Galois. Under $\\psi$ this point goes to $(Y, \\mathcal{L}_Y)$ . We note\n\nthat if $R_{Y}$ is the divisor on $Y$ corresponding to the Ramond punctures, then $\\mathcal{L}_Y^2 \\cong K_Y(-R_Y)$ . Deformation theory gives an identification of $\\psi$ at this point:\n\n$$\n\\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \\to H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})).\n$$\n\nSince $\\psi$ takes the deformation of the base to the corresponding uniquely determined deformation of the branched cover, it is induced by lifting vector fields. In fact, there is an injection of sheaves\n\n$$\nL: T _ {X} (- P _ {X}) \\rightarrow \\pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) \\tag {4}\n$$\n\nwhose induced map on $H^1$ is $\\psi$ . To see that $L$ is an injection, we cover $X$ locally by small open sets $X = \\bigcup_{\\alpha} U_{\\alpha}$ , such that $U_{\\alpha}$ and $\\pi^{-1}(U_{\\alpha})$ only contain at most one branch (or ramification) point for all $\\alpha$ . If $U_{\\alpha}$ does not contain any marked points, then by choosing sufficiently small open covers, we may assume $\\pi: \\pi^{-1}(U_{\\alpha}) \\to U_{\\alpha}$ is an isomorphism, hence there is no problem constructing $L$ on $U_{\\alpha}$ .\n\nNow we analyze the situation where $q \\in \\pi^{-1}(U_{\\alpha})$ is a ramification point and $p = \\pi(q) \\in U_{\\alpha}$ is a branch point. Let $e_q = k > 1$ be the local degree of $q$ . Then locally we may choose holomorphic coordinates $w$ on $Y$ and $z$ on $X$ such that $w(q) = z(p) = 0$ , and such that locally $\\pi$ is given by $z = w^k$ . To construct $L$ , locally a section in $\\Gamma(U_{\\alpha}, T_X(-P_X))$ is of the form\n\n$$\n\\chi = f (z) \\frac {\\partial}{\\partial z}\n$$\n\nwhere $f$ is a holomorphic function with $f(0) = 0$ . A section in $\\Gamma(U_{\\alpha}, \\pi_* T_Y(-P_Y - R_Y))$ is of the form\n\n$$\n\\tilde {\\chi} = g (w) \\frac {\\partial}{\\partial w}\n$$\n\nwith $g(0) = 0$ , where now $\\tilde{\\chi}$ is viewed as a vector field on $\\pi^{-1}(U_{\\alpha}) \\subset Y$ . 
Now the condition that $\\tilde{\\chi}$ is a lift of $\\chi$ , namely $\\pi_{*}\\tilde{\\chi} = \\chi$ , reads\n\n$$\n\\pi_ {*} \\tilde {\\chi} = g (w) \\frac {\\partial z}{\\partial w} \\frac {\\partial}{\\partial z} = k w ^ {k - 1} g (w) \\frac {\\partial}{\\partial z} = f (w ^ {k}) \\frac {\\partial}{\\partial z} = \\chi ,\n$$\n\nwhich implies\n\n$$\ng (w) = f \\left(w ^ {k}\\right) / k w ^ {k - 1}.\n$$\n\nNote that the above expression is a well-defined holomorphic function: since $f(0) = 0$ , we have $f(w^{k}) = w^{k}h(w)$ for some holomorphic $h$ , and furthermore $g(0) = 0$ . It is also clear from the expression that $g$ is uniquely determined by $f$ , hence $\\tilde{\\chi}$ is uniquely determined by $\\chi$ . Thus, we can also construct $L$ on this small neighborhood $U_{\\alpha}$ , which is injective. Therefore, this construction gives rise to an injection by lifting infinitesimal automorphisms of vector fields on sufficiently small open sets near each ramification point, and these lifts are compatible on the intersections of small open sets. Hence, this glues to an injection of sheaves (4). The induced map on $H^{1}$ is precisely\n\n$$\n\\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \\to H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})).\n$$\n\nWe also note that if $\\pi : Y \\to X$ is $G$ -Galois and $\\tilde{\\chi}$ is a lift of $\\chi \\in \\Gamma(X, T_X)$ , then $\\tilde{\\chi}$ must be a $G$ -invariant vector field. Hence, the lift in (4) splits\n\n$$\nL: T _ {X} (- P _ {X}) \\to \\pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G} \\oplus \\mathcal {Q}\n$$\n\ninto the $G$ -invariant part and some other part $\\mathcal{Q}$ . The inclusion\n\n$$\nT _ {X} (- P _ {X}) \\rightarrow \\pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}\n$$\n\nis actually an isomorphism, where the isomorphisms are given by lift and projection. Hence taking $H^1$ we have\n\n$$\n\\psi : H ^ {1} (X, T _ {X} (- P _ {X})) \\rightarrow H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) ^ {G} \\oplus H ^ {1} (X, \\mathcal {Q})\n$$\n\nwhere we used\n\n$$\nH ^ {1} (X, \\pi_ {*} T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}) = H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y}) ^ {G}) \\cong H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) ^ {G}\n$$\n\nand $\\psi$ is given by inclusion into the $G$ -invariant part in the summand. Hence, it immediately follows that the normal bundle sequence splits.\n\nFinally, in the case where the covering is not Galois, we pass to the Galois closure of $\\pi : Y \\to X$ . Let $\\hat{Y}$ be the $G$ -Galois closure of $Y$ . Then there is a covering $\\hat{\\pi} : \\hat{Y} \\to Y$ with Galois group $H$ , where $H < G$ is the stabilizer subgroup of an unramified point of $Y$ . 
The pullback $\\hat{\\pi}^*$ identifies the cohomology of $Y$ with the $H$-invariant part of the cohomology of $\\hat{Y}$:\n\n$$\n\\hat {\\pi} ^ {*}: H ^ {1} (Y, T _ {Y} (- P _ {Y} - R _ {Y})) \\cong H ^ {1} (\\hat {Y}, T _ {\\hat {Y}} (- P _ {\\hat {Y}} - R _ {\\hat {Y}})) ^ {H} \\hookrightarrow H ^ {1} (\\hat {Y}, T _ {\\hat {Y}} (- P _ {\\hat {Y}} - R _ {\\hat {Y}})).\n$$\n\nIntroducing the notation $D_{\\hat{Y}} = P_{\\hat{Y}} + R_{\\hat{Y}}$ and defining $D_Y, D_X$ similarly, we have a commutative diagram with exact rows given by the normal bundle sequences evaluated at corresponding fibers:\n\n$$\n\\begin{array}{ccc}0\\longrightarrow H^{1}(X,T_{X}(-D_{X})) & \\longrightarrow H^{1}(Y,T_{Y}(-D_{Y})) & \\longrightarrow N\\longrightarrow 0\\\\ \\Big\\| & \\Big\\downarrow \\hat{\\pi}^{*} & \\Big\\downarrow i\\\\ 0\\longrightarrow H^{1}(X,T_{X}(-D_{X})) & \\longrightarrow H^{1}(\\hat{Y},T_{\\hat{Y}}(-D_{\\hat{Y}})) & \\longrightarrow \\hat{N}\\longrightarrow 0 \\end{array}\n$$\n\nwhere $i: N \\to \\hat{N}$ is the unique map that makes the square commute. A simple diagram chase shows that $i$ is injective, hence each space in the upper row can be viewed as a subspace of the corresponding space in the bottom row. By the previous argument, we already know that there exists a splitting $s: \\hat{N} \\to H^{1}(\\hat{Y}, T_{\\hat{Y}}(-D_{\\hat{Y}}))$. Restricting this splitting to these subspaces gives an induced splitting $N \\to H^{1}(Y, T_{Y}(-D_{Y}))$, concluding the proof.\n\nHence, by Theorem 2.9, we conclude that $\\mathfrak{M}_{g,n,2r}$ is not projected. The necessary and sufficient condition for such an immersion $\\Psi$ to exist, or equivalently for the tuple $(g,n,r)$ to be realizable, using the terminology of [3], is that the genus $g$ determined by the Hurwitz formula\n\n$$\ng = 1 + d \\left(g _ {0} - 1\\right) + \\frac {1}{2} \\sum_ {j = 1} ^ {s} \\left(a _ {j} - 1\\right) \\tag {5}\n$$\n\nis nonnegative, where we recall that $s = n + 2r$, and $\\rho = (a_1, \\dots, a_s)$ is the ramification pattern of $\\pi$, with each $a_j$ a local degree such that $\\sum_{j} a_j = d$. Moreover, Theorem 4 of [11] ensures this is the only constraint: as long as the configuration $\\rho, d, g_0$ makes $g \\geq 0$ in (5), there exists a branched cover $\\pi : Y \\to X$ with the specified behavior.\n\nOur next task is to determine, to the best of our ability, the condition for the tuple $(g,n,r)$ to be realizable. This is given by Theorem 4.1 and Theorem 4.2 in the next section.\n\n# 4. PROOF OF MAIN RESULT\n\nUsing the minimal model above with $g_0 = 2$ and only one branch point on $X$, we can prove our first nonprojectedness theorem for supermoduli space. The proof is combinatorial.\n\nTheorem 4.1. Let $g, n, r$ be positive integers. The supermoduli space $\\mathfrak{M}_{g,n,2r}$ is not projected if the following two conditions are met:\n\n(1) genus bound: $g \\geq n + 5r + 1$ ; \n(2) congruence condition: $2g - 2 + n + 2r \\equiv 0 \\mod 3$ .\n\nProof. Substituting $g_0 = 2$ in (5) shows that $2g = 2 + 3d - n - 2r$. Hence, we must have $2g - 2 + n + 2r = 3d \\equiv 0 \\mod 3$. This shows that if $(g,n,r)$ is a valid tuple arising from a branched cover of a $g_0 = 2$ SRS with one puncture, then the congruence condition must be satisfied. To derive the genus bound, we want to minimize $g$ according to (5) with $g_0 = 2$ and given $n,r > 0$ . 
The minimal choice of the ramification pattern is\n\n$$\n\\rho_ {\\min } = (\\underbrace {1 , \\ldots , 1} _ {n}, \\underbrace {2 , \\ldots , 2} _ {2 r}).\n$$\n\nThe corresponding minimal degree of the branched cover is $d_{\\mathrm{min}} = 1 \\cdot n + 2 \\cdot 2r = n + 4r$ . By the Hurwitz formula, the minimal genus is given by $2g_{\\mathrm{min}} - 2 = 3d_{\\mathrm{min}} - n - 2r$ . Hence the solution is $g_{\\mathrm{min}} = n + 5r + 1$ . Hence, the genus bound is also derived. This shows that for the tuple $(g,n,r)$ to be realizable, the genus bound and congruence condition are necessary.\n\nNow it remains to show that if these two conditions are met, then there exists a branched cover $\\pi : Y \\to X$ with the specified behavior. Given the congruence condition, we note that the degree of the cover\n\n$$\nd = \\frac {1}{3} (2 g - 2 + n + 2 r)\n$$\n\nis an integer. Moreover, the genus bound $g \\geq n + 5r + 1$ implies $2g - 2 \\geq 2n + 10r$ . Substituting this into the expression above gives\n\n$$\n3 d = 2 g - 2 + n + 2 r \\geq (2 n + 1 0 r) + n + 2 r = 3 n + 1 2 r.\n$$\n\nHence $d \\geq n + 4r = d_{\\mathrm{min}}$ . Therefore, $d$ is at least as large as the minimal possible degree $d_{\\mathrm{min}}$ . We must now show that there exists a partition $\\rho$ of $d$ that has exactly $n$ odd parts and $2r$ even parts. The proof is constructive. Let $\\delta_d = d - d_{\\mathrm{min}}$ . The calculation above shows that\n\n$$\n3 \\delta_ {d} = 3 (d - d _ {\\min }) = (2 g - 2 + n + 2 r) - (2 g _ {\\min } - 2 + n + 2 r) = 2 (g - g _ {\\min })\n$$\n\nThis implies that $3\\delta_{d}$ is an even number. Since 3 is odd, $\\delta_{d}$ itself must be an even number. Let $\\delta_{d} = 2k$ for some nonnegative integer $k$ . We now need to find a partition of $d = d_{\\min} + 2k$ with the correct number of even and odd parts. We start with the minimal partition $\\rho_{\\min}$ . We can modify this partition to increase its sum by an even number, $2k$ , without changing the parity count of its parts. For example, we can replace a part $a_{j} = 1$ with a part $a_{j} + 2 = 3$ . This increases the total sum by 2, and the new part is still odd, so the parity of the ramification pattern is preserved, while the total degree increases by 2. By repeatedly applying such modifications $k$ times, we can increase the sum of the partition from $d_{\\min}$ to $d = d_{\\min} + 2k$ while preserving the number of even and odd parts. The resulting partition $\\rho$ has sum $d$ and corresponds to the puncture configuration $(n, 2r)$ . By construction, this partition, when used in the Hurwitz formula, yields the genus $g$ . To see this, we note that\n\n$$\ng _ {\\min } = 1 + d _ {\\min } + \\frac {1}{2} \\sum_ {j} \\left(a _ {j, \\min } - 1\\right). \\tag {6}\n$$\n\nSince\n\n$$\n6 k = 3 \\delta_ {d} = 2 (g - g _ {\\mathrm {m i n}}),\n$$\n\nwe have $g - g_{\\mathrm{min}} = 3k$ , and we also have\n\n$$\n(d - d _ {\\min }) + \\frac {1}{2} \\sum_ {j = 1} ^ {s} \\left(a _ {j} - a _ {j, \\min }\\right) = 3 k = g - g _ {\\min }. \\tag {7}\n$$\n\nNow adding (6) and (7) gives the Hurwitz formula $g = 1 + d + \\frac{1}{2}\\sum_{j=1}^{s}(a_j - 1)$ , as desired. This concludes the proof.\n\nNow we aim to remove the congruence condition in Theorem 4.1 at the cost of a slightly stronger genus bound. 
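Before doing so, the constructive step in the proof of Theorem 4.1 can be illustrated with a small Python sketch (our own illustration; the function names are not from the paper). Given a tuple $(g, n, r)$ satisfying the genus bound and the congruence condition, it builds a ramification pattern $\rho$ with $n$ odd and $2r$ even parts over the genus-2 base and checks it against the Hurwitz formula (5).

```python
# Illustrative sketch of the constructive argument in Theorem 4.1 (names ours).
def realize_pattern(g, n, r):
    """Degree d and ramification pattern rho with n odd and 2r even parts."""
    assert g >= n + 5 * r + 1, "genus bound violated"
    assert (2 * g - 2 + n + 2 * r) % 3 == 0, "congruence condition violated"
    d = (2 * g - 2 + n + 2 * r) // 3          # degree of the cover
    rho = [1] * n + [2] * (2 * r)             # minimal pattern rho_min
    assert (d - sum(rho)) % 2 == 0            # delta_d = d - d_min is even
    k = (d - sum(rho)) // 2
    rho[0] += 2 * k                           # increase one odd part by 2k; parity is kept
    return d, rho

def hurwitz_genus(d, rho, g0=2):
    """Genus of the cover from the Hurwitz formula (5)."""
    return 1 + d * (g0 - 1) + sum(a - 1 for a in rho) // 2

g, n, r = 11, 2, 1                            # satisfies both conditions of Theorem 4.1
d, rho = realize_pattern(g, n, r)
print(d, rho, hurwitz_genus(d, rho) == g)     # 8 [3, 1, 2, 2] True
```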
Suppose we already know that $\\mathfrak{M}_{g,n,2r}$ is not projected. We consider the forgetful map\n\n$$\nF: \\mathfrak {M} _ {g, n, 2 r} \\to \\mathfrak {M} _ {g, n - i, 2 r}\n$$\n\nfor $i = 1,2$, and show that the composition\n\n$$\n\\Psi^ {\\prime} = F \\circ \\Psi : \\mathfrak {M} _ {d, \\rho} \\to \\mathfrak {M} _ {g, n - i, 2 r}\n$$\n\nis still an immersion and that its bosonic normal bundle sequence splits. Since $\\Phi : \\mathfrak{M}_{d,\\rho} \\to \\mathfrak{M}_{2,1,0}$ is still a finite covering map, $\\mathfrak{M}_{d,\\rho}$ is not projected, and we conclude from Theorem 2.8 and Theorem 2.9 that $\\mathfrak{M}_{g,n - i,2r}$ is not projected. The congruence condition in Theorem 4.1 need not hold for the resulting tuple, so we can essentially remove this condition.\n\nIn other words, the strategy is as follows: Given a triple $(g,n,r)$, our goal is to show that $\\mathfrak{M}_{g,n,2r}$ is not projected. We want to find a helper tuple $(g,n_0,r)$ that satisfies the original two conditions in Theorem 4.1, and such that we have the forgetful immersion $\\Psi^{\\prime}:\\mathfrak{M}_{d,\\rho}\\to \\mathfrak{M}_{g,n_0,2r}\\to \\mathfrak{M}_{g,n,2r}$, which would then show that $\\mathfrak{M}_{g,n,2r}$ is not projected. Therefore, we must find a new genus bound for $(g,n,r)$ such that if the genus bound is met, then such a helper tuple $(g,n_0,r)$ is guaranteed to exist. For a given $(g,n,r)$ with fixed $g,r$, the helper tuple $(g,n_0,r)$ must satisfy\n\n$$\nn \\leq n _ {0} \\leq g - 5 r - 1\n$$\n\nwhere the first inequality is because we need the existence of the forgetful map, and the second inequality comes from the genus bound in Theorem 4.1. Now the congruence condition says that we must have\n\n$$\nn _ {0} \\equiv - 2 g + 2 - 2 r \\mod 3.\n$$\n\nSince $g$ and $r$ are fixed, the number $-2g + 2 - 2r$ is also fixed. Thus, the problem reduces to the following: can we find an integer $n_0$ in the interval $[n, g - 5r - 1]$ with a specified residue mod 3? Clearly, this can be done if $[n, g - 5r - 1]$ contains at least three integers. In other words, we only need $g - 5r - 1 - n + 1 \\geq 3$, or equivalently $g \\geq n + 5r + 3$. Hence, we obtain\n\nTheorem 4.2. Let $g, n, r$ be positive integers. The supermoduli space $\\mathfrak{M}_{g,n,2r}$ is not projected if $g \\geq n + 5r + 3$.\n\nBy Theorem 2.9, it now remains to show that $\\Psi'$ is an immersion and that its bosonic normal bundle sequence splits for $i = 1,2$.\n\nProposition 4.3. The morphism $\\Psi^{\\prime}:\\mathfrak{M}_{d,\\rho}\\xrightarrow{\\Psi}\\mathfrak{M}_{g,n,2r}\\xrightarrow{F}\\mathfrak{M}_{g,n - i,2r}$ is an immersion of supermanifolds, for $i = 1,2$.\n\nProof. This follows immediately from Lemma 15 of [3], which states that $\\Psi'$ composed with the forgetful map $\\mathfrak{M}_{g,n-i,2r} \\to \\mathfrak{M}_{g,0,2r}$ is an immersion; hence $\\Psi'$ itself is an immersion.\n\nProposition 4.4. The normal bundle sequence associated with the bosonic reduction of\n\n$$\n\\Psi^ {\\prime}: \\mathfrak {M} _ {d, \\rho} \\xrightarrow {\\Psi} \\mathfrak {M} _ {g, n, 2 r} \\xrightarrow {F} \\mathfrak {M} _ {g, n - i, 2 r}\n$$\n\nsplits, for $i = 1,2$.\n\nProof. Since $\\Psi : \\mathfrak{M}_{d,\\rho} \\to \\mathfrak{M}_{g,n,2r}$ is an immersion, its bosonic reduction $\\psi : \\mathcal{SM}_{d,\\rho} \\to \\mathcal{SM}_{g,n,2r}$ is still an immersion. Moreover, we know the normal bundle sequence of $\\psi$ splits by Proposition 3.2. 
The map $F: \\mathfrak{M}_{g,n,2r} \\to \\mathfrak{M}_{g,n-i,2r}$ is a fibration, hence so is its bosonic reduction $f: \\mathcal{SM}_{g,n,2r} \\to \\mathcal{SM}_{g,n-i,2r}$. Moreover, by Proposition 4.3 we know $f \\circ \\psi: \\mathcal{SM}_{d,\\rho} \\to \\mathcal{SM}_{g,n-i,2r}$ is still an immersion. Applying the following lemma concludes the proof.\n\nThe lemma was established in the proof of Theorem 1.3 of [1, p.48]; since the argument is short, we include it here.\n\nLemma 4.5. Suppose that $i: X \\to Y$ is an immersion and $f: Y \\to Z$ is a fibration such that $f \\circ i: X \\to Z$ is still an immersion. If the normal bundle sequence of $i$ splits, then the normal bundle sequence of $f \\circ i$ also splits.\n\nProof. The differential $\\mathrm{d}f: T_{Y} \\to f^{*}T_{Z}$ gives a bundle map over $Y$, with kernel $T_{Y / Z}$. Pulling back along $i$ we get a bundle map $i^{*}\\mathrm{d}f: i^{*}T_{Y} \\to (f \\circ i)^{*}T_{Z}$ over $X$, with kernel $i^{*}T_{Y / Z}$. This gives a commutative diagram with exact rows given by the normal bundle sequences:\n\n$$\n\\begin{array}{ccc} 0 \\longrightarrow T _ {X} & \\longrightarrow i ^ {*} T _ {Y} & \\longrightarrow N _ {X, Y} \\longrightarrow 0 \\\\ \\Big \\| & \\Big \\downarrow i ^ {*} \\mathrm {d} f & \\Big \\downarrow \\\\ 0 \\longrightarrow T _ {X} & \\longrightarrow (f \\circ i) ^ {*} T _ {Z} & \\longrightarrow N _ {X, Z} \\longrightarrow 0 \\end{array}\n$$\n\nA direct application of the snake lemma shows that $\\ker (N_{X,Y}\\to N_{X,Z})\\cong i^{*}T_{Y / Z}$ . Now, we are given a splitting $s:N_{X,Y}\\rightarrow i^{*}T_{Y}$ of the top row. Composing it with the quotient map we obtain a new map\n\n$$\ns ^ {\\prime}: N _ {X, Y} \\to i ^ {*} T _ {Y} \\to i ^ {*} T _ {Y} / i ^ {*} T _ {Y / Z}.\n$$\n\nBecause $f$ is a submersion, $i^{*}\\mathrm{d}f: i^{*}T_{Y} \\to (f \\circ i)^{*}T_{Z}$ is surjective, hence, passing to the quotient, the map $N_{X,Y} \\to N_{X,Z}$ is still surjective with kernel $i^{*}T_{Y / Z}$ as discussed above. Therefore, we see that $N_{X,Z} = N_{X,Y} / i^{*}T_{Y / Z}$, so the map $s'$ above descends to the quotient and we get a map\n\n$$\ns ^ {\\prime}: N _ {X, Z} \\to i ^ {*} T _ {Y} / i ^ {*} T _ {Y / Z}.\n$$\n\nBut pulling back the relative tangent sequence $0 \\to T_{Y / Z} \\to T_Y \\to f^* T_Z \\to 0$ gives a short exact sequence\n\n$$\n0 \\rightarrow i ^ {*} T _ {Y / Z} \\rightarrow i ^ {*} T _ {Y} \\rightarrow (f \\circ i) ^ {*} T _ {Z} \\rightarrow 0.\n$$\n\nHence we conclude that $(f\\circ i)^{*}T_{Z} = i^{*}T_{Y} / i^{*}T_{Y / Z}$, and the map $s^\\prime$ is actually a map\n\n$$\ns ^ {\\prime}: N _ {X, Z} \\to (f \\circ i) ^ {*} T _ {Z},\n$$\n\nwhich we claim to be the desired splitting. Indeed, $s'$ is obtained from the section $s$ of the projection $i^{*}T_{Y} \\to N_{X,Y}$ by passing to the quotient, hence it is a splitting. This concludes the proof.\n\n# ACKNOWLEDGMENT\n\nThe author thanks Ron Donagi for suggesting this problem and for his guidance. The author would also like to thank Tony Pantev, Nadia Ott, Edward Witten, David Kazhdan, Yuanyuan Shen, and Fanzhi Lu for reading an earlier version of this paper and providing valuable feedback. Last but not least, thanks to Daebeom Choi, Xingyu Meng, and Victor Alekseev for helpful discussions.\n\n# REFERENCES\n\n[1] Ron Donagi, Edward Witten, Supermoduli Space is Not Projected, 2013. \n[2] Ron Donagi, Edward Witten, Super Atiyah Classes and Obstructions to Splitting of Supermoduli Space, 2014. 
\n[3] Ron Donagi, Nadia Ott, Supermoduli Space with Ramond Punctures is Not Projected, 2023. \n[4] Yuri Manin, Gauge Field Theory and Complex Geometry, Springer, 1985. \n[5] Edward Witten, Notes on Super Riemann Surfaces and Their Moduli, 2013. \n[6] Edward Witten, More On Superstring Perturbation Theory: An Overview Of Superstring Perturbation Theory Via Super Riemann Surfaces, 2013. \n[7] Eric D'Hoker, D.H. Phong, Two-Loop Superstrings I, Main Formulas, 2001. \n[8] Eric D'Hoker, D.H. Phong, Lectures on Two-Loop Superstrings, 2002. \n[9] Paul Green, On Holomorphic Graded Manifolds, PAMS v.85, 1982. \n[10] Felix Berezin, Introduction to Superanalysis, Springer, 1987. \n[11] Dale Husemoller, Ramified Coverings of Riemann Surfaces, Duke Math. J., Volume 29 (1962) no. 1.\n\nDEPARTMENT OF MATHEMATICS, UNIVERSITY OF PENNSYLVANIA, PHILADELPHIA, PA 19104, USA \nEmail address: tywang9@sas.upenn.edu"}
# QUBO Formulations for MIP Symmetry Detection Abstract. Formulation symmetry in mixed-integer programming (MIP) can hinder solver performance by inducing redundant search, but detecting such symmetries is also a significant computational challenge. This paper explores the potential for quantum computing to handle symmetry detection. Quantum computing is a promising alternative to classical compute, but this emerging technology has limited hardware capacity in terms of input problem size. This paper explores the use of Quadratic Unconstrained Binary Optimization (QUBO) models for symmetry detection, as QUBO is the canonical format for quantum optimization platforms. To help address the input size bottleneck, we develop full, reduced, and decomposed QUBO as well as QUBO-Plus formulations for MIP symmetry detection. Computational experiments on the MIPLIB 2017 benchmark are used to estimate the quantum computing resources needed for practical problems. Keywords: MIP $\cdot$ QUBO $\cdot$ Quantum $\cdot$ Symmetry # 1 Introduction Consider a generic Mixed-Integer Programming (MIP) problem of the form $$ \begin{array}{l} \max _ {x} c ^ {T} x \\ \text {s . t .} \quad A x \leq b \tag {1} \\ x \in \mathbb {Z} ^ {p} \times \mathbb {R} ^ {n - p} \\ \end{array} $$ where $A$ is an $m \times n$ matrix. This is a general-purpose algebraic modeling paradigm that enables access to powerful solver software; for example, enabling billions of dollars in savings in power systems operations. As MIP is NP-hard in general, there is persistent demand for computational improvements of solvers. In this paper we explore the potential of quantum computing to accelerate MIP solvers, namely by considering the problem of symmetry detection. Symmetry can slow down solvers, as identical solutions may be repeatedly enumerated. To mitigate this redundant computation, one must first identify symmetries within the problem. In this paper, we present Quadratic Unconstrained Binary Optimization (QUBO) formulations for detecting symmetries within an MIP that can be accepted and heuristically solved on a quantum annealer or other quantum-inspired solvers (see, e.g.). Our work adds to the literature on hybrid schemes for MIP, which has considered quantum subroutines involving: primal heuristics; Benders cut generation; combinatorial/pure integer subproblems; and branching. This paper is unique in considering the problem of formulation symmetry detection, which is notably suspected to be NP-intermediate (see Section 1.2), i.e. a problem in NP that is neither NP-hard nor in P. In contrast, the aforementioned hybrid schemes all involve NP-hard problems for quantum computing, which may be beyond the realm of provable quantum advantage (see Section 1.1). We note also that efforts to accelerate linear programming with quantum computing (see, e.g.) can also be incorporated into MIP solvers in hybrid fashion. # 1.1 Quantum Computing Quantum computing has substantial potential; notably, Integer Factorization (IF) was proven to be in bounded-error quantum polynomial time (BQP) via Shor's Algorithm. IF is conjectured to be NP-intermediate, hence challenging for classical compute; indeed, the effectiveness of the widely-used RSA encryption relies on the intractability of IF. Quantum computing has clear theoretical limits, however, as BQP is a subset of QIP=PSPACE; moreover, it remains an open question whether BQP contains NP or vice versa, though there are reasons to suspect they are incomparable (e.g.). 
At present, practice falls rather short of theory due to the many challenges with scaling quantum computers to sufficient size. For example, gate-based quantum computers struggle to run Shor's algorithm on numbers with more than 2 digits. In contrast, classical methods can handle 240-digit numbers. Indeed, at present, Shor's algorithm itself seems more effectively realized on a classical computer: Willsch et al. recently simulated Shor's Quantum Factoring Algorithm with classical computing to factor a 12-digit number. An alternative to gate-based quantum computers is the quantum annealer, which is composed of physical qubits arranged in a sparse, weighted hardware graph. Annealers attempt to determine the values of the qubits that lead to the lowest total energy in the graph. While attaining a true minimum is not guaranteed, the process can be repeated thousands of times very quickly, with each sample taking as little as 20ns. This process is easily extended to QUBO formulations, which is the canonical format for quantum-based optimization—a generic setup is provided in Figure 1. This type of quantum computer can thus serve as a fast heuristic in hybrid optimization schemes. We note that quantum annealers cannot run Shor's algorithm, but 7-digit numbers have been factorized via QUBO formulation—up to 13 digits when substantial classical compute is applied for preprocessing. There have also been advancements in hybrid quantum-classical hardware and algorithms that enable sampling of solutions to QUBO problems with some constraints: QUBO-Plus models. Common constraints include packing, covering, and knapsack constraints (see for details). In addition to the aforementioned IF results, there have been substantial recent efforts at benchmarking on optimization problems (e.g.), which tell a similar story: quantum computers may serve as an intriguing, rapid primal heuristic, but they are quite limited in input size at present. $$ \min _ {x \in \{0, 1 \} ^ {n}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} Q _ {i j} x _ {i} x _ {j} $$ Fig.1. A generic QUBO formulation # 1.2 MIP Symmetry The symmetry group of a MIP problem instance is the set of constraint and variable permutations that preserve the feasible region without changing the value of the objective function. However, working with this group is computationally challenging, as even determining feasibility of a MIP instance is NP-hard. Instead, solvers typically focus on formulation symmetry: a variable permutation $\pi \in S_{n}$ such that there exists a corresponding constraint permutation $\sigma \in S_{m}$ where: - $\pi (\{1,\ldots ,p\}) = \{1,\ldots ,p\}$ (integer variables preserved), - $\pi(c) = c$ (objective preserved), - $\sigma(b) = b$ (constraint constants preserved), and - $A_{\sigma(i), \pi(j)} = A_{ij}$ (rows permuted by $\sigma$ , columns by $\pi$ ). Formulation symmetry detection can be converted to a graph theoretic problem by representing the MIP formulation as a bipartite vertex- and edge-colored graph. In one partition, vertices represent MIP variables, with colors corresponding to their objective coefficients. In the other partition, vertices represent MIP constraints, with colors representing their constant terms. Edges are colored based on variable coefficients from the constraints. The automorphism group of this graph is equivalent to the formulation symmetry group, and so finding MIP symmetries reduces to finding graph automorphisms; consequently, software such as Nauty or Bliss can be applied to determine generators of the group. 
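To make the graph encoding concrete, the following is a minimal Python sketch (our own illustration, not any solver's internal code) that builds the bipartite vertex- and edge-colored graph for the small knapsack instance used later in Section 3.1, using networkx. In practice one would then hand this graph to an automorphism tool such as Nauty or Bliss (for example via pynauty) to compute the generators; that step is not shown here.

```python
# Bipartite colored-graph encoding of formulation symmetry (illustrative only).
import networkx as nx

c = [1, 1, 1, 2, 2, 2, 3]          # objective coefficients
A = [[1, 1, 2, 1, 1, 1, 1]]        # single knapsack constraint
b = [100]

G = nx.Graph()
for j, cj in enumerate(c):
    G.add_node(("var", j), color=("obj", cj))          # variable vertices
for i, bi in enumerate(b):
    G.add_node(("con", i), color=("rhs", bi))          # constraint vertices
for i, row in enumerate(A):
    for j, aij in enumerate(row):
        if aij != 0:
            G.add_edge(("con", i), ("var", j), color=aij)   # edge color = A_ij
print(G.number_of_nodes(), G.number_of_edges())        # 8 vertices, 7 edges
```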
Once generators are obtained, orbits—that is, sets of variables which can be permuted without changing the MIP formulation—can be calculated and then exploited in the MIP solver. While these graph-based algorithms are often fast in practice, they have worst-case exponential run-times, and so in practice one may terminate search early after finding only a strict subset of generators. Indeed, the problem of graph automorphism detection is suspected to be NP-intermediate; however, unlike IF it is a longstanding open question whether the problem is in BQP. After running symmetry detection, techniques can be applied to prevent symmetric solutions from being revisited in the branch-and-bound tree. Well-established methods include Isomorphic Pruning, Orbital Fixing, and symmetry-breaking constraints. Other attempts have also been made leveraging abstract algebraic tools. For descriptions of these techniques, as well as an experimental comparison of their performances, see. Solvers such as SCIP run their own native implementations of symmetry detection (working directly with MIP formulation data structures) to reduce overhead. In a similar vein, we develop in this paper QUBO formulations specifically catered to formulation symmetry detection. This avoids having to reduce a MIP to existing QUBOs for graph isomorphism. # 2 QUBO Symmetry Detection Formulations # 2.1 Setup and Notation The following setup is used for all our QUBO formulations. The decision variables are square, binary matrices $\pi \in \{0,1\}^{n\times n}$ (for variable permutation) and $\sigma \in \{0,1\}^{m\times m}$ (for constraint permutation). The values of these variables are interpreted as follows: - $x_{j}$ is mapped to $x_{j'}$ if and only if $\pi_{jj'} = 1$ - constraint $i$ is mapped to constraint $i'$ if and only if $\sigma_{ii'} = 1$ . The QUBO formulations contain bilinear terms of the form $\sigma_{ii'}\pi_{jj'}$ . Following the aforementioned interpretations, we can see that $A_{ij}$ is mapped to $A_{i'j'}$ in the coefficient matrix if and only if $\sigma_{ii'}\pi_{jj'} = 1$ . Moreover, we define a reasonable permutation as: - a permutation of variables that maps variables with the same domain and objective coefficients, or - a permutation of constraints that maps constraints with the same right-hand-side constants $b$ . Accordingly, let $r(a)$ be the index set of variables (resp. constraints) that can be reasonably permuted with $x_{a}$ (resp. constraint $a$ ). Furthermore, we denote the set of all reasonable variable permutations as $\varPi := \{\pi_{jj'} : j' \in r(j), j \in \{1, \ldots, n\}\}$ with complement $\varPi^{\complement}$ , and the set of all reasonable constraint permutations as $\Sigma := \{\sigma_{ii'} : i' \in r(i), i \in \{1, \ldots, m\}\}$ with complement $\Sigma^{\complement}$ . Note that $\varPi$ (resp. $\Sigma$ ) can be seen as a permuted block-diagonal submatrix of $\pi$ (resp. $\sigma$ ), as illustrated in Section 3.1. While this definition of reasonable is sufficient to ensure symmetry detection for the generic form of MIP seen in (1), we can add modifications to handle richer MIP structures, by additionally requiring that reasonable permutations only map: - variables with the same upper and lower bounds - constraints with the same constraint sense We can also sharpen the notion of reasonable to reduce formulation size, namely by only permitting permutations between: - variables that are present in the same number of constraints - constraints containing the same number of variables These modifications are detailed in Section 3. 
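As an illustration of this notation, the following Python sketch (helper names and the toy instance are our own, not the paper's example) computes the index sets $r(\cdot)$ from the basic definition of reasonable and reports $\nu$, $\mu$, $q_{Full}$, and $q_{Reduced}$; the sharpened refinements listed above are not applied here.

```python
# Illustrative sketch: reasonable-permutation index sets and QUBO sizes.
c = [1, 1, 2, 2, 3]                 # objective coefficients (n = 5)
b = [10, 10, 7]                     # right-hand sides (m = 3)
is_int = [True] * 5                 # variable domains (all integer here)

def r_var(j):
    """Indices of variables reasonably permutable with x_j."""
    return [j2 for j2 in range(len(c))
            if c[j2] == c[j] and is_int[j2] == is_int[j]]

def r_con(i):
    """Indices of constraints reasonably permutable with constraint i."""
    return [i2 for i2 in range(len(b)) if b[i2] == b[i]]

nu = sum(len(r_var(j)) for j in range(len(c)))   # |Pi|
mu = sum(len(r_con(i)) for i in range(len(b)))   # |Sigma|
q_full = len(c) ** 2 + len(b) ** 2
q_reduced = nu + mu
print(nu, mu, q_full, q_reduced)                 # 9 5 34 14
```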
# 2.2 Full and Reduced QUBOs Following in the spirit of QUBO formulations for graph isomorphism, we seek to construct a QUBO with the property that for any optimal solution (with objective value 0) we can easily extract a corresponding formulation symmetry. First, let us ensure that $\pi$ and $\sigma$ are doubly stochastic and thus represent permutations. To do this, we incorporate the following penalty terms in the objective: $$ H _ {B, \pi} = \sum_ {j = 1} ^ {n} \left(\sum_ {j ^ {\prime} = 1} ^ {n} \pi_ {j j ^ {\prime}} - 1\right) ^ {2} + \sum_ {j ^ {\prime} = 1} ^ {n} \left(\sum_ {j = 1} ^ {n} \pi_ {j j ^ {\prime}} - 1\right) ^ {2} \tag {2} $$ $$ H _ {B, \sigma} = \sum_ {i = 1} ^ {m} \left(\sum_ {i ^ {\prime} = 1} ^ {m} \sigma_ {i i ^ {\prime}} - 1\right) ^ {2} + \sum_ {i ^ {\prime} = 1} ^ {m} \left(\sum_ {i = 1} ^ {m} \sigma_ {i i ^ {\prime}} - 1\right) ^ {2} \tag {3} $$ Note that further along in this paper we will be working with partial matrices, i.e. removing elements of $\pi$ and $\sigma$ from our formulations; for simplicity, we will preserve the notation for these expressions, and which elements are preserved in sums will be clear from context. Now, let us enforce that only reasonable permutations are allowed. We accomplish this with the following penalty terms: $$ H _ {\pi} = \sum_ {\Pi^ {\complement}} \pi_ {j j ^ {\prime}} ^ {2} \tag {4} $$ $$ H _ {\sigma} = \sum_ {\Sigma^ {\complement}} \sigma_ {i i ^ {\prime}} ^ {2} \tag {5} $$ Finally, we penalize any combination of variable and constraint permutations that map values of the coefficient matrix $A$ to entries with different values. $$ H _ {A} = \sum_ {A _ {i j} \neq A _ {i ^ {\prime} j ^ {\prime}}} \sigma_ {i i ^ {\prime}} \pi_ {j j ^ {\prime}}. \tag {6} $$ Putting these together gives our Full QUBO formulation: $$ \min _ {\pi , \sigma} H _ {F u l l} := H _ {B, \pi} + H _ {B, \sigma} + H _ {\pi} + H _ {\sigma} + H _ {A} \tag {7} $$ This formulation consists of $q_{Full} \coloneqq n^2 + m^2$ variables. This can be reduced by restricting our formulation only to certain variables as defined by the reasonable permutation sets $\varPi$ and $\varSigma$ , subsequently allowing us to drop the $H_{\pi}$ and $H_{\sigma}$ terms. Hence, dropping certain variables gives the Reduced QUBO: $$ \min _ {\Pi , \Sigma} H _ {R e d u c e d} := H _ {B, \pi} + H _ {B, \sigma} + H _ {A} \tag {8} $$ Note that the variables excluded from the formulation, namely $\varPi^{\complement},\varSigma^{\complement}$ , are fixed to zero. The number of variables in the Reduced QUBO is $q_{Reduced}:= \nu +\mu$ where $\nu = |\varPi| = \sum_{j = 1}^{n}|r(j)|$ and $\mu = |\varSigma| = \sum_{i = 1}^{m}|r(i)|$ . Note that $n\leq \nu \leq n^2$ and $m\leq \mu \leq m^2$ . The values of $\nu$ and $\mu$ depend on the structure of the MIP, and we explore this empirically with the MIPLIB 2017 problem set in Section 3. Proposition 1. If $H_{Full}^{*} = 0$ for some $(\pi^{*},\sigma^{*})$ , then $\pi^{*}$ is a formulation symmetry. Moreover, if $\bar{\pi}$ is a formulation symmetry, then there exists some corresponding $\bar{\sigma}$ such that $H_{Full}(\bar{\pi},\bar{\sigma}) = 0$ . Proof. Let $H_{Full}^{*} = 0$ . Since $H_{B,\pi}, H_{B,\sigma}, H_{\pi}, H_{\sigma}$ , and $H_{A}$ all consist of sums of nonnegative terms, each individual term must be zero. Since $H_{B,\pi} = 0$ and $H_{B,\sigma} = 0$ , $\pi^{*}$ and $\sigma^{*}$ are doubly stochastic and thus represent permutations. 
$H_{\pi} = 0$ and $H_{\sigma} = 0$ only if only reasonable permutations are selected, which ensures that $\pi(c) = c$ , $\sigma(b) = b$ , and integer variables are preserved. Finally, we must have that $A_{\sigma(i),\pi(j)} = A_{ij}$ because $H_{A} = 0$ . Now, suppose $\bar{\pi}$ is a formulation symmetry and $\bar{\sigma}$ is a corresponding constraint permutation. Since both matrices are permutations, their matrix representations are doubly stochastic and thus $H_{B,\bar{\pi}} = H_{B,\bar{\sigma}} = 0$ . From the definition of formulation symmetry, we also have that integer variables are preserved, $\bar{\pi}(c) = c$ and $\bar{\sigma}(b) = b$ , so $H_{\bar{\pi}} = H_{\bar{\sigma}} = 0$ . Finally, since $A_{\bar{\sigma}(i),\bar{\pi}(j)} = A_{ij}$ , this gives us $H_A = 0$ , and so $H_{Full}^* = 0$ . Corollary 1. If $H_{Reduced}^{*} = 0$ for some $(\pi^{*},\sigma^{*})$ , then $\pi^{*}$ is a formulation symmetry. Moreover, if $\bar{\pi}$ is a formulation symmetry, then there exists some corresponding $\bar{\sigma}$ such that $H_{Reduced}(\bar{\pi},\bar{\sigma}) = 0$ . Proof. Suppose that $H_{Reduced}^{*} = 0$ for some $(\pi^{*},\sigma^{*})$ . Then by nonnegativity, each term $H_{B,\pi}$ and $H_{B,\sigma}$ must be zero, and so $\pi^{*}$ and $\sigma^{*}$ are doubly stochastic (permutation) matrices. Likewise, $H_{A} = 0$ . Furthermore, by construction, we have variables in $\Pi^{\complement}$ and $\Sigma^{\complement}$ fixed to zero, and so $H_{\pi} = H_{\sigma} = 0$ . Therefore, $H_{Full}^{*} = 0$ and so $\pi^{*}$ is a formulation symmetry by Proposition 1. Now suppose that $\pi^{*}$ is a formulation symmetry and $\sigma^{*}$ is a corresponding permutation of the constraints. Since they are both permutations, their matrix representations are doubly stochastic, so $H_{B,\pi}^{*} = H_{B,\sigma}^{*} = 0$ . By definition of formulation symmetry, the constraint matrix is preserved, thus $H_{A}^{*} = 0$ . Therefore, $H_{Reduced}^{*} = H_{B,\pi}^{*} + H_{B,\sigma}^{*} + H_{A}^{*} = 0$ . Maximum Required Qubits for DWave Embeddings We can estimate the resource requirements of our QUBO formulations. One of the key challenges to solving QUBO problems on a DWave quantum annealer is finding an embedding of the problem on the topology of the hardware, which involves a limited number of qubits that are not fully connected. The first DWave quantum annealers arranged their qubits along a Chimera graph architecture, which allows efficient embeddings of complete graphs. Subsequent generations involved Pegasus graphs, which built off of the Chimera structure to allow for a higher degree of connectivity. The next generation of DWave technology will adopt the Zephyr graph, which is described by a grid parameter $g$ and a tile parameter $t$ and denoted as $Z_{g,t}$. Due to physical manufacturing and design concerns, it is easiest to fix the tile parameter to $t = 4$ while increasing $g$ in order to increase the size of the computer—we will focus on such graphs throughout this section, simply denoted as $Z_{g}$ . Using the results in and, we can derive an upper bound on the number of qubits required to embed our Symmetry Detecting QUBOs. Proposition 2. A QUBO problem with $q$ variables can be embedded in polynomial time on a $Z_g$ topology with at least $\frac{(q + 8)^2}{8} + q + 8$ fully functional qubits. Proof. The graph $Z_{g}$ contains $32g^{2} + 16g$ qubits. As described in, the largest complete graph that can be efficiently embedded on $Z_{g}$ using the algorithm in is of size $16g - 8$ , all with chains of length $g$ . 
Therefore, if we need to embed $q$ variables, we need $g \geq \frac{q + 8}{16}$ and thus at least $\frac{(q + 8)^2}{8} + q + 8$ qubits in our Zephyr graph. Once the complete graph is embedded, we then delete the unneeded edges for our particular problem. There are a few caveats with the result from Proposition 2. In practice, not all qubits on a quantum annealer are typically operable, and the arrangement of the inoperable qubits could require a larger topology. On the other hand, since the desired graphs are certainly not complete, it is very likely that smaller embeddings could be found. We explore this dynamic in Section 3 for our Reduced QUBO. QUBO-Plus Variations We also consider QUBO-Plus variants of our formulations, which allow us to transform some of our penalty terms into constraints. This is compatible with quantum computers such as DWave's hybrid solvers, which can handle larger problems of up to 10000 variables. In particular, we move the requirement that $\pi$ and $\sigma$ are doubly stochastic into linear constraints for both Full and Reduced formulations. Both formulations require the same number of variables as their respective QUBO formulations with the addition of $2n + 2m$ constraints and can be seen in Figure 2. Fig. 2. Symmetry Detecting QUBO-Plus Formulations, Full-sized (left) and Reduced (right). # 2.3 Decompositions To further reduce QUBO formulation size, we consider decomposing over the reasonable permutations given by $r(j)$ . Our decomposition over $r(j)$ is similar to the Reduced form; however, we must take care to ensure that the variables which cannot be reasonably permuted with $x_{j}$ are fixed, so that the constraint matrix remains unchanged. As such, we will still include the variables $\pi_{j'j'}$ for all $j' \notin r(j)$ and either fix them as 1 or require via penalty terms or constraints that they must equal 1, thus ensuring the position of each such $x_{j'}$ remains fixed. Let $H_{F,j} = \sum_{j' \notin r(j)} (1 - \pi_{j'j'})^2$ . We denote the complete set of $\pi$ variables in this decomposition as $\Pi_j := \{\pi_{j'j''} : j', j'' \in r(j)\} \cup \{\pi_{j'j'} : j' \notin r(j)\}$ . We then have the following QUBO Decomposition over $\Pi_j$ : $$ \min _ {\Pi_ {j}, \Sigma} H _ {D e c o m p} = H _ {B, \pi} + H _ {B, \sigma} + H _ {A} + H _ {F, j} \tag {19} $$ This formulation involves $q_{Decomp} = |r(j)|^2 + (n - |r(j)|) + \mu$ variables. As with the Reduced form, the size depends on problem structure, which we explore empirically in Section 3. Note that this formulation is restricted in the sense of only providing symmetries on the set of variables symmetric to $x_j$ rather than across the entire MIP. We note also that, in principle, decomposition could also be done over the set of constraints in $r(i)$ , although it is less desirable as the end-goal is generally to identify symmetric variables. Proposition 3. If $H_{Decomp}^{*} = 0$ for some $\pi^{*}, \sigma^{*}$ , then $\pi^{*}$ is a formulation symmetry that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_{j}$ . Furthermore, if $\bar{\pi}$ is a formulation symmetry that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_{j}$ , then there exists some $\bar{\sigma}$ such that $H_{Decomp}(\bar{\pi}, \bar{\sigma}) = 0$ . Proof. Suppose $H_{Decomp}^{*} = 0$ for some $\pi^{*}, \sigma^{*}$ . Since each term is nonnegative, we must have that each term is zero-valued at $(\pi^{*}, \sigma^{*})$ . 
As $H_{B,\pi} = 0$ and $H_{B,\sigma} = 0$ , $\pi^{*}$ and $\sigma^{*}$ are doubly stochastic and thus represent permutations. Since the formulation only contains variables that represent mappings within reasonable permutations, the integer variables, objective coefficients, and constraint constants are preserved. Since $H_{F,j} = 0$ , the variables which cannot be reasonably permuted with $x_{j}$ must be fixed and thus the permutations remain valid. Finally, since $H_{A} = 0$ , the coefficient matrix is preserved. Now consider some formulation symmetry $\bar{\pi}$ that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_j$ . By definition, there exists some corresponding $\bar{\sigma}$ that creates a valid permutation of the constraints. Since these are permutations, their matrix representations are doubly stochastic and so $H_{B,\pi} = H_{B,\sigma} = 0$ . If the position of $x_{j'}$ is fixed, then $\bar{\pi}_{j'j'} = 1$ and thus $H_{F,j} = 0$ . Finally, formulation symmetries preserve the coefficient matrix, so $H_A = 0$ . We also have the QUBO-Plus Decomposition over $\Pi_j$ : $$ \min _ {\Pi_ {j}, \Sigma} H _ {A} \tag {20} $$ $$ \text {s . t .} \quad \sum_ {j ^ {\prime} \in r (j)} \pi_ {j ^ {\prime} j ^ {\prime \prime}} = 1, \quad j ^ {\prime \prime} \in r (j) \tag {21} $$ $$ \sum_ {j ^ {\prime \prime} \in r (j)} \pi_ {j ^ {\prime} j ^ {\prime \prime}} = 1, \quad j ^ {\prime} \in r (j) \tag {22} $$ $$ \sum_ {i \in r (i ^ {\prime})} \sigma_ {i i ^ {\prime}} = 1, \quad i ^ {\prime} \in \{1, \dots , m \} \tag {23} $$ $$ \sum_ {i ^ {\prime} \in r (i)} \sigma_ {i i ^ {\prime}} = 1, \quad i \in \{1, \dots , m \} \tag {24} $$ $$ \pi_ {j ^ {\prime} j ^ {\prime}} = 1, \quad j ^ {\prime} \notin r (j) \tag {25} $$ This again features $q_{Decomp}$ variables as well as $2|r(j)| + 2m + (n - |r(j)|)$ constraints. # 3 Experiments # 3.1 Example We begin with a simple knapsack problem as an example MIP instance: $$ \begin{array}{l} \max _ {x} \; x _ {1} + x _ {2} + x _ {3} + 2 x _ {4} + 2 x _ {5} + 2 x _ {6} + 3 x _ {7} \\ \text {s . t .} \quad x _ {1} + x _ {2} + 2 x _ {3} + x _ {4} + x _ {5} + x _ {6} + x _ {7} \leq 100 \tag {26} \\ x \in \mathbb {Z} ^ {7} \end{array} $$ The problem has the following orbits: $\{x_1, x_2\}, \{x_3\}, \{x_4, x_5, x_6\}, \{x_7\}$ . In our full QUBO, we have decision variables $\pi \in \{0, 1\}^{7 \times 7}$ and $\sigma_{1,1}$ . Thus, $q_{Full} = 7^{2} + 1^{2} = 50$ . We can, however, reduce the problem based on the following sets of reasonable permutations: $$ \begin{array}{l} r (1) = \{1, 2 \} \\ r (2) = \{1, 2 \} \\ r (3) = \{3 \} \\ r (4) = \{4, 5, 6 \} \\ r (5) = \{4, 5, 6 \} \\ r (6) = \{4, 5, 6 \} \\ r (7) = \{7 \} \\ \end{array} $$ The $\pi$ variables that are excluded are shown in Figure 3. Fig. 3. Visualizations of the Reduced (left) and Decomposed (right) forms. Here $\nu = 15$ while $\mu = 1$ , so $q_{Reduced} = 15 + 1 = 16$ . We can quantify the effectiveness of the reduction by calculating $\frac{q_{Reduced}}{q_{Full}} = 0.32$ , so the Reduced form requires $32\%$ as many variables as the Full. Should we require our QUBO to be even smaller, we can use the Decomposed form over any of the sets of reasonable permutations, say $\Pi_4$ , at the potential cost of dropping certain symmetries. Again, we can visualize which variables are excluded by looking at Figure 3. Here we now have $q_{Decomp} = 13$ , requiring $26\%$ of the variables of the Full formulation. 
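To connect the Reduced formulation (8) with this example, the following Python sketch (our own illustration; it evaluates the penalty function classically rather than building the QUBO matrix or calling an annealer) computes $H_{Reduced} = H_{B,\pi} + H_{B,\sigma} + H_{A}$ over the reduced variable sets listed above and confirms that a formulation symmetry attains objective value 0, as in Corollary 1.

```python
# Classical evaluation of H_Reduced for the Section 3.1 knapsack (illustrative).
# The reduced index sets below are the paper's r(.) sets, shifted to 0-indexing.
A = [[1, 1, 2, 1, 1, 1, 1]]                       # coefficient matrix (1 x 7)
RV = [[0, 1], [0, 1], [2], [3, 4, 5], [3, 4, 5], [3, 4, 5], [6]]   # r(j)
RC = [[0]]                                        # r(i)

def h_reduced(pi, sigma):
    """pi/sigma: dicts over reduced entries (j, j') and (i, i') with 0/1 values."""
    h = 0.0
    for j, rj in enumerate(RV):                   # doubly-stochastic penalties (2)
        h += (sum(pi.get((j, j2), 0) for j2 in rj) - 1) ** 2   # row j
        h += (sum(pi.get((j2, j), 0) for j2 in rj) - 1) ** 2   # column j
    for i, ri in enumerate(RC):                   # doubly-stochastic penalties (3)
        h += (sum(sigma.get((i, i2), 0) for i2 in ri) - 1) ** 2
        h += (sum(sigma.get((i2, i), 0) for i2 in ri) - 1) ** 2
    for (i, i2), s in sigma.items():              # coefficient-mismatch penalty (6)
        for (j, j2), p in pi.items():
            if s and p and A[i][j] != A[i2][j2]:
                h += 1.0
    return h

sigma_id = {(0, 0): 1}
pi_id = {(j, j): 1 for j in range(7)}             # identity: trivially a symmetry
pi_swap = dict(pi_id)                             # swap x_4 and x_5 (indices 3 and 4)
pi_swap.update({(3, 3): 0, (4, 4): 0, (3, 4): 1, (4, 3): 1})
print(h_reduced(pi_id, sigma_id), h_reduced(pi_swap, sigma_id))   # 0.0 0.0
```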
# 3.2 Experiments with MIPLIB 2017 To study how much the Reduced and Decomposed forms of our formulations decrease the number of variables in our QUBOs in practice, we calculate values of $\nu$ , $\mu$ , and the size of the largest decomposition, which we refer to as MaxDecomp, for each problem in the MIPLIB 2017 collection. We then use these values to determine how many variables each formulation would require as a percent of the Full formulation. On average, the Reduced form requires $32\%$ of the number of QUBO variables, while the MaxDecomp form requires $30\%$ of the number of variables compared to the Full formulation. The full distribution of the percent of $\pi$ , $\sigma$ entries needed is shown in Figure 4. Fig. 4. Distributions of $q_{Reduced}$ (left) and $q_{MaxDecomp}$ (right) as a proportion of $q_{Full}$ within the MIPLIB 2017 Collection Set. When $\frac{q_{Reduced}}{q_{Full}}$ is closer to 1, more of the MIP variables and constraints could be reasonably permuted. However, the overall size of the QUBO will remain quite large. On the other hand, if $\frac{q_{Reduced}}{q_{Full}}$ is closer to 0, then the QUBO will be much smaller, but the MIP likely has very little symmetry worth exploiting. Therefore, when working with an MIP problem, the trade-off of QUBO size and potential for symmetry must be balanced. We have also calculated a power regression of the form $y = x^{k}$ to estimate $\nu$ and $\mu$ as a function of $n$ and $m$ in the test set. We have $\nu \approx n^{1.764}$ and $\mu \approx m^{1.834}$ . Scatter-plots of the data can be seen in Figure 5. Fig. 5. Power regressions of $\nu$ and $\mu$ as functions of $n$ and $m$ within the MIPLIB 2017 Collection Set. # 3.3 DWave Embeddings To use a DWave quantum annealer, the QUBO problem graph must be embedded on the hardware's topology. While we can use Proposition 2 to determine the number of qubits needed in a Zephyr graph (with $t = 4$ ), this is a rather loose bound in practice. Thus we apply the find_embedding routine from the minorminer library provided with DWave software, which is a heuristic for embedding a source graph (our QUBO) to a target graph (an appropriately sized Zephyr graph). For each reduced QUBO, we set the target graph as $Z_{g}$ where $g = \lceil \frac{\nu + \mu + 8}{16} \rceil$ , the graph for which a $K_{\nu + \mu}$ graph could be embedded. Due to the computation time of the heuristic routine, we were only able to test the embedding heuristic on the 12 smallest instances. For the largest four instances within this set, no embedding was found within the routine's time limit. Complete results can be seen in Table 1. On average, the heuristic required $39.8\%$ of the number of qubits that the complete-graph embedding requires. We run a regression on the number of physical qubits required to embed each problem on a Zephyr graph, finding that $qubits \approx 0.98q_{Reduced}^2$ . A stronger correlation was found with the number of QUBO terms, which is reasonable as the connectivity (or lack thereof) is what necessitates the embeddings in the first place. We found that $qubits \approx 0.324(\#terms)$ . Visualizations of these regressions can be seen in Figure 6. Based on the formula for the number of qubits required to embed the $K_{q_{Reduced}}$ graph as well as the fact that there are $O(v^2)$ edges on a graph with $v$ vertices, it is not surprising that the required physical qubits are still $O(q_{Reduced}^2)$ and $O(\#terms)$ . 
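As a rough illustration of these resource estimates, the sketch below (our own; it rounds the grid parameter up to an integer, and the optional part assumes the dwave-networkx and minorminer packages are available) computes the Zephyr grid parameter and qubit count implied by Proposition 2 for a reduced QUBO with $q$ variables, and indicates where a heuristic embedding call would go.

```python
# Sketch: Zephyr sizing via Proposition 2, plus an optional heuristic embedding.
import math

def zephyr_bound(q):
    """Smallest integer grid parameter g with 16g - 8 >= q, and the qubit count of Z_g."""
    g = math.ceil((q + 8) / 16)
    return g, 32 * g * g + 16 * g

q = 16                                   # q_Reduced for the Section 3.1 knapsack example
g, qubits = zephyr_bound(q)
print(g, qubits)                         # Z_2 with 160 qubits suffices for K_16

# Optional heuristic embedding of the actual, sparse QUBO graph:
# import dwave_networkx as dnx
# import minorminer
# target = dnx.zephyr_graph(g)           # Z_g with tile parameter t = 4
# source_edges = [...]                   # one edge per nonzero off-diagonal QUBO term
# emb = minorminer.find_embedding(source_edges, target)
# print(sum(len(chain) for chain in emb.values()))   # physical qubits actually used
```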
Combining our regressions, for the reduced symmetry formulation of an MIP with $n$ variables and $m$ constraints, we would expect to require a Zephyr topology quantum annealer with $0.98(n^{1.764} + m^{1.834})^2 \in O(n^{3.528} + m^{3.668})$ qubits. Fig. 6. Regressions of physical qubits required for embedding on Zephyr graphs vs $q_{Reduced}$ (left), where the dashed line is the bound described in Proposition 2, and the number of QUBO terms (right)
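Finally, a back-of-envelope calculator (illustrative only, using the fitted exponents reported above) for the expected qubit requirement of the reduced formulation as a function of the MIP dimensions:

```python
# Rough estimate combining the reported regressions: nu ~ n^1.764, mu ~ m^1.834,
# and physical qubits ~ 0.98 * q_Reduced^2.  Purely illustrative.
def expected_qubits(n, m):
    q_reduced = n ** 1.764 + m ** 1.834
    return 0.98 * q_reduced ** 2

print(f"{expected_qubits(1000, 500):.2e}")   # rough estimate for n = 1000, m = 500
```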
arxiv_math
2025-12-16T00:00:00Z
https://arxiv.org/pdf/2512.15070
{"title": "QUBO Formulations for MIP Symmetry Detection", "raw_content": "# QUBO Formulations for MIP Symmetry Detection\n\nAlexander While and Chen Chen\n\nThe Ohio State University, Columbus OH, USA\n\n{while.1, chen.8018}@osu.edu\n\nAbstract. Formulation symmetry in mixed-integer programming (MIP) can hinder solver performance by inducing redundant search, but detecting such symmetries is also a significant computational challenge. This paper explores the potential for quantum computing to handle symmetry detection. Quantum is a promising alternative to classical compute, but this emerging technology has limited hardware capacity in terms of input problem size. This paper explores the use of Quadratic Unconstrained Binary Optimization (QUBO) models for symmetry detection, as QUBO is the canonical format for quantum optimization platforms. To help address the input size bottleneck, we develop full, reduced, and decomposed QUBO as well as QUBO-Plus formulations for MIP symmetry detection. Computational experiments on the MIPLIB 2017 benchmark are used to estimate the quantum computing resources needed for practical problems.\n\nKeywords: MIP $\\cdot$ QUBO $\\cdot$ Quantum $\\cdot$ Symmetry\n\n# 1 Introduction\n\nConsider a generic Mixed-Integer Programming (MIP) problem of the form\n\n$$\n\\begin{array}{l} \\max _ {x} c ^ {T} x \\\\ \\text {s . t .} \\quad A x \\leq b \\tag {1} \\\\ x \\in \\mathbb {Z} ^ {p} \\times \\mathbb {R} ^ {n - p} \\\\ \\end{array}\n$$\n\nwhere $A$ is an $n \\times m$ matrix. This is a general-purpose algebraic modeling paradigm that enables access to powerful solver software; for example, enabling billions of dollars in savings in power systems operations [14].\n\nAs MIP is NP-hard in general, there is persistent demand for computational improvements of solvers. In this paper we explore the potential of quantum computing to accelerate MIP solvers, namely by considering the problem of symmetry detection. Symmetry can slow down solvers, as identical solutions may be repeatedly enumerated [28, 7, 34, 37]. To mitigate this redundant computation, one must first identify symmetries within the problem. In this paper, we present\n\nQuadratic Unconstrained Binary Optimization (QUBO) formulations for detecting symmetries within an MIP that can be accepted and heuristically solved for on a quantum annealer or other quantum-inspired solvers (see, e.g. [4, 18, 19]).\n\nOur work adds to the literature on hybrid schemes for MIP, which has considered quantum subroutines involving: primal heuristics [41]; Benders cut generation [46, 35]; combinatorial/pure integer subproblems [16, 43, 5, 12, 45]; and branching [31]. This paper is unique in considering the problem of formulation symmetry detection, which is notably suspected to be NP-intermediate (see Section 1.2), i.e. a problem in NP that is neither NP-Hard nor in P. In contrast, the aforementioned hybrid schemes all involve NP-hard problems for quantum computing, which may be beyond the realm of provable quantum advantage (see Section 1.1). We note also that efforts to accelerate linear programming with quantum computing (see, e.g. [30, 33]) can also be incorporated into MIP solvers in hybrid fashion.\n\n# 1.1 Quantum Computing\n\nQuantum computing has substantial potential; notably, Integer Factorization (IF) was proven to be in bounded-error quantum polynomial time (BQP) via Shor's Algorithm [40]. 
IF is conjectured to be NP-Intermediate, hence challenging for classical compute; indeed, the effectiveness of the widely-used RSA encryption relies on the intractability of IF. Quantum computing has clear theoretical limits, however, as BQP is a subset of QIP=PSPACE [21]; moreover, it remains an open question whether BQP contains NP or vice versa, though there are reasons to suspect they are incomparable (e.g. [1]).\n\nAt present, practice falls rather short of theory due to the many challenges with scaling quantum computers to sufficient size. For example, gate-based quantum computers struggle to run Shor's algorithm on numbers with more than 2 digits [44]. In contrast, classical methods can handle 240-digit numbers [11]. Indeed, at present, Shor's algorithm itself seems more effectively realized on a classical computer: Willsch et al. [44] recently simulated Shor's Quantum Factoring Algorithm with classical computing to factor a 12-digit number.\n\nAn alternative to gate-based quantum computers are quantum annealers, which are composed of physical qubits arranged in a weighted planar graph. Annealers attempt to determine the values of the qubits that lead to the lowest total energy in the graph. While attaining a true minimum is not guaranteed, the process can be repeated thousands of times very quickly, with each sample taking as little as 20ns [4]. This process is easily extended to QUBO formulations, which is the canonical format for quantum-based optimization—a generic setup is provided in Figure 1. This type of quantum computer can thus serve as a fast heuristic in hybrid optimization schemes. We note that quantum annealers cannot run Shor's algorithm, but 7-digit numbers have been factorized via QUBO formulation [36]—up to 13-digits when substantial classical compute is applied for preprocessing [22].\n\nThere have also been advancements in hybrid quantum-classical hardware and algorithms that enable sampling of solutions to QUBO problems with some\n\nconstraints: QUBO-Plus models. Common constraints include packing, covering, and knapsack constraints (see [18, 19] for details). In addition to the aforementioned IF results, there has been substantial recent efforts at benchmarking on optimization problems (e.g. [3, 26, 38, 32, 39, 23]), which tell a similar story: quantum computers may serve as an intriguing, rapid primal heuristic, but they are quite limited in input size at present.\n\n$$\n\\min _ {x \\in \\{0, 1 \\} ^ {n}} \\sum_ {i = 1} ^ {n} \\sum_ {j = 1} ^ {n} Q _ {i j} x _ {i} x _ {j}\n$$\n\nFig.1. A generic QUBO formulation\n\n# 1.2 MIP Symmetry\n\nThe symmetry group of a MIP problem instance is the set of constraint and variable permutations that preserve the feasible region without changing the value of the objective function. However, working with this group is computationally challenging, as even determining feasibility of a MIP instance is NP-hard. Instead, solvers typically focus on formulation symmetry: a variable permutation $\\pi \\in S_{n}$ such that there exists a corresponding constraint permutation $\\sigma \\in S_{m}$ where:\n\n- $\\pi (\\{1,\\ldots ,p\\}) = \\{1,\\ldots ,p\\}$ (integer variables preserved), \n- $\\pi(c) = c$ (objective preserved), \n- $\\sigma(b) = b$ (constraint constants preserved), and \n- $A_{\\sigma(i), \\pi(j)} = A_{ij}$ (rows permuted by $\\sigma$ , columns by $\\pi$ ).\n\nFormulation symmetry detection can be converted to a graph theoretic problem by representing the MIP formulation as a bipartite vertex- and edge-colored graph. 
In one partition, vertices represent MIP variables, with colors corresponding to their objective coefficients. In the other partition, vertices represent MIP constraints, with colors representing their constant terms. Edges are colored based on variable coefficients from the constraints. The automorphism group of this graph is equivalent to the formulation symmetry group, and so finding MIP symmetries reduces to finding graph automorphisms; consequently, software such as Nauty [29] or Bliss [24] can be applied to determine generators of the group. Once generators are obtained, orbits—that is, sets of variables which can be permuted without changing the MIP formulation—can be calculated and then exploited in the MIP solver. While these graph-based algorithms are often fast in practice, they have worst-case exponential run-times, and so in practice one may terminate search early after finding only a strict subset of generators.\n\nIndeed, the problem of graph automorphism detection is suspected to be NP-Intermediate [6]; however, unlike IF it is a longstanding open question whether the problem is in BQP [2].\n\nAfter running symmetry detection, techniques can be applied to prevent symmetric solutions from being revisited in the branch-and-bound tree. Well-established methods include Isomorphic Pruning [28], Orbital Fixing [34], and symmetry-breaking constraints [25]. Other attempts have also been made leveraging abstract algebraic tools [7]. For descriptions of these techniques, as well as an experimental comparison of their performances, see [37].\n\nSolvers such as SCIP [8] run their own native implementations of symmetry detection (working directly with MIP formulation data structures) to reduce overhead. In a similar vein, we develop in this paper QUBO formulations specifically catered to formulation symmetry detection. This avoids having to reduce a MIP to existing QUBOs for graph isomorphism [27, 13, 20, 42].\n\n# 2 QUBO Symmetry Detection Formulations\n\n# 2.1 Setup and Notation\n\nThe following setup is used for all our QUBO formulations. The decision variables are square, binary matrices $\\pi \\in \\{0,1\\}^{n\\times n}$ (for variable permutation) and $\\sigma \\in \\{0,1\\}^{m\\times m}$ (for constraint permutation). The output of these variables are interpreted as follows:\n\n- $x_{j}$ is mapped to $x_{j'}$ if and only if $\\pi_{jj'} = 1$ \n- constraint $i$ is mapped to constraint $i'$ if and only if $\\sigma_{ii'} = 1$ .\n\nThe QUBO formulations contain bilinear terms of the form $\\sigma_{ii'}\\pi_{jj'}$ . Following the aforementioned interpretations, we can see that $A_{ij}$ is mapped to $A_{i'j'}$ in the coefficient matrix if and only if $\\sigma_{ii'}\\pi_{jj'} = 1$ .\n\nMoreover, we define a reasonable permutation as:\n\n- a permutation of variables that maps variables with the same domain and objective coefficients, or \n- a permutation of constraints that maps constraints with the same right-hand constants $b$ .\n\nAccordingly, let $r(a)$ be the index set of variables (resp. constraints) that can be reasonably permuted with $x_{a}$ (resp. constraint $a$ ). Furthermore, we denote the set of all reasonable variable permutations as $\\varPi := \\{\\pi_{jj'} : j' \\in r(j), j \\in \\{1, \\ldots, n\\}\\}$ with complement $\\varPi^{\\complement}$ , and the set of all reasonable constraint permutations as $\\Sigma := \\{\\sigma_{ii'} : i' \\in r(i), i \\in \\{1, \\ldots, m\\}\\}$ with complement $\\Sigma^{\\complement}$ . Note that $\\varPi$ (resp. 
$\\Sigma$ ) can be seen as a permuted block-diagonal submatrix of $\\pi$ (resp. $\\sigma$ ), as illustrated in Section 3.1.\n\nWhile this definition of reasonable is sufficient to ensure symmetry detection for the generic form of MIP seen in (1), we can add modifications to handle MIP structures:\n\n- variables with the same upper and lower bounds \n- constraints with the same constraint sense\n\nWe can also sharpen the notion of reasonable to reduce formulation size, namely by eliminating:\n\n- variables that are present in the same number of constraints \n- constraints containing the same number of variables\n\nThese modifications are detailed in Section 3.\n\n# 2.2 Full and Reduced QUBOs\n\nFollowing in the spirit of QUBO formulations for graph isomorphism, we seek to construct a QUBO with the property that for any optimal solution (with objective value 0) we can easily extract a corresponding formulation symmetry. First, let us ensure that $\\pi$ and $\\sigma$ are doubly stochastic and thus represent permutations [17]. To do this, we incorporate the following penalty terms in the objective:\n\n$$\nH _ {B, \\pi} = \\sum_ {j = 1} ^ {n} \\left(\\sum_ {j ^ {\\prime} = 1} ^ {n} \\pi_ {j j ^ {\\prime}} - 1\\right) ^ {2} + \\sum_ {j ^ {\\prime} = 1} ^ {n} \\left(\\sum_ {j = 1} ^ {n} \\pi_ {j j ^ {\\prime}} - 1\\right) ^ {2} \\tag {2}\n$$\n\n$$\nH _ {B, \\sigma} = \\sum_ {i = 1} ^ {m} \\left(\\sum_ {i ^ {\\prime} = 1} ^ {m} \\sigma_ {i i ^ {\\prime}} - 1\\right) ^ {2} + \\sum_ {i ^ {\\prime} = 1} ^ {m} \\left(\\sum_ {i = 1} ^ {m} \\sigma_ {i i ^ {\\prime}} - 1\\right) ^ {2} \\tag {3}\n$$\n\nNote that further along in this paper we will be working with partial matrices, i.e. removing elements of $\\pi$ and $\\sigma$ from our formulations; for simplicity, we will preserve the notation for these expressions, and which elements are preserved in sums will be clear from context.\n\nNow, let us enforce that only reasonable permutations are allowed. We accomplish this with the following penalty terms:\n\n$$\nH _ {\\pi} = \\sum_ {\\Pi^ {\\complement}} \\pi_ {j j ^ {\\prime}} ^ {2} \\tag {4}\n$$\n\n$$\nH _ {\\sigma} = \\sum_ {\\Sigma^ {\\complement}} \\sigma_ {i i ^ {\\prime}} ^ {2} \\tag {5}\n$$\n\nFinally, we penalize any combination of variable and constraint permutations that map values of the coefficient matrix $A$ to entries with different values.\n\n$$\nH _ {A} = \\sum_ {A _ {i j} \\neq A _ {i ^ {\\prime} j ^ {\\prime}}} \\sigma_ {i i ^ {\\prime}} \\pi_ {j j ^ {\\prime}}. \\tag {6}\n$$\n\nPutting these together gives our Full QUBO formulation:\n\n$$\n\\min _ {\\pi , \\sigma} H _ {F u l l} := H _ {B, \\pi} + H _ {B, \\sigma} + H _ {\\pi} + H _ {\\sigma} + H _ {A} \\tag {7}\n$$\n\nThis formulation consists of $q_{full} \\coloneqq n^2 + m^2$ variables. This can be reduced by restricting our formulation only to certain variables as defined by the reasonable permutation sets $\\varPi$ and $\\varSigma$ , subsequently allowing us to drop the $H_{\\varPi}$ and $H_{\\varSigma}$ terms. Hence, dropping certain variables gives the Reduced QUBO:\n\n$$\n\\min _ {\\Pi , \\Sigma} H _ {R e d u c e d} := H _ {B, \\pi} + H _ {B, \\sigma} + H _ {A} \\tag {8}\n$$\n\nNote that the variables excluded from the formulation, namely $\\varPi^{\\mathbb{C}},\\varSigma^{\\mathbb{C}}$ , are fixed to zero. The number of variables in the Reduced QUBO is $q_{Reduced}:= \\nu +\\mu$ where $\\nu = |\\varPi| = \\sum_{j = 1}^{n}|r(i)|$ and $\\mu = |\\varSigma| = \\sum_{i = 1}^{m}|r(j)|$ . 
Note that $n\\leq \\nu \\leq n^2$ and $m\\leq \\mu \\leq m^2$ . The values of $\\nu$ and $\\mu$ depend on the structure of the MIP, and we explore this empirically with the MIPLIB 2017 problem set in Section 3.\n\nProposition 1. If $H_{Full}^{*} = 0$ for some $(\\pi^{*},\\sigma^{*})$ , then $\\pi^{*}$ is a formulation symmetry. Moreover, if $\\bar{\\pi}$ is a formulation symmetry, then there exists some corresponding $\\bar{\\sigma}$ such that $H_{Full}(\\bar{\\pi},\\bar{\\sigma}) = 0$ .\n\nProof. Let $H_{Full}^{*} = 0$ . Since $H_{B,\\pi}, H_{B,\\sigma}, H_{\\pi}, H_{\\sigma}$ , and $H_{A}$ all consist of sums of squared terms, then each individual term must be zero. Since $H_{B,\\pi} = 0$ and $H_{B,\\sigma} = 0$ , $\\pi^{*}$ and $\\sigma^{*}$ are doubly stochastic and thus represent permutations. $H_{\\pi} = 0$ and $H_{\\sigma} = 0$ only if reasonable permutations are mapped, which ensures that $\\pi(c) = c$ , $\\sigma(b) = b$ , and integer variables are preserved. Finally, we must have that $A_{\\sigma(i),\\pi(j)} = A_{ij}$ because $H_{A} = 0$ .\n\nNow, suppose $\\bar{\\pi}$ is a formulation symmetry and $\\bar{\\sigma}$ is a corresponding constraint permutation. Since both matrices are permutations, their matrix representations are doubly stochastic and thus $H_{B,\\bar{\\pi}} = H_{B,\\bar{\\sigma}} = 0$ . From the definition of formulation symmetry, we also have that integer variables are preserved, $\\bar{\\pi}(c) = c$ and $\\bar{\\sigma}(b) = b$ , so $H_{\\bar{\\pi}} = H_{\\bar{\\sigma}} = 0$ . Finally, since $A_{\\bar{\\sigma}(i),\\bar{\\pi}(j)} = A_{ij}$ , this gives us $H_A = 0$ , and so $H_{Full}^* = 0$ .\n\nCorollary 1. If $H_{Reduced}^{*} = 0$ for some $(\\pi^{*},\\sigma^{*})$ , then $\\pi^{*}$ is a formulation symmetry. Moreover, if $\\bar{\\pi}$ is a formulation symmetry, then there exists some corresponding $\\bar{\\sigma}$ such that $H_{Reduced}(\\bar{\\pi},\\bar{\\sigma}) = 0$ .\n\nProof. Suppose that $H_{Reduced}^{*} = 0$ for some $(\\pi^{*},\\sigma^{*})$ . Then by nonnegativity, each term $H_{B,\\pi}$ and $H_{B,\\sigma}$ must be zero, and so $\\pi^{*}$ and $\\sigma^{*}$ are doubly stochastic (permutation) matrices. Likewise, $H_{A} = 0$ . Furthermore, by construction, we have variables in $\\Pi^{\\mathbb{C}}$ and $\\Sigma^{\\mathbb{C}}$ fixed to zero, and so $H_{\\pi} = H_{\\sigma} = 0$ . Therefore, $H_{Full}^{*} = 0$ and so $\\pi^{*}$ is a formulation symmetry by Proposition 1.\n\nNow suppose that $\\pi^{*}$ is a formulation symmetry and $\\sigma^{*}$ is a corresponding permutation of the constraints. Since they are both permutations, their matrix representations are doubly stochastic, so $H_{B,\\pi}^{*} = H_{B,\\sigma}^{*} = 0$ . By definition of formulation symmetry, the constraint matrix is preserved, thus $H_{A}^{*} = 0$ . Therefore, $H_{Reduced}^{*} = H_{B,\\pi}^{*} + H_{B,\\sigma}^{*} + H_{A}^{*} = 0$ .\n\nMaximum Required Qubits for DWave Embeddings We can estimate the resource requirements of our QUBO formulations. One of the key challenges to solving QUBO problems on a DWave quantum annealer is finding an embedding of the problem on the topology of the hardware, which involves limited number of qubits that are not fully connected. The first DWave quantum annealers arranged their qubits along a Chimera graph architecture, which allows efficient embeddings of complete graphs. Subsequent generations involved Pegasus graphs, which built off of the Chimera structure to allow for a higher degree of connectivity [10]. 
The next generation of DWave technology will adopt the Zephyr graph, which are described by a grid parameter $g$ and a tile parameter $t$ and denoted as $Z_{g,t}$ [9]. Due to physical manufacturing and design concerns, it is easiest to fix the tile parameter to $t = 4$ while increasing $g$ in order to increase size of the computer [10]—we will focus on such graphs throughout this section, simply denoted as $Z_{g}$ .\n\nUsing the results in [9] and [10], we can create an upper bound of the number of qubits required to embed our Symmetry Detecting QUBOs.\n\nProposition 2. A QUBO problem with $q$ variables can be embedded in polynomial time on a $Z_g$ topology with at least $\\frac{(q + 8)^2}{8} + q + 8$ fully functional qubits.\n\nProof. The graph $Z_{g}$ contains $32g^{2} + 16g$ qubits. As described in [9], the largest complete graph that can be efficiently embedded on $Z_{g}$ using the algorithm in [10] is of size $16g - 8$ , all with chains of length $g$ . Therefore, if we need to embed $q$ variables, we need $g \\geq \\frac{q + 8}{16}$ and thus at least $\\frac{(q + 8)^2}{8} + q + 8$ qubits in our Zephyr graph. Once the complete graph is embedded, we then delete the unneeded edges for our particular problem.\n\nThere are a few caveats with the result from Proposition 2. In practice, not all qubits on a quantum annealer are typically operable, and the arrangement of the inoperable qubits could require a larger typology. On the other hand, since the desired graphs are certainly not complete, it is very likely that smaller embeddings could be found. We explore this dynamic in Section 3 for our Reduced QUBO.\n\nQUBO-plus Variations We also consider QUBO-plus variants of our formulations, which allows us to transform some of our penalty terms into constraints. This is compatible with quantum computers such as DWave's hybrid solvers, which can handle larger-sized problems of up to 10000 variables [15]. In particular, we move the requirement that $\\pi$ and $\\sigma$ are doubly stochastic into linear constraints for both Full and Reduced formulations. Both formulations require the same amount of variables as their respective QUBO formulations with the addition of $2n + 2m$ constraints and can be seen in Figure 2.\n\n# 2.3 Decompositions\n\nTo further QUBO formulation size, we consider decomposing over reasonable permutations given by $r(j)$ . Our decomposition over $r(j)$ is similar to the Re-\n\n![](images/c6e141c228a6f15781f6e1f7792a5b70e0c0db1b57726111d7c0f1c7139886b0.jpg) \nFig. 2. Symmetry Detecting QUBO-Plus Formulations, Full-sized (left) and Reduced (Right)\n\nduced form, however we must take care to ensure that the variables which cannot be reasonably permuted with $x_{j}$ are fixed to ensure that the constraint matrix remains unchanged. As such, we will still include the variables $\\pi_{j'j'}$ for all $j' \\notin r(j)$ and either fix them as 1 or require via penalty terms or constraints that they must equal 1, thus ensuring the position of $x_{j}$ remains fixed.\n\nLet $H_{F,j} = \\sum_{j' \\notin r(j)} (1 - \\pi_{j'j'})^2$ . We denote the complete set of $\\pi$ variables in this decomposition as $\\Pi_j := \\{\\pi_{j'j''} : j', j'' \\in r(j)\\} \\cup \\{\\pi_{j'j'} : j' \\notin r(j)\\}$ . We then have the following QUBO Decomposition over $\\Pi_j$ :\n\n$$\n\\min _ {\\Pi_ {j}, \\Sigma} H _ {D e c o m p} = H _ {B, \\pi} + H _ {B, \\sigma} + H _ {A} + H _ {F, j} \\tag {19}\n$$\n\nThis formulation involves $q_{Decomp} = |r(j)|^2 + (n - |r(j)|) + \\mu$ variables. 
As with the Reduced form, the size depends on problem structure, which we explore empirically in Section 3. Note that this formulation is restricted in the sense of only providing symmetries on the set of variables symmetric to $\\pi_j$ rather than across the entire MIP. We note also that, in principle, decomposition could also be done over the set of constraints in $r(i)$ , although it is less desirable as the end-goal is generally to identify symmetric variables.\n\nProposition 3. If $H_{Decomp}^{*} = 0$ for some $\\pi^{*}, \\sigma^{*}$ , then $\\pi^{*}$ is a formulation symmetry that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_{j}$ . Furthermore, if $\\bar{\\pi}$ is a formulation symmetry that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_{j}$ , then there exists some $\\bar{\\sigma}$ such that $H_{Decomp}(\\bar{\\pi}, \\bar{\\sigma}) = 0$ .\n\nProof. Suppose $H_{Decomp}^{*} = 0$ for some $\\pi^{*}, \\sigma^{*}$ . Since each term is nonnegative, we must have that each term is zero-valued at $(\\pi^{*}, \\sigma^{*})$ . As $H_{B,\\pi} = 0$ and $H_{B,\\sigma} = 0$ ,\n\nthen $\\pi^{*}$ and $\\sigma^{*}$ are double stochastic and thus represent permutations. Since the formulation only contains variables that represent mappings within reasonable permutations, the integer variables, objective coefficients, and constraint constants are preserved. Since $H_{F,j} = 0$ , the variables which cannot be reasonably permuted with $\\pi_{j}$ must be fixed and thus the permutations remain valid. Finally, if $H_{A} = 0$ , then the coefficient matrix is preserved.\n\nNow consider some formulation symmetry $\\bar{\\pi}$ that fixes the position of all $x_{j'}$ which cannot be reasonably permuted with $x_j$ . By definition, there exists some corresponding $\\bar{\\sigma}$ that creates a valid permutation of the constraints. Since these are permutations, their matrix representations are doubly stochastic and so $H_{B,\\pi} = H_{B,\\sigma} = 0$ . If the position of $x_{j'}$ is fixed, then $\\bar{x}_{j'j'} = 1$ and thus $H_{F,j} = 0$ . Finally, formulation symmetries preserve the coefficient matrix, so $H_A = 0$ .\n\nWe also have the QUBO-Plus Decomposition over $\\Pi_j$ :\n\n$$\n\\min _ {\\Pi_ {j}, \\Sigma} H _ {A} \\tag {20}\n$$\n\n$$\n\\text {s . t .} \\quad \\sum_ {j ^ {\\prime} \\in r (j)} \\pi_ {j ^ {\\prime} j ^ {\\prime \\prime}} = 1, \\quad j ^ {\\prime \\prime} \\in r (j) \\tag {21}\n$$\n\n$$\n\\sum_ {j ^ {\\prime \\prime} \\in r (j)} \\pi_ {j ^ {\\prime} j ^ {\\prime \\prime}} = 1, \\quad j ^ {\\prime} \\in r (j) \\tag {22}\n$$\n\n$$\n\\sum_ {i \\in r (i)} \\sigma_ {i i ^ {\\prime}} = 1, \\quad i ^ {\\prime} \\in \\{1, \\dots , m \\} \\tag {23}\n$$\n\n$$\n\\sum_ {i ^ {\\prime} \\in r (i)} \\sigma_ {i i ^ {\\prime}} = 1, \\quad i \\in \\{1, \\dots , m \\} \\tag {24}\n$$\n\n$$\n\\pi_ {j ^ {\\prime} j ^ {\\prime}} = 1, \\quad j ^ {\\prime} \\notin r (j) \\tag {25}\n$$\n\nThis again features $q_{Decomp}$ variables as well as $2|r(j)| + 2m + (n - |r(j)|)$ constraints.\n\n# 3 Experiments\n\n# 3.1 Example\n\nWe begin with a simple knapsack problem as an example MIP instance:\n\n$$\n\\max _ {7} x _ {1} + x _ {2} + x _ {3} + 2 x _ {4} + 2 x _ {5} + 2 x _ {6} + 3 x _ {7}\n$$\n\n$$\nx \\in \\mathbb {Z} ^ {i} \\tag {26}\n$$\n\n$$\n\\text {s . 
t .} \\quad x _ {1} + x _ {2} + 2 x _ {3} + x _ {4} + x _ {5} + x _ {6} + x _ {7} \\leq 1 0 0\n$$\n\nThe problem has the following orbits: $\\{\\pi_1, \\pi_2\\}, \\{\\pi_3\\}, \\{\\pi_4, \\pi_5, \\pi_6\\}, \\{\\pi_7\\}$ . In our full QUBO, we have decision variables $\\pi \\in \\{0, 1\\}^{7 \\times 7}$ and $\\sigma_{1,1}$ . Thus, $q_{Full} =$\n\n$7^{2} + 1^{1} = 50$ . We can, however, reduce the problem based on the following sets of reasonable permutations:\n\n$$\n\\begin{array}{l} r (1) = \\{1, 2 \\} \\\\ r (2) = \\{1, 2 \\} \\\\ r (3) = \\{3 \\} \\\\ r (4) = \\{4, 5, 6 \\} \\\\ r (5) = \\{4, 5, 6 \\} \\\\ r (6) = \\{4, 5, 6 \\} \\\\ r (7) = \\{7 \\} \\\\ \\end{array}\n$$\n\nThe $\\pi$ variables that are excluded are shown in Figure 3. $\\nu = 15$ while $\\mu = 1$ ,\n\n![](images/dbb3f97be775f104c720ca714f2bb299c05799cdc17ddce8d1c7eb90cb98fd43.jpg) \nFig. 3. Visualizations of the Reduced (left) and Decomposed (right) forms.\n\nso $q_{Reduced} = 15 + 1 = 16$ . We can quantify the effectiveness of the reduction by calculating $\\frac{q_{Full}}{q_{Reduced}} = 0.32$ , so the Reduced form requires $32\\%$ as many variables as the Full. Should we require our QUBO to be even smaller, we can use the Decomposed form over any of the sets of reasonable permutations, say $\\Pi_4$ , at the potential cost of dropping certain symmetries. Again, we can visualize which variables are excluded by looking at Figure 3. Here we now have $q_{Decomp} = 13$ , requiring $26\\%$ of the variables of the Full formulation.\n\n# 3.2 Experiments with MIPLIB 2017\n\nTo study how much the Reduced and Decomposed forms of our formulations decrease the number of variables in our QUBOs in practice, we calculate values of $\\nu$ , $\\mu$ , and the size of the largest decomposition, which we refer to as MaxDecomp, for each problem in the MIPLIB 2017 collection. We then use these values to determine how many variables each formulation would require as a percent of the Full formulation. On average, the Reduced form requires $32\\%$ of the number of QUBO variables, while the MaxDecomp form requires $30\\%$ of the number of variables compared to the Full formulation. The full distribution of the percent of $\\pi$ , $\\sigma$ entries needed is shown in Figure 3.2.\n\n![](images/e3e9b3ffec33d4106733e78c7877dbab405e9fc048069e42b7e9014ae4978df9.jpg) \nFig. 4. Distributions of $q_{Reduced}$ (left) and $q_{MaxDecomp}$ (right) as a proportion of $q_{Full}$ within the MIPLIB 2017 Collection Set.\n\n![](images/3b8369ac27d280c3754ccf240259323ce86d407ef2df9b7bbe7f24614107b03c.jpg)\n\nWhen $\\frac{q_{Reduced}}{q_{Full}}$ is closer to 1, more of the MIP variables and constraints could be reasonably permuted. However, the overall size of the QUBO will remain quite large. On the other hand, if $\\frac{q_{Reduced}}{q_{Full}}$ is closer to 0, then the QUBO will be much smaller, but the MIP likely has very little symmetry worth exploiting. Therefore, when working with an MIP problem, the trade-off of QUBO size and potential for symmetry must be balanced.\n\nWe have also calculated a power regression of the form $y = x^{k}$ to estimate $\\nu$ and $\\mu$ as a function of $n$ and $m$ in the test set. We have $\\nu \\approx n^{1.764}$ and $\\mu \\approx m^{1.834}$ . Scatter-plots of the data can be seen in Figure 5.\n\n![](images/dddcb543d096460036c2797623350c97deac7398f329f7b4c04e09829964059f.jpg) \nFig. 5. 
Power regressions of $\\nu$ and $\\mu$ as functions of $n$ and $m$ within the MIPLIB 2017 Collection Set.\n\n![](images/e94771591222e36ac0ae969ac0ef53707f35bf392603fa06d70d3f73021fa197.jpg)\n\n# 3.3 DWave Embeddings\n\nTo use a DWave quantum annealer, the QUBO problem graph must be embedded on the hardware's topology. While we can use Proposition 2 to determine\n\nthe number of qubits needed in a Zephyr graph (with $t = 4$ ), this is a rather loose bound in practice. Thus we apply the find_embedding routine from the minorminer library provided with Dwave software, which is a heuristic for embedding a source graph (our QUBO) to a target graph (an appropriately sized Zephyr graph). For each reduced QUBO, we set the target graph as $Z_{g}$ where $g = \\frac{\\mu + \\mu + 8}{16}$ , the graph for which a $K_{\\nu + \\mu}$ graph could be embedded.\n\nDue to the computation time of the heuristic routine, we were only able to test the embedding heuristic on the 12 smallest embeddings. For the largest four instances within this set, no embedding was found within the routine's time limit. Complete results can be seen in Table 1. On average, the heuristic required $39.8\\%$ the amount of qubits that the complete graph embedding requires.\n\nWe run a regression on the number of physical qubits required to embed each problem on a Zephyr graph, finding that $qubits \\approx 0.98q_{Reduced}^2$ . A stronger correlation was found with the number of QUBO terms, which is reasonable as the connectivity (or lack thereof) is what necessitates the embeddings in the first place. We found that $qubits \\approx 0.324(\\#terms)$ . Visualizations of these regressions can be seen in Figure 6. Based on the formula for the number of qubits required to embed the $K_{q_{Reduced}}$ graph as well as the fact that there are $O(v^2)$ edges on a graph with $v$ vertices, it is not surprising that the required physical qubits are still $O(q_{Reduced}^2)$ and $O(\\#terms)$ . Combining our regressions, for the reduced symmetry formulation of an MIP with $n$ variables and $m$ constraints, we would expect to require a Zephyr topology quantum annealer with $0.98(n^{1.764} + m^{1.834})^2 \\in O(n^{3.528} + m^{3.668})$ qubits.\n\n![](images/67658c185625a6df210fe664236bd4fde1885472a0abc13067d13777b73740fe.jpg) \nFig. 6. Regressions of physical qubits required for embedding on Zephyr graphs vs $q_{Reduced}$ (left), where the dashed line is the bound described in Proposition 2, and the number of QUBO terms (right)\n\n![](images/9c53f78df8be3e9d1598a8143a125216442899f9d3fbf7810b3dc8e00763182a.jpg)\n\n# References\n\n1. Aaronson, S.: Bqp and the polynomial hierarchy. In: Proceedings of the forty-second ACM symposium on Theory of computing. pp. 141-150 (2010) \n2. Aaronson, S., Bouland, A., Fitzsimons, J., Lee, M.: The space\" just above\" bqp. In: Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. pp. 271-280 (2016) \n3. Abbas, A., Ambainis, A., Augustino, B., Bärtschi, A., Buhrman, H., Coffrin, C., Cortiana, G., Dunjko, V., Egger, D.J., Elmegreen, B.G., et al.: Challenges and opportunities in quantum optimization. Nature Reviews Physics pp. 1-18 (2024) \n4. Abbott, A.A., Calude, C.S., Dinneen, M.J., Hua, R.: A hybrid quantum-classical paradigm to mitigate embedding costs in quantum annealing. International Journal of Quantum Information 17(05), 1950042 (Aug 2019). https://doi.org/10.1142/s0219749919500424, http://dx.doi.org/10.1142/S0219749919500424 \n5. 
Ajagekar, A., Al Hamoud, K., You, F.: Hybrid classical-quantum optimization techniques for solving mixed-integer programming problems in production scheduling. IEEE Transactions on Quantum Engineering 3, 1-16 (2022) \n6. Babai, L.: Group, graphs, algorithms: the graph isomorphism problem. In: Proceedings of the International Congress of Mathematicians: Rio de Janeiro 2018. pp. 3319-3336. World Scientific (2018) \n7. Bödi, R., Herr, K., Joswig, M.: Algorithms for highly symmetric linear and integer programs. Mathematical Programming 137(1-2), 65-90 (2013) \n8. Bolusani, S., Besançon, M., Bestuzheva, K., Chmiela, A., Dionisio, J., Donkiewicz, T., van Doornmalen, J., Eifler, L., Ghannam, M., Gleixner, A., et al.: The scip optimization suite 9.0. arXiv preprint arXiv:2402.17702 (2024) \n9. Boothby, K., King, A.D., Raymond, J.: Zephyr topology of d-wave quantum processors. Tech. rep., DWave (2021) \n10. Boothby, T., King, A.D., Roy, A.: Fast clique minor generation in chimera qubit connectivity graphs. Quantum Information Processing 15, 495-508 (2016) \n1. Boudot, F., Gaudry, P., Guillevic, A., Heninger, N., Thorne, E., Zimmermann, P.: Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment. In: Annual International Cryptology Conference. pp. 62-91. Springer (2020) \n2. Brown, R., Bernal Neira, D.E., Venturelli, D., Pavone, M.: A copositive framework for analysis of hybrid ising-classical algorithms. SIAM Journal on Optimization 34(2), 1455-1489 (2024) \n3. Calude, C.S., Dinneen, M.J., Hua, R.: Qubo formulations for the graph isomorphism problem and related problems. Theoretical Computer Science 701, 54-69 (2017) \n4. Carlson, B., Chen, Y., Hong, M., Jones, R., Larson, K., Ma, X., Nieuwsteeg, P., Song, H., Sperry, K., Tackett, M., et al.: Miso unlocks billions in savings through the application of operations research for energy and ancillary services markets. Interfaces 42(1), 58-73 (2012) \n5. Developers, D.W.: D-wave hybrid solver service: An overview. D-Wave Systems Inc., Tech. Rep (2020) \n6. Ellinas, P., Chevalier, S., Chatzivasileiadis, S.: A hybrid quantum-classical algorithm for mixed-integer optimization in power systems. Electric Power Systems Research 235, 110835 (2024)\n\n17. Fielder, M.: Doubly stochastic matrices and optimization. Mathematical research 45, 44-51 (1988) \n18. Glover, F., Kochenberger, G., Hennig, R., Du, Y.: Quantum bridge analytics i: a tutorial on formulating and using qubo models. Annals of Operations Research 314(1), 141-183 (2022) \n19. Glover, F., Kochenberger, G., Ma, M., Du, Y.: Quantum bridge analytics ii: Quboplus, network optimization and combinatorial chaining for asset exchange. Annals of Operations Research 314(1), 185-212 (2022) \n20. Hua, R., Dinneen, M.J.: Improved qubo formulation of the graph isomorphism problem. SN Computer Science 1, 1-18 (2020) \n21. Jain, R., Ji, Z., Upadhyay, S., Watrous, J.: Qip= pspace. Journal of the ACM (JACM) 58(6), 1-27 (2011) \n22. Jun, K., Lee, H.: Hubo and qubo models for prime factorization. Scientific Reports 13(1), 10080 (2023) \n23. Jünger, M., Lobe, E., Mutzel, P., Reinelt, G., Rendl, F., Rinaldi, G., Stollenwerk, T.: Quantum annealing versus digital computing: An experimental comparison. Journal of Experimental Algorithmics (JEA) 26, 1-30 (2021) \n24. Junttila, T., Kaski, P.: Engineering an efficient canonical labeling tool for large and sparse graphs. In: 2007 Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments (ALENEX). pp. 135-149. SIAM (2007) \n25. 
Liberti, L.: Reformulations in mathematical programming: automatic symmetry detection and exploitation. Mathematical Programming 131, 273-304 (2012) \n26. Lubinski, T., Coffrin, C., McGeoch, C., Sathe, P., Apanavicius, J., Bernal Neira, D., Consortium, Q.E.D., et al.: Optimization applications as quantum performance benchmarks. ACM Transactions on Quantum Computing 5(3), 1-44 (2024) \n27. Lucas, A.: Ising formulations of many np problems. Frontiers in Physics 2(5) (2014). https://doi.org/https://10.3389/fphy.2014.00005 \n28. Margot, F.: Symmetry in integer linear programming. 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art pp. 647-686 (2009) \n29. McKay, B.D.: Nauty user's guide (version 2.4). Computer Science Dept., Australian National University pp. 225-239 (2007) \n30. Mohammadisiahroudi, M., Wu, Z., Augustino, B., Carr, A., Terlaky, T.: Improvements to quantum interior point method for linear optimization. ACM Transactions on Quantum Computing 6(1), 1-24 (2025) \n31. Montanaro, A.: Quantum speedup of branch-and-bound algorithms. Physical Review Research 2(1), 013056 (2020) \n32. Nannicini, G.: Performance of hybrid quantum-classical variational heuristics for combinatorial optimization. Physical Review E 99(1), 013304 (2019) \n33. Nannicini, G.: Fast quantum subroutines for the simplex method. Operations Research 72(2), 763-780 (2024) \n34. Ostrowski, J., Anjos, M.F., Vannelli, A.: Modified orbital branching for structured symmetry with an application to unit commitment. Mathematical Programming 150, 99-129 (2015) \n35. Paterakis, N.G.: Hybrid quantum-classical multi-cut benders approach with a power system application. Computers & Chemical Engineering 172, 108161 (2023) \n36. Peng, W., Wang, B., Hu, F., Wang, Y., Fang, X., Chen, X., Wang, C.: Factoring larger integers with fewer qubits via quantum annealing with optimized parameters. SCIENCE CHINA Physics, Mechanics & Astronomy 62(6), 60311 (2019)\n\n37. Pfetsch, M.E., Rehn, T.: A computational comparison of symmetry handling methods for mixed integer programs. Mathematical Programming Computation 11, 37-93 (2019) \n38. Proctor, T., Young, K., Baczewski, A.D., Blume-Kohout, R.: Benchmarking quantum computers. Nature Reviews Physics 7(2), 105-118 (2025) \n39. Quinton, F.A., Myhr, P.A.S., Barani, M., Crespo del Granado, P., Zhang, H.: Quantum annealing applications, challenges and limitations for optimisation problems compared to classical solvers. Scientific Reports 15(1), 12733 (2025) \n40. Shor, P.: Algorithms for quantum computation: discrete logarithms and factoring. In: Proceedings 35th Annual Symposium on Foundations of Computer Science. pp. 124-134 (1994). https://doi.org/10.1109/SFCS.1994.365700 \n41. Svensson, M., Andersson, M., Gronkvist, M., Vikstål, P., Dubhashi, D., Ferrini, G., Johansson, G.: Hybrid quantum-classical heuristic to solve large-scale integer linear programs. Physical Review Applied 20(3), 034062 (2023) \n42. Wang, Y., Shen, Y., Zhang, Z., Wan, L.: Rbm-based simulated quantum annealing for graph isomorphism problems. arXiv preprint arXiv:2503.07749 (2025) \n43. Wei, X., Liu, J., Fan, L., Guo, Y., Han, Z., Wang, Y.: Hybrid quantum-classical computing via dantzig-wolfe decomposition for integer linear programming. In: 2024 33rd International Conference on Computer Communications and Networks (ICCCN). pp. 1-9. IEEE (2024) \n44. Willsch, D., Willsch, M., Jin, F., De Raedt, H., Michielsen, K.: Large-scale simulation of shor's quantum factoring algorithm. 
Mathematics 11(19), 4222 (2023) \n45. Woerner, S., Nannicini, G., Barkoutsos, P., Tavernelli, I.: Solving mixed integer optimization problems on a hybrid classical-quantum computing system (Jan 26 2021), uS Patent 10,902,085 \n46. Zhao, Z., Fan, L., Han, Z.: Hybrid quantum benders' decomposition for mixed-integer linear programming. In: 2022 IEEE Wireless Communications and Networking Conference (WCNC). pp. 2536-2540. IEEE (2022)\n\n# A Results of Heuristic Embeddings\n\nTable 1. Results of embedding our Reduced Symmetry Detecting QUBO on the smallest MIPLIB 2017 instances on Zephyr topology with $t = 4$ \n\n<table><tr><td>Problem</td><td>n</td><td>m</td><td>ν</td><td>μ</td><td>qReduced</td><td># QUBO Terms</td><td>g</td><td>Qubits (Heuristic)</td><td>Qubits Kν+μ</td><td>Proportion</td></tr><tr><td>ej</td><td>3</td><td>1</td><td>5</td><td>1</td><td>6</td><td>18</td><td>1</td><td>7</td><td>39</td><td>18%</td></tr><tr><td>flugpl</td><td>18</td><td>18</td><td>80</td><td>72</td><td>152</td><td>7066</td><td>10</td><td>2166</td><td>3360</td><td>64%</td></tr><tr><td>flugplinf</td><td>18</td><td>19</td><td>80</td><td>73</td><td>153</td><td>7139</td><td>11</td><td>2517</td><td>3402</td><td>74%</td></tr><tr><td>gen-ip016</td><td>28</td><td>24</td><td>28</td><td>24</td><td>52</td><td>706</td><td>4</td><td>148</td><td>510</td><td>29%</td></tr><tr><td>gen-ip036</td><td>29</td><td>46</td><td>29</td><td>46</td><td>75</td><td>1516</td><td>6</td><td>334</td><td>945</td><td>35%</td></tr><tr><td>gen-ip054</td><td>30</td><td>27</td><td>30</td><td>27</td><td>57</td><td>843</td><td>5</td><td>180</td><td>594</td><td>30%</td></tr><tr><td>gen-ip021</td><td>35</td><td>28</td><td>35</td><td>28</td><td>63</td><td>1036</td><td>5</td><td>220</td><td>702</td><td>31%</td></tr><tr><td>gen-ip002</td><td>41</td><td>24</td><td>41</td><td>24</td><td>65</td><td>1161</td><td>5</td><td>265</td><td>740</td><td>36%</td></tr></table>"}
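For readers who want to see how the penalty terms of Section 2.2 turn into an explicit QUBO coefficient matrix, the sketch below assembles $H_{B,\pi}$, $H_{B,\sigma}$, $H_{\pi}$, $H_{\sigma}$, and $H_{A}$ for a tiny hypothetical MIP. This is a minimal illustration under simplifying assumptions, not the authors' implementation: reasonable permutations are determined only by matching objective coefficients and right-hand sides, and the constant offsets of the squared penalties are dropped.

```python
# Minimal sketch of the Full symmetry-detection QUBO as a coefficient
# dictionary (the format accepted by common QUBO samplers). The tiny MIP
# below (c, b, A) is a hypothetical example.
from collections import defaultdict
from itertools import product

c = [1.0, 1.0, 2.0]          # objective coefficients (n = 3 variables)
b = [10.0]                   # right-hand sides       (m = 1 constraint)
A = [[1.0, 1.0, 2.0]]        # constraint matrix

n, m = len(c), len(b)
pi = {(j, jp): ("pi", j, jp) for j in range(n) for jp in range(n)}
sigma = {(i, ip): ("sg", i, ip) for i in range(m) for ip in range(m)}
Q = defaultdict(float)

def add_one_hot(varlist):
    """Add (sum(varlist) - 1)^2 to Q, dropping the constant offset."""
    for a in varlist:
        Q[(a, a)] += -1.0            # x^2 (= x) term combined with -2x
    for idx, a in enumerate(varlist):
        for bvar in varlist[idx + 1:]:
            Q[(a, bvar)] += 2.0      # cross terms 2 * x_a * x_b

# H_{B,pi} and H_{B,sigma}: rows and columns of pi, sigma must sum to one.
for j in range(n):
    add_one_hot([pi[(j, jp)] for jp in range(n)])
for jp in range(n):
    add_one_hot([pi[(j, jp)] for j in range(n)])
for i in range(m):
    add_one_hot([sigma[(i, ip)] for ip in range(m)])
for ip in range(m):
    add_one_hot([sigma[(i, ip)] for i in range(m)])

# H_pi and H_sigma: penalize unreasonable permutations (here, simplified to
# matching objective coefficients / right-hand sides only).
for j, jp in product(range(n), repeat=2):
    if c[j] != c[jp]:
        Q[(pi[(j, jp)], pi[(j, jp)])] += 1.0
for i, ip in product(range(m), repeat=2):
    if b[i] != b[ip]:
        Q[(sigma[(i, ip)], sigma[(i, ip)])] += 1.0

# H_A: penalize mapping A_ij onto a differently valued A_{i'j'}.
for i, ip in product(range(m), repeat=2):
    for j, jp in product(range(n), repeat=2):
        if A[i][j] != A[ip][jp]:
            Q[(sigma[(i, ip)], pi[(j, jp)])] += 1.0

print(f"{n*n + m*m} QUBO variables, {len(Q)} nonzero terms")
```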
# A Context-Free Smart Grid Model Using a Complex System Approach

Abstract—Energy and pollution are urgent problems of the 21st century. As the existing power grid is gradually transformed, the smart grid may evolve into different systems in terms of size, elements, and strategies, but its fundamental requirements and objectives, such as optimizing production, transmission, and consumption, will not change. Studying the smart grid through modeling and simulation provides valuable results that cannot be obtained in the real world because of time and cost constraints. Moreover, owing to the complexity of the smart grid, achieving global optimization is not an easy task. In this paper, we propose a complex-system-based approach to smart grid modeling, focusing on optimization by combining game-theoretic and classical methods at different levels. Thanks to this combination, optimization can be achieved with flexibility and scalability while the approach remains general.

# I. INTRODUCTION

Our society depends on electricity. The electrical grid supplies energy to households, businesses, and industries, but disturbances and blackouts are becoming common. Under the pressure of ever-increasing energy demand and climate change, finding new energy resources and enhancing energy efficiency have become priorities for many nations in the 21st century.

The term "smart grid" was coined by Amin in 2005. A smart grid is an electrical grid that attempts to predict and intelligently respond to the behavior and actions of all electric power users connected to it (suppliers, consumers, and those that do both) in order to deliver reliable, economic, and sustainable electricity services efficiently. The expression "Smart Grid" has since expanded into different dimensions: some see it as a numerical solution for downstream metering and mostly residential customers, while others believe it is a global system vision that transcends the current structure of the energy market to generate economic, environmental, and social benefits for everyone. Smart Grid is thus a fuzzy concept with various definitions in the literature. It can nevertheless be defined according to the main requirements of an energy network: the Smart Grid should integrate information and communication technologies to generate, transport, distribute, and consume energy more efficiently. In addition, the network should have the following main properties: self-healing, flexibility, predictability, interactivity, optimality, and safety. Moreover, the Smart Grid should improve reliability, reduce peak demand, and equalize energy consumption.

Research is being conducted to attain these objectives, but many modeling and coordination problems hamper progress. Each model offers its own vision of the smart grid, setting aside the theoretical and technological advances of the others, and cooperation between smart technologies and the existing infrastructure is often neglected in scientific and industrial studies. It has been argued that an electrical grid which allows adjustments on both the supply and demand sides will improve efficiency, reduce costs on both sides, and benefit the environment. Taking all these internal and external features into account, the Smart Grid can be defined as a complex system. The contribution of our approach consists in treating the smart grid as a complex system, locating problems at both the local and global levels, and solving them with coordinated methods.
In other words, by studying and analyzing the smart grid, we isolate homogeneous parts with similar behaviors or objectives and apply classical optimization algorithms at different levels in a coordinated way. By combining these interdependent methods, our approach guarantees flexibility with respect to system size. Furthermore, the generality of the approach allows it to be applied to different scenarios and models.

This paper is organized as follows: in the next section, the concept of a complex system is introduced and theoretical approaches to modeling such systems are discussed. In Section 3, we present the details of our global Smart Grid model based on the complex system approach, and in Section 4 we present the search for a global consensus between supply and demand. We discuss our perspectives and first results in Section 5.

# II. COMPLEX SYSTEM APPROACH

A system consisting of a large population of connected agents, or a collection of interacting elements, is said to be complex if there exists an emergent global dynamics resulting from the actions of its parts rather than being imposed by a central controller. This is a self-organizing collective behavior that is difficult to anticipate from knowledge of local behavior alone. Complex system studies embrace not only the traditional disciplines of science but also engineering, management, and medicine.

The majority of studies on Smart Grids use a top-down approach. The Smart Grid is broken into basic issues (optimization, network structure, communication technologies, and security) and main components (users, i.e. consumers and producers, energy, controllers, and data). Next, a global objective function is defined. The simulation then returns a global solution without taking local constraints into account. Applying an optimization method to a complex system in such a global manner is nearly impossible: complex systems are composed of heterogeneous parts; it is hard to identify all the variables that matter; and even if all variables are included, the complexity of the objective function exceeds the available computing power. In addition, every organized activity exhibits a conflict between two fundamental requirements: the allocation of resources to various tasks and the coordination of these tasks to accomplish the global mission. When modeling complex systems, bottom-up analysis, also called the systemic approach, gives a more complete, realistic, and comprehensive vision.

The smart grid can be qualified as a complex system because of its heterogeneous actors, the dynamic and complex interactions among them, and its global behaviors such as self-healing and self-organization. Based on this observation, we analyze the smart grid methodically in order to understand its mechanisms and internal components as well as the needs of every sub-component.

As a first step, we must understand the system. An overview brings out its structural aspects, entities, and objectives. All these elements are considered agents. They are not randomly distributed in the system but follow patterns, forming distinct groups with their own arrangements. In the smart grid, three types of behavior are distinguished: consumer, producer, and transporter. The result is a hierarchy, or a food chain. After analyzing the characteristics of the system, we define the sub-components. A sub-component has a structure, objectives, and specific entities, although their quantities and positions in the system may vary.
As a separate system, each sub-component has its own dynamics and can therefore be solved with an appropriate optimization method. Consumers are generally located at the end of the chain, in a tree network. The top producers are located in a mesh network, reinforcing the grid, and are linked to the consumers by linear chains. Since the sub-components interact, the input-output data of each method must be taken into account.

The stability of the model depends on localized optimization: each part of the chain must be optimized, as well as the whole, to stabilize the system. If only consumers are optimized, all their devices will receive energy; if only producers are optimized, they will produce the minimum at minimum cost. To prevent system crashes, the model must include a communication mechanism to reach a global consensus. Moreover, the system is subject to external pressure. Feedback between sub-components is essential to maintain functionality and to find local and global equilibria. These mechanisms work like homeostasis in the natural sciences: a balance between the internal and external environments. The Smart Grid, through its self-organization, self-healing, and resource optimization at every scale, exhibits a similar phenomenon, which is why the search for a global consensus is essential.

In summary, we analyze the system to determine its sub-components. These have a communication mechanism and their own optimization methods, while global criteria ensure balance in the system. Local optimization and global consensus together constitute a decentralized optimization of our complex system, guaranteeing individual and collective benefits.

# III. MODELING

The problems of electrical networks have long been known, and both research and industrial work have been carried out to find effective and competitive solutions. Nevertheless, these efforts are often concentrated on specific cases, and the solutions are equally specific, leaving no room for evolution. Among the proposed solutions we can mention:

- Distributed generation/microgrids: since centralized optimization is very costly in terms of time and memory, optimization should be done at all levels. Microgrids can turn the centralized interface into a distributed one, so optimization can be carried out in a distributed manner. The resulting gains in computation time and memory are significant, while optimality is ensured at different scales.
- Design of intelligent networks (home automation): domotics and smart devices provide real-time data and are controllable by the user or a smart meter. By optimizing local consumption, they optimize overall consumption as a result.
- Energy storage devices: energy storage, coupled with end-to-end energy optimization, regulates consumption and clears consumption peaks.
- Reduction of transmission and distribution (T&D) network losses by automated distribution: one of the strong points of our model is the optimization of distribution by local and global algorithms, which reduces losses due to congestion or routing errors.
- Intelligent pricing: when the network becomes intelligent, consumer prices must also be able to change in order to follow the new consumption behavior.

Many of the theories used to optimize the smart grid come from complex system analysis. The sub-components, the optimization methods, and the theory needed to reach an optimal consensus are defined in these articles.

# A. Global objective function

The overall mathematical problem is similar to a knapsack problem.
The objective function is under multiple temporal, spatial and physical constraints, such as the granularity of the study, local optimization (routing, distribution, consumption), as well as variable consumption and production over time.

The general 0-1 knapsack problem applied to smart grids is:

$$
\left\{ \begin{array}{ll} \text{maximize} & \sum_{i=1}^{n} x_i u_i, \\ \text{subject to} & \sum_{i=1}^{n} x_i w_i \leq W \end{array} \right.
$$

where $x_i = 1$ if device $i$ is selected, else 0; $u_i$ represents the utility of device $i$ and $w_i$ its consumption in $Watt * hour$; $W$ is the total energy produced in $Watt * hour$; there are $n$ devices. Moreover, several quadratic or linear constraints due to the complex system are added (routing, minimal values, cost, etc.).

Fig. 1. Knapsack problem for Smart Grid.

This problem, see Figure 1, is too hard to be solved at very large scale - millions of items - in a few minutes. Criteria or constraints should be satisfied throughout the algorithms and the process. Is decomposing the global problem and assigning it to various computers connected to a single network equivalent to computing the global solution on a single machine? In distributed algorithms, all machines have the same role. We notice that all levels have the same overall goal, but each uses specific algorithms. In other words, our process is similar to a distributed algorithm.

# B. A three layered grid

The Smart Grid has three sub-components, each having a distinct structure, dynamics and behavior: the transmission and distribution network (T&D), the microgrid and the local level.

Fig. 2. Smart Grid sub-components (from PowerMatrix, Siemens).

The first level is the only fully connected one, forming a single group, represented by the center, see Figure 2. The Transmission and Distribution network (T&D) must deliver energy from producers to points of consumption. Energy flows on electric cables, with various criteria and technical constraints limiting the amount of energy that can circulate on each. The algorithm at this level should be able to limit the effects of congestion due to the widespread use of a few lines, while limiting the cost of routing energy. Production and consumption must match as well as possible; to achieve this, we must deliver most of the energy while satisfying most of the consumers.

The second level is the link between consumption and energy production, represented by the second ring, see Figure 2. The microgrid is a broader view of local consumers: it is a structure representing an eco-district bounded by the upstream substation. Its role is to distribute energy from the substation to consumers. For this, it books an amount of energy from the T&D network.

The outer ring represents local levels, see Figure 2. The local level models consumers, i.e. a group of consuming devices, local renewables or electric vehicles requesting or providing a measurable amount of energy. These isolated structures, like residences or factories, manage the consumption of energy, i.e. the distribution of energy among the appliances under their responsibility. The energy distribution is a dynamic programming resolution of a knapsack problem: the objects are devices, the weight is the energy received at the local level, and the utility is a function of demand-side management strategies.

# C. Iteration process

An iteration occurs every five minutes. Once data are updated, the process is decomposed into four sequences, see Figure 3.

Fig. 3. Sequential Scheme.
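For illustration, the following is a minimal dynamic-programming sketch of the 0-1 knapsack of Section III-A, which reappears at the local level in Sequences A and D below. The device utilities and consumptions are invented, and the normalization of weights assumed by the model is mimicked here simply by using small integers.

```python
def knapsack_01(utilities, weights, capacity):
    """Maximize total utility of selected devices subject to the energy budget.

    utilities[i] : utility u_i of device i
    weights[i]   : consumption w_i of device i (integer, e.g. normalized Wh)
    capacity     : total energy W available
    Returns (best utility, list of selected device indices).
    """
    n = len(utilities)
    best = [0] * (capacity + 1)                     # best[c] = best utility with budget c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, weights[i] - 1, -1):   # reverse order: 0-1, not unbounded
            if best[c - weights[i]] + utilities[i] > best[c]:
                best[c] = best[c - weights[i]] + utilities[i]
                keep[i][c] = True
    # Backtrack to recover the selected devices
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return best[capacity], chosen[::-1]

# Five hypothetical devices, 10 energy units available
print(knapsack_01([81, 80, 83, 75, 20], [1, 1, 3, 5, 20], 10))  # -> (319, [0, 1, 2, 3])
```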
Sequence A: to model the intelligent aspect of the devices, a priority is assigned to these dynamic entities and a consumption value is calculated. Indeed, we use a local knapsack problem, solved by dynamic programming after data normalization, to find a first optimal resource allocation. The knapsack is solved by dynamic programming; its complexity is $O(n * C_0)$, where $n$ is the number of devices and $C_0$ is the energy received. It is reduced to $O(n)$ if the prognostics are correct. Given the number of devices and the size of the bag, the optimal solution of a 0-1 knapsack by dynamic programming is obtained almost instantly with normalized weights.

Sequence B: this sequence aims to book an amount of energy from producers to consumers using an auction. There are two ways to book energy: a consensus between consumers and production, i.e. a game where microgrids and energy flows are the players; and a bid system with feedback. The drawback of the first is its complexity, which makes it impossible to solve in a short time. The second is fast, but does not guarantee the global optimum. During an auction, it is likely that the energy required does not correspond to any new consumption strategy, i.e. the local level consumes just as much with or without it. In addition, it is possible to search for the nearest consumer at the lowest cost. The number of possible strategies is infinite, so we must look at the impact of each of them on the microgrid and on the final decision.

Sequence C: concerning the routing problem, the nodal rule (Kirchhoff's current law) specifies that at any node in a circuit, the sum of the currents flowing into that node is equal to the sum of the currents flowing out of it. An electrical circuit is equivalent to a graph in which a junction is a node and a physical connection corresponds to an edge. The routing problem is therefore equivalent to the well-known max-flow problem. Gale's theorem shows the existence of a solution in a network of offers and requests. The flow of the previous iteration is maximal by Ford-Fulkerson, so recalculating the entire flow is not necessary. The residual graph removes excess flows between two updates, optimizing the computation time of the optimal flow, see Figure 4. It is also possible to compute a maximum flow with minimal cost and minimal flow on edges using the Busacker-Gowen algorithm, but this algorithm cannot reuse the previous and updated graph solutions. If supply and demand do not match, the algorithm analyzes the bottlenecks by running Ford-Fulkerson on two schemes: infinite production and infinite consumption. These data are also used to calculate prognostics. The complexity is $O(A * f)$ where $A$ is the number of edges and $f$ is the maximum flow. The residual graph reduces $f$ to the sum of the local differences in production and consumption.

Sequence D: energy is distributed by the knapsack problem, according to the auctions. The unconsumed energy is redistributed among unused devices at the upper scale. The devices' priorities are updated according to the result of the final distribution. At worst, the complexity is $O(n * K)$, with $n$ the number of devices and $K$ the energy received.

# D. Global direction

Studies conducted by Barabasi and Watts identified four general principles of distributed adaptive systems, and more generally of complex systems:

1) Global information is encoded as statistics and dynamic patterns in the components of the system.
2) Chance and probabilities are essential.
3) The system performs a parallel search of opportunities.
4) The system has a continuous interaction.
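Returning to Sequence C, the routing step can be pictured with a minimal Edmonds-Karp (BFS-based Ford-Fulkerson) sketch on an invented toy graph. The reuse of the residual graph between iterations, on which the model relies, is not shown here.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) on a dict-of-dicts capacity graph."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)   # add reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                      # no augmenting path left: flow is maximum
        # Bottleneck capacity along the path found
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Augment: push the bottleneck along the path and update residual capacities
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy routing graph: producer 'P' feeds consumer 'C' through two substations
grid = {"P": {"s1": 8, "s2": 5}, "s1": {"C": 6}, "s2": {"C": 4}, "C": {}}
print(max_flow(grid, "P", "C"))  # 10
```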
The principles of electricity generation and distribution are well known. Synchronization of the system means that each station and each piece of equipment runs on the same clock, which is crucial for its proper functioning. Cascading failures related to desynchronization can lead to massive power outages. In the smart grid, a technical control automates the management of energy. Real-time data must be converted into information quickly enough so that errors are diagnosed in time, corrective actions are identified and executed dynamically in the network, and feedback loops provide measures to ensure that the performed actions and the production are consistent.

Fig. 4. Updating of routing.

An arbitrary configuration generates a random pattern, without prognostics. As a result of the first auction, a gap between consumption and production occurs. Feedback adjusts supply and demand, for example through renewable energies or electric vehicle management. It is easier to vary demand than supply, so supply and demand tend toward the same value, see Figure 5. There are many steady states; economic and optimality criteria are considered in the final solution, both seeking minimal cost.

Fig. 5. Supply and demand consensus.

Global regulation must be done both at the consumer and producer levels, taking into account the difficulties of routing. To increase the effectiveness of this method, it is assumed that the front end of the infrastructure is home automation and that every device can be controlled separately by the user and by the regulation algorithms. Regulation is an overview of the Smart Grid aiming to smooth the production curve. We can discern three types of regulation:

1) Mathematical regulation: mathematical tools are introduced to smooth the consumption curve (standard, planning according to the derivative, gradient and barycenter, etc.).
2) Regulation by self-stabilization: the criteria for regulating the curve are applied at any point of the smart grid. Some technologies are already in effect, such as dynamic pricing systems or consumer subscriptions.
3) Hybrid regulation: this type of regulation is based on both the mathematical and the self-stabilizing approaches. Its main advantage is to minimize the risks associated with either method.

The model is currently based on mathematical regulation.

# IV. EQUILIBRIUM BETWEEN SUPPLY AND DEMAND

# A. Demand-side management

In order to predict consumption, the Smart Grid will allow customers to make informed decisions about their energy consumption, adjusting both the timing and the quantity of their electricity use. This ability to control usage is called demand-side management (DSM). In the literature, DSM programs have two goals: demand-response programs for energy efficiency, and load shifting, which schedules production and consumption over the long term, see Figures 6 and 7.

Fig. 6. Load shifting strategies.

Initially, energy conservation programs encourage customers to give up some energy use in return for saving money, such as turning up the thermostat a few degrees in summer to reduce air conditioning. Additional gains in energy efficiency are possible through technologies that can provide targeted education or real-time verification of customer demand reduction. However, consumer behavior is too sporadic to be represented in the model by variables. It is nevertheless possible to describe the desired effects at the microgrid level through strategies. So, the auction will be based on a multitude of strategies for each consumer.
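The feedback described in Section III-D, by which demand is nudged toward the available supply over successive rounds (Figure 5), can be pictured with a toy loop. This is purely illustrative: in the actual model, demand changes by re-selecting DSM strategies rather than by being scaled uniformly, and the step size and tolerance below are invented.

```python
def consensus(supply, demand, step=0.5, tol=1e-3, max_rounds=100):
    """Iteratively move demand toward the available supply.

    Demand is the adjusted quantity (it is easier to vary demand than supply);
    each round mimics one feedback between the microgrids and the T&D level.
    """
    for k in range(max_rounds):
        gap = supply - demand
        if abs(gap) < tol:
            return demand, k
        demand += step * gap   # shift part of the gap onto the demand side
    return demand, max_rounds

final_demand, rounds = consensus(supply=950.0, demand=1100.0)
print(round(final_demand, 2), rounds)
```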
Demand-response programs and load shifting transfer customer load from periods of high demand to off-peak periods. Shifting daily peak demand flattens the load curve, allowing more electricity to be provided by less expensive base-load generation.

Fig. 7. Load shifting based on electric vehicles.

DSM programs have existed across the globe since the 1970s and are an active field of research. California utilities have used such programs to hold per-capita energy consumption nearly constant over the past 30 years. DSM programs incorporate some or all of the following six levers:

1) Rates or utility tariffs. Not yet implemented; we currently work with economists to define how to price energy (depending on producers), and the impact of DSM on consumer and producer prices.
2) Incentives. Consumers' participation in demand-side management programs is simulated by strategies. The impact on priority and price is not fixed and depends on the case.
3) Access to information. The local level, the microgrid and the T&D level have access to information, and algorithms allow energy management.
4) Utility controls, simulated by priorities and the algorithmic process.
5) Education and marketing.
6) Customer insight and verification, see subsection IV-C.

# B. Strategies based on utility

To explain the role of strategies, a simplified case will serve as an example. Let five houses (H1 to H5) be in the same microgrid, see Table I. The table provides the devices in every home with their variables, in order: consumption, operating priority (0 meaning already in use) and the knapsack value. The value of device $i$, noted $u_i$, in each house is calculated as follows: let $w_{max}$ be the greatest consumption in the house and $p_{max}$ the greatest priority in the house; for each device $i$ in the house, $u_{i} = (w_{max} * p_{max}) - (w_{i} * p_{i}) + w_{i}$. For instance, in house H1, $w_{max} = 20$ and $p_{max} = 4$, so its first device (consumption 1, priority 0) has value $80 - 0 + 1 = 81$. The table also presents the forecast (Fc.) and the minimal energy required (devices in bold).

TABLE I. CONSUMPTION IN THE MICROGRID.

<table><tr><td></td><td>H1</td><td>H2</td><td>H3</td><td>H4</td><td>H5</td></tr><tr><td rowspan="6">Dev.</td><td>1/0/81</td><td>1/0/16</td><td>1/0/0</td><td>1/0/33</td><td>1/0/0</td></tr><tr><td>1/1/80</td><td>1/0/16</td><td>1/0/0</td><td>1/1/32</td><td>3/0/0</td></tr><tr><td>3/0/83</td><td>2/1/15</td><td>10/0/0</td><td>3/0/35</td><td></td></tr><tr><td>5/2/75</td><td>3/0/18</td><td></td><td>3/2/29</td><td></td></tr><tr><td>20/4/20</td><td>4/3/7</td><td></td><td>4/1/32</td><td></td></tr><tr><td></td><td>5/3/5</td><td></td><td>8/4/8</td><td></td></tr><tr><td>Fc.</td><td>4</td><td>6</td><td>12</td><td>8</td><td>6</td></tr><tr><td>Min.</td><td>5</td><td>7</td><td>12</td><td>9</td><td>4</td></tr></table>

Information is sent to the microgrid. Let $l$ be the utility of the strategy for the consumers, and $r$ for the producer. Different DSM strategies are defined for each house:

1) Basic consumption: consumption of all devices, all combinations possible. In the example, combinations are based on the priority of the devices. Utilities are calculated as follows: $l = \sum_{i=1}^{n} \frac{u_i * w_i}{p_i}$ for each device $i$ in this strategy; $r = \sum_{i=1}^{n} \left( \frac{u_i}{p_i} - \alpha \right) * w_i$ with $\alpha$ the average utility for a unit of consumption.
2) Peak shaving. Priority has an exponential impact on the utilities of the consumer and the producer.
3) Conservation. All utilities depend on priority, except those of devices that can provide energy.
4) Load shifting. Utilities depend on the average time of consumption of all devices and on the total amount of energy needed.
5) Over-production. The priority of batteries is reduced, in order to recharge them.
6) Over-consumption. If possible, batteries supply energy, and home automation reduces its consumption.

We present the results of simplified basic consumption strategies in Table II. Each strategy is based on a priority level, i.e. all devices with a priority equal to or lower than that level are taken into account. Results are presented as follows: house value, noted $r$ / distribution value, noted $l$. At the end of the table, the final consumption (after three feedback rounds) is shown. It is calculated with the strategy maximizing $r + l$.

TABLE II. STRATEGIES OF THE MICROGRID.

<table><tr><td>Priority</td><td>H1</td><td>H2</td><td>H3</td><td>H4</td><td>H5</td></tr><tr><td>0</td><td>330/194</td><td>77/27</td><td>Done</td><td>138/47</td><td>Done</td></tr><tr><td>1</td><td>410/240</td><td>107/37</td><td></td><td>298/56.5</td><td></td></tr><tr><td>2</td><td>620/257</td><td>none</td><td></td><td>341/32.5</td><td></td></tr><tr><td>3</td><td>none</td><td>125/-36</td><td></td><td>none</td><td></td></tr><tr><td>4</td><td>720/-322</td><td>none</td><td></td><td>357/-131</td><td></td></tr><tr><td>5</td><td>none</td><td>none</td><td></td><td>none</td><td></td></tr><tr><td>Final</td><td>10</td><td>7</td><td>12</td><td>12</td><td>4</td></tr></table>

Currently, strategies are unilateral: only the behavior of the local level is taken into account. To avoid many feedback rounds, the market economy is studied in order to reach the final result in a few games. The aim is not to favor the producer but to plan the routing during a cooperative game.

# C. Economic goals

Like any investment decision in technology, the benefits need to exceed the costs. Making the value case for smart grid investments is complicated by at least two characteristics. First, smart grid assets contribute to more than one value stream. Making a value determination for smart grid investments usually requires recognizing and accounting for the benefits from multiple value streams to offset the investment costs of technology deployment. Second, several of these value streams can be difficult to quantify financially. Reliability is traditionally something that is set by regulation and best practice and implemented as a necessary cost of providing electricity. Determining the value of decreasing environmental impact and of ensuring the health and well-being of the populace are examples of other areas where the benefits of smart grid investment are hard to capture in simple equations.

Nevertheless, it is possible to measure the profit level of Smart Grid consumption curves compared to standard curves. Indeed, consumption curves have been known and used for many years to schedule daily production. At the local level, gross consumption, i.e. consumption without the aid of any control technology or locally used renewable energy, is compared to net consumption, i.e. consumption in the Smart Grid, see Figure 8. The cost of all renewable energy, whether local or from plants, is approximated. Profits are the difference between the cost of the gross consumption minus that of the net consumption, and the cost of the technologies used.

Fig. 8. Difference between normal consumption and consumption with management and renewable energies.
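The profit definition above reduces to a short computation. The sketch below assumes a single flat price per kWh and invented daily figures, whereas the model approximates the costs of renewable sources and plants separately.

```python
def smart_grid_profit(gross_kwh, net_kwh, price_per_kwh, technology_cost):
    """Profit = (cost of gross consumption - cost of net consumption) - technology cost."""
    avoided_cost = (gross_kwh - net_kwh) * price_per_kwh
    return avoided_cost - technology_cost

# Invented daily figures for one local level
print(smart_grid_profit(gross_kwh=1200.0, net_kwh=950.0,
                        price_per_kwh=0.15, technology_cost=20.0))  # 17.5
```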
Similarly, the cost of the fuel plants or other plants used during peak consumption is known. On a global scale, the average cost of the energy output is approximated, both in the basic case and in a Smart Grid, see Figure 9.

Fig. 9. Energy prices before and after DSM.

# V. EXPERIMENTAL RESULTS AND DISCUSSIONS

The model was implemented using a multi-agent simulation paradigm (Figure 10). Each agent is either a consumer, a producer, both, or an energy transporter. Agents have specific behaviors induced by their class and act according to the previous algorithms. To validate the model, instances at local and global scale have been built. Agents such as consuming devices or energy plants' production are parameterized with French national production and energy distribution data (EDF and RTE).

Fig. 10. Overview of the local production and consumption.

In the first tests, consumption and production tend towards equilibrium. Local and renewable energies are favored to maximize their profitability. The model limits the losses related to the distance between production and consumption, and uses the least amount of fossil energy. It works at any scale and with any agents, provided that a feasible solution exists. The first results are based on arbitrary utilities that do not take the economic aspect into account. The economic study will be the subject of a forthcoming publication.

Concerning the global mathematical problem, the knapsack for the overall system, we provide two very simple and thus highly practical algorithms that find the global solution of convex quadratic maximization problems. The algorithms are based respectively on inner and outer approximations (such as balls or cuboids) followed by a local search, and on the standard cutting-plane technique adapted to our problem. The papers about these methods are not yet published.

The goal of the Smart Grid model is to reduce the difference between the mathematical result and the model's result. Learning strategies are being developed so that the Smart Grid reaches the best possible configuration without human intervention.

The proposed model works for randomized or parameterized Smart Grids; we currently work within the Positive Energy 2.0 project, led by ALSTOM Energy Management together with various companies such as Bouygues and Renault, to validate the model on real projects.

# VI. CONCLUSION

As the smart grid can be qualified as a complex system, classical optimization methods cannot be applied directly, due to the computational complexity in terms of time and memory. More generally, we also demonstrated how to solve optimization problems in complex systems. Since applying optimization algorithms directly to complex systems is nearly impossible, we should analyze the system and divide it into sub-systems with defined characteristics, then apply specific algorithms and coordinate them using multi-agent simulation in order to achieve global optimization.

A general context-free model of a smart grid is being developed, which integrates those algorithms. This model does not replace the current model nor provide an ideal one, but presents an improvement of the energy grid. Preliminary tests have validated our approach. Data mining and learning strategies are studied in order to limit the number of simulations and variable modifications required before obtaining an optimized configuration.
# Deep Reinforcement Learning Optimization for Uncertain Nonlinear Systems via Event-Triggered Robust Adaptive Dynamic Programming Abstract: This work proposes a unified control architecture that couples a Reinforcement Learning (RL)-driven controller with a disturbance-rejection Extended State Observer (ESO), complemented by an Event-Triggered Mechanism (ETM) to limit unnecessary computations. The ESO is utilized to estimate the system states and the lumped disturbance in real time, forming the foundation for effective disturbance compensation. To obtain near-optimal behavior without an accurate system description, a value-iteration-based Adaptive Dynamic Programming (ADP) method is adopted for policy approximation. The inclusion of the ETM ensures that parameter updates of the learning module are executed only when the state deviation surpasses a predefined bound, thereby preventing excessive learning activity and substantially reducing computational load. A Lyapunov-oriented analysis is used to characterize the stability properties of the resulting closed-loop system. Numerical experiments further confirm that the developed approach maintains strong control performance and disturbance tolerance, while achieving a significant reduction in sampling and processing effort compared with standard time-triggered ADP schemes. Keywords: Reinforcement learning; Event-triggered control; Uncertain nonlinear systems; Adaptive dynamic programming # 1. INTRODUCTION Learning-based methods have become a fundamental paradigm in modern engineering systems, enabling algorithms to improve performance through data-driven adaptation without relying solely on explicit mathematical models. Over the past decade, advances in machine learning—particularly in function approximation, optimization, and representation learning—have significantly expanded the capability of intelligent systems operating under uncertainty, compared to traditional analytical methods Qin et al. (2022); Zhang et al. (2024); Hu et al. (2025). These approaches have been increasingly adopted in control, robotics, and even generative language models Lu et al. (2020); Zhao et al. (2024); Tang et al. (2025); Yao et al. (2025). However, conventional model-based techniques may be limited in their ability to handle nonlinearities, unknown disturbances, or incomplete system knowledge. Reinforcement Learning (RL) has gained attention for complex decision-making and control in uncertain, dynamic environments Tang et al. (2024d). In control engineering, RL-based methods offer a data-driven alternative to classical model-based designs. This is useful when accurate system models are difficult to obtain. Among these methods, ADP integrates RL with optimal control theory. It facilitates near-optimal control of nonlinear systems by approximating value functions and control policies through function approximators. This eliminates the need to explicitly solve the Hamilton-Jacobi-Bellman (HJB) equation Lewis and Vrabie (2009). However, conventional ADP frameworks often rely on continuous or periodic updates to neural network parameters. These updates impose significant computational burdens and may lead to overfitting to transient disturbances or noise. Event-triggered strategies have been widely adopted in diverse control applications, including networked and embedded systems, multi-agent coordination, and resource-constrained robotic platforms Onuoha et al. (2024b,a). 
In the meantime, the ETM has been widely employed in both control and ADP frameworks to reduce computational load (Han et al. (2024); Heemels et al. (2012); Tabuada (2007)). Unlike time-driven schemes, ETMs update only when the system meets a state- or error-based condition; a state deviation or estimation error typically triggers the update directly. This approach reduces redundant updates and preserves closed-loop stability (Dong et al. (2017); Xue et al. (2020); Onuoha et al. (2024b)). By limiting updates to key events, event-triggered ADP boosts efficiency and yields policies that are less sensitive to disturbances. Despite these advantages, robustness against external disturbances and modeling uncertainties must still be ensured. In practice, disturbances arise from environmental perturbations, unmodeled dynamics, nonlinear couplings, and parameter uncertainties. Many robust control approaches employ feedback to attenuate perturbations rather than explicitly use feedforward compensation (Tang (2019); Tang et al. (2016, 2024a)). In this context, an ESO estimates the system states and the lumped disturbance in real time, which allows proactive compensation of parameter mismatches, unmodeled dynamics, and external perturbations in nonlinear systems (Luo et al. (2020); Tang et al. (2024c); Han (2009); Chen et al. (2016); Ran et al. (2021); Pu et al. (2015); Tang et al. (2019)). Such observers, inspired by the Active Disturbance Rejection Control (ADRC) philosophy (Gao (2003); Guo and Zhao (2013)), provide a powerful tool for this purpose. Recent work combines ESO-based disturbance rejection with RL for uncertain nonlinear systems (Ran et al. (2022); Tang et al. (2024b)). However, these ESO-RL schemes primarily operate in a time-driven manner: both the controller and learning updates run continuously or periodically, lacking an event-triggered learning mechanism. Many continuous-time ADP designs also impose restrictive Persistence of Excitation (PE) conditions for parameter convergence (Jiang and Jiang (2012); Bian et al. (2017); Kamalapurkar et al. (2016)), making them hard to verify and enforce in practice. Inspired by these observations, we develop a composite control framework for output-feedback control of uncertain nonlinear systems with lumped disturbances. The main contributions are summarized as follows: (1) A unified control structure incorporating the ETM is developed, in which ESO-based state estimation, disturbance compensation, and controller updates occur only at triggering instants. The resulting composite control framework enables an aperiodic, computationally efficient implementation of output-feedback RL control. By integrating ESO-based disturbance rejection with event-triggered RL, this work establishes a unified architecture that is not addressed in the existing literature. (2) We propose an event-triggered learning rule for a simulation-of-experience ADP scheme, in which the critic and actor networks update only at triggering instants, using both the instantaneous and the Extrapolated Bellman Error (EBE). Unlike existing ESO-RL frameworks (Ran et al. (2022); Tang et al. (2024b)), which use continuous or periodic learning, our mechanism yields an aperiodic, data-efficient adaptation. (3) The analysis shows that practical stability and Uniform Ultimate Boundedness (UUB) are achieved without a classical PE condition (Jiang and Jiang (2012); Bian et al. (2017); Kamalapurkar et al. (2016)), while redundant, high-frequency updates are avoided. The remainder of the paper is structured as follows: Section II presents the problem and the system model.
Section III describes the proposed composite control framework, and Section IV presents the integrated control framework. The simulation results are demonstrated in Section V, and Section VI summarizes this paper and outlines potential directions for future research.

# 2. PROBLEM FORMULATION

In this paper, we consider the control of a class of uncertain affine nonlinear systems described by $$ \left\{ \begin{array}{l} \dot {z} = f _ {z} (x, z, \eta), \\ \dot {x} = A x + B [ f (x, z, \eta) + g (x, z, \eta) u ], \\ y = C x, \end{array} \right. \tag {1} $$ where $x = [x_1, \ldots, x_n]^{\mathrm{T}} \in \mathbb{R}^n$ denotes the state of the measured subsystem with relative degree $n$ ; $z \in \mathbb{R}^p$ represents the zero-dynamics state; $\eta \in \mathbb{R}$ denotes an external disturbance or uncertain parameter; $u \in \mathbb{R}$ is the control input; $f_z: \mathbb{R}^n \times \mathbb{R}^p \times \mathbb{R} \to \mathbb{R}^p$ is a smooth nonlinear mapping describing the evolution of the zero dynamics; $f, g: \mathbb{R}^n \times \mathbb{R}^p \times \mathbb{R} \to \mathbb{R}$ are uncertain nonlinear functions characterizing the drift dynamics and the input gain of the $x$ subsystem; and $A \in \mathbb{R}^{n \times n}$ , $B \in \mathbb{R}^{n \times 1}$ and $C \in \mathbb{R}^{1 \times n}$ are the standard companion matrices defining a nominal chain-of-integrators structure of the output dynamics. To enable the subsequent observer and controller design, we impose the following standard assumptions. Assumption 1. The external signal $\eta(t)$ as well as its time derivative $\dot{\eta}(t)$ are bounded for all $t \geq 0$ . Assumption 2. The zero dynamics $\dot{z} = f_{z}(x,z,\eta)$ with input $(x,\eta)$ are Bounded-Input Bounded-State (BIBS) stable. In this paper, the uncertain nonlinear system dynamics are decomposed as follows: $$ f (x, z, \eta) = f _ {0} (x) + \Delta f (x, z, \eta), $$ $$ g (x, z, \eta) = g _ {0} (x) + \Delta g (x, z, \eta), $$ where $f_0, g_0: \mathbb{R}^n \to \mathbb{R}$ denote the known nominal system dynamics; and $\Delta f, \Delta g: \mathbb{R}^n \times \mathbb{R}^p \times \mathbb{R} \to \mathbb{R}$ represent unknown disturbances and model uncertainties that may depend on the full state of the system $(x, z)$ and the external signal $\eta$ . Following the ADRC philosophy (Han (2009)), the lumped uncertainty is treated as an additional extended state: $$ x _ {n + 1} \triangleq \Delta f (x, z, \eta) + \Delta g (x, z, \eta) u, \tag {2} $$ According to this definition, the $n$ -th subsystem dynamics can be rewritten as $\dot{x}_n = x_{n+1} + f_0(x) + g_0(x)u$ , so that the overall system becomes an $(n+1)$ th-order augmented integrator chain perturbed by the unknown term $\dot{x}_{n+1}$ . To quantify performance, we consider the nominal compensated subsystem and assign the infinite-horizon cost functional $$ J \left(x _ {0}\right) = \int_ {0} ^ {\infty} \left(Q (x (\tau)) + u _ {0} (\tau) ^ {T} R u _ {0} (\tau)\right) d \tau , \tag {3} $$ where $x_0 = x(0)$ is the initial condition, $Q: \mathbb{R}^n \to \mathbb{R}_+$ is a positive definite state penalty, $R > 0$ is a control-weighting matrix, and $u_0$ denotes the component of the input acting on the nominal dynamics after uncertainty compensation. The associated optimal control problem is $$ u _ {0} ^ {*} = \arg \min _ {u _ {0}} J (x _ {0}), $$ and the optimal policy will be approximated online via the RL mechanism developed later. Remark 1.
The purpose of this paper is to develop an ESO-based RL disturbance rejection scheme equipped with an ETM. The proposed controller aims to stabilize the system under lumped uncertainties while achieving near-optimal performance with a reduced control update frequency.

# 3. COMPOSITE CONTROL FRAMEWORK

# 3.1 ESO Design

First, an ESO is designed to estimate both the system states and the lumped disturbance, following the standard ADRC structure (Han (2009); Chen et al. (2016)): $$ \left\{ \begin{array}{l} \dot {\hat {x}} _ {i} = \hat {x} _ {i + 1} + \frac {l _ {i}}{\epsilon^ {i}} (y - \hat {x} _ {1}) \quad i = 1, \dots , n - 1, \\ \dot {\hat {x}} _ {n} = \hat {x} _ {n + 1} + \frac {l _ {n}}{\epsilon^ {n}} (y - \hat {x} _ {1}) + f _ {0} (\hat {x}) + g _ {0} (\hat {x}) u, \quad (4) \\ \dot {\hat {x}} _ {n + 1} = \frac {l _ {n + 1}}{\epsilon^ {n + 1}} (y - \hat {x} _ {1}), \end{array} \right. $$ where $\hat{x} = [\hat{x}_1, \dots, \hat{x}_n, \hat{x}_{n+1}]^{\mathrm{T}}$ , $\epsilon > 0$ is a small positive constant adjusting the observer bandwidth, and $L = [l_1, \dots, l_{n+1}]^{\mathrm{T}}$ is chosen such that the following matrix is Hurwitz: $$ E = \left[ \begin{array}{c c c c c} - l _ {1} & 1 & 0 & \dots & 0 \\ - l _ {2} & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ - l _ {n} & 0 & 0 & \dots & 1 \\ - l _ {n + 1} & 0 & 0 & \dots & 0 \end{array} \right] \in \mathbb {R} ^ {(n + 1) \times (n + 1)}. $$ However, since the observer gains are scaled by $\epsilon^{-i}$ , a small $\epsilon$ yields a high-bandwidth ESO that responds rapidly to state deviations but may induce a pronounced peaking phenomenon during the initial transient. To mitigate this effect, we employ a widely used smooth saturation technique to constrain the observer outputs. Let the saturated observer states be defined as $$ \bar {x} _ {i} = M _ {i} s \left(\frac {\hat {x} _ {i}}{M _ {i}}\right), \quad i = 1, \ldots , n + 1, $$ where $M_{i} > 0$ are design bounds selected so that the saturation remains inactive during steady-state operation, and $s(\cdot)$ is an odd, continuously differentiable saturation-like function given by $$ s (v) = \left\{ \begin{array}{l l} v, & 0 \leq v \leq 1 \\ v + \frac {v - 1}{\varepsilon} - \frac {v ^ {2} - 1}{2 \varepsilon}, & 1 \leq v \leq 1 + \varepsilon \\ 1 + \frac {\varepsilon}{2}, & v > 1 + \varepsilon , \end{array} \right. $$ which satisfies $0 \leq s'(v) \leq 1$ and $|s(v) - \mathrm{sat}(v)| \leq \frac{\varepsilon}{2}$ for all $v \in \mathbb{R}$ . For later use, we denote $\bar{x} = [\bar{x}_1, \dots, \bar{x}_{n+1}]^{\mathrm{T}} \in \mathbb{R}^{n+1}$ and observe that $\dot{\bar{x}}_i = s' \left( \frac{\hat{x}_i}{M_i} \right) \dot{\hat{x}}_i$ , $i = 1, \dots, n + 1$ .
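To make the observer structure concrete, the following is a minimal numerical sketch of the ESO (4) for the $n = 2$ case, together with the smooth saturation $s(\cdot)$ used to form $\bar{x}$. The nominal model is passed in as callables `f0` and `g0`; the default gains $L = [2, 2, 1]^{\mathrm{T}}$ and $\epsilon = 0.03$ mirror the values reported later in the simulation examples, while the saturation smoothness `eps=0.1` is an illustrative placeholder. This is a sketch under these assumptions, not the authors' implementation.

```python
import numpy as np

def smooth_sat(v, eps=0.1):
    """Odd, C^1 saturation s(v) from Section 3.1 (piecewise form), extended to v < 0."""
    sign, a = np.sign(v), abs(v)
    if a <= 1.0:
        s = a
    elif a <= 1.0 + eps:
        s = a + (a - 1.0) / eps - (a**2 - 1.0) / (2.0 * eps)
    else:
        s = 1.0 + eps / 2.0
    return sign * s

def eso_step(x_hat, y, u, f0, g0, dt, L=(2.0, 2.0, 1.0), eps_obs=0.03, M=(3.0, 3.0, 3.0)):
    """One Euler step of the (n+1)=3 state ESO (4) for n = 2. Returns the new estimate
    x_hat = [x1_hat, x2_hat, x3_hat] and the saturated estimate x_bar fed to the controller."""
    l1, l2, l3 = L
    e1 = y - x_hat[0]                                   # output estimation error
    dx1 = x_hat[1] + (l1 / eps_obs) * e1
    dx2 = x_hat[2] + (l2 / eps_obs**2) * e1 + f0(x_hat[:2]) + g0(x_hat[:2]) * u
    dx3 = (l3 / eps_obs**3) * e1                        # extended state: lumped disturbance
    x_hat = x_hat + dt * np.array([dx1, dx2, dx3])
    x_bar = np.array([M[i] * smooth_sat(x_hat[i] / M[i]) for i in range(3)])
    return x_hat, x_bar
```

A closed-loop simulation would call `eso_step` at every integration step and pass `x_bar` to the learning-based controller described next.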
# 3.2 ADP Design

Second, we present the ADP design. A neural-network-based actor-critic architecture is employed, in which the critic approximates the optimal value function and the actor represents the corresponding optimal policy. To facilitate the theoretical development of the ADP-based controller and the associated optimal control law, the following standard assumptions are introduced: Assumption 3. There exist constants $g_{\mathrm{min}}, g_{\mathrm{max}} > 0$ such that $$ g _ {\min } \leq \inf _ {x \in \mathcal {X}} | g _ {0} (x) | \leq \sup _ {x \in \mathcal {X}} | g _ {0} (x) | \leq g _ {\max }. $$ Moreover, on $\Omega \triangleq \mathcal{X} \times \mathcal{Z} \times \mathcal{W}$ (where $\mathcal{W}$ is a compact set containing all admissible values of $\eta$ from Assumption 1, $\mathcal{X}$ is a compact set containing all admissible system states $x$ , and $\mathcal{Z}$ is a bounded positively invariant set for the zero-dynamics state $z$ ), the relative mismatch between the true and nominal input gains is bounded $$ \kappa_ {g} \triangleq \sup _ {(x, z, \eta) \in \Omega} \frac {| g (x , z , \eta) - g _ {0} (x) |}{| g _ {0} (x) |} < 1. $$ As shown in (3), the associated value function can be derived as follows $$ V (x) = \min _ {u} J (x) = \int_ {0} ^ {\infty} \left(Q (x (\tau)) + u (\tau) ^ {T} R u (\tau)\right) d \tau . $$ The optimal value function $V(x)$ satisfies the HJB equation $$ 0 = \min _ {u} \left[ Q (x) + u ^ {T} R u + \nabla V (x) ^ {T} (f (x, z, \eta) + g (x, z, \eta) u) \right]. $$ Minimizing the right-hand side with respect to $u$ gives the corresponding optimal control law $$ u ^ {*} (x) = - \frac {1}{2} R ^ {- 1} g (x, z, \eta) ^ {T} \nabla V (x). \tag {5} $$ Substituting (5) into the HJB yields $$ \begin{array}{l} 0 = Q (x) + \nabla V (x) ^ {T} f (x, z, \eta) \\ - \frac {1}{4} \nabla V (x) ^ {T} g (x, z, \eta) R ^ {- 1} g (x, z, \eta) ^ {T} \nabla V (x). \\ \end{array} $$ This expression provides the optimality condition for the value function and forms the basis for the subsequent critic approximation in the ADP framework.
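As a sanity check on the optimal control law (5), it is useful to specialize it to a case that can be solved exactly: for linear dynamics $\dot{x} = Ax + Bu$ with cost $Q(x) = x^{\mathrm{T}} Q_m x$, the value function is $V(x) = x^{\mathrm{T}} P x$ with $P$ the stabilizing solution of the algebraic Riccati equation, and (5) reduces to the familiar LQR feedback $u^{*} = -R^{-1} B^{\mathrm{T}} P x$. The snippet below verifies this reduction numerically; the double-integrator matrices are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator example (illustrative values only).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qm = np.diag([1.0, 1.0])
R = np.array([[1.0]])

# Solve the ARE A^T P + P A - P B R^{-1} B^T P + Qm = 0, so that V(x) = x^T P x.
P = solve_continuous_are(A, B, Qm, R)

x = np.array([[1.0], [-0.5]])
grad_V = 2.0 * P @ x                           # gradient of V(x) = x^T P x

# Control law (5) in the linear case (input enters through B): u* = -1/2 R^{-1} B^T grad_V.
u_from_hjb = -0.5 * np.linalg.inv(R) @ B.T @ grad_V
u_lqr = -np.linalg.inv(R) @ B.T @ P @ x        # textbook LQR feedback

assert np.allclose(u_from_hjb, u_lqr)          # both expressions coincide
```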
# 3.3 Training Process

To avoid explicitly solving the HJB equation, we adopt an actor-critic architecture enhanced with an ESO-based extrapolation mechanism (Ran et al. (2022)). The critic network approximates the value function using a linearly parameterized structure $$ V \left(\bar {x}; W _ {v}\right) = W _ {v} ^ {\mathrm {T}} \phi (\bar {x}), $$ where $W_{v}$ denotes the critic weight vector and $\phi (\bar{x})$ represents a basis function vector. The actor approximates the control policy as $$ u _ {0} (\bar {x}; W _ {a}) = - \frac {1}{2} R ^ {- 1} g _ {0} (\bar {x}) B ^ {\mathrm {T}} \phi_ {x} ^ {\mathrm {T}} (\bar {x}) W _ {a}, $$ where $W_{a}$ is the actor weight vector and $\phi_x$ is the gradient of the basis function vector. Using the ESO-estimated state $\hat{x}$ , the Instantaneous Bellman Error (IBE) is defined as $$ \begin{array}{l} \varepsilon_ {t} \triangleq V _ {x} (\bar {x}, W _ {v}) \left[ A \bar {x} + B \left(f _ {0} (\bar {x}) + g _ {0} (\bar {x}) u _ {0} (\bar {x}, W _ {a})\right) \right] \tag {6} \\ + Q (\bar {x}) + u _ {0} ^ {T} (\bar {x}, W _ {a}) R u _ {0} (\bar {x}, W _ {a}), \\ \end{array} $$ which measures the deviation of the current actor-critic pair from the HJB optimality condition along the real trajectory. To enhance state-space coverage and improve robustness, an extrapolated dataset $\mathcal{X}_E = \{\xi^i\}_{i=1}^N$ is generated over the admissible domain, and the approximate EBE is defined as follows: $$ \begin{array}{l} \varepsilon_ {i} \triangleq V _ {x} (\xi^ {i}, W _ {v}) \left[ A \xi^ {i} + B \left(f _ {0} (\xi^ {i}) + g _ {0} (\xi^ {i}) u _ {0} (\xi^ {i}, W _ {a})\right) \right] \\ + Q \left(\xi^ {i}\right) + u _ {0} ^ {T} \left(\xi^ {i}, W _ {a}\right) R u _ {0} \left(\xi^ {i}, W _ {a}\right). \tag {7} \\ \end{array} $$ For least-squares type learning, we introduce the regressor $$ \zeta = \phi_ {x} (\bar {x}) \left[ A \bar {x} + B \left(f _ {0} (\bar {x}) + g _ {0} (\bar {x}) u _ {0} (\bar {x}, W _ {a})\right) \right], $$ and the normalization term $$ \sigma = 1 + \rho \zeta^ {\mathrm {T}} \Psi \zeta , $$ where $\rho > 0$ is a constant normalization gain and the gain matrix $\Psi$ evolves by $$ \dot {\Psi} = \left(\gamma \Psi - \alpha_ {v 1} \frac {\Psi \zeta \zeta^ {\mathrm {T}} \Psi}{\sigma^ {2}}\right) \mathbf {1} _ {\{\| \Psi \| \leq \delta_ {1} \}}, \quad \| \Psi (0) \| \leq \delta_ {1}, $$ where $\gamma > 0$ is a constant forgetting factor, and the weight update for the critic is given by $$ \dot {W} _ {v} = - \alpha_ {v 1} \Psi \frac {\zeta}{\sigma} \varepsilon_ {t} - \frac {\alpha_ {v 2}}{N} \Psi \sum_ {i = 1} ^ {N} \frac {\zeta_ {i}}{\sigma_ {i}} \varepsilon_ {i}, \tag {8} $$ where $\delta_1 > 0$ is a saturation constant and $\alpha_{v1},\alpha_{v2} > 0$ are constant gains, yielding a least-squares-type adaptation with improved convergence and numerical robustness. The regressor and normalization term used for constructing the EBE at the extrapolated sample points are defined as $$ \zeta_ {i} = \phi_ {x} (\xi^ {i}) [ A \xi^ {i} + B (f _ {0} (\xi^ {i}) + g _ {0} (\xi^ {i}) u _ {0} (\xi^ {i}, W _ {a})) ], $$ $$ \sigma_ {i} = 1 + \rho \zeta_ {i} ^ {\mathrm {T}} \Psi \zeta_ {i}, $$ and the update law for the actor weights is $$ \begin{array}{l} \dot {W} _ {a} = - \alpha_ {c 1} (W _ {a} - W _ {v}) - \alpha_ {c 2} W _ {a} + \frac {\alpha_ {v 1} \mathcal {H} _ {t} ^ {T} W _ {a} \zeta^ {T}}{4 \sigma} W _ {v} \\ + \sum_ {i = 1} ^ {N} \frac {\alpha_ {v 1} \mathcal {H} _ {i} ^ {T} W _ {a} \zeta_ {i} ^ {T}}{4 N \sigma_ {i}} W _ {v}, \tag {9} \\ \end{array} $$ where $\alpha_{c1},\alpha_{c2} > 0$ are constant adaptation gains and $$ \mathcal {H} _ {t} \triangleq \phi_ {x} (\bar {x}) B g _ {0} (\bar {x}) R ^ {- 1} g _ {0} ^ {T} (\bar {x}) B ^ {T} \phi_ {x} ^ {T} (\bar {x}), $$ $$ \mathcal {H} _ {i} \triangleq \phi_ {x} (\xi^ {i}) B g _ {0} (\xi^ {i}) R ^ {- 1} g _ {0} ^ {T} (\xi^ {i}) B ^ {T} \phi_ {x} ^ {T} (\xi^ {i}). $$ Finally, the resulting approximate optimal policy is expressed as $$ u = u _ {0} (\bar {x}, W _ {a}) - \frac {\bar {x} _ {n + 1}}{g _ {0} (\bar {x})}. \tag {10} $$
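To illustrate how the adaptation laws (8)-(9) can be discretized, the sketch below performs one Euler step of the critic and actor updates for $n = 2$ with a quadratic basis $\phi(x) = [x_1^2,\ x_1 x_2,\ x_2^2]^{\mathrm{T}}$. It uses the standard single-critic form of the Bellman error and keeps only the leading terms of the actor law (9) (the $\mathcal{H}$-correction terms are omitted for brevity); the gains, nominal functions, and extrapolation set are illustrative placeholders rather than the paper's values.

```python
import numpy as np

# Illustrative gains and nominal model (not the paper's values).
alpha_v1, alpha_v2, alpha_c1, alpha_c2, rho = 1.0, 1.0, 1.0, 0.1, 0.1
R = 1.0
Q = lambda x: float(x @ x)                        # quadratic state penalty Q(x)
f0 = lambda x: -x[0] - x[1]                       # placeholder nominal drift f_0
g0 = lambda x: 2.0 + np.cos(x[0])                 # placeholder nominal input gain g_0

def phi(x):   return np.array([x[0]**2, x[0]*x[1], x[1]**2])                  # basis
def phi_x(x): return np.array([[2*x[0], 0.0], [x[1], x[0]], [0.0, 2*x[1]]])   # its Jacobian

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])

def policy(x, Wa):
    # u_0 = -1/2 R^{-1} g_0(x) B^T phi_x(x)^T W_a
    return -0.5 / R * g0(x) * (B @ (phi_x(x).T @ Wa))

def regressor_and_error(x, Wv, Wa):
    u = policy(x, Wa)
    xdot = A @ x + B * (f0(x) + g0(x) * u)        # nominal compensated dynamics
    zeta = phi_x(x) @ xdot                        # regressor zeta
    eps = Wv @ zeta + Q(x) + R * u**2             # Bellman error at this point
    return zeta, eps

def adp_update(x_bar, Wv, Wa, Psi, XE, dt):
    """One Euler step of the (simplified) critic/actor laws (8)-(9), evaluated at the
    first two components of the saturated ESO estimate and an extrapolation set XE."""
    zeta, eps_t = regressor_and_error(x_bar, Wv, Wa)
    sigma = 1.0 + rho * zeta @ Psi @ zeta
    dWv = -alpha_v1 * (Psi @ zeta) / sigma * eps_t
    for xi in XE:                                 # extrapolated Bellman errors (7)
        zi, ei = regressor_and_error(xi, Wv, Wa)
        si = 1.0 + rho * zi @ Psi @ zi
        dWv += -(alpha_v2 / len(XE)) * (Psi @ zi) / si * ei
    dWa = -alpha_c1 * (Wa - Wv) - alpha_c2 * Wa   # leading terms of the actor law (9)
    return Wv + dt * dWv, Wa + dt * dWa
```

A typical call would initialize `Psi = np.eye(3)`, freeze `Wv` and `Wa` between events, and execute `adp_update` only at triggering instants, as specified by the mechanism below.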
# 3.4 Event-Triggered Mechanism

Let $\tau_{k}$ denote the $k$ -th triggering instant, and let $x(\tau_k)$ be the most recently transmitted state held by a Zero-Order Hold (ZOH) device. The event-triggering error is defined as $$ e (t) = x \left(\tau_ {k}\right) - x (t), \quad t \in \left[ \tau_ {k}, \tau_ {k + 1}\right), \tag {11} $$ which measures the mismatch between the last transmitted state and the current state. During each inter-event interval, the control policy and the neural-network parameters are held constant, i.e., $\pi (t) = \pi (x(\tau_k))$ , and the actor-critic weights are frozen. When $\| e(t)\|$ exceeds a prescribed threshold, a new event is generated, the state is transmitted, and both the control input and the network weights are updated.

To guarantee stability while avoiding unnecessary updates, the triggering threshold is chosen in a state-dependent and weight-adaptive form: $$ \delta (t) = \sqrt {\frac {\lambda_ {\operatorname* {m i n}} (Q) \beta}{g _ {m a x} ^ {2} L _ {a} ^ {2} \| W _ {a} (t) \| ^ {2}}} \| x (t) \|, \tag {12} $$ where $\lambda_{\mathrm{min}}(Q)$ is the minimum eigenvalue of the cost matrix $Q$ , $\beta \in (0,1)$ is a design constant introduced in the Lyapunov analysis, $g_{max}$ is a known upper bound of the nominal input gain $g_0(x)$ , $L_{a}$ is the Lipschitz constant of the actor activation function, and $\| W_a(t)\|$ is the Euclidean norm of the actor weight vector. This construction links the triggering sensitivity to both the state magnitude and the current actor parameters: for large states or large weight norms, the condition becomes more stringent, enforcing timely updates; near the origin with well-converged weights, it becomes more relaxed, reducing redundant transmissions. The triggering condition is explicitly given by $$ \left\| e (t) \right\| ^ {2} > \delta (t) ^ {2}. \tag {13} $$ An event is generated only when this inequality is satisfied. Under this rule, all closed-loop signals remain uniformly bounded, while the number of updates is significantly reduced compared with a periodic scheme (Dong et al. (2017)).

# 4. INTEGRATED CONTROL FRAMEWORK

A unified control pipeline is depicted in Fig. 1. At each control cycle, the ESO updates the augmented state estimates, the event-triggered monitor tracks the deviation between the current and last transmitted states, and the ADP either updates the actor-critic networks or holds the previous control input. A detailed procedure is summarized in Algorithm 1.

Fig. 1. Pipeline of the composite control framework.

Assumption 4. Under the proposed ETM, a Minimum Inter-Event Time (MIET) $\tau_{\mathrm{min}} > 0$ is enforced such that $\tau_{k + 1} - \tau_k \geq \tau_{\mathrm{min}}, \forall k \in \mathbb{N}$ , where $\tau_k$ denotes the sequence of triggering instants. This condition prevents Zeno behavior and guarantees that the number of triggering events in any finite time interval remains finite.

Algorithm 1 Composite Control Framework
1: Initialization
2: Set the critic and actor weights $W_{v}(0)$ and $W_{a}(0)$
3: Initialize the ESO states $\hat{x}(0)$ in (4)
4: Set the initial triggering instant $\tau_0 = 0$ and store $x(\tau_0)$ and $u(\tau_0)$
5: Generate the extrapolation set $\mathcal{X}_E$ for computing the EBE
6:
7: Online Control Loop
8: At each time $t$ :
9: Update the ESO via (4) to obtain $\hat{x}(t)$ , including the disturbance estimate $\hat{x}_{n+1}(t)$
10: Compute the triggering error $e(t)$ using (11)
11: Evaluate the threshold $\delta(t)$ via (12)
12: Check the triggering condition (13)
13: If the triggering condition is satisfied then
14: Compute the IBE $\varepsilon_t$ using (6)
15: Compute the EBE $\varepsilon_i$ using (7)
16: Update the critic weights $W_{v}$ via (8)
17: Update the actor weights $W_{a}$ via (9)
18: Compute the new control input $u$ using (10)
19: Set $\tau_{k+1} = t$ and store $\hat{x}(\tau_{k+1})$ and $u(\tau_{k+1})$
20: Else
21: Hold $u(t) = u(\tau_k)$ via the ZOH
22: End if
23: Apply $u(t)$ to the plant dynamics (1)
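The triggering test (11)-(13) and the hold logic of Algorithm 1 can be wired into a simulation loop as sketched below. This is a minimal illustration only, assuming the $n = 2$ case of the examples: `plant_step`, `measure`, `eso_step`, `adp_update`, and `controller` are user-supplied callables (for instance the sketches given earlier), and `thr_params` collects the design constants $\lambda_{\min}(Q)$, $\beta$, $g_{\max}$, $L_a$ of (12). None of this is the authors' implementation.

```python
import numpy as np

def threshold(x, Wa, lambda_min_Q, beta, g_max, L_a):
    # delta(t) from (12): sqrt(lambda_min(Q)*beta / (g_max^2 * L_a^2 * ||Wa||^2)) * ||x||
    denom = (g_max * L_a * np.linalg.norm(Wa)) ** 2 + 1e-12   # guard against ||Wa|| = 0
    return np.sqrt(lambda_min_Q * beta / denom) * np.linalg.norm(x)

def event_triggered_loop(plant_step, measure, eso_step, adp_update, controller,
                         x_plant0, x_hat0, Wv0, Wa0, Psi0, XE, dt, T, thr_params):
    """Minimal event-triggered control loop (cf. Algorithm 1): the control input and the
    actor-critic weights are held by a zero-order hold and refreshed only when the
    triggering condition (13) fires."""
    x_plant, x_hat, Wv, Wa, Psi = x_plant0, x_hat0, Wv0, Wa0, Psi0
    x_last = x_hat0[:2].copy()                    # last transmitted (estimated) state
    u, n_total, n_updates = 0.0, 0, 0
    for k in range(int(T / dt)):
        y = measure(x_plant)                      # measured output y = x1
        x_hat, x_bar = eso_step(x_hat, y, u, dt)  # observer runs at every step
        e = x_last - x_bar[:2]                    # triggering error (11), on estimates
        n_total += 1
        if e @ e > threshold(x_bar[:2], Wa, **thr_params) ** 2:   # condition (13)
            Wv, Wa = adp_update(x_bar[:2], Wv, Wa, Psi, XE, dt)   # event-based learning
            u = controller(x_bar, Wa)             # composite law (10), incl. -x3/g0 term
            x_last = x_bar[:2].copy()
            n_updates += 1
        x_plant = plant_step(x_plant, u, dt)      # advance the true plant
    saving_ratio = 100.0 * (n_total - n_updates) / n_total        # cf. Section 5.1
    return x_plant, Wv, Wa, saving_ratio
```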
Theorem 1. The composite control architecture consists of the uncertain nonlinear system (1), the ESO (4), the ADP-based controller (10), and the proposed ETM. Suppose that: (i) Assumptions 1-4 hold; (ii) the observer gain vector is chosen such that the ESO error dynamics are globally asymptotically stable; (iii) the uncertainty compensation renders the plant dynamics equivalent to the nominal model used in the ADP design; and (iv) the estimation errors of the actor-critic neural-network weights are uniformly ultimately bounded. Then all closed-loop signals are uniformly ultimately bounded, and the state $x(t)$ converges to a small neighborhood of the origin.

Proof. We provide a sketch of the argument. Consider the Lyapunov candidate $$ V = V ^ {*} (x) + \frac {1}{2} (\Theta_ {v} ^ {e}) ^ {\mathrm {T}} \Gamma^ {- 1} \Theta_ {v} ^ {e} + \frac {1}{2} (\Theta_ {c} ^ {e}) ^ {\mathrm {T}} \Theta_ {c} ^ {e}, $$ where $V^{*}(x)$ is the optimal value function, and $\Theta_v^e,\Theta_c^e$ denote the critic and actor weight errors, respectively. On each inter-event interval $[\tau_k, \tau_{k+1})$ , the ESO, the control policy, and the actor-critic weights are fixed. Under the proposed control law, standard Lyapunov analysis yields $$ \dot {V} \leq - \alpha \| x \| ^ {2} + \mathcal {O} (\varepsilon), $$ for some $\alpha > 0$ and sufficiently small approximation and estimation errors, implying boundedness of all closed-loop variables on each inter-event interval. At each triggering instant, the ESO correction and the actor-critic updates are designed so that $V$ does not increase, which ensures UUB of $(x, \Theta_v^e, \Theta_c^e)$ . By Assumption 4, the inter-event times satisfy $\tau_{k + 1} - \tau_k\geq \tau_{\mathrm{min}} > 0$ , which excludes Zeno behavior and guarantees that only finitely many events occur on any finite time interval. Combining these properties shows that the closed-loop system under the composite control framework is uniformly ultimately bounded and that $x(t)$ converges to a small neighborhood of the origin.

Remark 2. The above Lyapunov analysis is carried out only on the continuous-time closed loop between triggering instants. The event-triggering error $e(t)$ is not explicitly included in the Lyapunov function. As a result, the analysis provides practical stability within each inter-event interval, but does not constitute a full hybrid-system proof. A rigorous global stability proof would require augmenting the Lyapunov function with $e(t)$ and deriving an ISS-type inequality of the form $$ \dot {V} \leq - k _ {1} \| x \| ^ {2} + k _ {2} \| e \| ^ {2}, \tag {14} $$ where $k_{1} > 0$ is the nominal decay rate of the Lyapunov function and $k_{2} > 0$ quantifies how the Lyapunov derivative is affected by the measurement error, together with a triggering rule guaranteeing $\| e \| \leq k_{0} \| x \|$ and $k_{2} k_{0}^{2} < k_{1}$ , where $k_{0} > 0$ is the design parameter in the triggering condition that restricts the size of $e(t)$ relative to $x(t)$ . Under such a rule, (14) reduces to $\dot{V} \leq -(k_{1} - k_{2} k_{0}^{2}) \| x \|^{2} \leq 0$ . A full stability analysis will be included in an extended journal version.

# 5. SIMULATION RESULTS

In what follows, two numerical case studies are conducted to evaluate the capability and robustness of the proposed composite control framework.

# 5.1 Example 1

To demonstrate the controller performance, we first consider a third-order uncertain nonlinear system. The real plant is given by
$$ \left\{ \begin{array}{l} \dot {z} = \underbrace {- \left(x _ {1} ^ {2} + 0 . 5 \eta^ {2}\right) z} _ {f _ {z} (x, z, \eta)} \\ \dot {x} _ {1} = x _ {2}, \\ \dot {x} _ {2} = \underbrace {- 1 . 5 x _ {1} - x _ {2} + 1 . 5 \left(x _ {1} + x _ {2}\right) \left(\sin \left(x _ {2}\right) + 2\right) ^ {2}} _ {f _ {0} (x)} + \underbrace {\left(- x _ {2} + \eta + z ^ {2}\right)} _ {\Delta f (x, z, \eta)} + \left[ \underbrace {\left(\cos \left(x _ {1}\right) + 2\right)} _ {g _ {0} (x)} + \underbrace {\left(\sin \left(x _ {2}\right) - \eta\right)} _ {\Delta g (x, z, \eta)} \right] u, \\ y = x _ {1}. \end{array} \right. \tag {15} $$

In this example, we construct the Bellman Error (BE) over a uniformly discretized exploration set $\mathcal{X}_E$ , defined as a uniform grid over $[-0.5, 0.5] \times [-0.5, 0.5]$ . The ESO parameters are selected as follows. The observer gain vector is set to $L = [2 \ 2 \ 1]^{\mathrm{T}}$ . The small positive constant is set to $\epsilon = 0.03$ to accelerate ESO convergence while maintaining robustness against measurement noise. The saturation bounds for the ESO outputs are chosen as $M_1 = M_2 = M_3 = 3$ . The nominal-function saturation limits are selected as $M_f = 7$ , $M_g = 3$ , ensuring boundedness of the ESO-based control law.

Performance Analysis. Fig. 2 demonstrates that the ESO performs effectively for the considered nonlinear system. The observer-generated state trajectories closely track the actual system states, confirming the accuracy and reliability of the proposed observer.

Fig. 2. System state trajectories and their ESO estimates.

Fig. 4 shows that the control input $u$ under the proposed framework exhibits significant activity during the initial transient period due to uncertainties. After this short transient, the control input settles quickly and stays close to zero, maintaining stable behavior and indicating that the proposed control strategy achieves a fast transient response and effective disturbance rejection. To quantify the computational efficiency of the proposed ETM, we define the update saving ratio as $\eta = \frac{N_{\mathrm{skipped}}}{N_{\mathrm{total}}} \times 100\%$ , where $N_{\mathrm{skipped}}$ denotes the number of control updates skipped because the triggering condition is not satisfied and $N_{\mathrm{total}} = N_{\mathrm{skipped}} + N_{\mathrm{updated}}$ represents the total number of potential update instants. Based on the simulation study, an update-saving rate of $72\%$ is obtained, demonstrating a significant reduction in computational load while maintaining system stability. As shown in Fig. 3, both $x_{1}$ and $x_{2}$ remain stable and converge to the equilibrium. As illustrated in Fig. 4, the control input produced by the composite controller stays well behaved and within bounds, even when roughly $71\%$ of its possible update instants are omitted. In contrast to the periodically updated ADP (red dashed), the ETM greatly reduces the number of control updates while maintaining a comparable transient response and steady-state performance. Fig. 5 shows the distribution of triggering instants over the simulation horizon. Each cross mark represents an execution of the actor-critic update when the event-triggering condition is satisfied.

Fig. 3. State trajectories under the composite control framework in Example 1.

Fig. 4. Control signals of the composite control framework compared with those generated by the periodic ADP strategy.

Fig. 5. Event trigger distribution.
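For reference, the right-hand side of the Example 1 plant (15) can be transcribed directly from the annotated terms, as in the minimal sketch below. The disturbance $\eta(t)$ is left as a user-supplied function, since its exact form is not specified in the excerpt; the sample signal and state used in the demonstration are illustrative only.

```python
import numpy as np

def example1_rhs(t, state, u, eta):
    """Right-hand side of the Example 1 plant (15); state = [z, x1, x2]."""
    z, x1, x2 = state
    e = eta(t)
    f_z = -(x1**2 + 0.5 * e**2) * z                                   # zero dynamics
    f0  = -1.5 * x1 - x2 + 1.5 * (x1 + x2) * (np.sin(x2) + 2.0) ** 2  # nominal drift
    df  = -x2 + e + z**2                                              # Delta f
    g0  = np.cos(x1) + 2.0                                            # nominal input gain
    dg  = np.sin(x2) - e                                              # Delta g
    dz, dx1, dx2 = f_z, x2, f0 + df + (g0 + dg) * u
    return np.array([dz, dx1, dx2])

# Single evaluation (illustrative): open-loop derivative at a sample state.
eta = lambda t: 0.5 * np.sin(t)
print(example1_rhs(0.0, np.array([0.1, 0.5, -0.2]), u=0.0, eta=eta))
```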
# 5.2 Example 2

In this example, the proposed control method is implemented on an inverted pendulum system subject to both internal and external nonlinear disturbances. The system dynamics are formulated as follows:

$$ \left\{ \begin{array}{l} \dot {z} = \underbrace {0 . 5 \eta (t) z} _ {f _ {z} (z, \eta)} \\ \dot {x} _ {1} = \underbrace {x _ {2}} _ {f _ {x _ {1}} (x _ {2})}, \\ \dot {x} _ {2} = \underbrace {- \frac {g}{l} \sin \left(x _ {1}\right) - \frac {b}{m l ^ {2}} x _ {2}} _ {f _ {0} \left(x _ {1}, x _ {2}\right)} + \underbrace {5 e ^ {- 0 . 3 t} + 0 . 5 \sin \left(x _ {1}\right) + 0 . 5 z} _ {\Delta f \left(x _ {1}, z, t\right)} + \underbrace {\frac {1}{m l ^ {2}}} _ {g _ {0}} u, \\ y = x _ {1}. \end{array} \right. \tag {16} $$

In this example, we construct the BE over a predefined rectangular exploration set $\mathcal{X}_E$ , chosen as a uniform grid over $[-0.5, 0.5] \times [-1.0, 1.0]$ . The observer gain is selected as $L = [2 \ 2 \ 1]^{\mathrm{T}}$ . The small positive constant used in the ESO update is set to $\epsilon = 0.03$ . The saturation bounds for the ESO states are configured as $M_1 = 1, M_2 = 1, M_3 = 3$ , and the saturation bounds for the nominal functions are selected as $M_f = M_g = 7$ . The nonlinear pendulum employed in the simulation is characterized by the parameters $m = 0.8 \, \mathrm{kg}$ , $l = 1.2 \, \mathrm{m}$ , $b = 0.2$ , $g = 9.81 \, \mathrm{m/s}^2$ . Here $m$ denotes the pendulum mass, $l$ the rod length, $b$ the viscous friction coefficient, and $g$ the gravitational acceleration.

Fig. 6. System state trajectories and their ESO estimates.

Performance Analysis. Fig. 6 shows that the ESO reconstructs all system states, including the aggregated disturbance term, with high accuracy. Even during fast transients and highly nonlinear phases, the estimated trajectories $\hat{x}_i$ remain tightly aligned with the true states, showing only minimal deviation. The inset plots further demonstrate that the ESO can track fast dynamics and suppress disturbances with rapid convergence.

Fig. 7. Evolution of the system states under the composite control framework and the standard ADP method.

From a theoretical perspective, Fig. 7 illustrates the main benefit of the proposed composite control framework over its time-triggered counterpart. The ETM updates only when the state deviation crosses a prescribed threshold, allowing the learning and control actions to respond only to relevant changes in the dynamics. As a result, the Event-Triggered (ET)-ADP achieves faster and more structured convergence, most notably in the $x_{2}$ response, and uses control and learning updates more efficiently than the uniformly sampled, time-triggered scheme. In this example, our mechanism achieved a $56\%$ reduction in computational cost. As depicted in Fig. 8, the proposed composite controller delivers a quick corrective action during the initial transient phase, which is expected for stabilizing the nonlinear pendulum. After the system reaches steady state, the control input rapidly decays and remains near zero, reflecting stable closed-loop behavior and low steady-state effort. Fig. 8 further shows that the ET-ADP controller exhibits smaller amplitude variations and faster convergence compared with the conventional time-triggered ADP. The time-triggered controller, by contrast, produces larger alternating control swings and wide input excursions, which are consistent with the overshoot and oscillatory behavior observed in Fig. 7. These results highlight the benefits of the ETM: by updating only when necessary, it avoids excessive corrective actions, suppresses destabilizing oscillations, and promotes a more stable, energy-efficient control behavior.

Fig. 8. Control signals generated by the proposed composite control framework, compared with those obtained from the periodic ADP strategy.

Fig. 9. Event trigger distribution.
Fig. 9 depicts the timing of the triggering events during the simulation. A cross indicates an instance at which the event-triggering condition prompts an actor-critic update.

# 6. CONCLUSION

An ESO-assisted ADP architecture with an ETM has been presented for uncertain nonlinear systems. In this framework, the ESO-based compensation scheme provides real-time estimation and removal of lumped uncertainties, while the augmented state formulation embeds the tracking error and system dynamics into the optimal control design. The ADP controller is then employed to approximate the optimal policy of the compensated subsystem through online learning. Simulation studies verify the framework's capability, showing that the controller maintains strong resistance to disturbances despite the reduced update frequency enabled by the ETM. The developed methodology will be further expanded to handle multi-input multi-output configurations in future investigations. Also, a complete hybrid-system stability proof that explicitly incorporates the triggering error will be developed in the extended journal version.
arxiv_math
2025-12-05T00:00:00Z
https://arxiv.org/pdf/2512.15735
{"title": "Deep Reinforcement Learning Optimization for Uncertain Nonlinear Systems via Event-Triggered Robust Adaptive Dynamic Programming", "raw_content": "# Deep Reinforcement Learning Optimization for Uncertain Nonlinear Systems via Event-Triggered Robust Adaptive Dynamic Programming\n\nNingwei Bai* Chi Pui Chan* Qichen Yin* Tengyang Gong** Yunda Yan* Zezhi Tang*\n\n* Department of Computer Science, University College London, Gower Street, London, WC1E 6BT United Kingdom (e-mail: zezhi.tang@ucl.ac.uk).\n\n** Department of Electrical and Electronic Engineering, University of Manchester, Oxford Rd, Manchester, M13 9PL United Kingdom\n\nAbstract: This work proposes a unified control architecture that couples a Reinforcement Learning (RL)-driven controller with a disturbance-rejection Extended State Observer (ESO), complemented by an Event-Triggered Mechanism (ETM) to limit unnecessary computations. The ESO is utilized to estimate the system states and the lumped disturbance in real time, forming the foundation for effective disturbance compensation. To obtain near-optimal behavior without an accurate system description, a value-iteration-based Adaptive Dynamic Programming (ADP) method is adopted for policy approximation. The inclusion of the ETM ensures that parameter updates of the learning module are executed only when the state deviation surpasses a predefined bound, thereby preventing excessive learning activity and substantially reducing computational load. A Lyapunov-oriented analysis is used to characterize the stability properties of the resulting closed-loop system. Numerical experiments further confirm that the developed approach maintains strong control performance and disturbance tolerance, while achieving a significant reduction in sampling and processing effort compared with standard time-triggered ADP schemes.\n\nKeywords: Reinforcement learning; Event-triggered control; Uncertain nonlinear systems; Adaptive dynamic programming\n\n# 1. INTRODUCTION\n\nLearning-based methods have become a fundamental paradigm in modern engineering systems, enabling algorithms to improve performance through data-driven adaptation without relying solely on explicit mathematical models. Over the past decade, advances in machine learning—particularly in function approximation, optimization, and representation learning—have significantly expanded the capability of intelligent systems operating under uncertainty, compared to traditional analytical methods Qin et al. (2022); Zhang et al. (2024); Hu et al. (2025). These approaches have been increasingly adopted in control, robotics, and even generative language models Lu et al. (2020); Zhao et al. (2024); Tang et al. (2025); Yao et al. (2025). However, conventional model-based techniques may be limited in their ability to handle nonlinearities, unknown disturbances, or incomplete system knowledge.\n\nReinforcement Learning (RL) has gained attention for complex decision-making and control in uncertain, dynamic environments Tang et al. (2024d). In control engineering, RL-based methods offer a data-driven alternative to classical model-based designs. This is useful when\n\naccurate system models are difficult to obtain. Among these methods, ADP integrates RL with optimal control theory. It facilitates near-optimal control of nonlinear systems by approximating value functions and control policies through function approximators. This eliminates the need to explicitly solve the Hamilton-Jacobi-Bellman (HJB) equation Lewis and Vrabie (2009). 
However, conventional ADP frameworks often rely on continuous or periodic updates to neural network parameters. These updates impose significant computational burdens and may lead to overfitting to transient disturbances or noise.\n\nEvent-triggered strategies have been widely adopted in diverse control applications, including networked and embedded systems, multi-agent coordination, and resource-constrained robotic platforms Onuoha et al. (2024b,a). At the meantime, the ETM has been widely employed in both control and ADP frameworks to reduce computational load (Han et al. (2024); Heemels et al. (2012); Tabuada (2007)). Unlike time-driven schemes, ETMs update only when systems meet a state- or error-based condition. State deviation or estimation error often directly triggers updates. This approach reduces redundant updates and preserves closed-loop stability (Dong et al. (2017); Xue et al. (2020); Onuoha et al. (2024b)). By limiting updates\n\nto key events, event-triggered ADP boosts efficiency and yields policies less sensitive to disturbances.\n\nDespite these advantages, engineers must ensure robustness against external disturbances and modeling uncertainties. In practice, environmental perturbations, unmodeled dynamics, nonlinear couplings, and parameter uncertainties cause disturbances. Many robust control approaches employ feedback to reduce perturbations rather than explicitly use feedforward compensation (Tang (2019); Tang et al. (2016, 2024a)). In this context, a ESO estimates the original states and accumulated interference in real time. This allows proactive compensation of parameter mismatches, unmodeled dynamics, and external perturbations in nonlinear systems s(Luo et al. (2020); Tang et al. (2024c); Han (2009); Chen et al. (2016); Ran et al. (2021); Pu et al. (2015); Tang et al. (2019)).\n\ninspired by the Active Disturbance Rejection Control (ADRC) philosophy (Gao (2003); Guo and Zhao (2013)), provide a powerful tool:\n\nRecent work combines ESO-based disturbance rejection with RL for uncertain nonlinear systems (Ran et al. (2022); Tang et al. (2024b)). However, these ESO-RL schemes primarily operate in a time-driven manner: both the controller and learning updates run continuously or periodically, lacking an event-triggered learning mechanism. Many continuous-time ADP designs also impose restrictive Persistence of Excitation (PE) conditions for parameter convergence (Jiang and Jiang (2012); Bian et al. (2017); Kamalapurkar et al. (2016)), making them hard to verify and enforce in practice.\n\nInspired by these observations, we develop a composite control framework for output-feedback control of uncertain nonlinear systems with lumped disturbances. The main contributions are summarized as follows:\n\n(1) A unified control structure incorporating ETM is developed, in which ESO-based state estimation, disturbance compensation, and controller updates occur only at triggering instants. The resulting unified composite control framework enables an aperiodic, computationally efficient implementation of output-feedback RL control. By integrating ESO-based disturbance rejection with event-triggered RL, this work establishes a unified architecture that is not addressed in the existing literature. We propose an event-triggered learning rule for a simulation-of-experience ADP scheme. In this scheme, critic and actor networks update only at triggering instants, using both instantaneous and Extrapolated Bellman Error (EBE). Unlike existing ESO-RL frameworks (Ran et al. 
(2022); Tang et al. (2024b)), which use continuous or periodic learning, our mechanism yields an aperiodic, data-efficient adaptation. The analysis shows practical stability and Uniform Ultimate Boundedness (UUB) are achieved without a classical PE condition (Jiang and Jiang (2012); Bian et al. (2017); Kamalapurkar et al. (2016)). It also avoids redundant, high-frequency updates.\n\nThe remainder of the paper is structured as follows: Section II presents the problem and the system model. Section III describes the proposed composite control framework.\n\nSection IV describes the overall composite control framework. The simulation results are demonstrated in Section V, and Section VI summarizes this paper and outlines potential directions for future research.\n\n# 2. PROBLEM FORMULATION\n\nIn this paper, we identify the control of a set of uncertain affine nonlinear systems described by\n\n$$\n\\left\\{ \\begin{array}{l} \\dot {z} = f _ {z} (x, z, \\eta), \\\\ \\dot {x} = A x + B [ f (x, z, \\eta) + g (x, z, \\eta) u ], \\\\ y = C x, \\end{array} \\right. \\tag {1}\n$$\n\nwhere $x = [x_1, \\ldots, x_n]^{\\mathrm{T}} \\in \\mathbb{R}^n$ denotes the state of the measured subsystem with relative degree $n$ ; $z \\in \\mathbb{R}^p$ represents the zero-dynamics state; $\\eta \\in \\mathbb{R}$ denotes an external disturbance or uncertain parameter; $u \\in \\mathbb{R}$ is the control input; $f_z: \\mathbb{R}^n \\times \\mathbb{R}^p \\times \\mathbb{R} \\to \\mathbb{R}^p$ is a smooth nonlinear mapping describing the evolution of zero dynamics; $f, g: \\mathbb{R}^n \\times \\mathbb{R}^p \\times \\mathbb{R} \\to \\mathbb{R}$ are uncertain nonlinear functions characterizing the input and drift dynamics gain of the $x$ subsystem; and $A \\in \\mathbb{R}^{n \\times n}$ , $B \\in \\mathbb{R}^{n \\times 1}$ and $C \\in \\mathbb{R}^{1 \\times n}$ are the standard companion matrices defining a nominal chain-of-integrators structure of the output dynamics.\n\nTo enable subsequent observer and controller design, we impose the following standard assumptions.\n\nAssumption 1. The external signal $\\eta(t)$ as well as its time derivative $\\dot{\\eta}(t)$ are bounded for all $t \\geq 0$ .\n\nAssumption 2. 
The zero dynamics $\\dot{z} = f_{z}(x,z,\\eta)$ with input $(x,\\eta)$ is Bounded-Input Bounded-State (BIBS) stable.\n\nIn this paper, the nonlinear system dynamics with uncertainty is modeled as follows:\n\n$$\nf (x, z, \\eta) = f _ {0} (x) + \\Delta f (x, z, \\eta),\n$$\n\n$$\ng (x, z, \\eta) = g _ {0} (x) + \\Delta g (x, z, \\eta),\n$$\n\nwhere $f_0, g_0: \\mathbb{R}^n \\to \\mathbb{R}$ denote the known nominal system dynamics; and $\\Delta f, \\Delta g: \\mathbb{R}^n \\times \\mathbb{R}^p \\times \\mathbb{R} \\to \\mathbb{R}$ represent unknown disturbances and model uncertainties that may depend on the full state of the system $(x, z)$ and the external signal $\\eta$ .\n\nFollowing the ADRC philosophy (Han (2009)), the general uncertainty is transferred to a broader state:\n\n$$\nx _ {n + 1} \\triangleq \\Delta f (x, z, \\eta) + \\Delta g (x, z, \\eta) u, \\tag {2}\n$$\n\nAccroding to this definition, the $n$ -th subsystem dynamics can be rewritten as $\\dot{x}_n = x_{n+1} + f_0(x) + g_0(x)u$ , so that the overall system becomes an $(n+1)$ th order augmented integrator chain perturbed by the unknown term $\\dot{x}_{n+1}$ .\n\nTo quantify performance, we consider the nominal compensated subsystem and assign the infinite-horizon cost functional\n\n$$\nJ \\left(x _ {0}\\right) = \\int_ {0} ^ {\\infty} \\left(Q (x (\\tau)) + u _ {0} (\\tau) ^ {T} R u _ {0} (\\tau)\\right) d \\tau , \\tag {3}\n$$\n\nwhere $x_0 = x(0)$ is the initial condition, $Q: \\mathbb{R}^n \\to \\mathbb{R}_+$ is a positive definite state penalty, $R > 0$ is a control-weighting matrix, and $u_0$ denotes the component of the input acting on the nominal dynamics after uncertainty compensation.\n\nThe associated optimal control problem is\n\n$$\nu _ {0} ^ {*} = \\arg \\min _ {u _ {0}} J (x _ {0}),\n$$\n\nand the optimal policy will be approximated online via the RL mechanism developed later.\n\nRemark 1. The purpose of this paper is to develop a ESO-based RL disturbance rejection scheme equipped with an ETM. The proposed controller aims to stabilize the system under lumped uncertainties while achieving near-optimal performance with reduced control update frequency.\n\n# 3.COMPOSITE CONTROL FRAMEWORK\n\n# 3.1 ESO Design\n\nFirst, a ESO is designed to predict both the state of the system and all disturbances, following the standard ADRC structure (Han (2009); Chen et al. 
(2016)):\n\n$$\n\\left\\{ \\begin{array}{l} \\dot {\\hat {x}} _ {i} = \\hat {x} _ {i + 1} + \\frac {l _ {i}}{\\epsilon^ {i}} (y - \\hat {x} _ {1}) \\quad i = 1, \\dots , n - 1, \\\\ \\dot {\\hat {x}} _ {n} = \\hat {x} _ {n + 1} + \\frac {l _ {n}}{\\epsilon^ {n}} (y - \\hat {x} _ {1}) + f _ {0} (\\hat {x}) + g _ {0} (\\hat {x}) u, \\quad (4) \\\\ \\dot {\\hat {x}} _ {n + 1} = \\frac {l _ {n + 1}}{\\epsilon^ {n + 1}} (y - \\hat {x} _ {1}), \\end{array} \\right.\n$$\n\nwhere $\\hat{x} = [\\hat{x}_1, \\dots, \\hat{x}_n, \\hat{x}_{n+1}]^{\\mathrm{T}}$ , $\\epsilon > 0$ is a small positive constant adjusting the observer bandwidth, and $L = [l_1, \\dots, l_{n+1}]^{\\mathrm{T}}$ is chosen that the following matrix is Hurwitz:\n\n$$\nE = \\left[ \\begin{array}{c c c c c} - l _ {1} & 1 & 0 & \\dots & 0 \\\\ - l _ {2} & 0 & 1 & \\dots & 0 \\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\ - l _ {n} & 0 & 0 & \\dots & 1 \\\\ - l _ {n + 1} & 0 & 0 & \\dots & 0 \\end{array} \\right] \\in \\mathbb {R} ^ {(n + 1) \\times (n + 1)}.\n$$\n\nHowever, since the observer gains are scaled by $\\epsilon^{-i}$ , a small $\\epsilon$ yields a high bandwidth ESO that responds rapidly to state deviations but may induce a pronounced peaking phenomenon during the initial transient. To mitigate this effect, we employ a widely used smooth saturation technique to constrain the observer outputs. Let the saturated observer states be defined as\n\n$$\n\\bar {x} _ {i} = M _ {i} s \\left(\\frac {\\hat {x} _ {i}}{M _ {i}}\\right), \\quad i = 1, \\ldots , n + 1,\n$$\n\nwhere $M_{i} > 0$ are design bounds selected so that the saturation remains inactive during steady-state operation, and $s(\\cdot)$ is an odd, continuously differentiable saturation-like function given by\n\n$$\ns (v) = \\left\\{ \\begin{array}{l l} v, & 0 \\leq v \\leq 1 \\\\ v + \\frac {v - 1}{\\varepsilon} - \\frac {v ^ {2} - 1}{2 \\varepsilon}, & 1 \\leq v \\leq 1 + \\varepsilon \\\\ 1 + \\frac {\\varepsilon}{2}, & v > 1 + \\varepsilon , \\end{array} \\right.\n$$\n\nwhich satisfies $0 \\leq s'(v) \\leq 1$ , $|s(v) - \\mathrm{sat}(v)| \\leq \\frac{\\varepsilon}{2}$ , $\\forall v \\in \\mathbb{R}$ . For later use, we denote $\\bar{x} = [\\bar{x}_1, \\dots, \\bar{x}_{n+1}]^{\\mathrm{T}} \\in \\mathbb{R}^{n+1}$ and observe that $\\dot{\\bar{x}}_i = s' \\left( \\frac{\\hat{x}_i}{M_i} \\right) \\dot{\\hat{x}}_i$ . $i = 1, \\dots, n + 1$ .\n\n# 3.2 ADP Design\n\nSecond, we present the ADP design. An actor-critic architecture that is based on a neural network is employed, in which the critic approximates the optimal value function and the actor represents the corresponding optimal policy.\n\nTo facilitate the theoretical development of the ADP-based controller and the associated optimized control law, the following standard assumptions are introduced:\n\nAssumption 3. 
# 3.2 ADP Design

Second, we present the ADP design. A neural-network-based actor-critic architecture is employed, in which the critic approximates the optimal value function and the actor represents the corresponding optimal policy.

To facilitate the theoretical development of the ADP-based controller and the associated optimized control law, the following standard assumptions are introduced:

Assumption 3. There exist constants $g_{\mathrm{min}}, g_{\mathrm{max}} > 0$ such that

$$
g_{\min} \leq \inf_{x \in \mathcal{X}} |g_{0}(x)| \leq \sup_{x \in \mathcal{X}} |g_{0}(x)| \leq g_{\max}.
$$

Moreover, on $\Omega \triangleq \mathcal{X} \times \mathcal{Z} \times \mathcal{W}$ (where $\mathcal{W}$ is a compact set containing all admissible values of $\eta$ from Assumption 2, $\mathcal{X}$ is a compact set containing all admissible system states $x$, and $\mathcal{Z}$ is a bounded positively invariant set for the zero-dynamics state $z$), the relative mismatch between the true and nominal input gains is bounded:

$$
\kappa_{g} \triangleq \sup_{(x, z, \eta) \in \Omega} \frac{|g(x, z, \eta) - g_{0}(x)|}{|g_{0}(x)|} < 1.
$$

Based on the cost (3), the associated value function is

$$
V(x) = \min_{u} J(x) = \min_{u} \int_{0}^{\infty} \left(Q(x(\tau)) + u(\tau)^{T} R\, u(\tau)\right) d\tau.
$$

The optimal value function $V(x)$ satisfies the HJB equation

$$
0 = \min_{u} \left[ Q(x) + u^{T} R u + \nabla V(x)^{T} \left(f(x, z, \eta) + g(x, z, \eta) u\right) \right].
$$

Minimizing the right-hand side with respect to $u$ gives the corresponding optimal control law

$$
u^{*}(x) = -\frac{1}{2} R^{-1} g(x, z, \eta)^{T} \nabla V(x). \tag{5}
$$

Substituting (5) into the HJB equation yields

$$
0 = Q(x) + \nabla V(x)^{T} f(x, z, \eta) - \frac{1}{4} \nabla V(x)^{T} g(x, z, \eta) R^{-1} g(x, z, \eta)^{T} \nabla V(x).
$$

This expression provides the optimality condition for the value function and forms the basis for the subsequent critic approximation in the ADP framework.
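As a hedged, self-contained sanity check (not part of the paper), the scalar linear-quadratic special case below illustrates how the HJB optimality condition and the control law (5) reduce to a Riccati equation and the familiar LQR gain; the symbols `a`, `b`, `q`, `r` are assumptions introduced only for this example.

```python
# Worked special case: xdot = a*x + b*u, Q(x) = q*x^2, R = r.
# With the ansatz V(x) = p*x^2 the HJB reduces to a scalar Riccati
# equation and (5) becomes the LQR law u* = -(b*p/r) * x.
import sympy as sp

a, b, q, r, p, x = sp.symbols('a b q r p x', positive=True)
V = p * x**2
hjb = q * x**2 + sp.diff(V, x) * (a * x) - sp.Rational(1, 4) * sp.diff(V, x)**2 * b**2 / r
riccati = sp.simplify(hjb / x**2)           # q + 2*a*p - b**2*p**2/r = 0
p_sol = sp.solve(sp.Eq(riccati, 0), p)      # stabilizing (positive) root
u_star = -sp.Rational(1, 2) * (1 / r) * b * sp.diff(V, x)   # equation (5)
print(riccati, p_sol, sp.simplify(u_star))
```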
# 3.3 Training Process

To avoid explicitly solving the HJB equation, we adopt an actor-critic architecture enhanced with an ESO-based extrapolation mechanism (Ran et al. (2022)). The critic network approximates the value function using a linearly parameterized structure

$$
V\left(\bar{x}; W_{v}\right) = W_{v}^{\mathrm{T}} \phi(\bar{x}),
$$

where $W_{v}$ denotes the critic weight vector and $\phi(\bar{x})$ represents a vector of basis functions.

The actor approximates the control policy as

$$
u_{0}(\bar{x}; W_{a}) = -\frac{1}{2} R^{-1} g_{0}(\bar{x}) B^{\mathrm{T}} \phi_{x}^{\mathrm{T}}(\bar{x}) W_{a},
$$

where $W_{a}$ is the actor weight vector and $\phi_x$ is the gradient of the basis function vector.

Using the ESO-estimated state $\hat{x}$, the Instantaneous Bellman Error (IBE) is defined as

$$
\varepsilon_{t} \triangleq V_{x}(\bar{x}, W_{a}) \left[ A\bar{x} + B\left(f_{0}(\bar{x}) + g_{0}(\bar{x}) u_{0}(\bar{x}, W_{a})\right) \right] + Q(\bar{x}) + u_{0}^{T}(\bar{x}, W_{v}) R\, u_{0}(\bar{x}, W_{v}), \tag{6}
$$

which measures the deviation of the current actor-critic pair from the HJB optimality condition along the real trajectory.

To enhance state-space coverage and improve robustness, an extrapolated dataset $\mathcal{X}_E = \{\xi^i\}_{i=1}^N$ is generated over the admissible domain, and the approximate Extrapolated Bellman Error (EBE) is defined as follows:

$$
\varepsilon_{i} \triangleq V_{x}(\xi^{i}, W_{a}) \left[ A\xi^{i} + B\left(f_{0}(\xi^{i}) + g_{0}(\xi^{i}) u_{0}(\xi^{i}, W_{a})\right) \right] + Q\left(\xi^{i}\right) + u_{0}^{T}\left(\xi^{i}, W_{v}\right) R\, u_{0}\left(\xi^{i}, W_{v}\right). \tag{7}
$$

For least-squares type learning, we introduce the regressor

$$
\zeta = \phi_{x}(\bar{x}) \left[ A\bar{x} + B\left(f_{0}(\bar{x}) + g_{0}(\bar{x}) u_{0}(\bar{x}, W_{a})\right) \right],
$$

and the normalization term

$$
\sigma = 1 + \rho\, \zeta^{\mathrm{T}} \Psi \zeta,
$$

where $\rho > 0$ is a constant normalization gain and the gain matrix $\Psi$ evolves by

$$
\dot{\Psi} = \left(\gamma \Psi - \alpha_{v1} \frac{\Psi \zeta \zeta^{\mathrm{T}} \Psi}{\sigma^{2}}\right) \mathbf{1}_{\{\|\Psi\| \leq \delta_{1}\}}, \quad \|\Psi(0)\| \leq \delta_{1},
$$

where $\gamma > 0$ is a constant forgetting factor, and the weight update for the critic is given by

$$
\dot{W}_{v} = -\alpha_{v1} \Psi \frac{\zeta}{\sigma} \varepsilon_{t} - \frac{\alpha_{v2}}{N} \Psi \sum_{i=1}^{N} \frac{\zeta_{i}}{\sigma_{i}} \varepsilon_{i}, \tag{8}
$$

where $\delta_1 > 0$ is a saturation constant and $\alpha_{v1}, \alpha_{v2} > 0$ are constant adaptation gains, yielding a least-squares-type adaptation with improved convergence and numerical robustness.
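To illustrate the flavour of the learning update, the sketch below performs a single critic step on a toy second-order nominal model with a quadratic basis. Everything here (the matrices `A`, `B`, the basis `phi`, the gains, and the use of the critic weights in the value-gradient term of the Bellman error) is an illustrative assumption, and the extrapolation term of (8) is omitted; this is a sketch of a standard actor-critic Bellman-error step, not the paper's exact implementation.

```python
# Hedged sketch of one critic update along the lines of (6) and (8),
# for n = 2 with basis phi(x) = [x1^2, x1*x2, x2^2]. All constants and
# model choices are illustrative assumptions.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])       # nominal chain: x1dot = x2
B = np.array([0.0, 1.0])                      # input enters the last state
Q = np.diag([1.0, 1.0]); R = 1.0

def phi(x):    return np.array([x[0]**2, x[0]*x[1], x[1]**2])
def phi_x(x):  return np.array([[2*x[0], 0.0],
                                [x[1],   x[0]],
                                [0.0,    2*x[1]]])   # d phi / d x

def u0(x, Wa, f0, g0):
    # actor policy: u0 = -1/2 R^{-1} g0 B^T phi_x^T Wa
    return -0.5 * (1.0 / R) * g0(x) * B @ phi_x(x).T @ Wa

def critic_step(x, Wv, Wa, Psi, f0, g0, alpha_v1=5.0, rho=1.0, dt=1e-3):
    u = u0(x, Wa, f0, g0)
    xdot = A @ x + B * (f0(x) + g0(x) * u)    # nominal closed-loop drift
    zeta = phi_x(x) @ xdot                     # regressor
    sigma = 1.0 + rho * zeta @ Psi @ zeta      # normalization
    eps_t = Wv @ zeta + x @ Q @ x + u * R * u  # instantaneous Bellman error (cf. (6))
    Wv_new = Wv - dt * alpha_v1 * (Psi @ zeta / sigma) * eps_t   # first term of (8)
    return Wv_new, eps_t

# Example usage with toy nominal dynamics (assumed):
Wv, Wa, Psi = np.zeros(3), np.zeros(3), np.eye(3)
f0 = lambda x: -x[0] - x[1]; g0 = lambda x: 1.0
Wv, e = critic_step(np.array([0.5, -0.2]), Wv, Wa, Psi, f0, g0)
```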
The regressor and normalization term used for constructing the EBE at the extrapolated sample points are defined as

$$
\zeta_{i} = \phi_{x}(\xi^{i}) \left[ A\xi^{i} + B\left(f_{0}(\xi^{i}) + g_{0}(\xi^{i}) u_{0}(\xi^{i}, W_{a})\right) \right],
$$

$$
\sigma_{i} = 1 + \rho\, \zeta_{i}^{\mathrm{T}} \Psi \zeta_{i},
$$

and the update law for the actor weights is

$$
\dot{W}_{a} = -\alpha_{c1}(W_{a} - W_{v}) - \alpha_{c2} W_{a} + \frac{\alpha_{v1} \mathcal{H}_{t}^{T} W_{a} \zeta^{T}}{4\sigma} W_{v} + \sum_{i=1}^{N} \frac{\alpha_{v1} \mathcal{H}_{i}^{T} W_{a} \zeta_{i}^{T}}{4 N \sigma_{i}} W_{v}, \tag{9}
$$

where $\alpha_{c1}, \alpha_{c2} > 0$ are constant adaptation gains and

$$
\mathcal{H}_{t} \triangleq \phi_{x}(\bar{x}) B g_{0}(\bar{x}) R^{-1} g_{0}^{T}(\bar{x}) B^{T} \phi_{x}^{T}(\bar{x}),
$$

$$
\mathcal{H}_{i} \triangleq \phi_{x}(\xi^{i}) B g_{0}(\xi^{i}) R^{-1} g_{0}^{T}(\xi^{i}) B^{T} \phi_{x}^{T}(\xi^{i}).
$$

Finally, the resulting approximate optimal policy is expressed as

$$
u = u_{0}(\bar{x}, W_{a}) - \frac{\bar{x}_{n+1}}{g_{0}(\bar{x})}. \tag{10}
$$

# 3.4 Event-Triggered Mechanism

Let $\tau_{k}$ denote the $k$-th triggering instant, and let $x(\tau_k)$ be the most recently transmitted state held by a Zero-Order Hold (ZOH) device. The event-triggering error is defined as

$$
e(t) = x\left(\tau_{k}\right) - x(t), \quad t \in \left[\tau_{k}, \tau_{k+1}\right), \tag{11}
$$

which measures the mismatch between the last transmitted state and the current state. During each inter-event interval, the control policy and the neural-network parameters are held constant, i.e., $\pi(t) = \pi(x(\tau_k))$, and the actor-critic weights are frozen. When $\|e(t)\|$ exceeds a prescribed threshold, a new event is generated, the state is transmitted, and both the control input and the network weights are updated.

To guarantee stability while avoiding unnecessary updates, the triggering threshold is chosen in a state-dependent and weight-adaptive form:

$$
\delta(t) = \sqrt{\frac{\lambda_{\min}(Q)\, \beta}{g_{\max}^{2} L_{a}^{2} \|W_{a}(t)\|^{2}}}\, \|x(t)\|, \tag{12}
$$

where $\lambda_{\mathrm{min}}(Q)$ is the minimum eigenvalue of the state-weighting matrix $Q$, $\beta \in (0,1)$ is a design constant introduced in the Lyapunov analysis, $g_{\max}$ is a known upper bound of the nominal input gain $g_0(x)$, $L_{a}$ is the Lipschitz constant of the actor activation function, and $\|W_a(t)\|$ is the Euclidean norm of the actor weight vector. This construction links the triggering sensitivity to both the state magnitude and the current actor parameters: for large states or large weight norms, the condition becomes more stringent, enforcing timely updates; near the origin with well-converged weights, it becomes more relaxed, reducing redundant transmissions.

The triggering condition is explicitly given by

$$
\left\|e(t)\right\|^{2} > \delta(t)^{2}. \tag{13}
$$

An event is generated only when this inequality is satisfied. Under this rule, all closed-loop signals remain uniformly bounded, while the number of updates is significantly reduced compared with a periodic scheme (Dong et al. (2017)).

# 4. INTEGRATED CONTROL FRAMEWORK

A unified control pipeline is depicted in Fig. 1. At each control cycle, the ESO updates the augmented state estimates, the event-triggered monitor tracks the deviation between the current and last transmitted states, and the ADP either updates the actor-critic networks or holds the previous control input. A detailed procedure is summarized in Algorithm 1.

![](images/794f5450d02a9afd202d2097bafc0ef302b476509e3ca588f244415aa39962c7.jpg)
Fig. 1. Pipeline of the composite control framework.
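The event test at the heart of the pipeline is a direct transcription of (12)-(13); in the sketch below the constants `lam_min_Q`, `beta`, `g_max`, and `L_a` are placeholder values, and the small floor on $\|W_a\|^2$ is only there to keep the toy code well defined.

```python
# Hedged sketch of the event-trigger test (12)-(13); constants are
# illustrative placeholders, not the paper's tuned values.
import numpy as np

def threshold(x, Wa, lam_min_Q=1.0, beta=0.5, g_max=3.0, L_a=2.0):
    """State- and weight-dependent threshold delta(t) from (12)."""
    denom = g_max**2 * L_a**2 * max(np.dot(Wa, Wa), 1e-12)
    return np.sqrt(lam_min_Q * beta / denom) * np.linalg.norm(x)

def triggered(x, x_last, Wa):
    """Condition (13): transmit and update only if ||e||^2 > delta^2."""
    e = x_last - x
    return e @ e > threshold(x, Wa)**2

# Inside the control loop the last transmitted state is held by a ZOH:
#   if triggered(x, x_last, Wa): update ESO/actor-critic, set x_last = x
#   else:                        keep u(t) = u(tau_k)
```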
Assumption 4. Under the proposed ETM, a Minimum Inter-Event Time (MIET) $\tau_{\mathrm{min}} > 0$ is enforced such that $\tau_{k+1} - \tau_k \geq \tau_{\mathrm{min}}$ for all $k \in \mathbb{N}$, where $\tau_k$ denotes the sequence of triggering instants. This condition prevents Zeno behavior and guarantees that the number of triggering events in any finite time interval remains finite.

Algorithm 1 Composite Control Framework
1: Initialization
2: Set critic and actor weights $W_{v}(0)$ and $W_{a}(0)$
3: Initialize ESO states $\hat{x}(0)$ in (4)
4: Set initial triggering instant $\tau_0 = 0$ and store $x(\tau_0)$ and $u(\tau_0)$
5: Generate extrapolation set $\mathcal{X}_E$ for computing the EBE
6:
7: Online Control Loop
8: At each time $t$:
9: Update the ESO via (4) to obtain $\hat{x}(t)$, including the disturbance estimate $\hat{x}_{n+1}(t)$
10: Compute the triggering error $e(t)$ using (11)
11: Evaluate the threshold $\delta(t)$ via (12)
12: Check the triggering condition (13)
13: If the triggering condition is satisfied then
14: Compute the IBE $\varepsilon_t$ using (6)
15: Compute the EBE $\varepsilon_i$ using (7)
16: Update critic weights $W_{v}$ via (8)
17: Update actor weights $W_{a}$ via (9)
18: Compute the new control input $u$ using (10)
19: Set $\tau_{k+1} = t$ and store $\hat{x}(\tau_{k+1})$ and $u(\tau_{k+1})$
20: Else
21: Hold $u(t) = u(\tau_k)$ via the ZOH
22: End if
23: Apply $u(t)$ to the plant dynamics (1)

Theorem 1. Consider the composite control architecture consisting of the uncertain nonlinear system (1), the ESO (4), the ADP-based controller (10), and the proposed ETM. Suppose that: (i) Assumptions 1-4 hold; (ii) the observer gain vector is chosen such that the ESO error dynamics are globally asymptotically stable; (iii) the uncertainty compensation renders the plant dynamics equivalent to the nominal model used in the ADP design; and (iv) the estimation errors of the actor-critic neural-network weights are uniformly ultimately bounded. Then all closed-loop signals are uniformly ultimately bounded, and the state $x(t)$ converges to a small neighborhood of the origin.

Proof. We provide a sketch of the argument. Consider the Lyapunov candidate

$$
V = V^{*}(x) + \frac{1}{2} (\Theta_{v}^{e})^{\mathrm{T}} \Gamma^{-1} \Theta_{v}^{e} + \frac{1}{2} (\Theta_{c}^{e})^{\mathrm{T}} \Theta_{c}^{e},
$$

where $V^{*}(x)$ is the optimal value function, and $\Theta_v^e, \Theta_c^e$ denote the critic and actor weight errors, respectively.

On each inter-event interval $[\tau_k, \tau_{k+1})$, the ESO, the control policy, and the actor-critic weights are fixed. Under the proposed control law, standard Lyapunov analysis yields

$$
\dot{V} \leq -\alpha \|x\|^{2} + \mathcal{O}(\varepsilon),
$$

for some $\alpha > 0$ and sufficiently small approximation and estimation errors, implying boundedness of all closed-loop variables on each inter-event interval. At each triggering instant, the ESO correction and the actor-critic updates are designed so that $V$ does not increase, which ensures UUB of $(x, \Theta_v^e, \Theta_c^e)$.

By Assumption 4, the inter-event times satisfy $\tau_{k+1} - \tau_k \geq \tau_{\mathrm{min}} > 0$, which excludes Zeno behavior and guarantees that only finitely many events occur on any finite time interval.

Combining these properties shows that the composite closed-loop system is uniformly ultimately bounded and that $x(t)$ converges to a small neighborhood of the origin.
Remark 2. The above Lyapunov analysis is carried out only on the continuous-time closed loop between triggering instants. The event-triggering error $e(t)$ is not explicitly included in the Lyapunov function. As a result, the analysis provides practical stability within each inter-event interval, but does not constitute a full hybrid-system proof. A rigorous global stability proof would require augmenting the Lyapunov function with $e(t)$ and deriving an ISS-type inequality of the form

$$
\dot{V} \leq -k_{1} \|x\|^{2} + k_{2} \|e\|^{2}, \tag{14}
$$

where $k_{1} > 0$ is the nominal decay rate of the Lyapunov function and $k_{2} > 0$ quantifies how the Lyapunov derivative is affected by the measurement error, together with a triggering rule guaranteeing $\|e\| \leq k_{0} \|x\|$ and $k_{2} k_{0}^{2} < k_{1}$, where $k_{0} > 0$ is the design parameter in the triggering condition that restricts the size of $e(t)$ relative to $x(t)$. A full stability analysis will be included in an extended journal version.

# 5. SIMULATION RESULTS

In what follows, two numerical case studies are conducted to evaluate the capability and robustness of the proposed composite control framework.

# 5.1 Example 1

To demonstrate the controller performance, we first consider a third-order uncertain nonlinear system. The real plant is given by

$$
\left\{ \begin{array}{l}
\dot{z} = \underbrace{-\left(x_{1}^{2} + 0.5\eta^{2}\right) z}_{f_{z}(x, z, \eta)}, \\
\dot{x}_{1} = x_{2}, \\
\dot{x}_{2} = \underbrace{-1.5 x_{1} - x_{2} + 1.5\left(x_{1} + x_{2}\right)\left(\sin\left(x_{2}\right) + 2\right)^{2}}_{f_{0}(x)} \\
\quad + \underbrace{\left(-x_{2} + \eta + z^{2}\right)}_{\Delta f(x, z, \eta)} + \left[\underbrace{\left(\cos\left(x_{1}\right) + 2\right)}_{g_{0}(x)} + \underbrace{\left(\sin\left(x_{2}\right) - \eta\right)}_{\Delta g(x, z, \eta)}\right] u, \\
y = x_{1}.
\end{array} \right. \tag{15}
$$

In this example, we construct the Bellman Error (BE) over a uniformly discretized exploration set $\mathcal{X}_E$, defined as $\mathcal{X}_E = [-2, 2]_{0.5} \times [-2, 2]_{0.5}$. The ESO parameters are selected as follows. The observer gain vector is set to $L = [2 \ 2 \ 1]^{\mathrm{T}}$. The observer bandwidth parameter is set to $\epsilon = 0.03$ to accelerate ESO convergence while maintaining robustness against measurement noise.

The saturation bounds for the ESO outputs are chosen as $M_1 = M_2 = M_3 = 3$. The nominal-function saturation limits are selected as $M_f = 7$, $M_g = 3$, ensuring boundedness of the ESO-based control law.
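For reference, the Example 1 plant (15) and the exploration grid translate directly into code; the sketch below is only a transcription of the equations above (the numerical integration scheme and the reading of the subscript $0.5$ as the grid spacing are assumptions).

```python
# Sketch of the Example 1 plant (15); the decomposition into f0, g0,
# Delta f, Delta g follows the equation above.
import numpy as np

def plant_rhs(t, state, u, eta):
    z, x1, x2 = state
    f0 = -1.5*x1 - x2 + 1.5*(x1 + x2)*(np.sin(x2) + 2.0)**2
    df = -x2 + eta + z**2                  # Delta f(x, z, eta)
    g0 = np.cos(x1) + 2.0                  # nominal input gain
    dg = np.sin(x2) - eta                  # Delta g(x, z, eta)
    zdot  = -(x1**2 + 0.5*eta**2) * z      # zero dynamics
    x1dot = x2
    x2dot = f0 + df + (g0 + dg) * u
    return np.array([zdot, x1dot, x2dot])

# Uniform extrapolation grid X_E = [-2, 2]_{0.5} x [-2, 2]_{0.5}
# (subscript read as grid spacing):
grid = np.arange(-2.0, 2.0 + 1e-9, 0.5)
X_E = np.array([[a, b] for a in grid for b in grid])
```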
Performance Analysis. Fig. 2 demonstrates that the ESO performs effectively for the considered nonlinear system. The observer-generated state trajectories closely track the actual system states, confirming the accuracy and reliability of the proposed observer.

![](images/a4872d204eccef15e9bffa350e3a3fac26d0d53f1b9e84d87c32831427d65c64.jpg)
Fig. 2. System state trajectories and their ESO estimates.

Fig. 3 shows that the control input $u$ under the proposed framework exhibits significant activity during the initial transient period due to uncertainties. After this short transient, the control input settles quickly and stays close to zero, maintaining stable behavior and indicating that the proposed control strategy achieves fast transient response and effective disturbance rejection.

To quantify the computational efficiency of the proposed ETM, we define the update-saving ratio as $\eta = \frac{N_{\mathrm{skipped}}}{N_{\mathrm{total}}} \times 100\%$, where $N_{\mathrm{skipped}}$ denotes the number of skipped control updates due to the triggering condition not being satisfied and $N_{\mathrm{total}} = N_{\mathrm{skipped}} + N_{\mathrm{updated}}$ represents the total number of potential update instants. Based on the simulation study, an update-saving rate of $72\%$ is obtained, demonstrating a significant reduction in computational load while maintaining system stability.

As shown in Fig. 3, both $x_{1}$ and $x_{2}$ remain stable and converge to the equilibrium.

As illustrated in Fig. 4, the control input produced by the composite controller stays well behaved and within bounds, even when roughly $71\%$ of its possible update instants are omitted. In contrast to the periodically updated ADP (red dashed), the ETM greatly reduces the number of control updates while maintaining comparable transient response and steady-state performance.

Fig. 5 shows the distribution of triggering instants over the simulation horizon. Each cross mark represents an execution of the actor-critic update when the event-triggering condition is satisfied.

![](images/b7af455997a79ab0168c79b279f8c368df3cf3596359228fc943ced84b5ba085.jpg)
Fig. 3. State trajectories for the composite control framework in Example 1.

![](images/a77928652affc340566b53c8bf43db87637ed42be4be1718c0f12504a6dbea4a.jpg)
Fig. 4. Control signal of the composite control framework compared with that generated by the periodic ADP strategy.

![](images/550618587fd4d944ff0b8e7f2deeee831a2cc9f223a892e58ed541435c9de21c.jpg)
Fig. 5. Event trigger distribution.

# 5.2 Example 2

In this example, the proposed control method is implemented on an inverted pendulum system subject to both internal and external nonlinear disturbances. The system dynamics are formulated as follows:

![](images/56b01022adc4edc26f87ef1907c7114fb55999bb5b8bbe7442073dbbdffb8dd3.jpg)

![](images/c33789c340d82adfe5b21256da5fd5138af6037a34ffb5c172c391055a9cd0eb.jpg)

![](images/cd324f80dbd3ae2049b60dee949fe851be503a11492b03c0052fafb4433cb377.jpg)
Fig. 6. System state trajectories and their ESO estimates.

$$
\left\{ \begin{array}{l}
\dot{z} = \underbrace{0.5\, \eta(t)\, z}_{f_{z}(z, \eta)}, \\
\dot{x}_{1} = \underbrace{x_{2}}_{f_{x_{1}}(x_{2})}, \\
\dot{x}_{2} = \underbrace{-\frac{g}{l}\sin\left(x_{1}\right) - \frac{b}{m l^{2}} x_{2}}_{f_{0}(x_{1}, x_{2})} \\
\quad + \underbrace{5 e^{-0.3 t} + 0.5\sin\left(x_{1}\right) + 0.5 z}_{\Delta f(x_{1}, z, t)} + \underbrace{\frac{1}{m l^{2}}}_{g_{0}} u, \\
y = x_{1}.
\end{array} \right. \tag{16}
$$
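As with Example 1, the pendulum dynamics (16) translate directly into a simulation right-hand side; the sketch below is a plain transcription of the equation above using the parameter values reported for this example, with the integration scheme left as an assumption.

```python
# Sketch of the Example 2 pendulum plant (16) with the stated parameters
# m = 0.8 kg, l = 1.2 m, b = 0.2, g = 9.81 m/s^2.
import numpy as np

m, l, b, grav = 0.8, 1.2, 0.2, 9.81

def pendulum_rhs(t, state, u, eta):
    z, x1, x2 = state
    f0 = -(grav / l) * np.sin(x1) - (b / (m * l**2)) * x2
    df = 5.0 * np.exp(-0.3 * t) + 0.5 * np.sin(x1) + 0.5 * z
    g0 = 1.0 / (m * l**2)
    zdot  = 0.5 * eta * z
    x1dot = x2
    x2dot = f0 + df + g0 * u
    return np.array([zdot, x1dot, x2dot])
```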
In this example, we construct the BE over a predefined rectangular exploration set $\mathcal{X}_E = [-2, 2]_{0.5} \times [-5, 5]_{1.0}$. The observer gain is selected as $L = [2 \ 2 \ 1]^{\mathrm{T}}$, and the observer bandwidth parameter used in the ESO update is set to $\epsilon = 0.03$. The saturation bounds for the ESO states are configured as $M_1 = 1$, $M_2 = 1$, $M_3 = 3$, and the saturation bounds for the nominal functions are selected as $M_f = M_g = 7$.

The nonlinear pendulum employed in the simulation is characterized by the parameters $m = 0.8 \, \mathrm{kg}$, $l = 1.2 \, \mathrm{m}$, $b = 0.2$, $g = 9.81 \, \mathrm{m/s}^2$. Here $m$ denotes the pendulum mass, $l$ the rod length, $b$ the viscous friction coefficient, and $g$ the gravitational acceleration.

Performance Analysis. Fig. 6 shows that the ESO reconstructs all system states, including the aggregated disturbance term, with high accuracy. Even during fast transients and highly nonlinear phases, the estimated trajectories $\hat{x}_i$ remain tightly aligned with the true states, showing only minimal deviation. The inset plots further demonstrate that the ESO can track fast dynamics and suppress disturbances with rapid convergence.

![](images/2c0533392f870bd64294b3f5f25755f75813f4b4e56affe9b9b3b13bf99672bb.jpg)

![](images/d4a036ebf42d7fc13d65033aed2ecfd5797dee38012097a1d504581bd2b047bc.jpg)
Fig. 7. Evolution of the system states under the composite control framework and the standard ADP method.

From a theoretical perspective, Fig. 7 illustrates the main benefit of the proposed composite control framework over its time-triggered counterpart. The ETM updates only when the state deviation crosses a prescribed threshold, allowing the learning and control actions to respond only to relevant changes in the dynamics. As a result, the Event-Triggered (ET)-ADP achieves faster and more structured convergence, most notably in the $x_{2}$ response, and uses control and learning updates more efficiently than the uniformly sampled, time-triggered scheme. In this example, our mechanism achieved a $56\%$ reduction in computational cost.

As depicted in Fig. 8, the proposed composite controller delivers a quick corrective action during the initial transient phase, which is expected for stabilizing the nonlinear pendulum. After the system reaches steady state, the control input rapidly decays and remains near zero, reflecting stable closed-loop behavior and low steady-state effort.

Fig. 8 further shows that the ET-ADP controller exhibits smaller amplitude variations and faster convergence compared with the conventional time-triggered ADP. The time-triggered controller, by contrast, produces larger alternating control swings and wide input excursions, which are consistent with the overshoot and oscillatory behavior observed in Fig. 7. These results highlight the benefits of the ETM: by updating only when necessary, it avoids excessive corrective actions, suppresses destabilizing oscillations, and promotes more stable, energy-efficient control behavior.

![](images/7270ace3c943ea96f6380a7d04374b5602cac0f61bd8d8e0384fc6c98da9a698.jpg)
Fig. 8. Control signals generated by the proposed composite control framework, compared with those obtained from the periodic ADP strategy.

![](images/5551dc83bfad89821e6cd17399dfccb5aa34999f7f39a73bf05be6a647cb0d48.jpg)
Fig. 9. Event trigger distribution.

Fig. 9 depicts the timing of the triggering events during the simulation. A cross indicates an instance where the event-triggering condition prompts an actor-critic update.
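For completeness, the update-saving ratio defined in Section 5.1 is the following one-line computation; the sample numbers in the comment are made up for illustration.

```python
# Direct transcription of the update-saving ratio from Section 5.1:
#   ratio = N_skipped / (N_skipped + N_updated) * 100%.
def update_saving_ratio(n_skipped: int, n_updated: int) -> float:
    total = n_skipped + n_updated
    return 100.0 * n_skipped / total if total else 0.0

# e.g. update_saving_ratio(720, 280) -> 72.0 (illustrative numbers)
```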
# 6. CONCLUSION

An ESO-assisted ADP architecture with an ETM has been presented for uncertain nonlinear systems. In this framework, the ESO-based compensation scheme provides real-time estimation and removal of lumped uncertainties, while the augmented state formulation embeds the tracking error and system dynamics into the optimal control design. The ADP controller is then employed to approximate the optimal policy of the compensated subsystem through online learning. Simulation studies verify the framework's capability, showing that the controller maintains strong disturbance rejection despite the reduced update frequency enabled by the ETM. The developed methodology will be further extended to multi-input multi-output configurations in future investigations. In addition, a complete hybrid-system stability proof that explicitly incorporates the triggering error will be developed in the extended journal version.

# REFERENCES

Bian, T., Jiang, Z., and Jiang, Y. (2017). A reinforcement learning approach to regulation of continuous-time systems. IEEE Transactions on Automatic Control, 62(1), 113-128.
Chen, W., Yang, J., Guo, L., and Li, S. (2016). Disturbance-observer-based control and related methods: an overview. IEEE Transactions on Industrial Electronics, 63(2), 1083-1095.
Dong, L., Zhong, X., Sun, C., and He, H. (2017). Event-triggered adaptive dynamic programming for continuous-time systems with control constraints. IEEE Transactions on Neural Networks and Learning Systems, 28(8), 1941-1953.
Gao, Z. (2003). Scaling and bandwidth-parameterization based controller tuning. In Proceedings of the American Control Conference, 4989-4996. Denver, CO.
Guo, B. and Zhao, Z. (2013). On the convergence of the nonlinear active disturbance rejection control for MIMO systems. SIAM Journal on Control and Optimization, 51(2), 1727-1757.
Han, J. (2009). From PID to active disturbance rejection control. IEEE Transactions on Industrial Electronics, 56(3), 900-906.
Han, X., Zhao, X., Wang, D., and Wang, B. (2024). Event-triggered-based online integral reinforcement learning for optimal control of unknown constrained nonlinear systems. International Journal of Control, 97(2), 213-225.
Heemels, W., Johansson, K., and Tabuada, P. (2012). An introduction to event-triggered and self-triggered control. In Proceedings of the 51st IEEE Conference on Decision and Control (CDC), 3270-3285.
Hu, J., Tang, Z., Jin, X., Zhang, B., Dong, Y., and Huang, X. (2025). Hierarchical testing with rabbit optimization for industrial cyber-physical systems. IEEE Transactions on Industrial Cyber-Physical Systems.
Jiang, Y. and Jiang, Z. (2012). Computational Adaptive Optimal Control: The Thinking and Development of Dual Heuristic Programming. Wiley, Hoboken, NJ.
Kamalapurkar, R., Walters, P., Lewis, F., and Kiumarsi, B. (2016). Concurrent learning-based approximate optimal regulation of continuous-time nonlinear systems. Automatica, 64, 1-10.
Lewis, F. and Vrabie, D. (2009). Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits and Systems Magazine, 9(3), 32-50.
Lu, M., Meng, X., Huang, R., Chen, L., Tang, Z., Li, J., Peyton, A., and Yin, W. (2020). Determination of surface crack orientation based on thin-skin regime using triple-coil drive-pickup eddy-current sensor. IEEE Transactions on Instrumentation and Measurement, 70, 1-9.
Luo, Z., Zhang, P., Ding, X., Tang, Z., Wang, C., and Wang, J. (2020). Adaptive affine formation maneuver control of second-order multi-agent systems with disturbances. In 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), 1071-1076. IEEE.
Onuoha, O., Kurawa, S., Tang, Z., and Dong, Y. (2024a). Discrete-time stress matrix-based formation control of general linear multi-agent systems. arXiv preprint arXiv:2401.05083.
Onuoha, O., Ubochi, B.C., Kurawa, S., Tang, Z., Wu, C., and Dong, Y. (2024b). Stress matrix-based formation control of multi-agent systems with discrete-time communication. In 2024 12th International Conference on Systems and Control (ICSC), 173-177. IEEE.
Pu, Z., Yuan, R., Yi, J., and Tan, X. (2015). A class of adaptive extended state observers for nonlinear disturbed systems. IEEE Transactions on Industrial Electronics, 62(9), 5858-5869.
Qin, X., Huang, W., Wang, X., Tang, Z., and Liu, Z. (2022). Real-time remaining useful life prediction of cutting tools using sparse augmented Lagrangian analysis and Gaussian process regression. Sensors, 23(1), 413.
Ran, M., Li, J., and Xie, L. (2021). A new extended state observer for uncertain nonlinear systems. Automatica, 131, 109772.
Ran, M., Li, J., and Xie, L. (2022). Reinforcement-learning-based disturbance rejection control for uncertain nonlinear systems. IEEE Transactions on Cybernetics, 52(9), 9621-9633.
Tabuada, P. (2007). Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9), 1680-1685.
Tang, Z. (2019). Control design for the active magnetic bearing system. The University of Manchester (United Kingdom).
Tang, Z., Chen, X., Jin, X., Zhang, B., and Liang, W. (2025). A temporal scale transformer framework for precise remaining useful life prediction in fuel cells. arXiv preprint arXiv:2504.08803.
Tang, Z., Passmore, C., Rossiter, J.A., Ebbens, S., Dunderdale, G., and Panoutsos, G. (2024a). Disturbance observer-based optimal tracking control for slot coating process with mismatched input disturbances. In 2024 UKACC 14th International Conference on Control (CONTROL), 55-56. IEEE.
Tang, Z., Rossiter, A., Dong, Y., and Panoutsos, G. (2024b). Reinforcement learning-based output stabilization control for nonlinear systems with generalized disturbances. In 2024 IEEE International Conference on Industrial Technology (ICIT), 1-6. IEEE.
Tang, Z., Rossiter, A., Jin, X., Zhang, B., and Panoutsos, G. (2024c). Output tracking for uncertain time-delay systems via robust reinforcement learning control. In 2024 43rd Chinese Control Conference (CCC), 2219-2226. IEEE.
Tang, Z., Rossiter, A., and Panoutsos, G. (2024d). A reinforcement learning-based approach for optimal output tracking in uncertain nonlinear systems with mismatched disturbances. In 2024 UKACC 14th International Conference on Control (CONTROL), 169-174. IEEE.
Tang, Z., Wang, C., and Ding, Z. (2016). Unmatched disturbance rejection for AMB systems via DOBC approach. In 2016 35th Chinese Control Conference (CCC), 5931-5935. IEEE.
Tang, Z., Yu, Y., Li, Z., and Ding, Z. (2019). Disturbance rejection via iterative learning control with a disturbance observer for active magnetic bearing systems. Frontiers of Information Technology & Electronic Engineering, 20(1), 131-140.
Xue, S., Luo, B., Liu, D., and Li, Y. (2020). Adaptive dynamic programming based event-triggered control for unknown continuous-time nonlinear systems with input constraints. Neurocomputing, 396, 191-200.
Yao, Y., Tang, Z., and Zeng, L. (2025). From learning to adoption: Understanding marketing students' acceptance of generative AI technologies. In 2025 Global Marketing Conference at Hong Kong Proceedings, 908.
Zhang, B., Jin, X., Liang, W., Chen, X., Li, Z., Panoutsos, G., Liu, Z., and Tang, Z. (2024). TabNet: Locally interpretable estimation and prediction for advanced proton exchange membrane fuel cell health management. Electronics, 13(7), 1358.
Zhao, H., Tang, Z., Li, Z., Dong, Y., Si, Y., Lu, M., and Panoutsos, G. (2024). Real-time object detection and robotic manipulation for agriculture using a YOLO-based learning approach. In 2024 IEEE International Conference on Industrial Technology (ICIT), 1-6. IEEE.
# Probabilistic combinatorics at exponentially small scales

Abstract. In many applications of the probabilistic method, one looks to study phenomena that occur "with high probability". More recently however, in an attempt to understand some of the most fundamental problems in combinatorics, researchers have been diving deeper into these probability spaces, and understanding phenomena that occur at much smaller probability scales. Here I will survey a few of these ideas from the perspective of my own work in the area.

# 1 Introduction

In many applications of the probabilistic method one aims to show the existence of an object not by constructing the object directly, but rather by setting up a probability space and showing that one can draw the object of interest with non-zero probability. While this might at first seem like a trivial recasting of the problem, it has proven to be an incredibly powerful change of perspective, one which has come to dominate the field in recent years.

Classically, in such applications of the probabilistic method, one studies events that occur not just with non-zero probability, but with probability close to one, a state of affairs which has been cheekily sloganized as "the probabilistic method finds the hay in the haystack". In this survey we will touch on a few topics where we are forced to go far beyond these typical behaviors and study phenomena that occur at exponentially small probability scales, right at the edge where probability is still useful. Perhaps surprisingly, there is quite a bit to be said in these cases.

Rather than attempt to properly survey this topic, which I am not well placed to do, I will instead use this theme to tie together a few strands of my own recent work and to highlight some of my thinking on the topics where I have had some success.

Flat Littlewood polynomials. We begin this survey in Section 2, where we introduce the area of discrepancy theory before sketching how some of these ideas came to inform the thinking behind a recent result of myself, joint with Paul Balister, Béla Bollobás, Robert Morris and Marius Tiba, in the area of harmonic analysis. Here we used ideas from discrepancy theory to construct so-called flat Littlewood polynomials, thus resolving an old conjecture of Littlewood.

A Littlewood polynomial is a polynomial with all coefficients in $\{-1, 1\}$. Inspired by a question of Erdős, Littlewood went on to consider the following question about how "flat" the modulus of such polynomials can be. If $P$ is a degree $n$ Littlewood polynomial then, by a simple application of Parseval, we see that we must have $\max_{z:|z|=1} |P(z)| \geqslant \sqrt{n+1}$. Littlewood conjectured that there exist such polynomials that are as "flat" as possible, in the sense that $|P(z)| = \Theta(\sqrt{n})$ for all $z$ with $|z| = 1$. Using tools from probability and discrepancy theory, we solved this conjecture by showing that there are constants $c_{1}, c_{2} > 0$ so that, for all $n \geqslant 2$, there exists a Littlewood polynomial $P(z)$ of degree $n$ with

$$
c_{1} \sqrt{n} \leqslant |P(z)| \leqslant c_{2} \sqrt{n} \tag{1.1}
$$

for all $z \in \mathbb{C}$ with $|z| = 1$. This sets us up to discuss some beautiful open problems in discrepancy theory and some exciting recent developments.
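As a quick numerical aside (mine, not the survey's), one can sample a random Littlewood polynomial and compare its maximum and minimum modulus on the unit circle with the $\sqrt{n}$ scale, which illustrates why flatness is far from typical.

```python
# Random Littlewood polynomial: the sup norm on |z| = 1 is typically of
# order sqrt(n log n), while the minimum modulus is typically tiny, so a
# random choice of signs is nowhere near flat.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
coeffs = rng.choice([-1.0, 1.0], size=n + 1)             # epsilon_0, ..., epsilon_n
z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 8 * n, endpoint=False))
P = np.polyval(coeffs[::-1], z)                           # P(z) = sum_k epsilon_k z^k
print(np.abs(P).max() / np.sqrt(n))                       # typically of order sqrt(log n)
print(np.abs(P).min() / np.sqrt(n))                       # typically close to 0
```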
Sphere packing in high dimensions. In Section 3 we will go on to discuss a technique which can be used to "access" these events at very small probability scales: via a random process that is biased towards the event of interest. We will use this thinking to frame the recent work of myself, joint with Marcelo Campos, Matthew Jenssen and Marcus Michelen, where we construct sphere packings and spherical codes in large dimension. In particular, we show the following new lower bounds on the classical sphere packing problem, giving the first asymptotically growing improvement since the 1947 work of Rogers. Let $\theta(d)$ be the maximum proportion of $\mathbb{R}^d$ which is covered by non-overlapping identical spheres. As the dimension $d$ tends to infinity, we have

$$
\theta(d) \geqslant (1 - o(1)) \frac{d \log d}{2^{d+1}}. \tag{1.2}
$$

One interesting aspect of our sphere packings is that they are both "dense" and highly "disordered". Such sphere packings arise in the physics literature and were conjectured to exist by Parisi and Zamponi. It moreover seems reasonable to conjecture that the bound (1.2) is actually sharp, up to constant factors, for such "highly disordered" sphere packings.

This method also naturally adapts to improve the lower bounds on spherical codes in large dimension. Let $A(d, \theta)$ be the maximum proportion of the sphere $\mathbb{S}^{d-1}$ that can be covered by non-overlapping spherical caps of radius $\theta$. If $\theta \in (0, \pi/2)$, then

$$
A(d, \theta) \geqslant (1 - o(1)) \frac{d \log d}{2 s_{d}(\theta)},
$$

where $s_d(\theta)$ denotes the normalized volume of a spherical cap.

We will then go on to discuss a further beautiful advance of Klartag, who has constructed even denser sphere packings in large dimension by studying properties of random lattices at exponentially small scales. He proved that

$$
\theta(d) \geqslant (c - o(1))\, d^{2} / 2^{d},
$$

for some $c > 0$. Interestingly, it seems reasonable to conjecture that Klartag's bound is also sharp, up to constant factors, in the case of lattice packings. Despite these advances, there still remain exponential gaps between the upper and lower bounds on the quantities $\theta(d)$ and $A(d,\theta)$ as $d \to \infty$, and it seems to be an incredibly enticing question to improve either by an exponential factor.

Random matrix theory at exponentially small scales. In the final section we shall keep to our theme of high dimensional geometry and move on to discuss phenomena that occur at exponentially small scales in the setting of random matrix theory. This discussion will center around a recent result of the author, in joint work with Campos, Jenssen and Michelen, on the singularity probability of a random symmetric matrix. We showed that if $A_{n}$ is drawn uniformly at random from the $n \times n$ symmetric matrices with entries in $\{-1,1\}$ then

$$
\mathbb{P}\left(\det\left(A_{n}\right) = 0\right) \leqslant e^{-cn}, \tag{1.3}
$$

where $c > 0$ is an absolute constant. As it happens, this will lead us to discuss a handful of other results on related topics: the study of the distribution of the least singular value and estimates on the repulsion and clustering of the eigenvalues of random symmetric matrices. We will, in particular, highlight several of the techniques developed in service of (1.3).
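As a small numerical aside of my own, one can estimate the singularity probability of random symmetric sign matrices for modest $n$; this is only a sanity check that singularity is already rare, and of course says nothing about the exponential rate in (1.3).

```python
# Empirical frequency of singular random symmetric +-1 matrices for small n.
import numpy as np

def singular_fraction(n=10, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(n, n))
        A = np.triu(A) + np.triu(A, 1).T     # symmetrize the upper triangle
        # numerical rank is reliable at this scale for integer +-1 matrices
        count += (np.linalg.matrix_rank(A) < n)
    return count / trials

print(singular_fraction())
```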
# 2 Beck, Spencer and Flat Littlewood polynomials

Motivated by generalizing his famous theorem on 3-term progressions in dense sets of integers, Roth considered the following natural problem on two-colourings of an interval. Let $n$ be a large integer and let $f$ be an arbitrary function $f:[n] \to \{-1,1\}$, which we extend to all of $\mathbb{N}$ by setting $f(x) = 0$ for all $x > n$. Define the discrepancy of $f$, denoted $D(f)$, to be the maximum sum in absolute value along an arithmetic progression;

$$
D(f) = \max_{a, b} \left| \sum_{x} f(a x + b) \right|.
$$

Roth proved that $D(f) \geqslant cn^{1/4}$ for all such functions $f$ and conjectured that this could be improved to $n^{1/2 - o(1)}$. This was then disproved by Sárközy, whose result was improved a few more times before Beck introduced his famous partial colouring method to resolve the question up to logarithms, showing that Roth's original bound was just about sharp:

$$
\min_{f} D(f) = n^{1/4 + o(1)}.
$$

Some years later, the general utility of this beautiful idea was noticed by Spencer, who used Beck's partial colouring idea to prove Spencer's famous theorem in discrepancy theory.

THEOREM 2.1. Let $A$ be an $n \times n$ matrix with all entries bounded in absolute value by one. Then there exists a vector $x \in \{-1,1\}^n$ for which

$$
\| A x \|_{\infty} \leqslant C \sqrt{n}, \tag{2.1}
$$

for some absolute constant $C > 0$.

What is remarkable about this result is that if one takes a suitably typical matrix and a random $x \in \{-1, 1\}^n$ one will have

$$
\| A x \|_{\infty} = \Theta(\sqrt{n \log n}),
$$

with high probability. Moreover, one expects that it is exponentially unlikely that a random vector $x \in \{-1, 1\}^n$ satisfies $\| Ax \|_{\infty} \leqslant C \sqrt{n}$. This is because for each $i$ we have $|(Ax)_i| \leqslant C \sqrt{n}$ with constant probability and one expects that each entry is roughly independent (again assuming $A$ is appropriately non-degenerate). Thus, to find the vector $x \in \{-1, 1\}^n$ guaranteed by Theorem 2.1, "standard" probabilistic arguments are not available. Instead one can adapt the simple but ingenious differencing method of Beck to this setting. For this, we iterate the following lemma, which finds a vector $x \in \{-1, 0, 1\}^n$ with not too many zero entries.

LEMMA 2.2. Let $A$ be an $n \times n$ matrix with complex entries, bounded in absolute value by one. Then there exists a vector $x \in \{-1,0,1\}^n$ with at least $n/4$ non-zero terms for which

$$
\| A x \|_{\infty} \leqslant C \sqrt{n}, \tag{2.2}
$$

for some absolute constant $C > 0$.

Proof sketch. To prove this, consider all images $\{Ay : y \in \{0,1\}^n\}$. One then uses a (careful) application of the pigeon-hole principle to show that there is a subset $S \subset \{0,1\}^n$ with $|S| \geqslant 2^{(1 - \varepsilon)n}$, for which $\| Ay - Ay' \|_{\infty} \leqslant C\sqrt{n}$ for all $y, y' \in S$. Since each vector $y \in \{0,1\}^n$ has at most $\sum_{i=0}^{d} \binom{n}{i}$ vectors within Hamming distance $d$, there must be a pair $y, y' \in S$ with the property that $x = y - y'$ has at least $d = n/4$ non-zero entries.
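The random-vector heuristic discussed before Lemma 2.2 is easy to check numerically; the following sketch (with an assumed random model for $A$) compares $\|Ax\|_{\infty}$ for a random sign vector against the $\sqrt{n}$ scale of Spencer's theorem.

```python
# For a random matrix with entries in [-1, 1] and a uniformly random sign
# vector x, ||Ax||_inf is of order sqrt(n log n), whereas Spencer's theorem
# guarantees some x with ||Ax||_inf = O(sqrt(n)).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.uniform(-1.0, 1.0, size=(n, n))
x = rng.choice([-1.0, 1.0], size=n)
print(np.linalg.norm(A @ x, ord=np.inf) / np.sqrt(n * np.log(n)))
# Typically prints a constant of order 1; hitting ||Ax||_inf <= C*sqrt(n)
# is an exponentially rare event for a random choice of x.
```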
# 2.1 Flat Littlewood polynomials

Interestingly, these fundamental questions of discrepancy theory connect with an old and famous conjecture of Littlewood on "Littlewood polynomials". Indeed, say that a polynomial $P(z)$ of degree $n$ is a Littlewood polynomial if

$$
P(z) = \sum_{k=0}^{n} \varepsilon_{k} z^{k} \qquad \text{where} \qquad \varepsilon_{k} \in \{-1, 1\}.
$$

The study of such polynomials has a long and fascinating history, going back to the work of Hardy and Littlewood over 100 years ago, and was also popularized by the work of Bloch and Pólya, the work of Littlewood and Offord, and others.

While there are many beautiful questions about such polynomials, we are interested here in a problem asked by Erdős in 1957 which was then taken up and extensively studied by Littlewood in a series of papers on the extremal properties of polynomials with restricted coefficients. For this we say a (sequence of) Littlewood polynomials $P_{n}(z)$ (with $P_{n}$ of degree $n$) is flat if there exist constants $c_{1}, c_{2} > 0$ such that

$$
c_{1} \sqrt{n} \leqslant \left| P_{n}(z) \right| \leqslant c_{2} \sqrt{n}. \tag{2.3}
$$

In particular, Littlewood conjectured that flat Littlewood polynomials exist.

To see a first connection with discrepancy, we note that if one relaxes (2.3) to the one-sided bound

$$
\max_{z: |z| = 1} |P(z)| \leqslant C \sqrt{n}, \tag{2.4}
$$

we arrive at the natural "analogue" of (2.1) in the setting of polynomials. Actually, the one-sided problem was solved in the 1950s by Rudin and Shapiro, who introduced the famous "Rudin-Shapiro" polynomials. Interestingly, one of the first applications that Spencer gave of his new theorem was to give a very different proof of the existence of polynomials satisfying (2.4). To see the connection, we briefly sketch the idea.

THEOREM 2.3. For every $n \geqslant 2$ there is a Littlewood polynomial $P_{n}$ with $\deg(P_{n}) = n$ which satisfies (2.4), for some absolute $C > 0$.

Proof sketch. For $\theta \in [0,2\pi]$, define the vector

$$
v(\theta) = \left(1, e^{i \theta}, e^{2 i \theta}, \ldots, e^{n i \theta}\right).
$$

Note that if we apply Spencer's theorem to $v(2\pi j / n)$, for $j = 1, \dots, n$, we can ensure the polynomial is small at the $n$th roots of unity. But this is not quite enough. To get around this we can also use Spencer's theorem to ensure that all of the derivatives $P^{(j)}$ are also well controlled at roots of unity. While this might seem like we are putting infinitely many constraints on the sequence $\{-1, 1\}^n$, it is actually okay since we need weaker and weaker control on each derivative. Then, via Taylor's theorem, we can ensure that the polynomial is well behaved everywhere.

To attack Littlewood's conjecture, one might hope for a strengthening of Theorem 2.1 that also provides a lower bound on each entry. However, it is not hard to see that this is not possible. Consider the $n + 1$ vectors $a^{(0)}, \ldots, a^{(n)}$, where we define

$$
a^{(i)} = (-1, \dots, -1, 1, \dots, 1)
$$

to have exactly $i$ entries equal to $1$ and $n - i$ entries equal to $-1$. Given any $x \in \{-1, 1\}^n$, we may assume $\langle x, a^{(0)} \rangle \geqslant 0$ and thus $\langle x, a^{(n)} \rangle = -\langle x, a^{(0)} \rangle \leqslant 0$. Now note that for each $i < n$,

$$
\left| \langle a^{(i+1)}, x \rangle - \langle a^{(i)}, x \rangle \right| \leqslant 2,
$$

and therefore there is always a vector $a^{(i)}$ for which $|\langle x, a^{(i)} \rangle| \leqslant 2$. Thus we see that any matrix $A$ which is defined by any $n$ vectors among the $\{a^{(i)}\}_i$ provides a counterexample to this potential strengthening of Spencer's theorem.
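A short computational check of this obstruction (the function name below is ad hoc): for any $x \in \{-1,1\}^n$, some $a^{(i)}$ has inner product with $x$ of absolute value at most $2$.

```python
# For every sign vector x, one of the "staircase" vectors a^(i) is nearly
# orthogonal to x, so no two-sided version of Spencer's theorem can hold
# for this family of constraint vectors.
import numpy as np

def min_abs_inner(x):
    n = len(x)
    best = np.inf
    for i in range(n + 1):
        a = np.concatenate([-np.ones(n - i), np.ones(i)])   # n-i minus-ones, then i ones
        best = min(best, abs(np.dot(a, x)))
    return best

rng = np.random.default_rng(0)
n = 101
for _ in range(1000):
    x = rng.choice([-1.0, 1.0], size=n)
    assert min_abs_inner(x) <= 2       # some a^(i) has |<a^(i), x>| <= 2
```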
In joint work with Bollobás, Balister, Morris and Tiba, we showed that we could in fact get around this obstacle in the context of flat Littlewood polynomials and use discrepancy theory methods to prove the existence of these polynomials.

THEOREM 2.4. For all $n \geqslant 2$, there exists a Littlewood polynomial $P(z)$ of degree $n$ with

$$
c_{1} \sqrt{n} \leqslant |P(z)| \leqslant c_{2} \sqrt{n} \tag{2.5}
$$

for all $z \in \mathbb{C}$ with $|z| = 1$. Here $c_{1}, c_{2} > 0$ are absolute constants.

In what follows we sketch a proof of Theorem 2.4.

# 2.2 Sketch of the construction

To prove Theorem 2.4 we may assume that the degree is $4n$. We then write $z = e^{ix}$ and multiply by a phase $e^{-2inx}$ so that we obtain a function

$$
f(x) = \sum_{k = -2n}^{2n} \varepsilon_{k} e^{i x k}.
$$

Thus our goal is to choose the $\varepsilon_{k} \in \{-1,1\}$ so that $|f(x)| = \Theta(n^{1/2})$, for all $x$. We now constrain the coefficients so that the real and imaginary parts of $f$ separate nicely. In particular, we choose a partition $\{1, \ldots, 2n\} = C \cup S$ and then fix $\varepsilon_{-k} = \varepsilon_{k}$ for each $k \in C$, and $\varepsilon_{-k} = -\varepsilon_{k}$ for each $k \in S$. Thus we may write

$$
f(x) = \varepsilon_{0} + 2 \sum_{k \in C} \varepsilon_{k} \cos kx + 2i \sum_{k \in S} \varepsilon_{k} \sin kx,
$$

and then define the real and imaginary trigonometric polynomials as

$$
c(x) = \sum_{k \in C} \varepsilon_{k} \cos kx \quad \text{and} \quad s(x) = \sum_{k \in S} \varepsilon_{k} \sin kx. \tag{2.6}
$$

While we don't discuss the precise definition of $C, S$ here, it will be important for us that $C \subset [\gamma n]$, where $\gamma$ is a small constant, so that the degree of $c(x)$ is small.

Thus we construct our function $f$ in two stages. We first construct a cosine polynomial $c(x)$ which is $O(\sqrt{n})$ for all $x$ and satisfies $|c(x)| \geqslant \delta \sqrt{n}$, except on a set of intervals $\mathcal{I} = \{I\}$, which are not too long, well separated and not too numerous. In the second, and more challenging, step, we shall show that we can construct a sine polynomial $s(x)$ that is $\Omega(n^{1/2})$ in absolute value on these intervals where the cosine polynomial is small, while still maintaining the upper bound of $O(n^{1/2})$ overall.

While there are probably many different ways of constructing an appropriate cosine polynomial, we use a deterministic construction based on the Rudin-Shapiro polynomials mentioned above. Rudin and Shapiro defined their polynomials recursively, by setting $P_{0}(z) = Q_{0}(z) = 1$ and inductively defining

$$
P_{t+1}(z) = P_{t}(z) + z^{2^{t}} Q_{t}(z) \qquad \mathrm{and} \qquad Q_{t+1}(z) = P_{t}(z) - z^{2^{t}} Q_{t}(z),
$$

for each $t \geqslant 0$. We construct our cosine polynomial $c(x)$ by using a "twisted" version of these polynomials. We define

$$
c(x) = \Re\left(z^{T} P_{t}(z) + z^{2T} Q_{t}(z)\right),
$$

where $T \approx \gamma n$ and $t \approx \log_2(\gamma n)$, so that $\deg(P_t)$ and $\deg(Q_t) \approx \gamma n$. Thus, by the boundedness of the Rudin-Shapiro polynomials, we have that $|c(x)| = O(\sqrt{n})$. We also have that $|c(x)| \geqslant \delta \sqrt{n}$ except on a collection $\mathcal{I}$ of intervals that satisfy

1. $|\mathcal{I}| = O(\gamma n)$;
2. Each $I \in \mathcal{I}$ has $|I| = O(n^{-1})$;
3. For distinct $I, J \in \mathcal{I}$, we have $d(I, J) = \Omega(n^{-1})$.

Actually, the first condition holds since we arranged for the degree of $c(x)$ to be at most $\gamma n$. Also note that the second condition holds "typically" in the sense that we expect such a polynomial to have derivative $\approx (\gamma n)^{3/2}$. So we expect it to be within the interval $[-\delta \sqrt{n}, \delta \sqrt{n}]$ for time at most $\approx n^{-1}$ (here $\delta \ll \gamma$). We expect the last condition for a similar reason.
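The Rudin-Shapiro recursion above is easy to implement; the following sketch builds the coefficient vectors and numerically confirms the flat upper bound $\sup_{|z|=1}|P_t(z)| \leqslant \sqrt{2}\,\sqrt{2^t}$ (the evaluation grid is an arbitrary choice).

```python
# Build the Rudin-Shapiro pair P_t, Q_t as coefficient arrays via the
# recursion above, and check the flat upper bound on the unit circle.
import numpy as np

def rudin_shapiro(t):
    P, Q = np.array([1.0]), np.array([1.0])
    for s in range(t):
        shift = np.concatenate([np.zeros(2**s), Q])     # z^{2^s} Q_s(z)
        P_pad = np.concatenate([P, np.zeros(2**s)])
        P, Q = P_pad + shift, P_pad - shift
    return P, Q                                          # degree 2^t - 1, coefficients +-1

t = 10
P, _ = rudin_shapiro(t)
theta = np.linspace(0, 2*np.pi, 1 << 14, endpoint=False)
vals = np.abs(np.polyval(P[::-1], np.exp(1j*theta)))
print(vals.max() / np.sqrt(len(P)))   # at most sqrt(2) ~ 1.41
```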
In the second and more challenging part of the proof, we show that if we are given any collection of intervals satisfying the conditions (1)-(3), we can construct a sine polynomial $s(x)$ of the form in (2.6) that satisfies $s(x) = O(\sqrt{n})$ everywhere and $|s(x)| \geqslant \delta \sqrt{n}$ on each interval $I \in \mathcal{I}$ . It's in the construction of $s(x)$ that we use ideas from discrepancy theory. Before defining the $s(x)$ , we first assign a "direction" $\alpha(I) \in \{-1, 1\}$ to each bad interval $I \in \mathcal{I}$ , which indicates the sign we want $s(x)$ to have on $I$ . (We will describe how we choose these $\alpha_I$ in a moment). We then define, for each $k$ , the quantity $$ \Delta (k) = \sum_ {I \in \mathcal {I}} \frac {\alpha_ {I}}{| I |} \int_ {I} \sin k s d s, \tag {2.7} $$ which tells us, based on how positive or negative it is, how much we "prefer" the choice of $\varepsilon_{k} = 1$ versus the choice of $\varepsilon_{k} = -1$ . Indeed one can think of $\Delta(k)$ as the "net-progress" we make towards "pushing" our various intervals in the desired directions, when choosing the coefficient $\varepsilon_{k}$ . To ensure that we carefully "spread" each push out over all of the intervals and time steps, we make our first use of discrepancy theory (Theorem 2.1) to choose the $\alpha_{I}$ so that $$ \left| \Delta (k) \right| \leqslant C ^ {\prime} | \mathcal {I} | ^ {1 / 2} \leqslant C (\gamma n) ^ {1 / 2}, \tag {2.8} $$ for some absolute constants $C', C > 0$ . We now use these quantities $\Delta(k)$ to define a space of random sine polynomials. First define the random variables $$ \hat {\varepsilon} _ {k} \in \{- 1, 1 \} \qquad \text {b y} \qquad \mathbb {E} \hat {\varepsilon} _ {k} = \frac {\Delta (k)}{C (\gamma n) ^ {1 / 2}}, $$ where we are implicitly using (2.8), to make sure such a random variable exists. We then define the random sine polynomial $\hat{s}(x)$ by $$ \hat {s} (x) = \sum_ {k \in S} \hat {\varepsilon} _ {k} \sin k x. $$ Heuristically, the idea is this: with each choice of $\varepsilon_{k}$ , for each $k\in S$ , we can increase $$ \min _ {x \in I} | \hat {s} _ {\alpha} (x) | $$ by about $\Delta(k) / (C\gamma n)^{1/2} = \Theta((\gamma n)^{-1/2})$ , for each $I \in \mathcal{I}$ . Since we have $|S| = \Theta(n)$ values of $k$ to work with, we should (on average at least) push each interval as far as $\geqslant n^{1/2} / \gamma^{1/2}$ . That is, far enough. To get some idea why we can indeed guarantee this, we see how orthogonality of the characters, allows us to see that the expected value of $\hat{s}$ is large on all of the intervals. Indeed, fix an interval $J\in \mathcal{I}$ and fix a point $x\in J$ . We observe that $$ \mathbb {E} \hat {s} (x) = \mathbb {E} \sum_ {k \in S} \varepsilon_ {k} \sin k x = \frac {1}{C (\gamma n) ^ {1 / 2}} \sum_ {k \in S} \Delta (k) \sin k x, $$ which gives, expanding the definition of $\Delta (k)$ $$ \mathbb {E} \hat {s} (x) = \frac {1}{C (\gamma n) ^ {1 / 2}} \sum_ {I \in \mathcal {I}} \frac {\alpha_ {I}}{| I |} \cdot \int_ {I} \sum_ {k \in S} (\sin k x) (\sin k s) d s. \tag {2.9} $$ Now if $d(x, I) < 1 / n$ we have the approximate orthogonality relations $$ \left| I \right| ^ {- 1} \int_ {I} \sum_ {k \in S} (\sin k x) (\sin k s) d s \approx n \quad \text {a n d} \quad \left| I \right| ^ {- 1} \int_ {I} \sum_ {k \in S} (\sin k x) (\sin k s) d s \ll n, $$ whenever $d(x, I) \gg 1 / n$ . Thus we have that the sum on the right hand side of (2.9) "picks out" the interval $J$ . 
We can therefore conclude that $$ \mathbb{E}\hat{s}(x) \approx \frac{\alpha_{J} n}{C(\gamma n)^{1/2}} = \Theta\big(\sqrt{n/\gamma}\big). $$ Thus we have sketched how a sample of $\hat{s}(x)$ behaves correctly on the intervals $I \in \mathcal{I}$ on average. Unfortunately, a typical sample of this random polynomial will not be enough to push up all the intervals simultaneously. Indeed, the variance is large enough to spoil the value of $\hat{s}(x)$ on many $I \in \mathcal{I}$. To get beyond this, we appeal to tools in discrepancy theory and, in particular, to a version of the partial colouring lemma mentioned above (Lemma 2.2) due to Lovett and Meka. With this we are able to find an (exponentially unlikely) polynomial $s(x)$ with the property that $|s(x)| \geqslant \delta n^{1/2}$ for all $x \in \bigcup_{I \in \mathcal{I}} I$. It is with this polynomial that we can complete the proof. 2.3 Constructive proofs of Spencer's theorem As we sketched above, Spencer's original proof of Theorem 2.1 relies fundamentally on an application of the pigeonhole principle and thus, while it does show that a solution $x$ exists, it gives little guidance on how to find it efficiently. In fact, Spencer conjectured that no efficient algorithm exists. The first breakthrough was provided by Bansal, who refuted Spencer's conjecture by providing an efficient algorithm based on a clever random walk, guided by a semi-definite program encoding the current "state" of the solution. A few years later a much simpler algorithm was given by Lovett and Meka, which has the additional advantage that it does not rely on Spencer's original proof. Because their proof is so simple and elegant, we sketch a proof of their main recolouring step; that is, their analogue of Lemma 2.2. LEMMA 2.5. Let $a^{(1)}, \ldots, a^{(m)} \in \mathbb{R}^n$ be vectors with $\|a^{(i)}\|_{\infty} \leqslant 1$ for all $i \in [m]$. Let $x_0 \in [-1,1]^n$ and let $c_1, \ldots, c_m \geqslant 0$ be such that $$ \sum_{j} \exp\left(-c_{j}^{2}/16\right) \leqslant 1/16. $$ Then there exists $x \in [-1,1]^n$ so that for all $j \in [m]$, we have $$ \left| \langle x - x_{0}, a^{(j)} \rangle \right| \leqslant c_{j}\sqrt{n} $$ and $|x_i| = 1$ for at least $n/4$ entries of $x$. Here we have taken the liberty of removing the quantification that makes it clear that Lemma 2.5 also results in an efficient algorithm, since this is not our focus here. Let us also note that, in contrast to Lemma 2.2, the present lemma fixes the colouring on $n/4$ coordinates and gives a fractional weight to all other coordinates. Thus one should think of $x_0$ as the fractional weight "so far" in the iteration. Proof sketch. We imagine two convex bodies. The first is simply the cube $[-1,1]^n$, which is defined, of course, by the hyperplanes $-1 \leqslant x_i \leqslant 1$, for $i \in [n]$. The second is the convex body defined by the hyperplanes which come from the linear constraints. That is, $$ \left| \langle x - x_{0}, a^{(j)} \rangle \right| \leqslant c_{j}\sqrt{n}. \tag{2.10} $$ We now define the random process $X_{t}$ as follows. The process $X_{t}$ starts, at $t = 0$, at $x_0$. We then allow $X_{t}$ to evolve as Brownian motion until it first hits one of the hyperplanes above, at which point it sticks to it and behaves as a Brownian motion within the hyperplane. Thus our process evolves, wandering as a Brownian motion within a set of hyperplanes, hitting further hyperplanes, and then restricting itself further.
Thus $X_{t}$ is a random process with mean $x_0$ and with a covariance matrix that starts as the identity and which is successively projected onto the hyperplanes that it sticks to. Using standard martingale concentration estimates, we can say that the large deviations of this process throughout time are no worse than those of the unconditioned Brownian motion. In particular we can see that $$ \mathbb{P}\left(\sup_{t \leqslant 1}\left| \langle X_{t} - x_{0}, a^{(j)} \rangle \right| \geqslant c_{j}\sqrt{n}\right) \leqslant e^{-c_{j}^{2}/16}. $$ Thus, by time $t = 1$, the expected number of hyperplanes of type (2.10) that the process has hit is at most $\sum_j e^{-c_j^2/16} \leqslant 1/16$. Thus it must have hit a good proportion of the hyperplanes of the type $|x_i| = 1$, which gives exactly what we want. 2.4 The Komlós conjecture and the Beck-Fiala conjecture Before concluding our discussion of discrepancy theory, it is impossible not to mention the beautiful and conjectural extension of Spencer's theorem known as the Komlós conjecture, which says that one only needs to control the $\ell_2$ norm of the columns of the matrix $A$ to arrive at the same conclusion as Spencer's theorem (the normalization is changed here to match the literature). CONJECTURE 2.6. Let $A$ be an $n \times n$ matrix where each column has $\ell_2$-norm at most 1. Then there exists $x \in \{-1,1\}^n$ so that $$ \|Ax\|_{\infty} \leqslant K, $$ for an absolute constant $K$. There is also a famous hypergraph colouring "companion" to this conjecture, made independently by Beck and Fiala. It is not hard to see that the Komlós conjecture implies the Beck-Fiala conjecture. CONJECTURE 2.7. Let $\mathcal{H}$ be a hypergraph on a finite ground set $X$ in which every vertex has degree at most $d$. Then there exists $f: X \to \{-1,1\}$ so that for all $e \in \mathcal{H}$ we have $$ \left| \sum_{x \in e} f(x) \right| \leqslant C\sqrt{d}, \tag{2.11} $$ where $C > 0$ can be taken to be an absolute constant. Beck and Fiala proved that Conjecture 2.7 is true if $C\sqrt{d}$ is replaced with $2d - 1$. The only unconditional improvement to this bound is by Bukh, who improved it to $2d - \log_{*} d$. If we set $|X| = n$, Banaszczyk proved that one can take $C = O(\sqrt{\log n})$ in (2.11) by proving that one can take $K = O(\sqrt{\log n})$ in the setting of the Komlós conjecture. These results remained the state of the art for over 25 years, until a very recent and exciting breakthrough of Bansal and Jiang. In these papers they prove the Beck-Fiala conjecture in the case $n \leqslant 2^{c\sqrt{d}}$ and prove a bound of $(\sqrt{d} + \log n)(\log\log n)^{c}$ in general. They also gave the first improvement on Banaszczyk's bound of $K = O(\sqrt{\log n})$ in the setting of the Komlós conjecture, by showing that one may take $K = (\log n)^{1/4 + o(1)}$. 3 Changing the distribution: the semi-random method and sphere packings One alternative way of finding a rare object in a probability space is to change the underlying distribution that it is sampled from. This can give us a way of naturally "accessing" unlikely events. We already saw this kind of idea in action with the constructive proof of Spencer's theorem due to Lovett and Meka, but perhaps the most classical example comes from the work of Ajtai, Komlós and Szemerédi on independent sets in triangle-free graphs, which was later refined by Shearer to give the following basic and beautiful result.
For this we recall that an independent set in a graph $G$ is a set of pairwise non-adjacent vertices, and the independence number of $G$, denoted $\alpha(G)$, is the size of a largest independent set in $G$. THEOREM 3.1. Let $G$ be a triangle-free graph on $n$ vertices with average degree $d$. Then $$ \alpha(G) \geqslant \big(1 + o(1)\big)\frac{n\log d}{d}, $$ where the $o(1)$ term tends to 0 as $d$ tends to infinity. This result is easily seen to be sharp, up to a factor of $2 + o(1)$, by appropriately modifying a random graph. While this theorem has many applications, perhaps the best known is the following bound on the off-diagonal Ramsey numbers. THEOREM 3.2. We have $$ R(3, k) \leqslant \left(1 + o(1)\right)\frac{k^{2}}{\log k}. $$ If one looks to prove Theorem 3.1 with a "direct" application of the probabilistic method, that is, by selecting a set uniformly at random, one is doomed to failure. Indeed, in a random graph of average degree $d$, only an exponentially small proportion of the vertex subsets of the target size are independent. To access these sets we instead "tilt" our distribution towards independent sets. In fact, there are a couple of different ways to achieve this in practice, but in what follows we outline a heuristic that is behind all of these different approaches. Heuristic justification of Theorem 3.1. Suppose that we could define a distribution on independent sets that produced a set that still "looked random" apart from the constraint we imposed of being independent. How large could we reasonably expect the independent set we produce to be? Let's say that our distribution produces an independent set $I$ of $pn$ vertices, for some $p \in (0,1)$. Say a vertex $v$ is open if $v \notin I$ and none of its neighbours are in $I$. The probability that a vertex is left open is $$ \mathbb{P}(v \text{ open}) = \mathbb{P}\big((\{v\} \cup N(v)) \cap I = \emptyset\big) \approx (1 - p)^{d(v) + 1}, \tag{3.1} $$ where $d(v) = |N(v)|$ is the degree of $v$ and $N(v)$ is its neighbourhood. Here we have used our heuristic assumption that $I$ is random-like. Now intuitively we want to choose $p$ just large enough so that we just start to have $$ (1 - p)^{d} \approx \mathbb{P}(v \text{ open}) \ll p. \tag{3.2} $$ That is, we expect this optimal $p$ to be just at the point where the number of vertices left open is significantly smaller than the number of vertices that we have added so far. Thus, solving for $p$ in (3.2) brings us to a heuristic for the maximum density of a "random-like" independent set, $$ p = (1 + o(1))\frac{\log d}{d}, $$ which exactly matches Shearer's bound. Of course this is not a proof at all, since we have not provided any distribution that satisfies these conditions. However, there are at least two natural such distributions. The first is to build $I$ by a random greedy process, where we repeatedly add a uniformly random vertex to $I$ and then remove it, along with all of its neighbours, from the graph. This is the idea behind the proofs of Ajtai, Komlós and Szemerédi and the refinement of Shearer. But we also note that a different, more direct proof was given by Shearer which uses the hardcore model on $G$ to sample $I$. Towards an optimal version of Shearer's theorem. We pause to remark that it is a major open problem to determine the correct constant in Theorem 3.1. It is unclear whether either the upper or the lower bound is sharp. We also mention the beautiful algorithmic problem of finding an independent set in the random regular graph that improves upon Shearer's bound by a constant factor.
More precisely, does there exist a randomized polynomial time algorithm that finds an independent set in the random regular graph $G(n,d)$ of size $\geqslant (1 + \varepsilon)n(\log d) / d$ , with high probability? In this setting we even know that large independent sets exist and thus should in principle be easier. However it has been shown there are serious obstructions to finding such an algorithm. The problem of finding an optimal version of Shearer's theorem is also intimately tied up with the problem of determining the Ramsey numbers $R(3, k)$ and accounts for the missing factor of $2 + o(1)$ in this problem. 3.1 Spherical codes and sphere packing in large dimension Recently, in joint work with Marcelo Campos, Matthew Jenssen and Marcus Michelen, we applied this sort of thinking to the classical sphere packing problem: What is the maximum proportion of $\mathbb{R}^d$ that can be covered by non-overlapping spheres of volume one? There is also the closely related question of constructing spherical codes: Given an angle $\theta$ , what is the maximum proportion of $\mathbb{S}^{d-1}$ that can be covered by non-overlapping spherical caps of radius $\theta$ ? Let $\theta(d)$ denote this maximum proportion in the sphere packing problem and let $A(\theta, d)$ denote the maximum proportion of $\mathbb{S}^{d-1}$ in the spherical caps problem. Despite the simplicity of these problems, little is known about these fascinating quantities. The precise value of $\theta(d)$ is only known in dimensions $d \in \{1, 2, 3, 8, 24\}$ . The case $d = 1$ is trivial, the case $d = 2$ is classical, while the remaining known cases are the result of a series of extraordinary breakthroughs: dimension 3 was a landmark achievement of Hales, resolving the Kepler conjecture from 1611. Dimensions 8 and 24 were resolved only recently due to the major breakthroughs of Viazovska, in dimension 8, and then Cohn, Kumar, Miller, Radchenko, and Viazovska in dimension 24. (See for a beautiful exposition of these developments). We also recall that the kissing number of $\mathbb{R}^d$ corresponds to the special case of the spherical codes problem $A(d,\pi /3)$ , although it is more traditionally phrased as the maximum number of unit spheres in $\mathbb{R}^d$ that can be arranged tangent to (or which "kiss") a central unit sphere. The only kissing numbers that are known are in dimensions $d\in \{1,2,3,4,8,24\}$ . Similarly only a few cases of optimal spherical codes are known for other $\theta$ , for which we refer the reader to. In our work, our focus is on sphere packing and spherical codes in large dimension, where the situation remains even more mysterious. A simple argument shows that any saturated packing (one in which no additional sphere can be added) has density $\geqslant 2^{-d}$ and thus $$ \theta (d) \geqslant 2 ^ {- d}. $$ A classical theorem of Minkowski's improved upon this bound by a factor of $2 + o(1)$ . In 1947 Rogers made the first asymptotically growing improvement to the trivial lower bound showing that $$ \theta (d) \geqslant (\beta + o (1)) d 2 ^ {- d}, $$ where $\beta = 2 / e\approx 0.74$ . Since the work of Rogers, a number of improvements have been made to the constant factor $\beta$ . Davenport and Rogers showed that one can take $\beta = 1.68$ ; Ball, some 45 years later, improved the bound to $\beta = 2$ ; and Vance showed that one can take $\beta = 6 / e\approx 2.21$ when the dimension $d$ is divisible by 4. 
Venkatesh showed that one can take $\beta = 65963$ and additionally showed that one can obtain an extra $\log\log d$ factor along a sparse sequence of dimensions. In our paper, we go beyond this barrier and improve Minkowski's bound by a factor of $\Omega(d\log d)$ in general dimension. THEOREM 3.3. As $d$ tends to infinity, $$ \theta(d) \geqslant (1 - o(1))\frac{d\log d}{2^{d+1}}. $$ Recently, this result has seen a further spectacular improvement by Klartag, who used a method reminiscent of Lovett and Meka's proof of Spencer's theorem to show the following. THEOREM 3.4. As $d$ tends to infinity, $$ \theta(d) \geqslant c d^{2} 2^{-d}, $$ for some $c > 0$. We discuss this beautiful result further in Section 3.5. Our method also naturally adapts to the setting of spherical codes in large dimension and provides us with an improvement in this setting. To state this result, we let $s_d(\theta)$ denote the normalized spherical volume of a cap of angle $\theta$. In the same paper we also prove the following. THEOREM 3.5. If $\theta \in (0,\pi/2)$ and $d$ tends to infinity then $$ A(d, \theta) \geqslant (1 - o(1))\frac{d\log d}{2 s_{d}(\theta)}. $$ This improved upon the best known bounds, due to Fernández, Kim, Liu and Pikhurko, who gave a constant factor improvement to bounds of Jenssen, Joos and Perkins. These bounds were of the type $A(d,\theta) \geqslant cd/s_d(\theta)$ for some constant $c > 0$. We also note that our results have been adapted further to other settings. Fernández, Kim, Liu and Pikhurko improved the best bounds for sphere packing in high-dimensional hyperbolic space using this method, and Schildkraut has extended the method to show that one can obtain a similar bound for packing balls in an arbitrary norm. Upper bounds on the sphere packing problem. Despite this progress, the upper bounds for the sphere packing problem are quite far off the lower bounds, with an exponential gap between the two. The best known upper bounds are of the form $$ \theta(d) \leqslant 2^{-(0.599\ldots + o_{d}(1))d}, $$ which is due to the 1978 work of Kabatjanskii and Levenstein and has only been improved by a multiplicative constant factor in the years since, by Cohn and Zhao and then Sardari and Zargar. It is a beautiful and central problem to improve these bounds further. 3.2 Amorphous sphere packings in physics One interesting property of the sphere packings behind Theorem 3.3 is that they are "random-like". While essentially all other results focus on lattice packings, which are therefore very "structured", our packings are essentially as random-like as possible. Such packings are of independent interest in the physics literature, where random sphere packings at a given density are a natural model of physical matter. In dimension 3, for instance, it is believed that random sphere packings transition from appearing "gas-like" at low density to "lattice-like" at high density, paralleling the phase transition between states of matter. However, rigorously demonstrating that this phase transition occurs remains a beautiful and major open problem in the field. Physicists have also devoted enormous effort to analysing sphere packings in high dimensions, with the aim of providing a more tractable analysis than in low dimensions, and in order to use the powerful machinery of equilibrium statistical physics to generate rich predictions.
Here, the important qualitative distinction is between sphere packings that are crystalline, meaning that they exhibit long-range "correlations", and those that are amorphous, meaning that they do not have any such correlations. For example, lattice packings are extreme instances of crystalline packings, where the entire structure is determined by a basis. In their seminal work on applying the replica method to the structure of high-dimensional sphere packings, Parisi and Zamponi predicted that the largest density of amorphous packings in $d$ dimensions is $$ (1 + o(1))(d\log d)2^{-d}, $$ that is, a factor of 2 larger than our lower bound from Theorem 3.3. While there is no agreed-upon rigorous definition of "amorphous," it seems likely that any such definition would be satisfied by our construction for Theorem 3.3, which enjoys extremely fast decay of correlations. 3.3 Sketch proof - a graph-theoretic reduction To prove Theorem 3.3 and Theorem 3.5 we convert the problem into the problem of finding a large independent set in a certain graph. To do this we discretize the space in a natural way. Here we sketch the situation for sphere packings, and note that the case of spherical codes only requires small adjustments. To discretize, we simply sample a Poisson point process in a large box $[-L,L]^d$ at intensity $$ \lambda = d^{d/2 - o(d)}. $$ We don't worry about the $o(d)$ term, but it is chosen so that, for a typical point in our sample, the nearest other point will be at distance $\gg \log d$. (Some points will have a nearer neighbour, but we can simply delete these.) Let $X$ be the outcome of this initial discretization step. Now a natural graph $G = G_{X}$ suggests itself. We let $X$ be the vertex set and we define $$ x \sim y \quad \text{whenever} \quad \|x - y\|_{2} < 2 r_{d}, $$ where $r_d$ is the radius of a ball of volume one in $\mathbb{R}^d$. That is, $x$ and $y$ are joined by an edge if $B_{r_d}(x) \cap B_{r_d}(y) \neq \emptyset$. Thus an independent set in $G$ is a sphere packing in the box $[-L, L]^d$. We would now like to "lift" this graph out of its geometric context and think of it only as a graph. But what properties can we hold on to? One obvious one is the degree, whose expectation we can easily compute. If we fix a point $x \in X$, using the basic properties of Poisson point processes, we can estimate $$ \mathbb{E}\left| X \cap B_{2 r_{d}}(x) \right| = \operatorname{Vol}\left(B_{2 r_{d}}(x)\right)\lambda = 2^{d}\lambda =: \Delta. $$ If we were to use this bound along with the trivial bound (mentioned above) $\alpha(G) \geqslant \frac{n}{\Delta(G)+1}$, we would recover the (also trivial) bound $\theta(d) \geqslant 2^{-d}$. To get beyond this bound we need to use some additional information. Inspired by the theorem of Ajtai, Komlós and Szemerédi (Theorem 3.1), one might think about focusing on the number of triangles in the graph $G$. This perspective has been taken in earlier work, but it only matches the bounds of Rogers and is sharp from this point of view. Our new idea is to focus on the maximum codegree of our graph, which actually behaves very well in this context.
Indeed we can easily compute the co-degree of our graph $$ \mathbb {E} \left| X \cap B _ {2 r _ {d}} (x) \cap B _ {2 r _ {d}} (y) \right| = \operatorname {V o l} \left(B _ {2 r _ {d}} (x) \cap B _ {2 r _ {d}} (y)\right) \lambda \leqslant \left(2 ^ {d} \lambda\right) e ^ {- \| x - y \| _ {2} ^ {2} / 2} \leqslant \Delta / (\log \Delta) ^ {\omega (1)}, $$ where in the last inequality we are using that $\| x - y\| _2\gg \log d$ The insight here is that we can obtain the same bound as Shearer for graphs that have controlled codegrees. Interestingly, this is also a new result in graph theory. THEOREM 3.6. Let $G$ be a $n$ vertex graph with $\Delta(G) \leqslant \Delta$ and $\Delta_2(G) \leqslant C\Delta(\log \Delta)^{-c}$ . Then $$ \alpha (G) \geqslant (1 - o (1)) \frac {n \log \Delta}{\Delta}, $$ where $o(1)$ tends to 0 as $\Delta \to \infty$ and we can take $C = 2^{-7}$ and $c = 7$ . 3.4 Sketch proof of Theorem 3.6 To prove this we use a nibble process as Ajtai, Komlós and Szemerédi, but our analysis is quite a bit different. We sketch a little to see how the co-degree condition comes naturally into play. As we discussed above, we build our independent set by building it up in pieces. We take our first piece as $p_1 = \frac{\gamma}{\Delta}$ , for some small $\gamma \ll 1$ . Let $I_1$ be this $p_1$ -random set. Note that since the maximum degree of this graph is $\Delta$ , every vertex in $G[I_1]$ will have average degree $\gamma \ll 1$ , and thus $I_1$ is very close to an independent set. Indeed, we can make it independent by throwing away $o(|I_1|)$ vertices. We now delete all of $I_{1}$ and all of the neighbors of $I_{1}$ from the graph. Define $$ D _ {1} = I _ {1} \cup \bigcup_ {x \in I _ {1}} N (x). $$ which is about $\gamma$ proportion of the vertices of $G$ . The key property we would like to maintain is that $D_{1}$ "looks like" a random set of density $\gamma$ in $G$ . If this is possible then we expect that the new maximum degree is about $(1 - \gamma)\Delta$ and the new maximum codegree is about $(1 - \gamma)\Delta$ . Thus we can choose $p_2 = \gamma / ((1 - \gamma)\Delta)$ and then choose $I_2$ to be a $p_2$ random set in the second nibble. Thus we have $$ \left| I _ {2} \right| \approx p _ {2} (1 - \gamma) n = \gamma n / \Delta . $$ More generally, after the $i$ th nibble, we will have constructed disjoint sets $I_1, \ldots, I_i$ with $|I_i| \approx \gamma n / \Delta$ and so that $I_1 \cup \dots \cup I_i$ is independent (after a small amount of clean-up), and the graph remaining after we remove all of the $I_i$ and all vertices adjacent to them has size $(1 - \gamma)^i n$ . Thus we can continue this process until $$ (1 - \gamma) ^ {i} n \leqslant n / \Delta , $$ meaning that we can run the process for $i \approx (\log \Delta) / \gamma$ steps. Thus (assuming that we can maintain these properties) we can construct an independent set of size $\approx (n / \Delta) \log \Delta$ . To make the above story work, the key new idea is in controlling the evolution of the degrees of the vertices. To sketch the idea here, we fix a vertex $x$ and consider $N(x)$ and a stage $i$ of the process. Let us condition on the survival of $x$ into the next process - which means that none of the neighbors of $x$ are selected for $I_{i}$ . Now the size of $N(x)$ is precisely governed by the set $$ Y = N (N (x)) \setminus (N (x) \cup \{x \}), $$ the neighbors of the neighbors of $x$ , apart from $N(x) \cup \{x\}$ (since $I_i$ will not include vertices of $N(x) \cup \{x\}$ ). We now run a martingale argument. 
We iteratively expose each vertex in $Y \cap I_i$ . If a vertex $v \in Y$ is included into $I_i$ we then delete all of $N(v) \cap X$ from $X$ . Now, to obtain concentration we note that the steps of the martingale are controlled by the sum of the squares of the increments, which due to the double counting inequality $$ \sum_ {y \in Y} | I \cap N (y) | ^ {2} \leqslant \sum_ {y, z \in N (x)} | N (y) \cap N (z) |, $$ are controlled by the co-degrees of the vertices. 3.5 Klartag's new sphere packing bounds We now turn to sketch the beautiful new idea of Klartag that allows one to obtain sphere packings of density $\Omega(d^2 2^{-d})$ . Klartag picks up on an earlier idea of building a packing out of a random lattice. However the novelty in Klartag's proof is that instead of simply selecting the lattice uniformly at random, he cleverly "guides" a random process to find a better (and exponentially unlikely) choice. The setup is this. We first find a lattice $\Lambda \subset \mathbb{R}^d$ with $\operatorname*{det}(\Lambda) = 1$ and an ellipsoid $\mathcal{E}$ of large volume, which is centered at the origin and with $\mathcal{E} \cap \Lambda = \{0\}$ . We then turn this into a sphere packing by applying a linear transformation $T: \mathbb{R}^d \to \mathbb{R}^d$ with $\operatorname*{det}(T) = 1$ so that $T(\mathcal{E})$ is the Euclidean ball $B$ centered at the origin with $\mathrm{Vol}(B) = \mathrm{Vol}(\mathcal{E})$ . Note that $T(\Lambda)$ is a new lattice with determinant one. Thus if we place a copy of the dilated ball $B/2$ at each lattice point of $T(\Lambda)$ we obtain a sphere packing of identical balls with density $\mathrm{Vol}(\mathcal{E})2^{-d}$ . Proof sketch of Theorem 3.4. By the discussion above, we see that the problem reduces to the problem of finding a lattice $\Lambda$ with $\operatorname{det}(\Lambda) = 1$ and a centrally symmetric ellipsoid of volume $\Omega(d^2)$ that contains no lattice points of $\Lambda$ , apart from the origin. In a first step, we choose $\Lambda$ to be a random lattice with determinant 1. While we won't say anything technically about how to work with these lattices here, it is enough to say that this lattice looks "locally" like a Poisson point process with intensity 1. We then grow an ellipsoid $\mathcal{E}_t$ in a manner analogous to the proof of Lemma 2.5, although here we are working in the space of ellipsoids. Let $\mathcal{E}_0$ be a euclidean ball which is small enough to ensure that $\Lambda \cap \mathcal{E}_0 = \{0\}$ . We then randomly "evolve" this ellipsoid as time proceeds. As soon as this ellipsoid hits a lattice point, it sticks to it and evolves further, keeping this point on its boundary. Indeed, we may describe the ellipsoid $\mathcal{E}_t$ as $$ \mathcal {E} _ {t} = \left\{x \in \mathbb {R} ^ {d}: \langle x, A _ {t} x \rangle \leqslant 1 \right\}, $$ where $A_{t}$ is a positive definite matrix. Thus hitting a point $y \in \Lambda$ , precisely introduces the linear constraint $\langle y, A_{t}y \rangle = 1$ on $A_{t}$ . Since the dimension of the space of such positive semi-definite ellipsoids is $\approx d^2 / 2$ , we expect that the process runs until the ellipsoid has $\approx d^2 / 2$ points of $\Lambda$ on its boundary. Thus we can heuristically argue about the volume of the final ellipse. 
Since the process runs until $|\mathcal{E}_T \cap \Lambda| \approx d^2$, one can use the fact that the random lattice $\Lambda$ locally looks like a Poisson point process of intensity one to see that $$ d^{2} \approx \mathbb{E}|\mathcal{E}_{T} \cap \Lambda| \approx \operatorname{Vol}(\mathcal{E}_{T}), $$ as desired. 4 Random matrix theory at exponentially small scales We now turn to discuss phenomena in random matrix theory that occur at exponentially small scales. Here we focus on the singularity probability of a random symmetric matrix. Let $B_{n}$ be a random $n \times n$ matrix whose entries are chosen independently and uniformly from $\{-1, 1\}$. It is an old problem, likely stemming from multiple origins, to determine the probability that $B_{n}$ is singular. While a moment's thought reveals the lower bound of $(1 + o(1))\, 2n^{2} 2^{-n}$, coming from the probability that two rows or two columns are equal up to sign, establishing the corresponding upper bound remains an extremely challenging open problem. Indeed, it is widely believed that $$ \mathbb{P}\big(\det(B_n) = 0\big) = (1 + o(1))\, 2n^2 2^{-n}. \tag{4.1} $$ While this precise asymptotic has so far eluded researchers, some stunning advances have been made on this fascinating problem. The first steps were taken in the pioneering work of Komlós in the 1960s, who showed that the singularity probability is $O(n^{-1/2})$. Nearly thirty years later, Kahn, Komlós and Szemerédi, in a remarkable paper, showed that the singularity probability is exponentially small. At the heart of their paper is an ingenious argument with the Fourier transform that allows them to give vastly more efficient descriptions of "structured" subspaces of $\mathbb{R}^n$ that are spanned by $\{-1,1\}$-vectors. Their method was then developed by Tao and Vu, who showed a bound of $(3/4)^{n + o(n)}$ by providing a link between these ideas and the structure of set addition and, in particular, Freiman's theorem. This trajectory was then developed further by Bourgain, Vu and Wood, who proved a bound of $2^{-n/2 + o(n)}$, and by Tao and Vu, who pioneered the development of "inverse Littlewood-Offord theory", which we discuss below. In 2007, Rudelson and Vershynin, in an important and influential paper, gave a different proof of the exponential upper bound on the singularity probability of $B_{n}$. The key idea was to construct efficient $\varepsilon$-nets for points on the sphere that have special anti-concentration properties and are thus more likely to be in the kernel of $B_{n}$. This then led them to prove an elegant inverse Littlewood-Offord type result in a geometric setting. This perspective was then developed further in the breakthrough work of Tikhomirov, who proved $$ \mathbb{P}\big(\det(B_n) = 0\big) = 2^{-n + o(n)}, \tag{4.2} $$ thereby establishing the conjectured upper bound up to subexponential factors. One of the key innovations in this work was to adopt a probabilistic viewpoint on such Littlewood-Offord questions, a topic which we elaborate on in Section 4.1. We remark that another pair of advances was made by Jain, Sah and Sawhney, following the work of Litvak and Tikhomirov, who proved the natural analogue of (4.1) for random matrices with lopsided distributions. In the case of $\{-1,1\}$-matrices, however, the problem of establishing (4.1) remains perhaps the central open problem in the area. Singularity of random symmetric matrices. We now turn to discuss the singularity problem for random symmetric matrices, which has proven to be more challenging still.
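Before turning to techniques, a quick numerical illustration may be helpful. The following minimal Python sketch (my own illustration, not from any of the papers discussed; the dimensions, trial counts and random seed are arbitrary choices) estimates the singularity probability of small random $\{-1,1\}$ matrices, in both the iid and the symmetric model, and compares the iid frequency with the "two equal rows or columns" prediction in (4.1). At such tiny values of $n$ the $o(1)$ terms are of course far from negligible; the experiment is only meant to make the probability scale concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def singular_frequency(n, trials=100_000, symmetric=False):
    """Empirical frequency of singularity for random n x n {-1,1} matrices."""
    count = 0
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(n, n))
        if symmetric:
            # Symmetrize by reflecting the upper triangle (diagonal kept as is).
            A = np.triu(A) + np.triu(A, 1).T
        if np.linalg.matrix_rank(A) < n:   # reliable test for small integer matrices
            count += 1
    return count / trials

for n in (6, 8, 10):
    print(n,
          singular_frequency(n),                  # iid model B_n
          singular_frequency(n, symmetric=True),  # symmetric model A_n
          2 * n**2 * 2.0 ** (-n))                 # prediction in (4.1), iid model
```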
The study of random symmetric matrices goes back to the pioneering work of Wigner in the 1950s (such random matrices are sometimes called Wigner matrices), who studied the typical eigenvalue distribution of these matrices, showing that they follow the so-called "semi-circular law". Let $A_{n}$ be drawn uniformly at random among all $n \times n$ symmetric matrices with entries in $\{-1, 1\}$. Again we have the lower bound $$ \mathbb{P}\big(\det(A_n) = 0\big) \geqslant 2^{-n + o(n)}, \tag{4.3} $$ by considering the probability that two rows are equal up to sign. Costello, Tao and Vu were the first to show that $A_{n}$ is non-singular with high probability. That is, $$ \mathbb{P}\big(\det(A_{n}) = 0\big) = o(1), $$ with a precise error term of $O(n^{-1/4})$. Since then, this result has seen a sequence of improvements: a bound of $n^{-\omega(1)}$ was proved by Nguyen; a bound of the form $\exp(-n^c)$ was proved by Vershynin, which was in turn improved by Ferber and Jain, based on the results of Ferber, Jain, Luh and Samotij. In a similar spirit, Campos, Mattos, Morris and Morrison then improved this bound to $\exp(-cn^{1/2})$ by proving a "rough" inverse Littlewood-Offord theorem, inspired by the theory of hypergraph containers. This bound was then improved by Jain, Sah and Sawhney, who improved the exponent to $-cn^{1/2}\log^{1/4} n$, and, simultaneously, by the author to $-c(n\log n)^{1/2}$ in joint work with Campos, Jenssen and Michelen. As might be suggested by the "convergence" of these results to the exponent $-c(n\log n)^{1/2}$, a natural barrier lurks exactly at this point. In fact, it was shown that if one wants to get beyond bounds at this probability scale, one cannot simply "reuse" the randomness in the top half of the matrix (which is of course independent) in the most difficult part of the proof. Rather, one needs to deal directly with the complicated dependencies that are present in a random symmetric matrix. In recent work the author, in joint work with Campos, Jenssen and Michelen, managed to get around this obstacle and prove an exponential upper bound, thus matching (4.3) up to the base of the exponent. THEOREM 4.1. Let $A_{n}$ be drawn uniformly at random from the $n \times n$ symmetric matrices with entries in $\{-1,1\}$. Then $$ \mathbb{P}\big(\det(A_{n}) = 0\big) \leqslant e^{-cn}, $$ where $c > 0$ is an absolute constant. In what follows we discuss some of the techniques that are behind this result. This will allow us to touch on some of the exciting ideas that have been developed in this area. Least singular value, clustering and repulsion of eigenvalues. The singularity problem is related to several other phenomena regarding the spectrum of the matrix $A_{n}$, the most natural being the extreme behavior of the least singular value. Recall that if $M$ is an $n \times n$ matrix, the least singular value is $\sigma_{\min}(M) = \min_{x \in \mathbb{S}^{n-1}} \|Mx\|_2$. The study of this quantity in random matrices was initiated by Goldstine and von Neumann in the 1950s and has undergone intense study in the intervening years, partly in its own right, but also because of its link with spectral laws of random matrices and the smoothed analysis of algorithms.
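To get a feel for the scale $\varepsilon n^{-1/2}$ that appears in the bounds below, one can look at the bulk behaviour of $\sigma_{\min}$ numerically. The following sketch (again my own illustration, not from the paper; the values of $n$, the number of trials and the seed are arbitrary) samples random symmetric $\{-1,1\}$ matrices and records $\sqrt{n}\,\sigma_{\min}(A_n)$, which empirically has an order-one distribution; this is the calibration behind statements of the form (4.4) and (4.5) below.

```python
import numpy as np

rng = np.random.default_rng(1)

def scaled_least_singular_values(n, trials=2000):
    """Sample sqrt(n) * sigma_min(A_n) for random symmetric {-1,1} matrices A_n."""
    out = []
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(n, n))
        A = np.triu(A) + np.triu(A, 1).T          # symmetric {-1,1} matrix
        out.append(np.sqrt(n) * np.linalg.svd(A, compute_uv=False)[-1])
    return np.array(out)

for n in (50, 100, 200):
    q = np.quantile(scaled_least_singular_values(n), [0.1, 0.5, 0.9])
    # The quantiles stay of order one as n grows, consistent with
    # sigma_min(A_n) typically being of order n^{-1/2}.
    print(n, q.round(2))
```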
A key guiding conjecture is due to Spielman and Teng, who suggested that in the case of iid Bernoulli random matrices $B_{n}$ we have $$ \mathbb{P}\left(\sigma_{\min}\left(B_{n}\right) \leqslant \varepsilon n^{-1/2}\right) \leqslant \varepsilon + 2e^{-cn}, \tag{4.4} $$ for all $\varepsilon > 0$. A key breakthrough on this problem was made by Rudelson, which inspired a sequence of further papers, culminating in the influential papers of Rudelson and Vershynin, who proved (4.4) up to constant factors, and of Tao and Vu, who proved (4.4) in the case $\varepsilon > n^{-c}$. Recently, in joint work with Sah and Sawhney, the author proved (4.4) up to a $1 + o(1)$ factor. This question has also been intensely studied in the case of random symmetric matrices. In this case we have the additional interpretation that $\sigma_{\min}(A) = \min_i|\lambda_i|$, where the minimum is over all the eigenvalues of $A$. After many partial advances, in the same paper we determined the optimal probability of having a small least singular value for such random symmetric matrices. We showed that for all $\varepsilon > 0$, $$ \mathbb{P}\left(\sigma_{\min}\left(A_{n}\right) \leqslant \varepsilon n^{-1/2}\right) \leqslant C\varepsilon + e^{-cn}, \tag{4.5} $$ where $C, c > 0$ are absolute constants. One can also apply these results to understand the clustering of the spectrum more generally. Indeed, we can apply a version of (4.5) to the matrix $A_{n} - \lambda I$, for any $-(2 - \delta)\sqrt{n} \leqslant \lambda \leqslant (2 - \delta)\sqrt{n}$, to bound the probability of the event $\min_i |\lambda_i - \lambda| \leqslant \varepsilon n^{-1/2}$. Allowing ourselves to speak somewhat informally, the form of this result, with two different terms in the bound, reflects two very different ways in which the least singular value can be small. If $\varepsilon \gg e^{-cn}$, for some $c$, the most likely way to have $\sigma_{\min}(A_n) < \varepsilon n^{-1/2}$ comes from the event that a single "random-like" direction is hit very weakly by $A_n$. On the other hand, if $\varepsilon$ is a small enough exponential, the most likely way to have $\sigma_{\min}(A_n) < \varepsilon n^{-1/2}$ comes from the matrix just being singular, which (conjecturally) should come from the very structured event of having two rows or columns equal up to sign. These sorts of problems are also related to further questions about the clustering and repulsion of eigenvalues, and we refer the reader to the literature for more on this. Anti-concentration of the determinant and permanent. Before we turn to discuss techniques, we highlight two further problems in this area. The first concerns the anti-concentration of the determinant. Concretely, what is $\mathbb{P}(\det(B_n) = 1)$? (Or similarly for the symmetric model $A_{n}$.) Here the above proofs for singularity immediately give that this probability is exponentially small, but conjecturally it should be much smaller, most likely of the form $n^{-cn}$. Indeed, it seems to be a major step even to prove that this probability is $e^{-\omega(n)}$. We also highlight the related problem on the permanent of a random matrix. In this case there is no natural geometric interpretation, forcing us to reason by other means. Tao and Vu showed that $\operatorname{Per}(B_n) = 0$ with probability $O(n^{-c})$, which was the state of the art until a very recent breakthrough on this problem by Hunter, Kwan and Sauermann, who have shown an exponential upper bound.
Similarly to the previous question, it seems that a bound of the type $n^{-cn}$ should be the truth. 4.1 Littlewood-Offord theory Littlewood-Offord theory is a classical subject that has undergone intense development in recent years as it has become interwoven with the methods used in random matrix theory. The main object of study here is the concentration function $\rho_{\varepsilon}(v)$, where $v \in \mathbb{R}^n$ and $\varepsilon > 0$, which is defined by $$ \rho_{\varepsilon}(v) = \max_{b}\mathbb{P}\big(\big|\langle X, v \rangle - b\big| < \varepsilon\big), $$ where $X \in \{-1, 1\}^n$ is sampled uniformly at random and the maximum is over all $b \in \mathbb{R}$. (Actually, much more general distributions for $X$ are considered, but we limit ourselves to uniform $X$ here.) To get a feel for how this is immediately connected to the problems above, we consider the singularity problem for iid matrices (i.e. not symmetric). As above, we let $B_{n}$ be an $n\times n$ matrix with all entries independent and uniform in $\{-1,1\}$. We now expose the first $n - 1$ rows of the matrix $B_{n}$, define $\mathcal{E}$ to be the event that these rows have full rank and, on this event, let $v$ be a unit normal to their span. Thus, if we let $X\in \{-1,1\}^n$ be the last row of $B_{n}$ (so far unexposed), then on the event $\mathcal{E}$ the matrix $B_{n}$ is singular if and only if $\langle X,v\rangle = 0$. Thus $$ \mathbb{P}(\det(B_{n}) = 0) \leqslant \mathbb{P}(\langle X, v \rangle = 0 \text{ and } \mathcal{E}) + \mathbb{P}\left(\mathcal{E}^{c}\right) \leqslant \mathbb{E}_{v}\rho_{0}(v) + \mathbb{P}\left(\mathcal{E}^{c}\right), \tag{4.6} $$ where the expectation is over vectors $v$ that occur as normals to the subspace spanned by the first $n - 1$ rows. While the probability $\mathbb{P}(\mathcal{E}^c)$ can be taken care of by induction, or otherwise, the main difficulty is in dealing with $\rho_0(v)$. While one expects a somewhat typical vector $v$ to have $\rho_0(v) \approx 2^{-n}$ (further supporting our intuition for (4.3)), there exist $v$ for which $\rho_0(v)$ can be as large as $1/2$, for example $v = (1, -1, 0, \ldots, 0)$. Moreover, anything in between these two extremes is possible. Thus, the central challenge in estimating the singularity probability is to show that it is unlikely that a vector $v$ with large $\rho_0(v)$ will be orthogonal to the first $n - 1$ rows. We are thus led to understand the concentration function $\rho_0(v)$ as $v$ varies over all possible normals. Classical theory. Interestingly, the study of $\rho_0(v)$ long pre-dates the study of random matrices, going back to the work of Littlewood and Offord in the 1930s on the zeros of random polynomials. (Actually, we already mentioned these papers in our discussion of flat Littlewood polynomials.) In 1945 Erdős proved what is perhaps the subject's first flagship result, showing that if $v\in \mathbb{R}^n$ has all non-zero coordinates then $$ \rho_{0}(v) \leqslant \rho_{0}((1, \dots, 1)) = O(n^{-1/2}). $$ This was then developed by Szemerédi and Sárközy, Stanley and many others. These early results provide us with a beautiful combinatorial perspective on the problem, but most important for us is the pioneering work of Halász, who made an important connection with the Fourier transform, thus giving us a different analytic perspective on the problem. Inverse Littlewood-Offord theory. More recently the question has been turned on its head by Tao and Vu, who pioneered the study of "inverse" Littlewood-Offord theory.
They suggested the following "meta-conjecture" that has guided much subsequent research. If $\rho_0(v)$ is "large" then $v$ must exhibit arithmetic structure. This "meta-conjecture" has been addressed in the work of Tao and Vu, and Nguyen and Vu who proved that if $v$ is such that $\rho(v) > n^{-C}$ then $O(n^{1 - \varepsilon})$ of the coordinates $v_i$ of $v$ can be efficiently covered with a generalized arithmetic progression of rank $r = O_{\varepsilon, C}(1)$ . While these results provide a very satisfying picture in the range $\rho_0(v) > n^{-C}$ , they begin to break down when $\rho (v) = n^{-\omega (1)}$ and are therefore of limited use at exponential probability scales. More recently these ideas have been extended to give structural information about $v$ when $\rho_0(v)$ as small as $\exp (-c\sqrt{n\log n})$ , but these results are of a somewhat different nature. Perhaps the most relevant among these results for our discussion concerning Theorem 4.1 is the inverse Littlewood-Offord theorem of Rudelson and Vershynin that allows one to control $\rho(v)$ in terms of a related quantity known as the "least common denominator" (LCD) of the vector $v$ . This result gives relatively weak information about $v$ , relative to the results mentioned above, however is effective and very useful all the way down to exponential scales. As this quantity will pop up in our own work, we postpone the discussion of this to Section 4.2. Typical Littlewood-Offord theory. More recently, with the breakthrough work of Tikhomirov (discussed at (4.2)), a fresh perspective has been shed on (4.6). Instead of trying to stratify the different behaviors of $\rho_0(v)$ with different "inverse" theorems, he directly studies the distribution of $\rho_0(v)$ as a random variable. More precisely, he considers $\rho_{\varepsilon}(v)$ at each fixed scale $\varepsilon > 2^{-n + o(n)}$ , where $v \sim \mathbb{S}^{n - 1}$ is chosen uniformly at random from the unit sphere. Now for such random $v$ and $\varepsilon > 2^{-n + o(n)}$ one has $$ \mathbb {E} _ {v} \rho_ {\varepsilon} (v) = \Theta (\varepsilon). $$ The technical heart of the work is the following tail-estimate on the distribution of $\rho_0(v)$ $$ \mathbb {P} _ {v} \left(\rho_ {\varepsilon} (v) \geqslant L \varepsilon\right) \leqslant L ^ {- \omega (n)} $$ for appropriately large (but fixed) $L$ . In our work on the singularity probability, we also take a probabilistic perspective but employ a completely different set of techniques to understand these sorts of tail events, a topic we discuss in Section 4.3. 4.2 Approximate negative correlation One of the new ingredients introduced for Theorem 4.1 is an "approximate negative correlation" inequality for linear events. We first discuss this result in its own right and then sketch how it fits into place in the proof of Theorem 4.1 in Section 4.3. We say that two events $A, B$ are negatively correlated if $$ \mathbb {P} (A \cap B) \leqslant \mathbb {P} (A) \mathbb {P} (B). $$ In what follows we let $\varepsilon > 0$ and let $X \in \{-1, 1\}^n$ be a uniform random vector. Here we are interested in "linear" events of the shape $$ | \langle X, v \rangle | \leqslant \varepsilon . 
\tag{4.7} $$ The result we discuss here shows the approximate negative dependence of the event (4.7), for all $\varepsilon > e^{-cn}$, from the intersection of events $$ |\langle X, w_{1} \rangle| \leqslant \beta,\ |\langle X, w_{2} \rangle| \leqslant \beta,\ \dots,\ |\langle X, w_{k} \rangle| \leqslant \beta, \tag{4.8} $$ where $\beta > 0$ is small but fixed and $w_{1}, \ldots, w_{k}$ are orthonormal vectors with $k \leqslant cn$. Crucially, in this statement we don't assume anything about the structure of the vectors $w_{i}$ and we allow the dimension of the space they span to be as large as $\Theta(n)$. Our main negative dependence result says something of the general shape $$ \mathbb{P}_{X}\left(\left\{\left|\langle X, v \rangle\right| \leqslant \varepsilon\right\} \cap \bigcap_{i=1}^{k}\left\{\left|\langle X, w_{i} \rangle\right| \leqslant \beta\right\}\right) \leqslant \mathbb{P}_{X}\left(\left|\langle X, v \rangle\right| \leqslant \varepsilon\right)\mathbb{P}_{X}\left(\bigcap_{i=1}^{k}\left\{\left|\langle X, w_{i} \rangle\right| \leqslant \beta\right\}\right), \tag{4.9} $$ although in an approximate form. To state this result properly, we use the notion of the "Least Common Denominator" of a vector $v$, introduced by Rudelson and Vershynin and mentioned above in our discussion of Littlewood-Offord theory. For $\alpha \in (0,1)$, we define the least common denominator of a vector $v \in \mathbb{R}^n$ to be $$ D_{\alpha}(v) = \inf\left\{\varphi > 0: d(\varphi \cdot v, \mathbb{Z}^{n} \setminus \{0\}) \leqslant \sqrt{\alpha n}\right\}. $$ That is, this quantity is the smallest multiple of the vector $v$ which is "close" to a non-zero point of the integer lattice $\mathbb{Z}^n$. Rudelson and Vershynin showed that $(D_{\alpha}(v))^{-1}$ behaves quite a bit like $\mathbb{P}(|\langle X, v\rangle| \leqslant \varepsilon)$. THEOREM 4.2. For $n \in \mathbb{N}$, $\alpha \in (0,1)$ and $\varepsilon > 0$, let $v \in \mathbb{S}^{n-1}$ satisfy $D_{\alpha}(v) > c/\varepsilon$. If $X \in \{-1,1\}^n$ is uniform then $$ \mathbb{P}\left(|\langle X, v \rangle| \leqslant \varepsilon\right) \leqslant C\varepsilon + e^{-c\alpha n}, $$ where $C, c > 0$ are absolute constants. Our first main negative dependence result proves an approximate version of (4.9), with $(D_{\alpha}(v))^{-1}$ (a slightly better behaved quantity) as a proxy for $\mathbb{P}(|\langle X, v\rangle| \leqslant \varepsilon)$. The following is a formal statement. THEOREM 4.3. Let $n \in \mathbb{N}$, $\alpha \in (0,1)$, $0 \leqslant k \leqslant \alpha\beta n$ and $\varepsilon \geqslant \exp(-\alpha\beta n)$. Let $v \in \mathbb{S}^{n-1}$ and let $w_1, \ldots, w_k \in \mathbb{S}^{n-1}$ be orthogonal. If $X \in \{-1,1\}^n$ is a uniform random vector and $D_\alpha(v) > 16/\varepsilon$ then $$ \mathbb{P}_{X}\left(\left|\langle X, v \rangle\right| \leqslant \varepsilon \text{ and } \bigcap_{i=1}^{k}\left\{\left|\langle X, w_{i} \rangle\right| \leqslant \beta\right\}\right) < C\varepsilon e^{-ck}, \tag{4.10} $$ where $C, c > 0$ are absolute constants. Our proof of Theorem 4.3 is a delicate argument with the $O(n)$-dimensional Fourier transform. As this proof is somewhat different in flavour from the other results in this survey, we do not elaborate on it further. 4.3 Sketch of the proof of Theorem 4.1 Now that we have motivated a few of the tools behind Theorem 4.1, we turn to sketch its proof.
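Before doing so, it may help to see the two extremes of the concentration function from Section 4.1 in a toy computation. The following sketch (purely illustrative and not from the paper; the choice $n = 16$ and the two test vectors are arbitrary) computes $\rho_0(v) = \max_b\mathbb{P}(\langle X, v\rangle = b)$ exactly by enumerating all $2^n$ sign vectors, exhibiting the $\Theta(n^{-1/2})$ behaviour for $v = (1,\ldots,1)$ and the minimal value $2^{-n}$ for a vector whose signed subset sums are all distinct.

```python
import numpy as np
from itertools import product

def rho_0(v):
    """Exact concentration function max_b P(<X, v> = b) for uniform X in {-1,1}^n."""
    v = np.asarray(v, dtype=float)
    counts = {}
    for signs in product((-1, 1), repeat=len(v)):   # enumerate all 2^n sign patterns
        s = float(np.dot(signs, v))
        counts[s] = counts.get(s, 0) + 1
    return max(counts.values()) / 2.0 ** len(v)

n = 16
print(rho_0(np.ones(n)))           # ~ 0.196, the central binomial term ~ c * n^(-1/2)
print(rho_0(2.0 ** np.arange(n)))  # = 2^(-16) ~ 1.5e-5, all signed sums are distinct
```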
In analogy with our discussion in Section 4.1, we study the " $n$ -dimensional concentration function" which we define to be $$ f _ {\varepsilon} (v) = \mathbb {P} _ {A _ {n}} \left(\left\| A _ {n} v \right\| _ {2} \leqslant \varepsilon n ^ {1 / 2}\right), \tag {4.11} $$ where $v \in \mathbb{S}^{n - 1}$ , $\varepsilon > 0$ and $A_{n}$ is a uniformly drawn from $n \times n$ symmetric matrices with entries in $\{-1,1\}$ . Intuitively speaking, we expect $f_{\varepsilon}(v)$ to be "large" for directions $v$ that are more likely to appear in the kernel of $A_{n}$ and therefore, to understand the singularity probability of a matrix $A_{n}$ , it is essential to understand the upper tails of the random variable $f_{\varepsilon}(v)$ when $v \sim \mathbb{S}^{n-1}$ is sampled uniformly at random. As we discussed above, this probabilistic interpretation of singularity was pioneered by Tikhomirov and is a convenient perspective for us to adopt here, although our techniques are quite different. Moreover, if we want to prove exponential bounds on the singularity probability, we need to control this function for $\varepsilon$ as small as $e^{-cn}$ . For technical reasons we also need to restrict ourselves to vectors on the sphere that have controlled infinity norm. We call this (vast majority) of the $n$ -dimensional sphere $\mathbb{S}_b^{n-1}$ . Central to our proof is a large deviation estimate of the following type. THEOREM 4.4. For $L > 1$ and $e^{-cn} \leqslant \varepsilon \ll 1$ we have that $$ \mathbb {P} _ {v} \left(f _ {\varepsilon} (v) \geqslant \left(L \varepsilon\right) ^ {n}\right) \leqslant (c L) ^ {- 2 n}, $$ where $v \sim \mathbb{S}_b^{n-1}$ is sampled uniformly at random and $c > 0$ is an absolute constant. In what follows, we sketch the proof of a weaker form of Theorem 4.4 where we prove a bound of the type $(cL)^{-n}$ in place of $(cL)^{-2n}$ . This former bound is too weak for our purposes, but most of the main ideas are contained in its proof. Indeed to prove this, it is enough to show $$ \mathbb {E} _ {v} f _ {\varepsilon} (v) = \mathbb {E} _ {v} \mathbb {P} _ {A _ {n}} \left(\| A _ {n} v \| _ {2} \leqslant \varepsilon n ^ {1 / 2}\right) \leqslant (C \varepsilon) ^ {n} \tag {4.12} $$ and can then apply Markov's inequality to finish. To this end, our first step is to break up the sphere based on the set of coordinates are well behaved. Indeed by now standard methods, we can assume that $v_{i} = \Theta (n^{-1 / 2})$ for $d = cn$ values of $i$ . By union bounding over all such choices for these coordinates, it is enough to assume $v_{i} = \Theta (n^{-1 / 2})$ for all $i\in [d]$ . We then show that we can "replace" the matrix $A_{n}$ (in the definition of $f_{\varepsilon}$ in (4.11)) with a random matrix $M_{n}$ that has many of the entries zeroed out. This will allow us to focus on the well-behaved part of $v_{i}$ and additionally untangle some of the more subtle and complicated dependencies. Indeed we show, by an appropriate Fourier argument, that $$ f _ {\varepsilon} (v) \leqslant C ^ {n} \cdot \mathbb {P} _ {A _ {n}} \left(\| M _ {n} v \| _ {2} \leqslant \varepsilon n ^ {1 / 2}\right), \tag {4.13} $$ for some $C > 1$ , where $M_{n}$ is the random matrix defined by $$ M = \left[ \begin{array}{c c} \mathbf {0} _ {[ d ] \times [ d ]} & H ^ {T} \\ H & \mathbf {0} _ {[ d + 1, n ] \times [ d + 1, n ]} \end{array} \right]. 
\tag {4.14} $$ Here $H$ is a $(n - d)\times d$ random matrix with iid entries that are $\mu$ -lazy, meaning that $(H)_{i,j} = 0$ with probability $1 - \mu$ and $(H)_{i,j} = \pm 1$ with probability $\mu /2$ for some appropriately small $\mu$ . We now use this special form of $M$ to break up the event $\| Mv\| _2\leqslant \varepsilon n^{1 / 2}$ . Indeed, we write $$ M v = \left[ \begin{array}{c} H v _ {[ d ]} \\ H ^ {T} v _ {[ d + 1, n ]} \end{array} \right] $$ and so we need only to control the intersection of the events $$ \left\| H v _ {[ d ]} \right\| _ {2} \leqslant \varepsilon n ^ {1 / 2} \qquad \text {a n d} \qquad \left\| H ^ {T} v _ {[ d + 1, n ]} \right\| _ {2} \leqslant \varepsilon n ^ {1 / 2}. $$ Note that if we simply ignore the second event and work only with the first, we land in a situation very similar to previous works; where half of the matrix is neglected entirely and we are thus limited by the $(n\log n)^{1 / 2}$ obstruction, discussed above Theorem 4.1. To overcome this barrier, we crucially need to control these two events simultaneously. The key idea is to use the randomness in $H$ to control the event $\| Hv_{[d]}\| _2\leqslant \varepsilon n^{1 / 2}$ and we use the randomness in the selection of $v\in \mathbb{S}_b^{n - 1}$ to control the event $\| H^T v_{[d + 1,n]}\| _2\leqslant \varepsilon n^{1 / 2}$ . For this, we partition the outcomes in $H$ , based on a robust notion of rank (hence the name "rank-splitting"). We define the event $\mathcal{E}_k$ to be the event that $H$ has at $k$ "unhealthy" singular values, $$ \mathcal {E} _ {k} = \left\{H: \sigma_ {d - k} (H) \geqslant c \sqrt {n} \text {a n d} \sigma_ {d - k + 1} (H) < c \sqrt {n} \right\}, $$ where $\sigma_1(H) \geqslant \dots \geqslant \sigma_d(H)$ denote the singular values of $H$ . We then bound $$ \mathbb {P} _ {M} \left(\| M v \| _ {2} \leqslant \varepsilon n ^ {1 / 2}\right) $$ above by (only using the randomness in $M$ , for the moment) $$ \sum_ {k = 0} ^ {d} \mathbb {P} _ {H} \left(\| H ^ {T} v _ {[ d + 1, n ]} \| _ {2} \leqslant \varepsilon n ^ {1 / 2} \mid \| H v _ {[ d ]} \| _ {2} \leqslant \varepsilon n ^ {1 / 2} \wedge \mathcal {E} _ {k}\right) \cdot \mathbb {P} _ {H} \left(\| H v _ {[ d ]} \| _ {2} \leqslant \varepsilon n ^ {1 / 2} \wedge \mathcal {E} _ {k}\right). \tag {4.15} $$ We now see the link with our "approximate negative dependence theorem", which we discussed in section 4.2 which we use (after a good deal of preparation) to bound the quantity $$ \mathbb {P} _ {H} \big (\| H v _ {[ d ]} \| _ {2} \leqslant \varepsilon \sqrt {n} \wedge \mathcal {E} _ {k} \big). $$ Indeed, after "tensorizing" Theorem 4.3 and approximating these objects with appropriate nets, we are able to conclude that $$ \mathbb {P} _ {H} \big (\| H v _ {[ d ]} \| _ {2} \leqslant \varepsilon \sqrt {n}, \wedge \mathcal {E} _ {k} \big) \leqslant (C \varepsilon e ^ {- c k}) ^ {n - d}, $$ unless $v_{[d]}$ is "structured", in which case we do something different (and substantially easier). Thus, for all non-structured $v$ , we have (4.15) is at most something of the form $$ (C \varepsilon) ^ {n - d} \sum_ {k = 0} ^ {d} e ^ {- c k (n - d)} \cdot \mathbb {P} _ {H} \left(\| H ^ {T} v _ {[ d + 1, n ]} \| _ {2} \leqslant \varepsilon n ^ {1 / 2} \mid \| H v _ {[ d ]} \| _ {2} \leqslant \varepsilon n ^ {1 / 2} \wedge \mathcal {E} _ {k}\right). \tag {4.16} $$ Up to this point, we have not appealed to the randomness in the choice of $v \in \mathbb{S}_{\flat}^{n - 1}$ , beyond imposing that $v$ is non-structured. 
We now introduce the randomness in $v$ by taking an expectation in $v$, bringing us back to our goal of bounding (4.12). Taking the expectation over $v$ in (4.16) and swapping the order of the two expectations tells us that (4.12) is at most
$$
(C\varepsilon)^{n-d} \sum_{k=0}^{d} e^{-ck(n-d)} \cdot \mathbb{E}_{H} \mathbb{P}_{v}\left(\left\| H^{T} v_{[d+1,n]} \right\|_{2} \leqslant \varepsilon n^{1/2}\right) \mathbf{1}(H \in \mathcal{E}_{k}). \tag{4.17}
$$
We then deal with this inner probability by considering a fixed $H \in \mathcal{E}_{k}$. Here one can show
$$
\mathbb{P}_{v_{[d+1,n]}}\left(\| H^{T} v_{[d+1,n]} \|_{2} \leqslant \varepsilon n^{1/2}\right) \leqslant (C\varepsilon)^{d-k}. \tag{4.18}
$$
Intuitively this is clear: $H^{T}$ has at most $k$ small singular directions, so $v_{[d+1,n]}$ must be "nearly orthogonal" to each of the $d-k$ "healthy" singular directions of $H$. At this point one might be slightly worried that we have merely arrived at another hard high-dimensional Littlewood-Offord problem. However, the big advantage here is that $v_{[d+1,n]}$ is a continuous random variable, and thus its analysis is vastly easier. Now, stringing together (4.13), (4.17) and (4.18) (and using that $\varepsilon > e^{-cn}$), we arrive at our goal of showing that
$$
\mathbb{E}_{v} f_{\varepsilon}(v) \leqslant C^{n} \, \mathbb{E}_{v} \mathbb{P}_{M}\left(\| M v \|_{2} \leqslant \varepsilon n^{1/2}\right) \leqslant (C\varepsilon)^{n}.
$$
To prove the stronger bound stated in Theorem 4.4, we carry out a "second-moment" version of the above argument, which adds some extra complications but follows the same overall shape.

Acknowledgments. I would like to thank Marcus Michelen for some useful discussions on topics related to the sphere packing literature. I would also like to thank Rob Morris and Marcus Michelen for comments on an earlier draft. Finally, I would like to thank the volunteers working to put together the ICM surveys.
{"title": "Probabilistic combinatorics at exponentially small scales", "raw_content": "# Probabilistic combinatorics at exponentially small scales\n\nJulian Sahasrabudhe*\n\nAbstract. In many applications of the probabilistic method, one looks to study phenomena that occur \"with high probability\". More recently however, in an attempt to understand some of the most fundamental problems in combinatorics, researchers have been diving deeper into these probability spaces, and understanding phenomena that occur at much smaller probability scales. Here I will survey a few of these ideas from the perspective of my own work in the area.\n\n1 Introduction In many applications of the probabilistic method one aims to show the existence of an object not by constructing the object directly, but rather by setting up a probability space and showing that the one can draw the object of interest with non-zero probability. While this might at first seem like a trivial recasting of the problem, it has proven to be incredibly powerful change of perspective which has come to dominate the field in recent years.\n\nClassically, in such applications of the probabilistic method, one studies events that occur not just with non-zero probability, but with probability close to one. A state of affairs which has been cheekily sloganized as \"the probabilistic method finds the hay in the haystack\". In this survey we will touch on a few topics where we are forced to go far beyond these typical behaviors and study phenomena that occur at exponentially small probability scales - right at the edge where probability is still useful. Perhaps surprisingly, there is quite a bit to be said in these cases.\n\nRather than attempt to properly survey this topic, which I am not properly placed to do, I will instead use this theme to tie together a few strands of my own recent work and to highlight some of my thinking on the topics where I have had some success.\n\nFlat littlewood polynomials. We begin this survey in Section 2 where we introduce the area of discrepancy theory before sketching how some of these ideas came to inform the thinking behind a recent result of myself, joint with Paul Balister, Béla Bollobás, Robert Morris and Marius Tiba, in the area of harmonic analysis [2]. Here we used ideas from discrepancy theory to construct so called flat Littlewood polynomials, thus resolving an old conjecture of Littlewood.\n\nA Littlewood polynomial is a polynomial with all coefficients in $\\{-1, 1\\}$ . Inspired by a question of Erdős [27], Littlewood went on to consider the following question regarding about how \"flat\" the modulus of such polynomials can be. If $P$ is a degree $n$ Littlewood polynomial then, by a simple application of Parseval, we see that we must have that $\\max_{z:|z|=1} |P(z)| \\geqslant \\sqrt{n+1}$ . Littlewood conjectured that there exist such polynomials that are \"flat\" as possible, in the sense that $|P(z)| = \\Theta(\\sqrt{n})$ for all $z$ with $|z| = 1$ .\n\nUsing tools from probability and discrepancy theory, we solved this conjecture by showing that there are constants $c_{1}, c_{2} > 0$ so that, for all $n \\geqslant 2$ , there exists a Littlewood polynomial $P(z)$ of degree $n$ with\n\n$$\nc _ {1} \\sqrt {n} \\leqslant | P (z) | \\leqslant c _ {2} \\sqrt {n} \\tag {1.1}\n$$\n\nfor all $z \\in \\mathbb{C}$ with $|z| = 1$ . This set us up to discuss some beautiful open problems in discrepancy theory and some exciting recent developments.\n\nSphere packing in high dimensions. 
In Section 3 we will go on to discuss a technique which can be used to \"access\" these events at very small probability scales; via a random process that is biased towards the event of interest. We will use this thinking to frame the recent work of myself, joint with Marcelo Campos, Matthew Jenssen and Marcus Michelen, where we construct sphere packings and spherical codes in large dimension. In particular, we\n\nshow the following new lower bounds on the classical sphere packing problem, giving the first asymptotically growing improvement since the work 1947 work of Rogers [72].\n\nLet $\\theta (d)$ be the maximum proportion of $\\mathbb{R}^d$ which is covered by non-overlapping identical spheres. As the dimension $d$ tends to infinity, we have\n\n$$\n\\theta (d) \\geqslant (1 - o (1)) \\frac {d \\log d}{2 ^ {d + 1}}. \\tag {1.2}\n$$\n\nOne interesting aspect of our sphere packings is that they are both \"dense\" and highly \"disordered\". Such sphere packings arise in the physics literature and were conjectured to exist by Parisi and Zamponi [69]. It moreover seems reasonable to conjecture that the bound (1.2) is actually sharp, up to constant factors, for such \"highly disordered\" sphere packings.\n\nThis method also naturally adapts to improve the lower bounds on spherical codes in large dimension. Let $A(d, \\theta)$ be the maximum proportion of the sphere $\\mathbb{S}^{d-1}$ that can be covered by non-overlapping spherical caps of radius $\\theta$ . If $\\theta \\in (0, \\pi/2)$ , then\n\n$$\nA (d, \\theta) \\geqslant (1 - o (1)) \\frac {d \\log d}{2 s _ {d} (\\theta)},\n$$\n\nwhere $s_d(\\theta)$ denotes the normalized volume of a spherical cap.\n\nWe will then go on to discuss a further beautiful advance of Klartag [46] who has construed even denser sphere packings in large dimension by studying properties of random lattices at exponentially small scales. He proved that\n\n$$\n\\theta (d) \\geqslant (c - o (1)) d ^ {2} / 2 ^ {d},\n$$\n\nfor some $c > 0$ . Interestingly, it seems reasonable to conjecture that Klartag's bound is also sharp, up to constant factors, in the case of lattice packings.\n\nDespite these advances, there still remain exponential gaps between the upper and lower bounds on the quantities $\\theta(d)$ and $A(d,\\theta)$ as $d\\to \\infty$ and it seems to be an incredibly inciting question to improve either by an exponential factor.\n\nRandom matrix theory at exponentially small scales. In the final section we shall keep with our theme of high dimensional geometry and move on to discuss phenomena that occur at exponentially small scales in the setting of random matrix theory. This discussion will center around a recent result of the author in joint work with Campos, Jenssen and Michelen [18] on the singularity probability of a random symmetric matrix. We showed that if $A_{n}$ is drawn uniformly at random from the $n \\times n$ symmetric matrices with entries in $\\{-1,1\\}$ then\n\n$$\n\\mathbb {P} \\left(\\det \\left(A _ {n}\\right) = 0\\right) \\leqslant e ^ {- c n}, \\tag {1.3}\n$$\n\nwhere $c > 0$ is an absolute constant.\n\nAs it happens, this will open us to discuss a handful of other results on related topics - the study of the distribution of the least singular value and estimates on the repulsion and clustering of the eigenvalues of random symmetric matrices. 
We will, in particular, highlight several of the techniques developed in service of (1.3).\n\n2 Beck, Spencer and Flat Littlewood polynomials Motivated by generalizing his famous theorem on 3-term progressions in dense sets of integers [74], Roth considered the following natural problem on two-colourings of an interval. Let $n$ be a large integer and let $f$ be an arbitrary function $f:[n] \\to \\{-1,1\\}$ , which we extend to all of $\\mathbb{N}$ by setting $f(x) = 0$ for all $x > n$ . Define the discrepancy of $f$ , denoted $D(f)$ , to be the maximum sum in absolute value along an arithmetic progression;\n\n$$\nD (f) = \\max _ {a, b} \\left| \\sum_ {x} f (a x + b) \\right|.\n$$\n\nRoth proved [73] that $D(f) \\geqslant cn^{1/4}$ for all such functions $f$ and conjectured that this could be improved to $n^{1/2 - o(1)}$ . This was then disproved by Sarkozy who's result was improved a few more times before Beck [8]\n\nintroduced his famous partial coloring method to resolve the question up to logarithms, showing that Roth's original bound was just about sharp\n\n$$\n\\min _ {f} D (f) = n ^ {1 / 4 + o (1)}.\n$$\n\nSome years later, the general utility of this beautiful idea was noticed by Spencer [91] who used Beck's partial colouring idea to prove Spencer's famous theorem in discrepancy theory.\n\nTHEOREM 2.1. Let $A$ be a $n \\times n$ matrix with all entries bounded in absolute value by one. Then there exists a vector $x \\in \\{-1,1\\}^n$ for which\n\n$$\n\\| A x \\| _ {\\infty} \\leqslant C \\sqrt {n}, \\tag {2.1}\n$$\n\nfor some absolute constant $C > 0$ .\n\nWhat is remarkable about this result is that if one takes a suitably typical matrix $^2$ and a random $x \\in \\{-1, 1\\}^n$ one will have\n\n$$\n\\| A x \\| _ {2} = \\Theta (\\sqrt {n \\log n}),\n$$\n\nwith high probability. Moreover, one expects that it is exponentially unlikely that a random vector $x \\in \\{-1, 1\\}^n$ satisfies $\\| Ax \\|_{\\infty} \\leqslant C \\sqrt{n}$ . This is because for each $i$ we have $|(Ax)_i| \\leqslant C \\sqrt{n}$ with constant probability and one expects that each entry is roughly independent (again assuming $A$ is appropriately non-degenerate). Thus to find the vector $x \\in \\{-1, 1\\}^n$ guaranteed by Theorem 2.1, \"standard\" probabilistic arguments are not available. Instead one can adapt the simple but ingenious differencing method of Beck to this setting.\n\nFor this, we iterate the following lemma that finds a vector $x \\in \\{-1, 0, 1\\}^n$ with not to many zero entries.\n\nLEMMA 2.2. Let $A$ be a $n \\times n$ matrix with complex entries, bounded in absolute value by one. Then there exists a vector $x \\in \\{-1,0,1\\}^n$ with at least $n/4$ non-zero terms for which\n\n$$\n\\| A x \\| _ {\\infty} \\leqslant C \\sqrt {n}, \\tag {2.2}\n$$\n\nfor some absolute constant $C > 0$ .\n\nProof sketch. To prove this consider all images $\\{Ay:y\\in \\{0,1\\} ^n\\}$ . One then uses a (careful) application of the pigeon-hole principle to show that there is a subset $S\\subset \\{0,1\\} ^n$ with $|S|\\geqslant 2^{(1 - \\varepsilon)n}$ , for which $\\| y - y^{\\prime}\\|_{\\infty}\\leqslant C\\sqrt{n}$ for all $y,y^{\\prime}\\in S$ . 
Since each vector $y\\in \\{0,1\\}$ has at most $\\sum_{i}^{d}\\binom{n}{i}$ vectors with hamming distance $\\leqslant d$ , there must be a pair $y,y^{\\prime}\\in S$ with the property that $x = y - y^{\\prime}$ has at least $d = n / 4$ non-zero entries.\n\n2.1 Flat Littlewood polynomials Interestingly these fundamental questions of discrepancy theory connect with an old and famous conjecture of Littlewood on \"Littlewood polynomials\". Indeed, say that a polynomial $P(z)$ of degree $n$ , is a Littlewood polynomial if\n\n$$\nP (z) = \\sum_ {k = 0} ^ {n} \\varepsilon_ {k} z ^ {k} \\qquad \\text {w h e r e} \\qquad \\varepsilon_ {k} \\in \\{- 1, 1 \\}.\n$$\n\nThe study of such polynomials has a long and fascinating history (see [11] or [62]), with their study going back to the work of Hardy and Littlewood [38] over 100 years ago, but was also popularized by the work of Bloch and Pólya [10], the work of Littlewood and Offord [54, 57], and others [28, 83].\n\nWhile there are many beautiful questions about such polynomials, we are interested here in a problem asked by Erdős [27] in 1957 which was then taken up and extensively studied by Littlewood [50, 51, 53, 52] in a series of papers on the extremal properties of polynomials with restricted coefficients. For this we say a (sequence) of Littlewood polynomials $P_{n}(z)$ ( $P_{n}$ of degree $n$ ) is flat if there exist constants $c_{1}, c_{2} > 0$ such that<sup>3</sup>\n\n$$\nc _ {1} \\sqrt {n} \\leqslant \\left| P _ {n} (z) \\right| \\leqslant c _ {2} \\sqrt {n}. \\tag {2.3}\n$$\n\nIn particular, Littlewood conjectured in [52] that flat Littlewood polynomials exist.\n\nTo see a first connection with discrepancy, we note that if one relaxes (2.3) to the one-sided bound,\n\n$$\n\\max _ {z: | z | = 1} | P (z) | \\leqslant C \\sqrt {n}. \\tag {2.4}\n$$\n\nwe arrive at the natural \"analogue\" of (2.1) in the setting of polynomials. Actually, the one-sided problem was solved in the 1950s by Rudin [79] and Shapiro [88] who introduced the famous \"Rudin-Shapiro\" polynomials. Interestingly, one of the first applications that Spencer gave of his new theorem was to give a very different proof of the existence of polynomials satisfying (2.4). To see the connection, we briefly sketch the idea.\n\nTHEOREM 2.3. For every $n \\geqslant 2$ there is a Littlewood polynomial $P_{n}$ with $\\deg(P_{n}) = n$ which satisfies (2.4), for some absolute $C > 0$ .\n\nProof sketch. For $\\theta \\in [0,2\\pi ]$ , define the vector\n\n$$\nv (\\theta) = \\left(1, e ^ {i \\theta}, e ^ {2 i \\theta}, \\ldots , e ^ {n i \\theta}\\right).\n$$\n\nNote if we apply Spencer's theorem to $v(2\\pi j / n)$ , for $j = 1, \\dots, n$ , we can ensure the polynomial is small at the $n$ th roots of unity. But this is not quite enough. To get around this we can also use Spencer's theorem to ensure that all of the derivatives $P^{(j)}$ are also well controlled at roots of unity. While this might seem like we are putting infinitely many constraints on the sequence $\\{-1, 1\\}^n$ it is actually okay since we need weaker and weaker control on each derivative. Then, via Taylor's theorem we can ensure that the polynomial is well behaved everywhere.\n\nTo attack Littlewood's conjecture, one might hope for a strengthening of Theorem 2.1 that also provides a lower bound on each entry. However it is not hard to see that this is not possible. 
Consider the $n + 1$ vectors $a^{(0)},\\ldots ,a^{(n)}$ where we define\n\n$$\na ^ {(i)} = (- 1, \\dots , - 1, 1, \\dots 1)\n$$\n\nto have exactly $i$ 1s and and $n - i$ entries equal to $-1$ . So given any $x \\in \\{-1, 1\\}^n$ , we can assume $\\langle x, a^{(0)} \\rangle \\geqslant 0$ and thus $\\langle x, a^{(n)} \\rangle = -\\langle x, a^{(1)} \\rangle \\leqslant 0$ . Now note that for each $i < n$ ,\n\n$$\n\\left| \\langle a ^ {(i + 1)}, x \\rangle - \\langle a ^ {(i)}, x \\rangle \\right| \\leqslant 2\n$$\n\nand therefore there is always a vector $a^{(i)}$ for which $|\\langle x,a^{(i)}\\rangle |\\leqslant 2$ . Thus we see that any matrix $A$ which is defined by any $n$ vectors among the $\\{a^{(i)}\\}_i$ provides a counterexample to this potential strengthening of the Spencer's theorem.\n\nIn joint work with Bollobás, Balister, Morris and Tiba, we showed that we could in fact get around this obstacle in the context of flat Littlewood polynomials and use discrepancy theory methods to prove the existence of these polynomials.\n\nTHEOREM 2.4. For all $n \\geqslant 2$ , there exists a Littlewood polynomial $P(z)$ of degree $n$ with\n\n$$\nc _ {1} \\sqrt {n} \\leqslant | P (z) | \\leqslant c _ {2} \\sqrt {n} \\tag {2.5}\n$$\n\nfor all $z \\in \\mathbb{C}$ with $|z| = 1$ . Here $c_{1}, c_{2} > 0$ are absolute constants.\n\nIn what follows we sketch a proof of Theorem 2.4.\n\n2.2 Sketch of the construction To prove Theorem 2.4 we may assume that the degree is $4n$ . We then write $z = e^{ix}$ and multiply by a phase $e^{-2ixk}$ so that we obtain a function\n\n$$\nf (x) = \\sum_ {k = - 2 n} ^ {2 n} \\varepsilon_ {k} e ^ {i x k}.\n$$\n\nThus our goal is to choose the $\\varepsilon_{k}\\in \\{-1,1\\}$ so that $f(x) = \\Theta (n^{1 / 2})$ , for all $x$ . We now constrain the coefficients so that the real and imaginary parts of $f$ separate nicely. In particular, we partition $C\\cup S = \\{1,\\ldots ,2n\\}$ and then fix $\\varepsilon_{-k} = \\varepsilon_{k}$ for each $k\\in C$ , and $\\varepsilon_{-k} = -\\varepsilon_{k}$ for each $k\\in S$ . Thus we may write\n\n$$\nf (x) = \\varepsilon_ {0} + 2 \\sum_ {k \\in C} \\varepsilon_ {k} \\cos k x + 2 i \\sum_ {k \\in S} \\varepsilon_ {k} \\sin k x.\n$$\n\nand then define the real and imaginary trigonometric polynomials as\n\n$$\nc (x) = \\sum_ {k \\in C} \\varepsilon_ {k} \\cos k x \\quad \\text {a n d} \\quad s (x) = \\sum_ {k \\in S} \\varepsilon_ {k} \\sin k x. \\tag {2.6}\n$$\n\nWhile we don't discuss the precise definition of $C, S$ here, it will be important for us that $C \\subset [\\gamma n]$ , where $\\gamma$ is a small constant, so that the degree of $c(x)$ is small.\n\nThus we construct our function $f$ in two stages. We first construct a cosine polynomial $c(x)$ which is $O(\\sqrt{n})$ for all $x$ and satisfies $|c(x)| \\geqslant \\delta \\sqrt{n}$ , except on a set of intervals $\\mathcal{I} = \\{I\\}$ , which are not too long, well separated and not too numerous. In the second, and more challenging step, we shall show that we can construct a sine polynomial $s(x)$ that is $\\Omega(n^{1/2})$ on these intervals where the cosine polynomial is small, while still maintaining the upper bound of $O(n^{1/2})$ overall.\n\nWhile there are probably many different ways of constructing an appropriate cosine polynomial, we use a deterministic construction based on the Rudin-Shapiro polynomials, mentioned above. 
Rudin and Shapiro defined their polynomials recursively by defining the pairs of polynomials $P_{t}, Q_{t}$ by $P_{0}(z) = Q_{0}(z) = 1$ and inductively defining\n\n$$\nP _ {t + 1} (z) = P _ {t} (z) + z ^ {2 ^ {t}} Q _ {t} (z), \\qquad \\mathrm {a n d} \\qquad Q _ {t + 1} (z) = P _ {t} (z) - z ^ {2 ^ {t}} Q _ {t} (z).\n$$\n\nfor each $t\\geqslant 0$\n\nWe construct our cosine polynomial $c(x)$ , by using a \"twisted\" version of these polynomials. We define\n\n$$\nc (x) = \\Re \\left(z ^ {T} P _ {t} (z) + z ^ {2 T} Q _ {t} (z)\\right),\n$$\n\nwhere $T \\approx \\gamma n$ and $t = \\log \\gamma n$ so that $\\deg(P_t)$ and $\\deg(Q_t) \\approx \\gamma n$ . Thus by the boundedness of the Rudin-Shapiro polynomials, we have that $|c(x)| = O(\\sqrt{n})$ . We also have that $|c(x)| \\geqslant \\delta \\sqrt{n}$ except on a collection $\\mathcal{I}$ of intervals that satisfy\n\n1. $|\\mathcal{I}| = O(\\gamma n)$ \n2. Each $I \\in \\mathcal{I}$ has $|I| = O(n^{-1})$ ; \n3. For distinct $I, J \\in \\mathcal{I}$ , we have $d(I, J) = \\Omega(n^{-1})$ .\n\nActually, the first condition holds since we arranged for the degree of $c(x)$ to be at most $\\gamma n$ . Also note that the second condition holds \"typically\" in the sense that we expect such a polynomial to have derivative $\\approx (\\gamma n)^{3/2}$ . So we expect it to be within the interval $[-\\delta \\sqrt{n}, \\delta \\sqrt{n}]$ for time at most $\\approx n^{-1}$ (here $\\delta \\ll \\gamma$ ). We expect the last condition for a similar reason.\n\nIn the second and more challenging part of the proof, we show that if we are given any collection of intervals satisfying the conditions (1)-(3), we can construct a sine polynomial $s(x)$ of the form in (2.6) that satisfies $s(x) = O(\\sqrt{n})$ everywhere and $|s(x)| \\geqslant \\delta \\sqrt{n}$ on each interval $I \\in \\mathcal{I}$ . It's in the construction of $s(x)$ that we use ideas from discrepancy theory.\n\nBefore defining the $s(x)$ , we first assign a \"direction\" $\\alpha(I) \\in \\{-1, 1\\}$ to each bad interval $I \\in \\mathcal{I}$ , which indicates the sign we want $s(x)$ to have on $I$ . (We will describe how we choose these $\\alpha_I$ in a moment). We then define, for each $k$ , the quantity\n\n$$\n\\Delta (k) = \\sum_ {I \\in \\mathcal {I}} \\frac {\\alpha_ {I}}{| I |} \\int_ {I} \\sin k s d s, \\tag {2.7}\n$$\n\nwhich tells us, based on how positive or negative it is, how much we \"prefer\" the choice of $\\varepsilon_{k} = 1$ versus the choice of $\\varepsilon_{k} = -1$ . Indeed one can think of $\\Delta(k)$ as the \"net-progress\" we make towards \"pushing\" our various intervals in the desired directions, when choosing the coefficient $\\varepsilon_{k}$ .\n\nTo ensure that we carefully \"spread\" each push out over all of the intervals and time steps, we make our first use of discrepancy theory (Theorem 2.1) to choose the $\\alpha_{I}$ so that\n\n$$\n\\left| \\Delta (k) \\right| \\leqslant C ^ {\\prime} | \\mathcal {I} | ^ {1 / 2} \\leqslant C (\\gamma n) ^ {1 / 2}, \\tag {2.8}\n$$\n\nfor some absolute constants $C', C > 0$ .\n\nWe now use these quantities $\\Delta(k)$ to define a space of random sine polynomials. First define the random variables\n\n$$\n\\hat {\\varepsilon} _ {k} \\in \\{- 1, 1 \\} \\qquad \\text {b y} \\qquad \\mathbb {E} \\hat {\\varepsilon} _ {k} = \\frac {\\Delta (k)}{C (\\gamma n) ^ {1 / 2}},\n$$\n\nwhere we are implicitly using (2.8), to make sure such a random variable exists. 
We then define the random sine polynomial $\\hat{s}(x)$ by\n\n$$\n\\hat {s} (x) = \\sum_ {k \\in S} \\hat {\\varepsilon} _ {k} \\sin k x.\n$$\n\nHeuristically, the idea is this: with each choice of $\\varepsilon_{k}$ , for each $k\\in S$ , we can increase\n\n$$\n\\min _ {x \\in I} | \\hat {s} _ {\\alpha} (x) |\n$$\n\nby about $\\Delta(k) / (C\\gamma n)^{1/2} = \\Theta((\\gamma n)^{-1/2})$ , for each $I \\in \\mathcal{I}$ . Since we have $|S| = \\Theta(n)$ values of $k$ to work with, we should (on average at least) push each interval as far as $\\geqslant n^{1/2} / \\gamma^{1/2}$ . That is, far enough.\n\nTo get some idea why we can indeed guarantee this, we see how orthogonality of the characters, allows us to see that the expected value of $\\hat{s}$ is large on all of the intervals. Indeed, fix an interval $J\\in \\mathcal{I}$ and fix a point $x\\in J$ . We observe that\n\n$$\n\\mathbb {E} \\hat {s} (x) = \\mathbb {E} \\sum_ {k \\in S} \\varepsilon_ {k} \\sin k x = \\frac {1}{C (\\gamma n) ^ {1 / 2}} \\sum_ {k \\in S} \\Delta (k) \\sin k x,\n$$\n\nwhich gives, expanding the definition of $\\Delta (k)$\n\n$$\n\\mathbb {E} \\hat {s} (x) = \\frac {1}{C (\\gamma n) ^ {1 / 2}} \\sum_ {I \\in \\mathcal {I}} \\frac {\\alpha_ {I}}{| I |} \\cdot \\int_ {I} \\sum_ {k \\in S} (\\sin k x) (\\sin k s) d s. \\tag {2.9}\n$$\n\nNow if $d(x, I) < 1 / n$ we have the approximate orthogonality relations\n\n$$\n\\left| I \\right| ^ {- 1} \\int_ {I} \\sum_ {k \\in S} (\\sin k x) (\\sin k s) d s \\approx n \\quad \\text {a n d} \\quad \\left| I \\right| ^ {- 1} \\int_ {I} \\sum_ {k \\in S} (\\sin k x) (\\sin k s) d s \\ll n,\n$$\n\nwhenever $d(x, I) \\gg 1 / n$ . Thus we have that the sum on the right hand side of (2.9) \"picks out\" the interval $J$ . We can therefore conclude that\n\n$$\n\\mathbb {E} s (x) \\approx \\frac {\\alpha_ {J} n}{C (\\gamma n) ^ {1 / 2}} = \\Theta \\big (\\sqrt {n / \\gamma} \\big).\n$$\n\nThus we have sketched how a sample from $\\hat{s}(x)$ behaves correctly on the intervals $I \\in \\mathcal{I}$ on average. Unfortunately, a typical sample from this polynomial will not be enough to push up all the intervals simultaneously. Indeed, the variance is large enough to spoil the value of $\\hat{s}(x)$ on many $I \\in \\mathcal{I}$ . To get beyond this, we appeal to tools in discrepancy theory and, in particular, to a version of the partial colouring lemma mentioned above (Lemma 2.2) due to Lovett and Meka [59].\n\nWith this we are able to find a (exponentially unlikely) polynomial $s(x)$ with the property that $|s(x)| \\geqslant \\delta n^{1/2}$ for all $x \\in \\bigcup_{I \\in \\mathcal{I}} I$ . It is with this polynomial that we can complete the proof.\n\n2.3 Constructive proofs of Spencer's theorem As we sketched above, Spencer's original proof of Theorem 2.1 relies fundamentally on an application of the pigeonhole principle and thus, while it does show that a solution $x$ does exist, it gives little guidance on how to find it efficiently. In fact, Spencer conjectured that no efficient algorithm exists.\n\nThe first breakthrough was provided by Bansal [5], who refuted Spencer's conjecture by providing an efficient algorithm which used a clever random walk which was guided by a semi-definite program, which encoded the current \"state\" of the solution. A few years later a much simpler algorithm was then given by Lovett and Meka [59], which has the additional advantage that it did not rely on Spencer's original proof. 
Because their proof is so simple and elegant, we sketch a proof of their main recoloring step; that is, their analogue of Lemma 2.2.\n\nLEMMA 2.5. Let $a^{(1)}, \\ldots, a^{(m)} \\in \\mathbb{R}^n$ be vectors with $\\| a^{(i)} \\|_{\\infty} \\leqslant 1$ for all $i \\in [m]$ . Let $x_0 \\in [-1, 1]^n$ and let $c_1, \\ldots, c_m \\geqslant 0$ be such that\n\n$$\n\\sum_ {j} \\exp \\left(- c _ {j} ^ {2} / 1 6\\right) \\leqslant 1 / 1 6.\n$$\n\nThen there exists $x \\in [-1, 1]^n$ so that for all $j \\in [m]$ , we have\n\n$$\n\\left| \\langle x - x _ {0}, a ^ {(j)} \\rangle \\right| \\leqslant c _ {j} \\sqrt {n}\n$$\n\nand $|x_i| = 1$ , for at least $n/4$ entries of $x$ .\n\nHere we have taken the liberty of removing the quantification that makes it clear that Lemma 2.5 also results in an efficient algorithm, since this is not our focus here. Let us also note that, in contrast to Lemma 2.2, the present lemma fixes the colouring on $n/4$ coordinates and gives a fractional weight to all other coordinates. Thus one should think about $x_0$ as the fractional weight \"so far\" in the iteration.\n\nProof sketch. We imagine two convex bodies. The first is simply the cube $[-1, 1]^n$ , which is defined, of course, by the hyperplanes $-1 \\leqslant x_i \\leqslant 1$ , for $i \\in [n]$ . The second is the convex body defined by the hyperplanes which come from the linear constraints. That is,\n\n$$\n\\left| \\langle x - x _ {0}, a ^ {(j)} \\rangle \\right| \\leqslant c _ {j} \\sqrt {n}. \\tag {2.10}\n$$\n\nWe now define the random process $X_{t}$ as follows. The process $X_{t}$ starts, at $t = 0$ , at $x_0$ . We then allow $X_{t}$ to evolve as Brownian motion until it first hits one of the hyperplanes above, at which point it sticks to it and behaves as a Brownian motion within the hyperplane.\n\nThus our process evolves, wandering as a Brownian motion within a set of hyperplanes, hitting further hyperplanes, and then restricting itself further. Thus $X_{t}$ is a random process with mean $x_0$ and with a covariance matrix that starts as the identity and which is successively projected onto the hyperplanes that it sticks to.\n\nUsing standard martingale concentration estimates, we can say that the large deviations of this process throughout time are no worse than that of the unconditioned Brownian motion. In particular we can see that\n\n$$\n\\mathbb {P} _ {X _ {t}} \\left(\\left| \\langle X _ {t} - x _ {0}, a ^ {(j)} \\rangle \\right| \\leqslant c _ {j} \\sqrt {n}\\right) \\leqslant e ^ {- c _ {j} ^ {2} / 2}.\n$$\n\nThus, by time $t = 1$ , the expected number of hyperplanes of type (2.10) that the process hits is at most $1/16$ . Thus it must have hit a good proportion of the hyperplanes of the type $|x_i| = 1$ , which gives exactly what we want.\n\n2.4 The Komlós Conjecture and the Beck-Fiala conjecture Before concluding our discussion of discrepancy theory, it is impossible not to mention the beautiful and conjectural extension of Spencer's theorem known as the Komlós conjecture, which says that one only needs to control the $\\ell_2$ norm of the columns of the matrix $A$ to arrive at the same conclusion as Spencer's theorem (the normalization is changed here to match the literature).\n\nCONJECTURE 2.6. Let $A$ be a $n \\times n$ matrix where each column has $\\ell_2$ -norm at most 1. 
Then there exists $x \\in \\{-1,1\\}^n$ so that\n\n$$\n\\| A x \\| _ {\\infty} \\leqslant K,\n$$\n\nfor an absolute constant $K$ .\n\nThere is also a famous hypergraph colouring \"companion\" to this conjecture, made independently by Beck and Fiala [9]. It is not hard to see that the Komlós conjecture implies the Beck-Fiala conjecture.\n\nCONJECTURE 2.7. Let $\\mathcal{H}$ be a hypergraph on finite ground set where every vertex has degree at most $d$ . There exists $f:X\\to \\{-1,1\\}$ so that for all $e\\in \\mathcal{H}$ we have\n\n$$\n\\left| \\sum_ {x \\in e} f (x) \\right| \\leqslant C \\sqrt {d}, \\tag {2.11}\n$$\n\nwhere $C > 0$ can be taken to be a absolute constant.\n\nBeck and Fiala proved that Conjecture 2.7 is true if $C\\sqrt{d}$ is replaced with $2d - 1$ . The only unconditional improvement to this bound is by Bukh [13] who improved it to $2d - \\log_{*}d$ . If we set $|X| = n$ , Banaszczyk [4] proved that one can take $C = O(\\sqrt{\\log n})$ in (2.11) by proving one can take $K = O(\\sqrt{\\log n})$ in the setting of the Komlós conjecture.\n\nThese results remained the state of the art for over 25 years, until a very recent and exciting breakthrough of Bansal and Jiang [7, 6]. In these papers they prove the Beck-Fiala conjecture in the case $n \\leqslant 2^{c\\sqrt{k}}$ and prove a bound of $(\\sqrt{k} + \\log n)(\\log \\log n)^{c}$ in general. They also gave the first improvement on Banaszczyk's bound of $K = O(\\sqrt{\\log n})$ in the setting of the Komlos conjecture, by showing that one may take $K = (\\log n)^{1/4 + o(1)}$ .\n\n3 Changing the distribution: the semi-random method and sphere packings One alternative way of finding a rare object in a probability space is to change the underlying distribution that it is sampled from. This can give us a way of naturally \"accessing\" unlikely events. We already saw this kind of idea in action with the constructive proof of Spencer's theorem due to Lovett and Meka, but perhaps the most classical example comes from the work of Ajtai, Komlós, and Szemerédi [1] on independent sets in triangle free graphs, which was later refined by Shearer [89] to give the following basic and beautiful result.\n\nFor this we recall that an independent set in a graph $G$ is a set of pairwise non-adjacent vertices and the independence number of a graph $G$ , denoted $\\alpha(G)$ , is the largest independent set in $G$ .\n\nTHEOREM 3.1. Let $G$ be a triangle-free graph on $n$ vertices with average degree $d$ . Then\n\n$$\n\\alpha (G) \\geqslant \\big (1 + o (1) \\big) \\frac {n \\log d}{d},\n$$\n\nwhere the $o(1)$ term tends to 0 as $d$ tends to infinity.\n\nThis result is easily seen to be sharp, up to a factor of $2 + o(1)$ by appropriately modifying a random graph. While this theorem has many applications, perhaps the best known is the following bound on the extreme off-diagonal Ramsey numbers.\n\nTHEOREM 3.2. We have\n\n$$\nR (3, k) \\leqslant \\left(1 + o (1)\\right) \\frac {k ^ {2}}{\\log k}.\n$$\n\nIf one looks to prove Theorem 3.1 with a \"direct\" application of the probabilistic method, that is by selecting a set uniformly at random, one is doomed to failure. Indeed in a random graph of average degree $d$ there exponentially few such independent sets among all $k$ sets. To access these sets we instead \"tilt\" our distribution towards independent sets. 
In fact, there are a couple different ways to do achieve this in practice, but in what follows we outline a heuristic that is behind all of these different approaches.\n\nHeuristic justification of Theorem 3.1. Suppose that we could define a distribution on independent sets that produced a set that still \"looked random\" apart from the constraint we imposed of being independent. How large could we reasonably expect the size of the independent set we produced? Let's say that our distribution produces an independent set $I$ of $pn$ vertices, for some $p \\in (0,1)$ . Say a vertex is open if $v \\notin I$ and all of its neighbours are not in $I$ . The probability that a vertex is left open is\n\n$$\n\\mathbb {P} (v \\text {o p e n}) = \\mathbb {P} ((v \\cup N (v)) \\cap I = \\emptyset) \\approx (1 - p) ^ {d (v) + 1}, \\tag {3.1}\n$$\n\nwhere $d(v) = |N(v)|$ is the degree of $v$ and $N(v)$ is its neighbourhood. Here we have used our heuristic assumption that $I$ is random-like. Now intuitively we want to choose $p$ just large enough so that we just start to have\n\n$$\n(1 - p) ^ {d} \\approx \\mathbb {P} (v \\text {o p e n}) \\ll p. \\tag {3.2}\n$$\n\nThat is, we expect this optimal $p$ to be just at the point where the number of vertices left open is significantly smaller than the number of vertices that we have added so far. Thus, solving for $p$ in the equation (3.2), brings us to a heuristic for the maximum density of a \"random-like\" independent set\n\n$$\np = (1 + o (1)) \\frac {\\log d}{d},\n$$\n\nwhich exactly matches Sheaer's bound.\n\nOf course this is not a proof at all since we have not provided any distribution that satisfies these conditions. However, there are at least two natural such distributions. The first is to build $I$ by a random greedy process where we remove random vertices one-by-one along with all of their neighbors. This is the idea behind the proofs\n\nof Ajtai, Komlós and Szemerédi [1] and the refinement of Shearer. But we also note that a different, more direct proof was given by Shearer [90] which uses the hardcore model on $G$ to sample $I$ .\n\nTowards an optimal version of Shearer's theorem. We pause to remark that it is a major open problem to determine the correct constant in Theorem 3.1. It is unclear which if either the upper bound or lower bound is sharp. We also mention the beautiful algorithmic problem of finding an independent set in the random regular graph that improves upon Shearer by a constant factor. More precisely, does there exist a randomized polynomial time algorithm that finds an independent set in the random regular graph $G(n,d)$ of size $\\geqslant (1 + \\varepsilon)n(\\log d) / d$ , with high probability? In this setting we even know that large independent sets exist and thus should in principle be easier. However it has been shown there are serious obstructions to finding such an algorithm [32, 71].\n\nThe problem of finding an optimal version of Shearer's theorem is also intimately tied up with the problem of determining the Ramsey numbers $R(3, k)$ and accounts for the missing factor of $2 + o(1)$ in this problem [17, 39].\n\n3.1 Spherical codes and sphere packing in large dimension Recently, in joint work with Marcelo Campos, Matthew Jenssen and Marcus Michelen, we applied this sort of thinking to the classical sphere packing problem: What is the maximum proportion of $\\mathbb{R}^d$ that can be covered by non-overlapping spheres of volume one? 
There is also the closely related question of constructing spherical codes: Given an angle $\\theta$ , what is the maximum proportion of $\\mathbb{S}^{d-1}$ that can be covered by non-overlapping spherical caps of radius $\\theta$ ? Let $\\theta(d)$ denote this maximum proportion in the sphere packing problem and let $A(\\theta, d)$ denote the maximum proportion of $\\mathbb{S}^{d-1}$ in the spherical caps problem.\n\nDespite the simplicity of these problems, little is known about these fascinating quantities. The precise value of $\\theta(d)$ is only known in dimensions $d \\in \\{1, 2, 3, 8, 24\\}$ . The case $d = 1$ is trivial, the case $d = 2$ is classical [102], while the remaining known cases are the result of a series of extraordinary breakthroughs: dimension 3 was a landmark achievement of Hales [37], resolving the Kepler conjecture from 1611. Dimensions 8 and 24 were resolved only recently due to the major breakthroughs of Viazovska [107], in dimension 8, and then Cohn, Kumar, Miller, Radchenko, and Viazovska [22] in dimension 24. (See [21] for a beautiful exposition of these developments).\n\nWe also recall that the kissing number of $\\mathbb{R}^d$ corresponds to the special case of the spherical codes problem $A(d,\\pi /3)$ , although it is more traditionally phrased as the maximum number of unit spheres in $\\mathbb{R}^d$ that can be arranged tangent to (or which \"kiss\") a central unit sphere. The only kissing numbers that are known are in dimensions $d\\in \\{1,2,3,4,8,24\\}$ . Similarly only a few cases of optimal spherical codes are known for other $\\theta$ , for which we refer the reader to [20].\n\nIn our work, our focus is on sphere packing and spherical codes in large dimension, where the situation remains even more mysterious. A simple argument shows that any saturated packing (one in which no additional sphere can be added) has density $\\geqslant 2^{-d}$ and thus\n\n$$\n\\theta (d) \\geqslant 2 ^ {- d}.\n$$\n\nA classical theorem of Minkowski's [61] improved upon this bound by a factor of $2 + o(1)$ . In 1947 Rogers [72] made the first asymptotically growing improvement to the trivial lower bound showing that\n\n$$\n\\theta (d) \\geqslant (\\beta + o (1)) d 2 ^ {- d},\n$$\n\nwhere $\\beta = 2 / e\\approx 0.74$ . Since the work of Rogers, a number of improvements have been made to the constant factor $\\beta$ . Davenport and Rogers [25] showed that one can take $\\beta = 1.68$ ; Ball [3], some 45 years later, improved the bound to $\\beta = 2$ ; and Vance [104] showed that one can take $\\beta = 6 / e\\approx 2.21$ when the dimension $d$ is divisible by 4. Venkatesh [105] showed that one can take $\\beta = 65963$ and additionally showed that one can obtain an additional log log $d$ factor along a sparse sequence of dimensions. In our paper [15], we go beyond this barrier and improve Minkowski's bound by a factor of $\\Omega (d\\log d)$ in general dimension.\n\nTHEOREM 3.3. As $d$ tends to infinity\n\n$$\n\\theta (d) \\geqslant (1 - o (1)) \\frac {d \\log d}{2 ^ {d + 1}}.\n$$\n\nRecently, this result has been seen a further spectacular improvement by Klartag [46], who used a method reminiscent of Lovett and Meka's proof of Spencer's theorem, to show the following.\n\nTHEOREM 3.4. 
As $d$ tends to infinity\n\n$$\n\\theta (d) \\geqslant c d ^ {2} 2 ^ {- d},\n$$\n\nfor some $c > 0$\n\nWe discuss this beautiful result further in Section 3.5.\n\nOur method also naturally adapts the setting of spherical codes in large dimension and provides us with an improvement in this setting. To state this result, we let $s_d(\\theta)$ denote the normalized spherical volume of a cap of angle $\\theta$ . In [15] we also prove the following.\n\nTHEOREM 3.5. If $\\theta \\in (0,\\pi /2)$ and $d$ tends to infinity then\n\n$$\nA (d, \\theta) \\geqslant (1 - o (1)) \\frac {d \\log d}{2 s _ {d} (\\theta)}.\n$$\n\nThis improved upon the best known bounds due to Fernández, Kim, Liu and Pikhurko [31] who gave a constant factor improvement to bounds of Jenssen, Joos and Perkins [43]. These bounds were of the type $A(d,\\theta) \\geqslant cd / s_d(\\theta)$ for some constant $c > 0$ .\n\nWe also note that our results have been adapted further to other settings. Fernández, Kim, Liu and Pikhurko [33] improved the best bounds for the sphere packing in high dimensional hyperbolic space using this method and Schildkraut [87] has extended this method to show that one can obtain a similar bound for packing balls in an arbitrary norm.\n\nUpper bounds on the sphere packing problem. Despite this progress, the upper bounds for the sphere packing problem are quite far off the lower bounds, with an exponential gap between the two. The best known upper bounds are of the form\n\n$$\n\\theta (d) \\leqslant 2 ^ {- (. 5 9 9 \\dots + o _ {d} (1)) d},\n$$\n\nwhich is due to the 1978 work of Kabatjanskii and Levenstein [44] and has only been improved by a multiplicative constant factor in the years since by Cohn and Zhao [23] and then Sardari and Zargar [85]. It is a beautiful and central problem to improve these bounds further.\n\n3.2 Amorphous sphere packings in physics One interesting property of the sphere packings behind Theorem 3.3 is that they are \"random-like\". While essentially all other results focus on lattice packings, which are therefore very \"structured\", our packings are essentially as random-like as possible. Such packings are of independent interest in the physics literature where random sphere packings at a given density are a natural model of physical matter.\n\nIn dimension 3, for instance, it is believed that random sphere packings transition from appearing \"gas-like\" at low density to \"lattice-like\" at high density, paralleling the phase transition between states of matter. However, rigorously demonstrating that this phase transition occurs remains a beautiful and major open problem in the field (see [60] and the references therein).\n\nPhysicists have also devoted enormous effort to analysing sphere packings in high dimensions, with the aim of providing a more tractable analysis than in low dimensions, and in order to use the powerful machinery of equilibrium statistical physics to generate rich predictions. Here, the important qualitative distinction is between sphere packings that are crystalline, meaning that they exhibit long-range \"correlations\", and amorphous, meaning they don't have any such correlations. 
For example, lattice packings are extreme instances of crystalline packings where the entire structure is determined by a basis.\n\nIn their seminal work on applying the replica method to the structure of high-dimensional sphere packings, Parisi and Zamponi [70, 69] predicted that the largest density of amorphous packings in $d$ dimensions is\n\n$$\n(1 + o (1)) (d \\log d) 2 ^ {- d},\n$$\n\nthat is, a factor of 2 larger than our lower bound from Theorem 3.3. While there is no agreed-upon rigorous definition of \"amorphous,\" it seems likely that any such definition would be satisfied by our construction for Theorem 3.3, which enjoys extremely fast decay of correlations.\n\n3.3 Sketch proof - a graph theoretic reduction To prove Theorem 3.3 and Theorem 3.5 we convert the problem into the problem of finding a large independent set in a certain graph. To do this we discretize the space in a natural way. Here we sketch the situation for sphere packings, and note that the case for spherical codes only requires small adjustments.\n\nTo discretize, we simply sample a Poisson point process in a large box $[-L,L]^d$ at intensity\n\n$$\n\\lambda = d ^ {d / 2 - o (d)}.\n$$\n\nWe don't worry about the $o(d)$ term, but it is chosen so that, for a typical point in our sample, the next nearest point will be of distance $\\gg \\log d$ . (Some points will have a closer nearest point, but we can simply delete these). Let $X$ be the outcome of this initial discretization step.\n\nNow a natural graph $G = G_{X}$ suggests itself. We let $X$ be the vertex set and we define\n\n$$\nx \\sim y \\quad \\text {w h e n e v e r} \\quad \\| x - y \\| _ {2} < 2 r _ {d},\n$$\n\nwhere $r_d$ is the radius of a ball of volume one in $\\mathbb{R}^d$ . That is, $x$ and $y$ are joined by an edge if $B_{r_d}(x) \\cap B_{r_d}(y) = \\emptyset$ . Thus an independent set in $G$ is a sphere packing in the box $[-L, L]^d$ .\n\nWe now would like to \"lift\" this graph out of its geometric context and think of it only as a graph. But what properties can we hold on to? One obvious one is the degree. We can easily compute the expected degree. If we fix a point $x \\in X$ , using the basic properties of Poisson point processes, we can estimate\n\n$$\n\\mathbb {E} \\left| X \\cap B _ {2 r _ {d}} (x) \\right| = \\operatorname {V o l} \\left(B _ {2 r _ {d}} (x)\\right) \\lambda = 2 ^ {d} \\lambda =: \\Delta .\n$$\n\nIf we were to use this bound along with the trivial bound (mentioned above) $\\alpha(G) \\geqslant \\frac{n+1}{\\Delta(G)}$ , we can recover the (also trivial) bound $\\theta(d) \\geqslant 2^{-d}$ . To get beyond this bound we need to use some additional information. Inspired by the theorem of Ajtai, Komlós and Szemerédi (or Theorem 3.1) one might think about focusing on the number of triangles in the graph $G$ . This perspective was taken in [49] but only matches the bounds of Rogers and is sharp from this point of view.\n\nOur new idea is to focus on the maximum codegree of our graph, which actually behaves very well in this context. 
Indeed we can easily compute the co-degree of our graph\n\n$$\n\\mathbb {E} \\left| X \\cap B _ {2 r _ {d}} (x) \\cap B _ {2 r _ {d}} (y) \\right| = \\operatorname {V o l} \\left(B _ {2 r _ {d}} (x) \\cap B _ {2 r _ {d}} (y)\\right) \\lambda \\leqslant \\left(2 ^ {d} \\lambda\\right) e ^ {- \\| x - y \\| _ {2} ^ {2} / 2} \\leqslant \\Delta / (\\log \\Delta) ^ {\\omega (1)},\n$$\n\nwhere in the last inequality we are using that $\\| x - y\\| _2\\gg \\log d$\n\nThe insight here is that we can obtain the same bound as Shearer for graphs that have controlled codegrees. Interestingly, this is also a new result in graph theory.\n\nTHEOREM 3.6. Let $G$ be a $n$ vertex graph with $\\Delta(G) \\leqslant \\Delta$ and $\\Delta_2(G) \\leqslant C\\Delta(\\log \\Delta)^{-c}$ . Then\n\n$$\n\\alpha (G) \\geqslant (1 - o (1)) \\frac {n \\log \\Delta}{\\Delta},\n$$\n\nwhere $o(1)$ tends to 0 as $\\Delta \\to \\infty$ and we can take $C = 2^{-7}$ and $c = 7$ .\n\n3.4 Sketch proof of Theorem 3.6 To prove this we use a nibble process as Ajtai, Komlós and Szemerédi, but our analysis is quite a bit different. We sketch a little to see how the co-degree condition comes naturally into play. As we discussed above, we build our independent set by building it up in pieces. We take our first piece as $p_1 = \\frac{\\gamma}{\\Delta}$ , for some small $\\gamma \\ll 1$ . Let $I_1$ be this $p_1$ -random set. Note that since the maximum degree of this graph is $\\Delta$ , every vertex in $G[I_1]$ will have average degree $\\gamma \\ll 1$ , and thus $I_1$ is very close to an independent set. Indeed, we can make it independent by throwing away $o(|I_1|)$ vertices.\n\nWe now delete all of $I_{1}$ and all of the neighbors of $I_{1}$ from the graph. Define\n\n$$\nD _ {1} = I _ {1} \\cup \\bigcup_ {x \\in I _ {1}} N (x).\n$$\n\nwhich is about $\\gamma$ proportion of the vertices of $G$ . The key property we would like to maintain is that $D_{1}$ \"looks like\" a random set of density $\\gamma$ in $G$ . If this is possible then we expect that the new maximum degree is about\n\n$(1 - \\gamma)\\Delta$ and the new maximum codegree is about $(1 - \\gamma)\\Delta$ . Thus we can choose $p_2 = \\gamma / ((1 - \\gamma)\\Delta)$ and then choose $I_2$ to be a $p_2$ random set in the second nibble. Thus we have\n\n$$\n\\left| I _ {2} \\right| \\approx p _ {2} (1 - \\gamma) n = \\gamma n / \\Delta .\n$$\n\nMore generally, after the $i$ th nibble, we will have constructed disjoint sets $I_1, \\ldots, I_i$ with $|I_i| \\approx \\gamma n / \\Delta$ and so that $I_1 \\cup \\dots \\cup I_i$ is independent (after a small amount of clean-up), and the graph remaining after we remove all of the $I_i$ and all vertices adjacent to them has size $(1 - \\gamma)^i n$ . Thus we can continue this process until\n\n$$\n(1 - \\gamma) ^ {i} n \\leqslant n / \\Delta ,\n$$\n\nmeaning that we can run the process for $i \\approx (\\log \\Delta) / \\gamma$ steps. Thus (assuming that we can maintain these properties) we can construct an independent set of size $\\approx (n / \\Delta) \\log \\Delta$ .\n\nTo make the above story work, the key new idea is in controlling the evolution of the degrees of the vertices. To sketch the idea here, we fix a vertex $x$ and consider $N(x)$ and a stage $i$ of the process. Let us condition on the survival of $x$ into the next process - which means that none of the neighbors of $x$ are selected for $I_{i}$ . 
Now the size of $N(x)$ is precisely governed by the set\n\n$$\nY = N (N (x)) \\setminus (N (x) \\cup \\{x \\}),\n$$\n\nthe neighbors of the neighbors of $x$ , apart from $N(x) \\cup \\{x\\}$ (since $I_i$ will not include vertices of $N(x) \\cup \\{x\\}$ ). We now run a martingale argument. We iteratively expose each vertex in $Y \\cap I_i$ . If a vertex $v \\in Y$ is included into $I_i$ we then delete all of $N(v) \\cap X$ from $X$ . Now, to obtain concentration we note that the steps of the martingale are controlled by the sum of the squares of the increments, which due to the double counting inequality\n\n$$\n\\sum_ {y \\in Y} | I \\cap N (y) | ^ {2} \\leqslant \\sum_ {y, z \\in N (x)} | N (y) \\cap N (z) |,\n$$\n\nare controlled by the co-degrees of the vertices.\n\n3.5 Klartag's new sphere packing bounds We now turn to sketch the beautiful new idea of Klartag [46] that allows one to obtain sphere packings of density $\\Omega(d^2 2^{-d})$ . Klartag picks up on an earlier idea of building a packing out of a random lattice. However the novelty in Klartag's proof is that instead of simply selecting the lattice uniformly at random, he cleverly \"guides\" a random process to find a better (and exponentially unlikely) choice.\n\nThe setup is this. We first find a lattice $\\Lambda \\subset \\mathbb{R}^d$ with $\\operatorname*{det}(\\Lambda) = 1$ and an ellipsoid $\\mathcal{E}$ of large volume, which is centered at the origin and with $\\mathcal{E} \\cap \\Lambda = \\{0\\}$ . We then turn this into a sphere packing by applying a linear transformation $T: \\mathbb{R}^d \\to \\mathbb{R}^d$ with $\\operatorname*{det}(T) = 1$ so that $T(\\mathcal{E})$ is the Euclidean ball $B$ centered at the origin with $\\mathrm{Vol}(B) = \\mathrm{Vol}(\\mathcal{E})$ . Note that $T(\\Lambda)$ is a new lattice with determinant one. Thus if we place a copy of the dilated ball $B/2$ at each lattice point of $T(\\Lambda)$ we obtain a sphere packing of identical balls with density $\\mathrm{Vol}(\\mathcal{E})2^{-d}$ .\n\nProof sketch of Theorem 3.4. By the discussion above, we see that the problem reduces to the problem of finding a lattice $\\Lambda$ with $\\operatorname{det}(\\Lambda) = 1$ and a centrally symmetric ellipsoid of volume $\\Omega(d^2)$ that contains no lattice points of $\\Lambda$ , apart from the origin. In a first step, we choose $\\Lambda$ to be a random lattice with determinant 1. While we won't say anything technically about how to work with these lattices here, it is enough to say that this lattice looks \"locally\" like a Poisson point process with intensity 1.\n\nWe then grow an ellipsoid $\\mathcal{E}_t$ in a manner analogous to the proof of Lemma 2.5, although here we are working in the space of ellipsoids. Let $\\mathcal{E}_0$ be a euclidean ball which is small enough to ensure that $\\Lambda \\cap \\mathcal{E}_0 = \\{0\\}$ . We then randomly \"evolve\" this ellipsoid as time proceeds. As soon as this ellipsoid hits a lattice point, it sticks to it and evolves further, keeping this point on its boundary. Indeed, we may describe the ellipsoid $\\mathcal{E}_t$ as\n\n$$\n\\mathcal {E} _ {t} = \\left\\{x \\in \\mathbb {R} ^ {d}: \\langle x, A _ {t} x \\rangle \\leqslant 1 \\right\\},\n$$\n\nwhere $A_{t}$ is a positive definite matrix. Thus hitting a point $y \\in \\Lambda$ , precisely introduces the linear constraint $\\langle y, A_{t}y \\rangle = 1$ on $A_{t}$ . 
Since the dimension of the space of such positive semi-definite ellipsoids is $\\approx d^2 / 2$ , we expect that the process runs until the ellipsoid has $\\approx d^2 / 2$ points of $\\Lambda$ on its boundary.\n\nThus we can heuristically argue about the volume of the final ellipse. If $|\\mathcal{E}_T \\cap \\Lambda| \\approx d^2$ , one can use the fact that the random lattice $\\Lambda$ locally looks like a Poisson point process to see that\n\n$$\nd ^ {2} \\approx \\mathbb {E} | \\mathcal {E} _ {T} \\cap \\Lambda | \\approx \\operatorname {V o l} (\\mathcal {E} _ {T}),\n$$\n\nas desired.\n\n![](images/8522330706fc376f3b09b447751dde354046d15683ec9ed0dbcc051b8b7cc06f.jpg)\n\n4 Random matrix theory at exponentially small scales We now turn to discuss phenomena in random matrix theory that occur at exponentially small scales. Here we focus on the singularity probability of a random symmetric matrix.\n\nLet $B_{n}$ be a random $n \\times n$ matrix whose entries are chosen independently and uniformly from $\\{-1, 1\\}$ . It is an old problem, likely stemming from multiple origins, to determine the probability that $B_{n}$ is singular. While a moment's thought reveals the lower bound of $(1 + o(1)) 2n^{2} 2^{-n}$ , the probability that two rows or columns are equal up to sign, establishing the corresponding upper bound remains an extremely challenging open problem. Indeed, it is widely believed that\n\n(4.1) $\\mathbb{P}(\\operatorname*{det}(B_n) = 0) = (1 + o(1))2n^2 2^{-n}.$\n\nWhile this precise asymptotic has so far eluded researchers, some stunning advances have been made on this fascinating problem. The first steps were taken by the pioneering work of Komlós [48] in the 1960s, who showed that the singularity probability is $O(n^{-1/2})$ .\n\nNearly thirty years later, Kahn, Komlós and Szemerédi [45], in a remarkable paper, showed that the singularity probability is exponentially small. At the heart of their paper is an ingenious argument with the Fourier transform that allows them to give vastly more efficient descriptions of \"structured\" subspaces of $\\mathbb{R}^n$ that are spanned by $\\{-1,1\\}$ -vectors. Their method was then developed by Tao and Vu [94, 95] who showed a bound of $(3/4)^{n + o(n)}$ , by providing a link between the ideas of [45] and the structure of set addition and, in particular, Freiman's theorem (see [100]). This trajectory was then developed further by Bourgain, Vu and Wood [12], who proved a bound of $2^{-n/2 + o(n)}$ , and by Tao and Vu [101], who pioneered the development of \"inverse Littlewood-Offord theory\", which we discuss below.\n\nIn 2007, Rudelson and Vershynin, in an important and influential paper [77], gave a different proof of the exponential upper bound on the singularity probability of $B_{n}$ . The key idea was to construct efficient $\\varepsilon$ -nets for points on the sphere that have special anti-concentration properties and are thus more likely to be in the kernel of $B_{n}$ . This then led them to prove an elegant inverse Littlewood-Offord type result, inspired by [101], in a geometric setting.\n\nThis perspective was then developed further in the breakthrough work of Tikhomirov [103], who proved\n\n(4.2) $\\mathbb{P}(\\operatorname*{det}(B_n) = 0) = 2^{-n + o(n)}$\n\nthereby proving the conjectured upper bound, up to subexponential factors. 
One of the key innovations in [103] was to adopt a probabilistic viewpoint on such Littlewood-Offord questions, a topic which we elaborate on in Section 4.1\n\nWe remark that another pair of advances was made by Jain, Sah and Sawhney [41], following the work of Litvak and Tikhomirov [58], who proved the natural analogue of (4.1) for random matrices with lopsided distributions. In the case of $\\{-1,1\\}$ -matrices, however, the problem of establishing (4.1) perhaps remains as the central open problem in the area.\n\nSingularity of random symmetric matrices. We now turn to discuss the singularity problem for random symmetric matrices, which has proven to be more challenging still. The study of random symmetric matrices goes back to the pioneering work of Wigner in the 1950s (sometimes such random matrices are called Wigner matrices) who studied the typical eigenvalue distribution of these matrices, showing that they follow the so-called \"semi-circular law\".\n\nLet $A_{n}$ be drawn uniformly at random among all $n \\times n$ symmetric matrices with entries in $\\{-1, 1\\}$ . Again we have the lower bound\n\n(4.3) $\\mathbb{P}\\big(\\operatorname *{det}(A_n) = 0\\big)\\geqslant 2^{-n + o(n)},$\n\nby considering the probability that two rows are equal up to sign. Costello, Tao and Vu [24] were the first to show that $A_{n}$ is non-singular with high probability. That is,\n\n$$\n\\mathbb {P} \\big (\\det (A _ {n}) = 0 \\big) = o (1),\n$$\n\nwith a precise error term of $O(n^{-1/4})$ . Since, this result has seen a sequence of improvements: A bound of $N^{-\\omega(1)}$ was proved by Nguyen [65], a bound of the form $\\exp(-n^c)$ was proved by Vershynin [106], which was in turn improved by Ferber and Jain [29] based on the results of Ferber, Jain, Luh and Samotij [30]. In a similar spirit, Campos, Mattos, Morris and Morrison [19] then improved this bound to $\\exp(-cn^{1/2})$ by proving a \"rough\" inverse Littlewood-Offord theorem, inspired by the theory of hypergraph containers. This bound was then improved by Jain, Sah and Sawhney [42], who improved the exponent to $-cn^{1/2}\\log^{1/4}n$ , and, simultaneously, by the author to $-c(n\\log n)^{1/2}$ in joint work with Campos, Jenssen and Michelen [14].\n\nAs might be suggested by the \"convergence\" of these results to the exponent $-c(n\\log n)^{1 / 2}$ , a natural barrier lurks exactly at this point. In fact, in [19] authors showed that if one wants to get beyond bounds at this probability scale, one needs use \"reuse\" randomness in the top half of the matrix (which is of course independent) in the most difficult part of the proof. Rather one needs to directly deal with the complicated dependencies that are present in a random symmetric matrix.\n\nIn recent work the author, in joint work with Campos, Jenssen and Michelen, managed to get around this obstacle and prove an exponential upper bound, thus matching (4.3) up to base of the exponent.\n\nTHEOREM 4.1. Let $A_{n}$ be drawn uniformly at random from the $n \\times n$ symmetric matrices with entries in $\\{-1,1\\}$ . Then\n\n$$\n\\mathbb {P} \\big (\\det (A _ {n}) = 0 \\big) \\leqslant e ^ {- c n},\n$$\n\nwhere $c > 0$ is an absolute constant.\n\nIn what follows we discuss some of the techniques that are behind this result. This will allow us to touch on some of the exciting ideas that have been developed in this area.\n\nLeast singular value, clustering and repulsion of eigenvalues. 
The singularity problem is related to several other phenomena regarding the spectrum of the matrix $A_n$, the most natural being the extreme behavior of the least singular value. Recall that if $M$ is an $n \times n$ matrix, the least singular value is $\sigma_{\min}(M) = \min_{x \in \mathbb{S}^{n-1}} \|Mx\|_2$. The study of this quantity in random matrices was first initiated by Goldstine and von Neumann [34] in the 1950s and has undergone intense study in the intervening years, partly in its own right, but also because of its link with spectral laws of random matrices [98, 76, 80, 82] and the smoothed analysis of algorithms [92].

A key guiding conjecture is due to Spielman and Teng [92], who suggested that in the case of iid Bernoulli random matrices $B_n$ we have

$$
\mathbb{P}\left(\sigma_{\min}(B_n) \leqslant \varepsilon n^{-1/2}\right) \leqslant \varepsilon + 2e^{-cn}, \tag{4.4}
$$

for all $\varepsilon > 0$.

A key breakthrough on this problem was made by Rudelson [75], which inspired a sequence of further papers [101, 78], culminating in the influential papers of Rudelson and Vershynin [77], who proved (4.4) up to a constant factor, and Tao and Vu [97], who proved (4.4) in the case of $\varepsilon > n^{-c}$. Recently, in joint work with Sah and Sawhney [81], the author proved (4.4) up to a $1 + o(1)$ factor.

This question has also been intensely studied in the case of random symmetric matrices. In this case we have the additional interpretation that $\sigma_{\min}(A) = \min_i |\lambda_i|$, where the minimum is over all the eigenvalues of $A$. After many partial advances [65, 66, 106, 42], in the paper [16] we determined the optimal probabilities of having a small least singular value for such random symmetric matrices. We showed that for all $\varepsilon > 0$

$$
\mathbb{P}\left(\sigma_{\min}(A_n) \leqslant \varepsilon n^{-1/2}\right) \leqslant C\varepsilon + e^{-cn}, \tag{4.5}
$$

where $C, c > 0$ are absolute constants.

One can also apply these results to understand the clustering of the spectrum more generally. Indeed we can apply a version of (4.5) to the matrix $A_n - \lambda I$, for any $-(2 - \delta)\sqrt{n} \leqslant \lambda \leqslant (2 - \delta)\sqrt{n}$, to bound the probability of the event $\min_i |\lambda_i - \lambda| \leqslant \varepsilon n^{-1/2}$.

Allowing ourselves to speak somewhat informally, the form of this result, with two different terms in the bound, reflects two very different behaviors that the least singular value can have. If $\varepsilon \gg e^{-cn}$, for some $c$, the most likely way to have $\sigma_{\min}(A_n) < \varepsilon n^{-1/2}$ comes from the event of a single "random-like" direction being hit very weakly by $A_n$. On the other hand, if $\varepsilon$ is a small enough exponential, the most likely way to have $\sigma_{\min}(A_n) < \varepsilon n^{-1/2}$ comes from the matrix simply being singular, which (conjecturally) should come from the very structured event of having two rows or columns equal up to sign.

These sorts of problems are also related to further questions of clustering and repulsion of eigenvalues, and we refer the reader to [63, 67].

Anti-concentration of the determinant and permanent. Before turning to discuss techniques, we highlight two further problems in this area. The first concerns the anti-concentration of the determinant.
Concretely, what is $\mathbb{P}(\det(B_n) = 1)$ (or similarly for the symmetric model $A_n$)? Here the above proofs for singularity immediately give that this probability is exponentially small, but conjecturally it should be much smaller, most likely of the form $n^{-cn}$. Indeed, it seems to be a major step even to prove that this probability is $e^{-\omega(n)}$.

We also highlight the related problem on the permanent of a random matrix. In this case there is no natural geometric interpretation, forcing us to reason by other means. Tao and Vu [96] showed that $\operatorname{Per}(B_n) = 0$ with probability $O(n^{-c})$, which was the state of the art until a very recent breakthrough on this problem by Hunter, Kwan and Sauermann [40], who showed an exponential upper bound. Similar to the previous question, it seems that a bound of the type $n^{-cn}$ should be the truth.

# 4.1 Littlewood-Offord theory

Littlewood-Offord theory is a classical subject that has undergone intense development in recent years as it has become interwoven with the methods used in random matrix theory. The main object of study here is the concentration function $\rho_\varepsilon(v)$, where $v \in \mathbb{R}^n$ and $\varepsilon > 0$, which is defined by

$$
\rho_\varepsilon(v) = \max_b \mathbb{P}\big(\big|\langle X, v\rangle - b\big| < \varepsilon\big),
$$

where $X \in \{-1, 1\}^n$ is sampled uniformly at random and the maximum is over all $b \in \mathbb{R}$. (Actually, much more general distributions for $X$ are considered, but we limit ourselves to uniform $X$ here.)

To get a feel for how this is immediately connected to the problems above, we consider the singularity problem for iid matrices (i.e. not symmetric). As above, we let $B_n$ be an $n \times n$ matrix with all entries independent and uniform in $\{-1,1\}$. We now expose the first $n - 1$ rows of the matrix $B_n$ and define $\mathcal{E}$ to be the event that the first $n - 1$ rows have full rank. Thus, if we let $X \in \{-1,1\}^n$ be the last row of $B_n$ (so far unexposed) and let $v$ be a normal vector to the span of the first $n - 1$ rows, then on the event $\mathcal{E}$ the matrix $B_n$ is singular if and only if $\langle X, v\rangle = 0$. Thus

$$
\mathbb{P}(\det(B_n) = 0) \leqslant \mathbb{P}(\langle X, v\rangle = 0 \text{ and } \mathcal{E}) + \mathbb{P}\left(\mathcal{E}^c\right) \leqslant \mathbb{E}_v \rho_0(v) + \mathbb{P}\left(\mathcal{E}^c\right), \tag{4.6}
$$

where the expectation is over vectors $v$ that occur as normals to the subspace defined by the first $n - 1$ rows. While the probability $\mathbb{P}(\mathcal{E}^c)$ can be taken care of by induction, or otherwise, the main difficulty is in dealing with $\rho_0(v)$.

While one expects a somewhat typical vector $v$ to have $\rho_0(v) \approx 2^{-n}$ (further supporting our intuition for (4.3)), there exist $v$ for which $\rho_0(v)$ can be as large as $1/2$, for example $v = (1, -1, 0, \ldots, 0)$. Moreover, anything in between these two extremes is possible. Thus, the central challenge in estimating the singularity probability is to show that it is unlikely that a vector $v$ with large $\rho_0(v)$ will be orthogonal to the first $n - 1$ rows. We are thus led to understand the concentration function $\rho_0(v)$, as $v$ varies over all possible normals.
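For small $n$ the two extremes above can be checked by brute force. The sketch below (plain Python; the helper name `rho0` is ours) enumerates $\{-1,1\}^n$ and computes $\rho_0(v) = \max_b \mathbb{P}(\langle X, v\rangle = b)$: for $v = (1,\dots,1)$ it returns $\binom{n}{\lfloor n/2\rfloor}2^{-n} = \Theta(n^{-1/2})$, while for $v = (1,-1,0,\dots,0)$ it returns $1/2$.

```python
# Exhaustive computation of the concentration function rho_0(v) for small n.
from itertools import product
from collections import Counter
from math import comb

def rho0(v):
    """max_b P(<X, v> = b) for X uniform on {-1,1}^n, by enumeration."""
    n = len(v)
    counts = Counter(sum(x * vi for x, vi in zip(signs, v))
                     for signs in product((-1, 1), repeat=n))
    return max(counts.values()) / 2 ** n

n = 12
flat = [1] * n                      # Erdos's extremal example
sparse = [1, -1] + [0] * (n - 2)    # rho_0 = 1/2
print(rho0(flat), comb(n, n // 2) / 2 ** n)   # the two printed values agree
print(rho0(sparse))                           # 0.5
```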
Classical theory. Interestingly, the study of $\rho_0(v)$ long pre-dates the study of random matrices, going back to the work of Littlewood and Offord [55, 56] in the 1930s on the zeros of random polynomials. (We already mentioned these papers in our discussion of flat Littlewood polynomials.) In 1945, Erdős [26] proved what is perhaps the subject's first flagship result, showing that if $v \in \mathbb{R}^n$ has all non-zero coordinates then

$$
\rho_0(v) \leqslant \rho_0((1, \dots, 1)) = \binom{n}{\lfloor n/2 \rfloor} 2^{-n} = O(n^{-1/2}).
$$

This was then developed by Sárközy and Szemerédi [86], Stanley [93] and many others [84, 35, 47]. These early results provide us with a beautiful combinatorial perspective on the problem, but most important for us is the pioneering work of Halász [36], who made an important connection with the Fourier transform, thus giving us a different, analytic perspective on the problem.

Inverse Littlewood-Offord theory. More recently the question has been turned on its head by Tao and Vu [101], who pioneered the study of "inverse" Littlewood-Offord theory. They suggested the following "meta-conjecture" that has guided much subsequent research.

If $\rho_0(v)$ is "large" then $v$ must exhibit arithmetic structure.

This "meta-conjecture" has been addressed in the work of Tao and Vu [101, 99], and Nguyen and Vu [64, 68], who proved that if $v$ is such that $\rho_0(v) > n^{-C}$ then all but $O(n^{1-\varepsilon})$ of the coordinates $v_i$ of $v$ can be efficiently covered with a generalized arithmetic progression of rank $r = O_{\varepsilon, C}(1)$.

While these results provide a very satisfying picture in the range $\rho_0(v) > n^{-C}$, they begin to break down when $\rho_0(v) = n^{-\omega(1)}$ and are therefore of limited use at exponential probability scales. More recently these ideas have been extended to give structural information about $v$ when $\rho_0(v)$ is as small as $\exp(-c\sqrt{n\log n})$, but these results are of a somewhat different nature [30, 19, 14].

Perhaps the most relevant among these results for our discussion concerning Theorem 4.1 is the inverse Littlewood-Offord theorem of Rudelson and Vershynin [77], which allows one to control $\rho_\varepsilon(v)$ in terms of a related quantity known as the "least common denominator" (LCD) of the vector $v$. This result gives weaker information about $v$ than the results mentioned above; however, it is effective and very useful all the way down to exponential scales. As this quantity will pop up in our own work, we postpone its discussion to Section 4.2.

Typical Littlewood-Offord theory. More recently, with the breakthrough work of Tikhomirov [103] (discussed at (4.2)), a fresh perspective has been brought to (4.6). Instead of trying to stratify the different behaviors of $\rho_0(v)$ with different "inverse" theorems, he directly studies the distribution of $\rho_0(v)$ as a random variable. More precisely, he considers $\rho_\varepsilon(v)$ at each fixed scale $\varepsilon > 2^{-n + o(n)}$, where $v \sim \mathbb{S}^{n-1}$ is chosen uniformly at random from the unit sphere.
Now for such random $v$ and $\varepsilon > 2^{-n + o(n)}$ one has

$$
\mathbb{E}_v \rho_\varepsilon(v) = \Theta(\varepsilon).
$$

The technical heart of the work [103] is the following tail estimate on the distribution of $\rho_\varepsilon(v)$:

$$
\mathbb{P}_v\left(\rho_\varepsilon(v) \geqslant L\varepsilon\right) \leqslant L^{-\omega(n)}
$$

for appropriately large (but fixed) $L$. In our work on the singularity probability, we also take a probabilistic perspective but employ a completely different set of techniques to understand these sorts of tail events, a topic we discuss in Section 4.3.

# 4.2 Approximate negative correlation

One of the new ingredients introduced for Theorem 4.1 is an "approximate negative correlation" inequality for linear events. We first discuss this result in its own right and then sketch how it fits into place in the proof of Theorem 4.1 in Section 4.3.

We say that two events $A, B$ are negatively correlated if

$$
\mathbb{P}(A \cap B) \leqslant \mathbb{P}(A)\mathbb{P}(B).
$$

In what follows we let $\varepsilon > 0$ and let $X \in \{-1, 1\}^n$ be a uniform random vector. Here we are interested in "linear" events of the shape

$$
|\langle X, v\rangle| \leqslant \varepsilon. \tag{4.7}
$$

The result we discuss here shows the approximate negative dependence, for all $\varepsilon > e^{-cn}$, between the event (4.7) and the intersection of events

$$
|\langle X, w_1\rangle| \leqslant \beta,\ |\langle X, w_2\rangle| \leqslant \beta,\ \dots,\ |\langle X, w_k\rangle| \leqslant \beta, \tag{4.8}
$$

where $\beta > 0$ is small but fixed and $w_1, \ldots, w_k$ are orthonormal vectors with $k \leqslant cn$. Crucially, in this statement we don't assume anything about the structure of the vectors $w_i$ and allow the dimension of the space they span to be as large as $\Theta(n)$. Our main negative dependence result says something of the general shape

$$
\mathbb{P}_X\left(\left\{|\langle X, v\rangle| \leqslant \varepsilon\right\} \cap \bigcap_{i=1}^k \left\{|\langle X, w_i\rangle| \leqslant \beta\right\}\right) \leqslant \mathbb{P}_X\left(|\langle X, v\rangle| \leqslant \varepsilon\right)\, \mathbb{P}_X\left(\bigcap_{i=1}^k \left\{|\langle X, w_i\rangle| \leqslant \beta\right\}\right), \tag{4.9}
$$

although in an approximate form.

To state this result properly, we use the notion of the "Least Common Denominator" of a vector $v$, introduced by Rudelson and Vershynin [77] and mentioned above in our discussion of Littlewood-Offord theory. For $\alpha \in (0,1)$, we define the least common denominator of a vector $v \in \mathbb{R}^n$ to be

$$
D_\alpha(v) = \inf\left\{\varphi > 0 : d(\varphi \cdot v, \mathbb{Z}^n \setminus \{0\}) \leqslant \sqrt{\alpha n}\right\}.
$$

That is, $D_\alpha(v)$ is the smallest multiple of the vector $v$ that lies "close" to the integer lattice $\mathbb{Z}^n$. Rudelson and Vershynin showed that $(D_\alpha(v))^{-1}$ behaves quite a bit like $\mathbb{P}(|\langle X, v\rangle| \leqslant \varepsilon)$.

THEOREM 4.2. For $n \in \mathbb{N}$, $\alpha \in (0,1)$ and $\varepsilon > 0$, let $v \in \mathbb{S}^{n-1}$ satisfy $D_\alpha(v) > c/\varepsilon$.
If $X \in \{-1,1\}^n$ is uniform, then

$$
\mathbb{P}\left(|\langle X, v\rangle| \leqslant \varepsilon\right) \leqslant C\varepsilon + e^{-c\alpha n},
$$

where $C, c > 0$ are absolute constants.

Our main negative dependence result proves an approximate version of (4.9), with $(D_\alpha(v))^{-1}$ (a slightly better behaved quantity) serving as a proxy for $\mathbb{P}(|\langle X, v\rangle| \leqslant \varepsilon)$. The following is a formal statement.

THEOREM 4.3. Let $n \in \mathbb{N}$, $\alpha \in (0,1)$, $0 \leqslant k \leqslant \alpha\beta n$ and $\varepsilon \geqslant \exp(-\alpha\beta n)$. Let $v \in \mathbb{S}^{n-1}$ and let $w_1, \ldots, w_k \in \mathbb{S}^{n-1}$ be orthogonal. If $X \in \{-1,1\}^n$ is a uniform random vector and $D_\alpha(v) > 16/\varepsilon$, then

$$
\mathbb{P}_X\left(|\langle X, v\rangle| \leqslant \varepsilon \ \text{and}\ \bigcap_{i=1}^k \left\{|\langle X, w_i\rangle| \leqslant \beta\right\}\right) < C\varepsilon e^{-ck}, \tag{4.10}
$$

where $C, c > 0$ are absolute constants.

Our proof of Theorem 4.3 in [18] is a delicate argument with the $O(n)$-dimensional Fourier transform. As this proof is somewhat different from those of the other results in this survey, we don't elaborate on it further.
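The qualitative content of Theorem 4.3 is easy to probe numerically at moderate parameters. The following Monte Carlo sketch (numpy assumed; the values of $n$, $k$, $\varepsilon$ and $\beta$ are illustrative choices, not those of [18]) compares the joint probability on the left of (4.9) with the product of the two marginal probabilities, for a generic unit vector $v$ and random orthonormal $w_1, \dots, w_k$; the two printed numbers should come out close.

```python
# Monte Carlo look at the approximate negative correlation of linear events.
import numpy as np

rng = np.random.default_rng(2)
n, k, eps, beta, trials = 24, 3, 0.5, 1.0, 200_000

# a random unit vector v and random orthonormal w_1..w_k (QR of a Gaussian matrix)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
W, _ = np.linalg.qr(rng.standard_normal((n, k)))   # columns are orthonormal

X = rng.choice((-1, 1), size=(trials, n))          # uniform {-1,1}^n samples
ev_v = np.abs(X @ v) <= eps
ev_w = np.all(np.abs(X @ W) <= beta, axis=1)

p_joint = np.mean(ev_v & ev_w)
p_prod = np.mean(ev_v) * np.mean(ev_w)
print(f"P(joint) = {p_joint:.4f}   P(v-event) * P(w-events) = {p_prod:.4f}")
```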
# 4.3 Sketch of the proof of Theorem 4.1

Now that we have motivated a few of the tools behind Theorem 4.1, we turn to sketch its proof. In analogy with our discussion in Section 4.1, we study the "$n$-dimensional concentration function", which we define to be

$$
f_\varepsilon(v) = \mathbb{P}_{A_n}\left(\|A_n v\|_2 \leqslant \varepsilon n^{1/2}\right), \tag{4.11}
$$

where $v \in \mathbb{S}^{n-1}$, $\varepsilon > 0$ and $A_n$ is drawn uniformly from the $n \times n$ symmetric matrices with entries in $\{-1,1\}$.

Intuitively speaking, we expect $f_\varepsilon(v)$ to be "large" for directions $v$ that are more likely to appear in the kernel of $A_n$, and therefore, to understand the singularity probability of the matrix $A_n$, it is essential to understand the upper tails of the random variable $f_\varepsilon(v)$ when $v \sim \mathbb{S}^{n-1}$ is sampled uniformly at random. As we discussed above, this probabilistic interpretation of singularity was pioneered by Tikhomirov [103] and is a convenient perspective for us to adopt here, although our techniques are quite different.

Moreover, if we want to prove exponential bounds on the singularity probability, we need to control this function for $\varepsilon$ as small as $e^{-cn}$. For technical reasons we also need to restrict ourselves to vectors on the sphere that have controlled infinity norm. We call this subset (a vast majority of the sphere) $\mathbb{S}_b^{n-1}$. Central to our proof is a large deviation estimate of the following type.

THEOREM 4.4. For $L > 1$ and $e^{-cn} \leqslant \varepsilon \ll 1$ we have that

$$
\mathbb{P}_v\left(f_\varepsilon(v) \geqslant (L\varepsilon)^n\right) \leqslant (cL)^{-2n},
$$

where $v \sim \mathbb{S}_b^{n-1}$ is sampled uniformly at random and $c > 0$ is an absolute constant.

In what follows, we sketch the proof of a weaker form of Theorem 4.4, where we prove a bound of the type $(cL)^{-n}$ in place of $(cL)^{-2n}$. This weaker bound does not suffice for our purposes, but most of the main ideas are contained in its proof. Indeed, to prove this it is enough to show

$$
\mathbb{E}_v f_\varepsilon(v) = \mathbb{E}_v \mathbb{P}_{A_n}\left(\|A_n v\|_2 \leqslant \varepsilon n^{1/2}\right) \leqslant (C\varepsilon)^n \tag{4.12}
$$

and then apply Markov's inequality to finish.

To this end, our first step is to break up the sphere based on the set of coordinates that are well behaved. Indeed, by now-standard methods, we can assume that $v_i = \Theta(n^{-1/2})$ for $d = cn$ values of $i$. By union bounding over all such choices for these coordinates, it is enough to assume $v_i = \Theta(n^{-1/2})$ for all $i \in [d]$.

We then show that we can "replace" the matrix $A_n$ (in the definition of $f_\varepsilon$ in (4.11)) with a random matrix $M_n$ that has many of the entries zeroed out. This will allow us to focus on the well-behaved part of $v$ and additionally untangle some of the more subtle and complicated dependencies. Indeed we show, by an appropriate Fourier argument, that

$$
f_\varepsilon(v) \leqslant C^n \cdot \mathbb{P}_{M_n}\left(\|M_n v\|_2 \leqslant \varepsilon n^{1/2}\right), \tag{4.13}
$$

for some $C > 1$, where $M_n$ is the random matrix defined by

$$
M = \left[\begin{array}{cc} \mathbf{0}_{[d] \times [d]} & H^T \\ H & \mathbf{0}_{[d+1,n] \times [d+1,n]} \end{array}\right]. \tag{4.14}
$$

Here $H$ is an $(n-d) \times d$ random matrix with iid entries that are $\mu$-lazy, meaning that $(H)_{i,j} = 0$ with probability $1 - \mu$ and $(H)_{i,j} = \pm 1$ with probability $\mu/2$, for some appropriately small $\mu$.

We now use this special form of $M$ to break up the event $\|Mv\|_2 \leqslant \varepsilon n^{1/2}$. Indeed, we write

$$
Mv = \left[\begin{array}{c} H^T v_{[d+1,n]} \\ H v_{[d]} \end{array}\right]
$$

and so we need only to control the intersection of the events

$$
\left\|H v_{[d]}\right\|_2 \leqslant \varepsilon n^{1/2} \qquad \text{and} \qquad \left\|H^T v_{[d+1,n]}\right\|_2 \leqslant \varepsilon n^{1/2}.
$$

Note that if we simply ignore the second event and work only with the first, we land in a situation very similar to that of previous works, where half of the matrix is neglected entirely and we are thus limited by the $(n\log n)^{1/2}$ obstruction discussed above Theorem 4.1. To overcome this barrier, we crucially need to control these two events simultaneously. The key idea is to use the randomness in $H$ to control the event $\|Hv_{[d]}\|_2 \leqslant \varepsilon n^{1/2}$ and to use the randomness in the selection of $v \in \mathbb{S}_b^{n-1}$ to control the event $\|H^T v_{[d+1,n]}\|_2 \leqslant \varepsilon n^{1/2}$.

For this, we partition the outcomes of $H$ based on a robust notion of rank (hence the name "rank-splitting"). We define the event $\mathcal{E}_k$ to be the event that $H$ has exactly $k$ "unhealthy" singular values,

$$
\mathcal{E}_k = \left\{H : \sigma_{d-k}(H) \geqslant c\sqrt{n} \ \text{and}\ \sigma_{d-k+1}(H) < c\sqrt{n}\right\},
$$

where $\sigma_1(H) \geqslant \dots \geqslant \sigma_d(H)$ denote the singular values of $H$.
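To make the objects in (4.14) and the rank-splitting event concrete, here is a small numpy sketch (the values of $n$, $d$, $\mu$ and $c$ are illustrative choices only) that samples a $\mu$-lazy matrix $H$, assembles the block matrix $M$, and reports the index $k$ of the event $\mathcal{E}_k$ that the sample lands in, i.e. the number of singular values of $H$ below $c\sqrt{n}$.

```python
# Build the zeroed-out matrix M of (4.14) from a mu-lazy H and count its
# "unhealthy" singular values.
import numpy as np

rng = np.random.default_rng(3)
n, d, mu, c = 400, 100, 0.1, 0.1

signs = rng.choice((-1, 1), size=(n - d, d))
mask = rng.random((n - d, d)) < mu           # keep an entry with probability mu
H = signs * mask                             # mu-lazy (n-d) x d matrix

M = np.zeros((n, n))
M[:d, d:] = H.T
M[d:, :d] = H
assert np.allclose(M, M.T)                   # M is symmetric, as in (4.14)

k = int(np.sum(np.linalg.svd(H, compute_uv=False) < c * np.sqrt(n)))
print("this sample of H lies in E_k with k =", k, "out of d =", d)
```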
We then bound

$$
\mathbb{P}_M\left(\|Mv\|_2 \leqslant \varepsilon n^{1/2}\right)
$$

above by (only using the randomness in $M$, for the moment)

$$
\sum_{k=0}^d \mathbb{P}_H\left(\|H^T v_{[d+1,n]}\|_2 \leqslant \varepsilon n^{1/2} \mid \|H v_{[d]}\|_2 \leqslant \varepsilon n^{1/2} \wedge \mathcal{E}_k\right) \cdot \mathbb{P}_H\left(\|H v_{[d]}\|_2 \leqslant \varepsilon n^{1/2} \wedge \mathcal{E}_k\right). \tag{4.15}
$$

We now see the link with our "approximate negative dependence" theorem, which we discussed in Section 4.2 and which we use (after a good deal of preparation) to bound the quantity

$$
\mathbb{P}_H\big(\|H v_{[d]}\|_2 \leqslant \varepsilon\sqrt{n} \wedge \mathcal{E}_k\big).
$$

Indeed, after "tensorizing" Theorem 4.3 and approximating these objects with appropriate nets, we are able to conclude that

$$
\mathbb{P}_H\big(\|H v_{[d]}\|_2 \leqslant \varepsilon\sqrt{n} \wedge \mathcal{E}_k\big) \leqslant (C\varepsilon e^{-ck})^{n-d},
$$

unless $v_{[d]}$ is "structured", in which case we do something different (and substantially easier). Thus, for all non-structured $v$, we have that (4.15) is at most something of the form

$$
(C\varepsilon)^{n-d} \sum_{k=0}^d e^{-ck(n-d)} \cdot \mathbb{P}_H\left(\|H^T v_{[d+1,n]}\|_2 \leqslant \varepsilon n^{1/2} \mid \|H v_{[d]}\|_2 \leqslant \varepsilon n^{1/2} \wedge \mathcal{E}_k\right). \tag{4.16}
$$

Up to this point, we have not appealed to the randomness in the choice of $v \in \mathbb{S}_b^{n-1}$, beyond imposing that $v$ is non-structured. We now introduce the randomness in $v$ by taking an expectation in $v$, bringing us back to our goal of bounding (4.12). Taking expectations and swapping the order of the expectations above tells us that the left-hand side of (4.12) is at most (up to the factor $C^n$ from (4.13))

$$
(C\varepsilon)^{n-d} \sum_{k=0}^d e^{-ck(n-d)} \cdot \mathbb{E}_H\, \mathbb{P}_v\left(\left\|H^T v_{[d+1,n]}\right\|_2 \leqslant \varepsilon n^{1/2}\right) \mathbf{1}(H \in \mathcal{E}_k). \tag{4.17}
$$

We then deal with this inner probability by considering a fixed $H \in \mathcal{E}_k$. Here one can show

$$
\mathbb{P}_{v_{[d+1,n]}}\left(\|H^T v_{[d+1,n]}\|_2 \leqslant \varepsilon n^{1/2}\right) \leqslant (C\varepsilon)^{d-k}. \tag{4.18}
$$

Indeed, intuitively this is clear: $H^T$ has at most $k$ small singular directions, so $v_{[d+1,n]}$ must be "nearly orthogonal" to each of the $d-k$ directions corresponding to the large singular values of $H$. At this point one might be slightly worried that we only have another hard high-dimensional Littlewood-Offord problem on our hands.
However, the big advantage here is that $v_{[d+1,n]}$ is a continuous random variable, and thus its analysis is vastly easier.

Now stringing together (4.13), (4.18) and (4.17) (and using that $\varepsilon > e^{-cn}$), we arrive at our goal of showing that

$$
\mathbb{E}_v f_\varepsilon(v) \leqslant C^n\, \mathbb{E}_v \mathbb{P}_M\left(\|Mv\|_2 \leqslant \varepsilon n^{1/2}\right) \leqslant (C\varepsilon)^n.
$$

To prove the stronger bound stated in Theorem 4.4, we do a "second-moment" type version of the above, which adds some extra complications but follows the same shape as the above.

Acknowledgments. I would like to thank Marcus Michelen for some useful discussions on topics related to the sphere packing literature. I would also like to thank Rob Morris and Marcus Michelen for comments on an earlier draft. Finally, I would like to thank the volunteers working to put together the ICM surveys.

# References

[1] M. AJTAI, J. KOMLÓS, AND E. SZEMERÉDI, A note on Ramsey numbers, Journal of Combinatorial Theory, Series A, 29 (1980), pp. 354-360.
[2] P. BALISTER, B. BOLLOBÁS, R. MORRIS, J. SAHASRABUDHE, AND M. TÍBA, Flat Littlewood polynomials exist, Annals of Mathematics, 192 (2020), pp. 977-1004.
[3] K. BALL, A lower bound for the optimal density of lattice packings, International Mathematics Research Notices, 1992 (1992), pp. 217-221.
[4] W. BANASZCZYK, Balancing vectors and Gaussian measures of $n$-dimensional convex bodies, Random Structures & Algorithms, 12 (1998), pp. 351-360.
[5] N. BANSAL, Constructive algorithms for discrepancy minimization, in 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, IEEE, 2010, pp. 3-10.
[6] N. BANSAL AND H. JIANG, Decoupling via affine spectral-independence: Beck-Fiala and Komlós bounds beyond Banaszczyk, arXiv preprint arXiv:2508.03961, (2025).
[7] N. BANSAL AND H. JIANG, An improved bound for the Beck-Fiala conjecture, arXiv preprint arXiv:2508.01937, (2025).
[8] J. BECK, Roth's estimate of the discrepancy of integer sequences is nearly sharp, Combinatorica, 1 (1981), pp. 319-325.
[9] J. BECK AND T. FIALA, "Integer-making" theorems, Discrete Applied Mathematics, 3 (1981), pp. 1-8.
[10] A. BLOCH AND G. PÓLYA, On the roots of certain algebraic equations, Proceedings of the London Mathematical Society, 33 (1932), pp. 102-114.
[11] P. BORWEIN, Computational Excursions in Analysis and Number Theory, Springer-Verlag, New York, 2002.
[12] J. BOURGAIN, V. H. VU, AND P. M. WOOD, On the singularity probability of discrete random matrices, Journal of Functional Analysis, 258 (2010), pp. 559-603.
[13] B. BUKH, An improvement of the Beck-Fiala theorem, Combinatorics, Probability and Computing, 25 (2016), pp. 380-398.
[14] M. CAMPOS, M. JENSSEN, M. MICHELEN, AND J. SAHASRABUDHE, Singularity of random symmetric matrices revisited, Proceedings of the American Mathematical Society, 150 (2022), pp. 3147-3159.
[15] M. CAMPOS, M. JENSSEN, M. MICHELEN, AND J. SAHASRABUDHE, A new lower bound for sphere packing, arXiv preprint arXiv:2312.10026, (2023).
[16] M. CAMPOS, M. JENSSEN, M. MICHELEN, AND J. SAHASRABUDHE, The least singular value of a random symmetric matrix, in Forum of Mathematics. Pi, vol. 12, Cambridge University Press, 2024.
[17] M. CAMPOS, M. JENSSEN, M. MICHELEN, AND J. SAHASRABUDHE, A new lower bound for the Ramsey numbers $R(3,k)$, arXiv preprint arXiv:2505.13371, (2025).
[18] M. CAMPOS, M. JENSSEN, M. MICHELEN, AND J. SAHASRABUDHE, The singularity probability of a random symmetric matrix is exponentially small, Journal of the American Mathematical Society, 38 (2025), pp. 179-224.
[19] M. CAMPOS, L. MATTOS, R. MORRIS, AND N. MORRISON, On the singularity of random symmetric matrices, Duke Mathematical Journal, 170 (2021), pp. 881-907.
[20] H. COHN, Spherical codes. https://cohn.mit.edu/spherical-codes/. Accessed: 2025-11-24.
[21] H. COHN, A conceptual breakthrough in sphere packing, Notices of the American Mathematical Society, 64 (2017), pp. 102-115.
[22] H. COHN, A. KUMAR, S. D. MILLER, D. RADCHENKO, AND M. VIAZOVSKA, The sphere packing problem in dimension 24, Annals of Mathematics, 185 (2017), pp. 1017-1033.
[23] H. COHN AND Y. ZHAO, Sphere packing bounds via spherical codes, Duke Mathematical Journal, 163 (2014), pp. 1965-2002.
[24] K. P. COSTELLO, T. TAO, AND V. VU, Random symmetric matrices are almost surely nonsingular, Duke Mathematical Journal, 135 (2006), pp. 395-413.
[25] H. DAVENPORT AND C. A. ROGERS, Hlawka's theorem in the geometry of numbers, Duke Mathematical Journal, 14 (1947), pp. 367-375.
[26] P. ERDŐS, On a lemma of Littlewood and Offord, Bulletin of the American Mathematical Society, 51 (1945), pp. 898-902.
[27] P. ERDŐS, Some unsolved problems, Michigan Mathematical Journal, 4 (1957), pp. 291-300.
[28] P. ERDŐS AND A. C. OFFORD, On the number of real roots of a random algebraic equation, Proceedings of the London Mathematical Society, 6 (1956), pp. 139-160.
[29] A. FERBER AND V. JAIN, Singularity of random symmetric matrices—a combinatorial approach to improved bounds, Forum of Mathematics. Sigma, 7 (2019).
[30] A. FERBER, V. JAIN, K. LUH, AND W. SAMOTIJ, On the counting problem in inverse Littlewood-Offord theory, Journal of the London Mathematical Society.
[31] I. G. FERNÁNDEZ, J. KIM, H. LIU, AND O. PIKHURKO, New lower bounds on kissing numbers and spherical codes in high dimensions, American Journal of Mathematics, 147 (2025), pp. 901-925.
[32] D. GAMARNIK AND M. SUDAN, Limits of local algorithms over sparse random graphs, in Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, 2014, pp. 369-376.
[33] I. GIL FERNÁNDEZ, J. KIM, H. LIU, AND O. PIKHURKO, New lower bound on ball packing density in high-dimensional hyperbolic spaces, International Mathematics Research Notices, 2025 (2025). rnae282.
[34] H. H. GOLDSTINE AND J. VON NEUMANN, Numerical inverting of matrices of high order. II, Proceedings of the American Mathematical Society, 2 (1951), pp. 188-202.
[35] J. R. GRIGGS, J. C. LAGARIAS, A. M. ODLYZKO, AND J. B. SHEARER, On the tightest packing of sums of vectors, European Journal of Combinatorics, 4 (1983), pp. 231-236.
[36] G. HALÁSZ, On the distribution of additive arithmetic functions, Acta Arithmetica, 1 (1975), pp. 143-152.
[37] T. C. HALES, A proof of the Kepler conjecture, Annals of Mathematics, 162 (2005), pp. 1065-1185.
[38] G. H. HARDY AND J. E. LITTLEWOOD, Some problems of Diophantine approximation: a remarkable trigonometric series, Proceedings of the National Academy of Sciences, 2 (1916), pp. 583-586.
[39] Z. HEFTY, P. HORN, D. KING, AND F. PFENDER, Improving $R(3,k)$ in just two bites, arXiv preprint arXiv:2510.19718, (2025).
[40] Z. HUNTER, M. KWAN, AND L. SAUERMANN, Exponential anticoncentration of the permanent, arXiv preprint arXiv:2509.22577, (2025).
[41] V. JAIN, A. SAH, AND M. SAWHNEY, Singularity of discrete random matrices, Geometric and Functional Analysis, 31 (2021), pp. 1160-1218.
[42] V. JAIN, A. SAH, AND M. SAWHNEY, On the smallest singular value of symmetric random matrices, Combinatorics, Probability and Computing, 31 (2022), pp. 662-683.
[43] M. JENSSEN, F. JOOS, AND W. PERKINS, On kissing numbers and spherical codes in high dimensions, Advances in Mathematics, 335 (2018), pp. 307-321.
[44] G. A. KABATJANSKI AND V. I. LEVENSTEIN, Bounds for packings on the sphere and in space, Problemy Peredaci Informaci, 14 (1978), pp. 3-25.
[45] J. KAHN, J. KOMLÓS, AND E. SZEMERÉDI, On the probability that a random $\pm 1$-matrix is singular, Journal of the American Mathematical Society, 8 (1995), pp. 223-240.
[46] B. KLARTAG, Lattice packing of spheres in high dimensions using a stochastically evolving ellipsoid, arXiv preprint arXiv:2504.05042, (2025).
[47] D. J. KLEITMAN, On a lemma of Littlewood and Offord on the distributions of linear combinations of vectors, Advances in Mathematics, 5 (1970), pp. 155-157.
[48] J. KOMLÓS, On the determinant of (0, 1) matrices, Studia Scientiarum Mathematicarum Hungarica, 2 (1967), pp. 7-21.
[49] M. KRIVELEVICH, S. LITSYN, AND A. VARDY, A lower bound on the density of sphere packings via graph theory, International Mathematics Research Notices, 2004 (2004), pp. 2271-2279.
[50] J. E. LITTLEWOOD, On the mean values of certain trigonometric polynomials, Journal of the London Mathematical Society, 36 (1961), pp. 307-334.
[51] J. E. LITTLEWOOD, On the mean values of certain trigonometric polynomials II, Illinois Journal of Mathematics, 6 (1962), pp. 1-39.
[52] J. E. LITTLEWOOD, On polynomials $\sum^n \pm z^m$, $\sum^n e^{\alpha_m i} z^m$, $z = e^{\theta i}$, Journal of the London Mathematical Society, 41 (1966), pp. 367-376.
[53] J. E. LITTLEWOOD, The real zeros and value distributions of real trigonometrical polynomials, Journal of the London Mathematical Society, 41 (1966), pp. 336-342.
[54] J. E. LITTLEWOOD AND A. C. OFFORD, On the number of real roots of a random algebraic equation, Journal of the London Mathematical Society, 13 (1938), pp. 288-295.
[55] J. E. LITTLEWOOD AND A. C. OFFORD, On the number of real roots of a random algebraic equation, Journal of the London Mathematical Society, 13 (1938), pp. 288-295.
[56] J. E. LITTLEWOOD AND A. C. OFFORD, On the number of real roots of a random algebraic equation. III, Rec. Math. [Mat. Sbornik] N.S., 12(54) (1943), pp. 277-286.
[57] J. E. LITTLEWOOD AND A. C. OFFORD, On the distribution of zeros and a-values of a random integral function II, Annals of Mathematics, 49 (1948), pp. 885-952.
[58] A. E. LITVAK AND K. E. TIKHOMIROV, Singularity of sparse Bernoulli matrices, Duke Mathematical Journal, 171 (2022), pp. 1135-1233.
[59] S. LOVETT AND R. MEKA, Constructive discrepancy minimization by walking on the edges, SIAM Journal on Computing, 44 (2015), pp. 1573-1582.
[60] H. LÖWEN, Fun with hard spheres, in Statistical Physics and Spatial Statistics (Wuppertal, 1999), vol. 554 of Lecture Notes in Physics, pp. 295-331.
[61] H. MINKOWSKI, Diskontinuitätsbereich für arithmetische Äquivalenz, Journal für die reine und angewandte Mathematik (Crelle), 129 (1905), pp. 220-274.
[62] H. L. MONTGOMERY, Littlewood polynomials, in Analytic Number Theory, Modular Forms and q-hypergeometric Series, G. Andrews and F. Garvan, eds., Springer, Cham, 2017, pp. 533-553.
[63] H. NGUYEN, T. TAO, AND V. VU, Random matrices: tail bounds for gaps between eigenvalues, Probability Theory and Related Fields, 167 (2017), pp. 777-816.
[64] H. NGUYEN AND V. VU, Optimal inverse Littlewood-Offord theorems, Advances in Mathematics, 226 (2011), pp. 5298-5319.
[65] H. H. NGUYEN, Inverse Littlewood-Offord problems and the singularity of random symmetric matrices, Duke Mathematical Journal, 161 (2012), pp. 545-586.
[66] H. H. NGUYEN, On the least singular value of random symmetric matrices, Electronic Journal of Probability, 17 (2012), pp. 1-19.
[67] H. H. NGUYEN, Random matrices: Overcrowding estimates for the spectrum, Journal of Functional Analysis, 275 (2018), pp. 2197-2224.
[68] H. H. NGUYEN AND V. H. VU, Small probability, inverse theorems, and applications, in Erdős Centennial, vol. 25 of Bolyai Society Mathematical Studies, János Bolyai Math. Soc., Budapest, 2013, pp. 409-463.
[69] G. PARISI AND F. ZAMPONI, Amorphous packings of hard spheres for large space dimension, Journal of Statistical Mechanics: Theory and Experiment, (2006), pp. P03017, 15.
[70] G. PARISI AND F. ZAMPONI, Mean-field theory of hard sphere glasses and jamming, Reviews of Modern Physics, 82 (2010), p. 789.
[71] M. RAHMAN AND B. VIRÁG, Local algorithms for independent sets are half-optimal, Annals of Probability, 45 (2017).
[72] C. A. ROGERS, Existence theorems in the geometry of numbers, Annals of Mathematics, 48 (1947), pp. 994-1002.
[73] K. F. ROTH, Remark concerning integer sequences, Acta Arithmetica, 9 (1964), pp. 257-260.
[74] K. F. ROTH, On certain sets of integers, Journal of the London Mathematical Society, 1 (1953), pp. 104-109.
[75] M. RUDELSON, Invertibility of random matrices: norm of the inverse, Annals of Mathematics, 168 (2008), pp. 575-600.
[76] M. RUDELSON AND K. TIKHOMIROV, The sparse circular law under minimal assumptions, Geometric and Functional Analysis, 29 (2019), pp. 561-637.
[77] M. RUDELSON AND R. VERSHYNIN, The Littlewood-Offord problem and invertibility of random matrices, Advances in Mathematics, 218 (2008), pp. 600-633.
[78] M. RUDELSON AND R. VERSHYNIN, Smallest singular value of a random rectangular matrix, Communications on Pure and Applied Mathematics, 62 (2009), pp. 1707-1739.
[79] W. RUDIN, Some theorems on Fourier coefficients, Proceedings of the American Mathematical Society, 10 (1959), pp. 855-859.
[80] A. SAH, J. SAHASRABUDHE, AND M. SAWHNEY, The limiting spectral law for sparse iid matrices, Forum of Mathematics. Pi. To appear.
[81] A. SAH, J. SAHASRABUDHE, AND M. SAWHNEY, On the Spielman-Teng conjecture, Geometric and Functional Analysis, 35 (2025), pp. 633-671.
[82] A. SAH, J. SAHASRABUDHE, AND M. SAWHNEY, The sparse circular law, revisited, Bulletin of the London Mathematical Society, 57 (2025), pp. 330-358.
[83] R. SALEM AND A. ZYGMUND, Some properties of trigonometric series whose terms have random signs, Acta Mathematica, 91 (1954), pp. 245-301.
[84] A. SALI, Stronger form of an $m$-part Sperner theorem, European Journal of Combinatorics, 4 (1983), pp. 179-183.
[85] N. T. SARDARI AND M. ZARGAR, New upper bounds for spherical codes and packings, Mathematische Annalen, (2023), pp. 1-51.
[86] A. SÁRKÖZY AND E. SZEMERÉDI, Über ein Problem von Erdős und Moser, Acta Arithmetica, 11 (1965), pp. 205-208.
[87] C. SCHILDKRAUT, Lower bounds for sphere packing in arbitrary norms, arXiv preprint arXiv:2406.07479, (2024).
[88] H. SHAPIRO, Extremal problems for polynomials, MS thesis, MIT, Cambridge, (1951).
[89] J. B. SHEARER, A note on the independence number of triangle-free graphs, Discrete Mathematics, 46 (1983), pp. 83-87.
[90] J. B. SHEARER, On the independence number of sparse graphs, Random Structures & Algorithms, 7 (1995), pp. 269-271.
[91] J. SPENCER, Six standard deviations suffice, Transactions of the American Mathematical Society, 289 (1985), pp. 679-706.
[92] D. A. SPIELMAN AND S.-H. TENG, Smoothed analysis of algorithms, in Proceedings of the International Congress of Mathematicians, Vol. I (Beijing, 2002), Higher Ed. Press, Beijing, 2002, pp. 597-606.
[93] R. P. STANLEY, Weyl groups, the hard Lefschetz theorem, and the Sperner property, SIAM Journal on Algebraic Discrete Methods, 1 (1980), pp. 168-184.
[94] T. TAO AND V. VU, On random $\pm 1$ matrices: singularity and determinant, Random Structures & Algorithms, 28 (2006), pp. 1-23.
[95] T. TAO AND V. VU, On the singularity probability of random Bernoulli matrices, Journal of the American Mathematical Society, 20 (2007), pp. 603-628.
[96] T. TAO AND V. VU, On the permanent of random Bernoulli matrices, Advances in Mathematics, 220 (2009), pp. 657-669.
[97] T. TAO AND V. VU, Random matrices: the distribution of the smallest singular values, Geometric and Functional Analysis, 20 (2010), pp. 260-297.
[98] T. TAO AND V. VU, Random matrices: universality of ESDs and the circular law, Annals of Probability, 38 (2010), pp. 2023-2065. With an appendix by Manjunath Krishnapur.
[99] T. TAO AND V. VU, A sharp inverse Littlewood-Offord theorem, Random Structures & Algorithms, 37 (2010), pp. 525-539.
[100] T. TAO AND V. H. VU, Additive Combinatorics, vol. 105, Cambridge University Press, 2006.
[101] T. TAO AND V. H. VU, Inverse Littlewood-Offord theorems and the condition number of random discrete matrices, Annals of Mathematics (2), 169 (2009), pp. 595-632.
[102] A. THUE, Über die dichteste Zusammenstellung von kongruenten Kreisen in einer Ebene, no. 1, J. Dybwad, 1911.
[103] K. TIKHOMIROV, Singularity of random Bernoulli matrices, Annals of Mathematics, 191 (2020), pp. 593-634.
[104] S. VANCE, Improved sphere packing lower bounds from Hurwitz lattices, Advances in Mathematics, 227 (2011), pp. 2144-2156.
[105] A. VENKATESH, A note on sphere packings in high dimension, International Mathematics Research Notices, 2013 (2013), pp. 1628-1642.
[106] R. VERSHYNIN, Invertibility of symmetric random matrices, Random Structures & Algorithms, 44 (2014), pp. 135-182.
[107] M. S. VIAZOVSKA, The sphere packing problem in dimension 8, Annals of Mathematics, (2017), pp. 991-1015.
# The Diffusive Behavior of Solutions to the Linear Damped Wave Equation: an Undergraduate D.I.Y. Classnote

Abstract

Despite the fact that the Damped Wave and the Heat equations describe phenomena of distinct nature, it is amazing that their solutions are related in the limit as $t \to \infty$. The aim of this note is to explain to undergraduate students, with a good calculus background, how the relation between these solutions is established. We follow a "do it yourself" strategy and the students are invited to do the suggested exercises in order to understand the content of this note.

# 1 Introduction

Consider the following Partial Differential Equation (PDE)

$$
\mu u_t + u_{tt} - u_{xx} = 0, \tag{1}
$$

where $u = u(x,t)$, $x, t \in \mathbb{R}$ and $\mu \geq 0$. When $\mu = 0$, this is the one-dimensional wave equation, a classical example of a hyperbolic equation. Its solutions are the superposition of two travelling waves, one to the right and one to the left, both propagating with velocity one, see. Jump discontinuities at time zero will also propagate over characteristic curves with velocity one. When $\mu > 0$, the case we are interested in, the equation is still hyperbolic and its solutions retain the properties of the $\mu = 0$ solutions. It is called the Damped Wave Equation (DWE), or the Telegraph Equation (TE), see for instance.

Exercise 1.1 Multiplying both sides of (1) by $u_t$, integrating over $\mathbb{R}$ and assuming all operations are allowed, conclude that

$$
\partial_t \left(\int_{\mathbb{R}} \frac{1}{2}\left[u_t^2 + u_x^2\right] dx\right) = -\mu \int_{\mathbb{R}} u_t^2\, dx. \tag{2}
$$

The integral on the left hand side (lhs) of (2) is the wave's total energy. If $\mu > 0$ and if $u_t$ is not identically zero, then the right hand side (rhs) of (2) is negative, implying that the wave's total energy decreases with time, not being conserved. As we will see later on, the solutions to the DWE are also a superposition of left and right travelling waves but, due to the damping term $\mu u_t$, their amplitudes will diminish with time.

On the other hand, the Heat or Diffusion Equation, (HE) or (DE),

$$
\mu u_t - u_{xx} = 0, \tag{3}
$$

where $u = u(x,t)$, $t > 0$, $x \in \mathbb{R}$ and $\mu > 0$, is a classical example of a parabolic PDE. In (3), $\sigma = 1/\mu$ is the diffusion coefficient.

Exercise 1.2 For $t > 0$ and $x \in \mathbb{R}$, show that

$$
K(x, t) = \frac{1}{\sqrt{t}} f_\mu^*\left(\frac{x}{\sqrt{t}}\right), \tag{4}
$$

where

$$
f_\mu^*(x) = \sqrt{\frac{\mu}{4\pi}}\, e^{-\mu \frac{x^2}{4}}, \tag{5}
$$

is a solution to (3).

We observe that $f_\mu^*(x)$, defined by (5), is the probability density function of a zero mean Gaussian random variable with variance $2/\mu$.

Exercise 1.3 Show that

$$
\int_{\mathbb{R}} f_\mu^*(x)\, dx = 1, \quad \int_{\mathbb{R}} x f_\mu^*(x)\, dx = 0, \quad \int_{\mathbb{R}} x^2 f_\mu^*(x)\, dx = \frac{2}{\mu}. \tag{6}
$$

$K(x,t)$, given by (4), is said to be a fundamental solution to (3). The following properties for $K(x,t)$ are easily verified:

Exercise 1.4 For $t > 0$ and $x \in \mathbb{R}$, verify that: 1) $K(x,t)$ is a $C^\infty$ function of $x$; 2) $K(x,t)$ is scaling invariant, i.e.,

$$
\sqrt{t}\, K(\sqrt{t}\, x, t) = f_\mu^*(x). \tag{7}
$$
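Exercises 1.2-1.4 can also be checked symbolically. Below is a minimal sketch assuming sympy is available; it verifies that $K(x,t)$ built from (4)-(5) solves the heat equation (3), that the second moment in (6) equals $2/\mu$, and that the scaling identity (7) holds.

```python
import sympy as sp

x = sp.symbols('x', real=True)
t, mu = sp.symbols('t mu', positive=True)

f_star = sp.sqrt(mu / (4 * sp.pi)) * sp.exp(-mu * x**2 / 4)       # (5)
K = f_star.subs(x, x / sp.sqrt(t)) / sp.sqrt(t)                   # (4)

print(sp.simplify(mu * sp.diff(K, t) - sp.diff(K, x, 2)))         # heat equation (3): 0
print(sp.integrate(x**2 * f_star, (x, -sp.oo, sp.oo)))            # second moment in (6): 2/mu
print(sp.simplify(sp.sqrt(t) * K.subs(x, sp.sqrt(t) * x) - f_star))   # scaling (7): 0
```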
It turns out that any solution to the Initial Value Problem (IVP), with $\mu > 0$ and a continuous $f(x)$,

$$
\left\{
\begin{array}{l}
\mu u_t - u_{xx} = 0, \quad x \in \mathbb{R},\ t > 0, \\
u(x, 0) = f(x),
\end{array}
\right. \tag{8}
$$

retains the properties of $K(x,t)$ stated in Exercise 1.4, in the sense that: 1) for $t > 0$, $u(\cdot, t)$ is a $C^\infty(\mathbb{R})$ function even if $u(x,t)$ has a jump discontinuity at time $t = 0$. In this case, we say that the discontinuities at time $t = 0$ are instantaneously smoothed out at any later time $t > 0$; 2) solutions to the IVP (8), as a function of $t$, decay and spread out at rates $1/\sqrt{t}$ and $\sqrt{t}$, respectively, i.e., identity (7) holds in the limit as $t \to \infty$, see identity (10).

The above two properties are a straightforward consequence of the well-known integral representation formula

$$
u(x, t) = \int_{\mathbb{R}} K(x - y, t) f(y)\, dy, \tag{9}
$$

which holds for solutions to the IVP (8) where $f(x)$ is a bounded and continuous function except possibly for a finite number of jump discontinuities.

Exercise 1.5 Conclude, from (9), that the solution $u(x,t)$ of the IVP (8), with the above specified $f$, is $C^\infty$ as a function of $x$.

Exercise 1.6 Conclude, from (9), that the solution $u(x,t)$ of the IVP (8), with the above specified $f$ such that $\int_{\mathbb{R}} |f(x)|\, dx < \infty$, satisfies

$$
\lim_{t \to \infty} \sqrt{t}\, u(\sqrt{t}\, x, t) = M f_\mu^*(x), \tag{10}
$$

where

$$
M = \int_{\mathbb{R}} f(x)\, dx \tag{11}
$$

and $f_\mu^*(x)$ is given by (5).

The limit (10) expresses the fact that identity (4), which is satisfied by the kernel $K(x,t)$, holds asymptotically for the solution $u(x,t)$ of the IVP (8), so that $u(x,t)$ decays and spreads out at rates $1/\sqrt{t}$ and $\sqrt{t}$, respectively, having $f_\mu^*(x)$ as its profile function in the limit $t \to \infty$. $M$, given by (11), is the prefactor. For large values of $t$, (10) can be rephrased as

$$
u(x, t) \approx \frac{M}{\sqrt{t}} f_\mu^*\left(\frac{x}{\sqrt{t}}\right). \tag{12}
$$

The above notation means that the two functions in (12) are asymptotically equivalent, that is, their ratio tends to one when $t$ goes to infinity.
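The limit (10) is easy to see numerically. The sketch below (numpy and scipy assumed) takes $\mu = 1$ and $f$ equal to the indicator of $[-1, 1]$, so that $M = 2$, computes $u$ from the convolution formula (9) with `scipy.integrate.quad`, and compares $\sqrt{t}\, u(\sqrt{t}\, x, t)$ with $M f_\mu^*(x)$ at a moderately large time.

```python
# Numerical check of the heat-equation limit (10) for f = indicator of [-1, 1].
import numpy as np
from scipy.integrate import quad

mu = 1.0
f_star = lambda x: np.sqrt(mu / (4 * np.pi)) * np.exp(-mu * x**2 / 4)   # (5)
K = lambda x, t: f_star(x / np.sqrt(t)) / np.sqrt(t)                    # (4)

def u(x, t):
    # (9) with f = indicator of [-1, 1]: integrate the kernel over the support of f
    val, _ = quad(lambda y: K(x - y, t), -1.0, 1.0)
    return val

t, M = 200.0, 2.0
for x in (0.0, 0.5, 1.0, 2.0):
    rescaled = np.sqrt(t) * u(np.sqrt(t) * x, t)
    print(f"x={x:3.1f}   sqrt(t)*u = {rescaled:.4f}   M*f_mu^*(x) = {M * f_star(x):.4f}")
```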
Despite the fact that equations (1) and (3) describe phenomena of distinct nature, it is amazing that their solutions are asymptotically related as $t \to \infty$. The aim of this note is to explain to undergraduate mathematics, science and engineering students how the solutions to the above two equations are connected.

Before presenting the main theorem and its proof, we provide a heuristic argument to highlight the intuition that distinct space and time scalings are behind the change from hyperbolic to parabolic behavior. To present this heuristic reasoning, let $u(x,t)$ be a solution to the following Cauchy Problem (CP)

$$
\left\{
\begin{array}{l}
\mu u_t + u_{tt} - u_{xx} = 0, \quad x \in \mathbb{R},\ t > 0, \\
u(x, 0) = f(x), \\
u_t(x, 0) = g(x).
\end{array}
\right. \tag{13}
$$

Our purpose here is to prove that, if $u(x,t)$ is a solution to (13), then the limit expressed in (10) holds, with the prefactor $M$ given by

$$
M = \int_{\mathbb{R}} \left[f(x) + \frac{1}{\mu} g(x)\right] dx. \tag{14}
$$

Now define

$$
v(x, t) \equiv L^{\frac{1}{2}} u\left(L^{\frac{1}{2}} x, L t\right), \tag{15}
$$

where $L > 1$. We say that the above $v(x,t)$ is a rescaling of $u(x,t)$.

Exercise 1.7 Show that $v(x,t)$, defined by (15), solves the equation

$$
\mu v_t + \frac{1}{L} v_{tt} - v_{xx} = 0. \tag{16}
$$

Assuming that $|v_{tt}(x,t)|$ is uniformly bounded for $x \in \mathbb{R}$ and $t > 0$, the second term on the left hand side of (16) will be small if we choose $L$ large enough. Then, for large $L$, it is reasonable to drop this term, thus obtaining the diffusion equation (3). More precisely, we conclude that, for large values of $t$,

$$
u(x, t) \approx v(x, t), \tag{17}
$$

where $v(x,t)$ is the solution to the IVP

$$
\left\{
\begin{array}{l}
\mu u_t - u_{xx} = 0, \quad x \in \mathbb{R},\ t > 0, \\
u(x, 0) = f(x) + \frac{1}{\mu} g(x).
\end{array}
\right. \tag{18}
$$

The approximation (17) reflects the surprising fact that damped propagating waves decay and spread out with rates $1/\sqrt{t}$ and $\sqrt{t}$, respectively. This formal argument or, equivalently, the approximation (17), is rigorously translated into the following theorem, which we will prove in the next sections.

Theorem 1.1 If $u(x,t)$ is the solution to the Cauchy problem (13), with $\mu > 0$, $f \in C_0^2(\mathbb{R})$ and $g \in C_0^1(\mathbb{R})$, then

$$
\lim_{t \to \infty} \sqrt{t}\, u(\sqrt{t}\, x, t) = M f_\mu^*(x), \tag{19}
$$

where $M$ and $f_\mu^*(x)$ are given by (14) and (5), respectively.

Remark: Theorem 1.1 is a simpler version of more general theorems which require advanced mathematical methods to be proven. In the papers, the reader will be able to check in which directions the above theorem can be generalized.

# 2 Integral representations of solutions to (13)

The DW Equation (1) is classified as hyperbolic, and the pair of straight lines

$$
\alpha = x + t \quad \text{and} \quad \beta = x - t
$$

form its characteristic curves (see). According to, the solution to the CP (13) can be expressed as follows:

$$
u(x, t) = \frac{e^{-\frac{\mu}{2} t}}{2}\left[f(x+t) + f(x-t) + \int_{x-t}^{x+t} f(\alpha)\, \frac{d}{dt} I_0\left(\frac{\mu}{2}\sqrt{t^2 - (x-\alpha)^2}\right) d\alpha + \int_{x-t}^{x+t} \left(g(\alpha) + \frac{\mu}{2} f(\alpha)\right) I_0\left(\frac{\mu}{2}\sqrt{t^2 - (x-\alpha)^2}\right) d\alpha\right], \tag{20}
$$

where $I_n(x)$, for $n = 0, 1, 2, \dots$, represents the modified Bessel function of order $n$, given by

$$
I_n(x) = i^{-n} J_n(ix) = \sum_{j=0}^{\infty} \frac{1}{j!(j+n)!}\left(\frac{x}{2}\right)^{2j+n}, \tag{21}
$$

where $J_n(x)$ is the Bessel function of order $n$,

$$
J_n(x) = \sum_{j=0}^{\infty} \frac{(-1)^j}{j!(j+n)!}\left(\frac{x}{2}\right)^{2j+n}. \tag{22}
$$

If $\mu = 0$ in (13) then we obtain the CP for the Wave Equation, whose solution is given by D'Alembert's formula

$$
u(x, t) = \frac{1}{2}\left[f(x+t) + f(x-t)\right] + \frac{1}{2} \int_{x-t}^{x+t} g(\alpha)\, d\alpha. \tag{23}
$$

Exercise 2.1 Show that D'Alembert's formula can be recovered from (20) when $\mu = 0$.
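Formula (20) can also be evaluated numerically, which gives a concrete preview of Theorem 1.1. The following sketch (numpy and scipy assumed) takes $\mu = 1$, $g = 0$ and a smooth bump $f$ supported in $[-1, 1]$, rewrites the $\frac{d}{dt} I_0$ term using $I_0' = I_1$ (this is Exercise 2.2 below), and compares $\sqrt{t}\, u(\sqrt{t}\, x, t)$ with $M f_\mu^*(x)$ at $t = 40$; the two columns should already be close, and the agreement improves as $t$ grows.

```python
# Numerical evaluation of the representation (20) for the damped wave equation,
# compared with the Gaussian profile predicted by Theorem 1.1.
import numpy as np
from scipy.integrate import quad
from scipy.special import iv                      # modified Bessel functions I_n

mu, t = 1.0, 40.0
f = lambda a: np.where(np.abs(a) < 1.0, np.cos(np.pi * a / 2) ** 2, 0.0)  # bump in [-1, 1]
M, _ = quad(f, -1.0, 1.0)                         # g = 0, so M = integral of f, cf. (14)
f_star = lambda x: np.sqrt(mu / (4 * np.pi)) * np.exp(-mu * x**2 / 4)     # (5)

def u(x, t):
    # (20) with the d/dt I_0 term rewritten via (24); f vanishes outside
    # [-1, 1], which lies inside [x - t, x + t] here, so the boundary terms drop out
    s = lambda a: np.sqrt(t**2 - (x - a) ** 2)
    first = lambda a: (mu / 2) * f(a) * (t / s(a)) * iv(1, (mu / 2) * s(a))
    second = lambda a: (mu / 2) * f(a) * iv(0, (mu / 2) * s(a))
    i1, _ = quad(first, -1.0, 1.0)
    i2, _ = quad(second, -1.0, 1.0)
    return 0.5 * np.exp(-mu * t / 2) * (i1 + i2)

for x in (0.0, 0.5, 1.0, 1.5):
    print(f"x={x:3.1f}   sqrt(t)*u = {np.sqrt(t) * u(np.sqrt(t) * x, t):.4f}"
          f"   M*f_mu^*(x) = {M * f_star(x):.4f}")
```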
Exercise 2.2 Using, for $n \in \mathbb{Z}$ , that $\frac{d}{d\xi} (\xi^n I_n(\xi)) = \xi^n I_{n-1}(\xi)$ and $I_{-n}(\xi) = I_n(\xi)$ , show that $I_0'(\xi) = I_{-1}(\xi) = I_1(\xi)$ , where $I_0'(z)$ means $\frac{d}{dz} I_0(z)$ , and conclude that the first integral on the right-hand side of (20) can be rewritten as

$$
\int_ {x - t} ^ {x + t} \frac {\mu}{2} f (\alpha) \frac {t}{\sqrt {t ^ {2} - (x - \alpha) ^ {2}}} I _ {1} \left(\frac {\mu}{2} \sqrt {t ^ {2} - (x - \alpha) ^ {2}}\right) d \alpha . \tag {24}
$$

Notice that the integration interval $|x - \alpha| \leq t$ , also expressed as $t^2 - (x - \alpha)^2 \geq 0$ , guarantees that the Bessel functions $I_0$ in (20) and $I_1$ in (24) are evaluated at real, non-negative arguments and are therefore real numbers.

# 3 Rescaling

For fixed $x \in \mathbb{R}$ , $t > 0$ and $\mu > 0$ , our aim is to verify that the rescaling (15), applied to the representation (20), yields the limit (19). For $L > 1$ , define

$$
\xi = \xi (\alpha ; L, x) \equiv \frac {\mu}{2} \sqrt {L ^ {2} - (L ^ {\frac {1}{2}} x - \alpha) ^ {2}}. \tag {25}
$$

Exercise 3.1 Use (20) and Exercise 2.2 to obtain a representation for the rescaling $v(x,t)$ given by (15) and show that, at $t = 1$ , this representation is given by

$$
\begin{array}{l} L ^ {\frac {1}{2}} u \left(L ^ {\frac {1}{2}} x, L\right) = L ^ {\frac {1}{2}} \frac {e ^ {- \frac {\mu}{2} L}}{2} \left[ f \left(L ^ {\frac {1}{2}} x + L\right) + f \left(L ^ {\frac {1}{2}} x - L\right) + \int_ {L ^ {\frac {1}{2}} x - L} ^ {L ^ {\frac {1}{2}} x + L} \frac {\mu^ {2} L}{4 \xi} f (\alpha) I _ {1} (\xi) d \alpha \right. \\ \left. + \int_ {L ^ {\frac {1}{2}} x - L} ^ {L ^ {\frac {1}{2}} x + L} \left(g (\alpha) + \frac {\mu}{2} f (\alpha)\right) I _ {0} (\xi) d \alpha \right]. \tag {26} \\ \end{array}
$$

Notice that there is no loss of generality in taking $t = 1$ : replacing $L$ by $L / t$ in the rescaled function $v(x,t)$ , the results will hold for $L^{\frac{1}{2}}u(L^{\frac{1}{2}}y,L)$ , it being enough to replace $y$ by $x / \sqrt{t}$ and to multiply $u$ by $1 / \sqrt{t}$ . That is why, from now on, we consider $t = 1$ and use (26) to analyze the behavior of $L^{\frac{1}{2}}u(L^{\frac{1}{2}}x,L)$ when $L\gg 1$ . Furthermore, notice that the Bessel functions $I_0$ and $I_{1}$ in equation (26) are also real numbers.

Since $f(x)$ has compact support, $f(x) = 0$ if $x \notin I_f$ , where $I_f$ is its support. In particular, there exists $L_0 = L_0(x) > 1$ such that if $L > L_0$ then $(L^{\frac{1}{2}}x \pm L) \notin I_f$ , i.e., $f(L^{\frac{1}{2}}x + L) = 0 = f(L^{\frac{1}{2}}x - L)$ . Therefore, if $L > L_0$ , the right-hand side of (26) can be rewritten as

$$
\frac {\sqrt {L} e ^ {- \frac {\mu}{2} L}}{2} \left[ \int_ {\sqrt {L} x - L} ^ {\sqrt {L} x + L} \frac {\mu^ {2} L}{4 \xi} f (\alpha) I _ {1} (\xi) d \alpha + \int_ {\sqrt {L} x - L} ^ {\sqrt {L} x + L} \left[ g (\alpha) + \frac {\mu}{2} f (\alpha) \right] I _ {0} (\xi) d \alpha \right]. \tag {27}
$$
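Before estimating the integrals in (27), it is instructive to check the rescaled representation (26) against the limit (19) numerically. The sketch below uses the exponentially scaled Bessel functions `scipy.special.ive`, so that the products $e^{-\mu L/2} I_n(\xi)$ are computed without overflow; the data $f$ and $g$, the value of $\mu$ and the values of $L$ are arbitrary illustrative choices.

```python
# Numerical check of Theorem 1.1 through the rescaled representation (26):
# sqrt(L) * u(sqrt(L) x, L) should approach M * f_mu^*(x) as L grows.
# mu, f, g and the values of L below are illustrative choices only.
import numpy as np
from scipy.integrate import quad
from scipy.special import ive          # ive(n, z) = I_n(z) * exp(-z), avoids overflow

mu = 1.0

def f(a):                              # C^2, compactly supported on [-1, 1]
    return (1.0 - a**2)**3 if abs(a) < 1.0 else 0.0

def g(a):                              # C^1, compactly supported on [-1, 1]
    return 0.5 * f(a)

def f_star(x):                         # the Gaussian profile (5)
    return np.sqrt(mu / (4.0 * np.pi)) * np.exp(-mu * x**2 / 4.0)

def rescaled_u(x, L):
    """sqrt(L) * u(sqrt(L) x, L), computed from (26); the integrals are restricted to
    supp f = supp g = [-1, 1], where the integrands are supported anyway."""
    def xi(a):
        return 0.5 * mu * np.sqrt(L**2 - (np.sqrt(L) * x - a)**2)
    # e^{-mu L / 2} I_n(xi) = ive(n, xi) * e^{xi - mu L / 2}, and xi <= mu L / 2
    damp = lambda a: np.exp(xi(a) - 0.5 * mu * L)
    i1 = quad(lambda a: mu**2 * L / (4.0 * xi(a)) * f(a) * ive(1, xi(a)) * damp(a), -1.0, 1.0)[0]
    i0 = quad(lambda a: (g(a) + 0.5 * mu * f(a)) * ive(0, xi(a)) * damp(a), -1.0, 1.0)[0]
    return 0.5 * np.sqrt(L) * (i1 + i0)   # the boundary terms of (26) vanish once L > L_0

M = quad(f, -1.0, 1.0)[0] + quad(g, -1.0, 1.0)[0] / mu      # the prefactor (14)

for L in (50.0, 400.0, 3200.0):
    for x in (0.0, 1.0):
        print(f"L = {L:7.0f}   x = {x:.1f}   sqrt(L) u = {rescaled_u(x, L):.5f}"
              f"   M f*(x) = {M * f_star(x):.5f}")
```

The agreement between the two printed columns improves as $L$ grows, as Theorem 1.1 predicts.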
# 4 Approximations for Bessel Functions

It follows from Theorem A.1 that, given $n \in \mathbb{Z}$ , there exist positive constants $C$ and $\xi_0$ such that, for all $\xi > \xi_0$ ,

$$
\left| I _ {n} (\xi) - \frac {1}{\sqrt {2 \pi}} \frac {e ^ {\xi}}{\sqrt {\xi}} \right| \leq \frac {C}{\sqrt {2 \pi}} \frac {e ^ {\xi}}{\xi}. \tag {28}
$$

We want to use (28) to estimate the integrals in (27). In order to do that, we must ensure that $\xi$ , defined by (25), which appears as the argument of $I_0$ and $I_1$ , satisfies the condition $\xi > \xi_0$ .

Exercise 4.1 Define $L_{1} = 2\xi_{0} / \mu$ and $\overline{L} = \sqrt{L^2 - L_1^2}$ . Show that $\xi > \xi_{0}$ if and only if $L > L_{1}$ and $\alpha \in [L^{1/2}x - \overline{L}, L^{1/2}x + \overline{L}]$ .

Exercise 4.2 Show that, for $L > L_{1}$ , there exists $L_{2}(x)$ such that, if $L > L_{2}$ , then $f$ and $g$ vanish outside the interval $[L^{1/2}x - \overline{L}, L^{1/2}x + \overline{L}]$ .

From Exercises 4.1 and 4.2, if we take $L > \max \{L_0, L_1, L_2\}$ then we are allowed to use (28) in (27). In what follows, we prove the limit (19) by showing that, as $L \to \infty$ , the term

$$
\left| \frac {\sqrt {L} e ^ {- \frac {\mu}{2} L}}{2} \left\{\frac {C}{\sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {{L}}} ^ {L ^ {\frac {1}{2}} x + \overline {{L}}} \frac {\mu^ {2} L e ^ {\xi}}{4 \xi^ {2}} f (\alpha) d \alpha + \frac {C}{\sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {{L}}} ^ {L ^ {\frac {1}{2}} x + \overline {{L}}} \left[ g (\alpha) + \frac {\mu}{2} f (\alpha) \right] \frac {e ^ {\xi}}{\xi} d \alpha \right\} \right|
$$

goes to zero, while the term

$$
\frac {\sqrt {L} e ^ {- \frac {\mu}{2} L}}{2} \left\{\frac {1}{\sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {{L}}} ^ {L ^ {\frac {1}{2}} x + \overline {{L}}} \frac {\mu^ {2} L e ^ {\xi}}{4 \xi^ {3 / 2}} f (\alpha) d \alpha + \frac {1}{\sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {{L}}} ^ {L ^ {\frac {1}{2}} x + \overline {{L}}} \left[ g (\alpha) + \frac {\mu}{2} f (\alpha) \right] \frac {e ^ {\xi}}{\sqrt {\xi}} d \alpha \right\}
$$

goes to $Mf_{\mu}^{*}(x)$ , where $f_{\mu}^{*}$ is the Gaussian profile (5) and $M$ is given by (14). Defining

$$
T _ {\beta} = \frac {\mu}{4 \sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {L}} ^ {L ^ {\frac {1}{2}} x + \overline {L}} \left(L ^ {3 / 2} e ^ {- \frac {\mu}{2} L} \frac {e ^ {\xi}}{\xi^ {\beta}}\right) \frac {\mu}{2} f (\alpha) d \alpha
$$

and

$$
S _ {\beta} = \frac {1}{2 \sqrt {2 \pi}} \int_ {L ^ {\frac {1}{2}} x - \overline {{L}}} ^ {L ^ {\frac {1}{2}} x + \overline {{L}}} \left(L ^ {1 / 2} e ^ {- \frac {\mu}{2} L} \frac {e ^ {\xi}}{\xi^ {\beta}}\right) \left[ g (\alpha) + \frac {\mu}{2} f (\alpha) \right] d \alpha ,
$$

we must prove that $|C(T_2 + S_1)| \to 0$ and $T_{3/2} + S_{1/2} \to Mf_\mu^*(x)$ as $L \to \infty$ . Notice that, if we rewrite $\xi$ as

$$
\xi = \frac {\mu L}{2} \left[ \sqrt {1 - \left(\frac {x}{L ^ {\frac {1}{2}}} - \frac {\alpha}{L}\right) ^ {2}} - 1 \right] + \frac {\mu L}{2},
$$

then the $L$ -dependence of the preceding integrands can be expressed in the form

$$
\left(\frac {2}{\mu}\right) ^ {\beta} L ^ {\gamma - \beta} \left\{\frac {e ^ {\frac {\mu L}{2} \left[ \sqrt {1 - \left(\frac {x}{L ^ {\frac {1}{2}}} - \frac {\alpha}{L}\right) ^ {2}} - 1 \right]}}{\left[ 1 - \left(\frac {x}{L ^ {\frac {1}{2}}} - \frac {\alpha}{L}\right) ^ {2} \right] ^ {\beta / 2}} \right\}, \tag {29}
$$

where $\gamma = 3 / 2$ for $T_{\beta}$ , $\gamma = 1 / 2$ for $S_{\beta}$ and $\beta \in \{1 / 2, 1, 3 / 2, 2\}$ .

Exercise 4.3 Using the Taylor expansion with integral remainder, show that

$$
\left| \sqrt {1 - x} - 1 + \frac {1}{2} x \right| \leq \max _ {[ - x, 0 ]} \left\{\frac {1}{4} (1 + t) ^ {- 3 / 2} \right\} x ^ {2}, \quad | x | < 1.
$$
Exercise 4.4 Using Exercise 4.3, show that

$$
\lim _ {L \to \infty} \frac {\mu L}{2} \left[ \sqrt {1 - \left(\frac {x}{L ^ {\frac {1}{2}}} - \frac {\alpha}{L}\right) ^ {2}} - 1 \right] = - \mu \frac {x ^ {2}}{4}.
$$

From Exercise 4.4, it follows that the term enclosed in braces in (29) converges to $e^{-\mu \frac{x^2}{4}}$ as $L \to \infty$ . Applying the Dominated Convergence Theorem [2], one readily verifies that $|C(T_2 + S_1)| \to 0$ , since the factor $L^{\gamma - \beta}$ reduces to $L^{3/2 - 2}$ for $T_2$ and to $L^{1/2 - 1}$ for $S_1$ , both of which vanish as $L \to \infty$ . Moreover, for $T_{3/2}$ and $S_{1/2}$ , the factor simplifies to $L^{\gamma - \beta} = L^0 = 1$ , so that, in the limit $L \to \infty$ , $T_{3/2} + S_{1/2}$ converges to

$$
\frac {1}{2 \sqrt {2 \pi}} \left(\frac {2}{\mu}\right) ^ {1 / 2} e ^ {- \mu \frac {x ^ {2}}{4}} \left\{\int_ {- \infty} ^ {\infty} \frac {\mu}{2} f (\alpha) d \alpha + \int_ {- \infty} ^ {\infty} \left[ g (\alpha) + \frac {\mu}{2} f (\alpha) \right] d \alpha \right\} =
$$

$$
\sqrt {\frac {\mu}{4 \pi}} e ^ {- \mu \frac {x ^ {2}}{4}} \int_ {- \infty} ^ {\infty} \left[ \frac {1}{\mu} g (\alpha) + f (\alpha) \right] d \alpha = M f _ {\mu} ^ {*} (x).
$$

# A Bessel Functions

From the series representation (21), one sees that $I_{n}(x)$ , $x \in \mathbb{R}$ , is an even or odd function of $x$ according to whether $n$ is even or odd, respectively. Also, for fixed $(x,t)$ and for $\alpha \in [x - t,x + t]$ , the argument of $I_0$ and $I_{1}$ in the integral representation (20) is non-negative, see the remark at the end of Section 2. That said, in the sequel we will prove the asymptotic behaviour (28):

Theorem A.1 Consider the modified Bessel function $I_{\nu}(x)$ , with $\nu \geq 0$ and $x > 0$ . Then

$$
\left| \frac {\sqrt {2 \pi x}}{e ^ {x}} I _ {\nu} (x) - 1 \right| \leq C (x), \tag {30}
$$

where

$$
C (x) \equiv \sqrt {\frac {\pi^ {3} x}{2}} e ^ {- x (1 - \cos \delta)} + \sqrt {\frac {\pi^ {3}}{2 x}} (e ^ {- x} - e ^ {- 2 x}) + e ^ {- 2 x} + \frac {2 | \nu |}{e x \sigma} + \frac {128 \sqrt {2}}{x e ^ {2} \sigma^ {5 / 2}} + \sqrt {2} e ^ {- \frac {\delta^ {2} x}{4}},
$$

with $\sigma = 1 - 4\delta^2 > 0$ and $0 < \delta < 1 / 2$ .

Corollary A.1

$$
\lim _ {x \rightarrow \infty} \frac {\sqrt {2 \pi x}}{e ^ {x}} I _ {\nu} (x) = 1. \tag {31}
$$

Remark: Corollary A.1 says that $I_{\nu}(x)$ and $\frac{e^x}{\sqrt{2\pi x}}$ are asymptotically equivalent as $x \to \infty$ , while Theorem A.1 says that the difference $I_{\nu}(x) - \frac{e^x}{\sqrt{2\pi x}}$ is of smaller order than $\frac{e^x}{\sqrt{2\pi x}}$ as $x \to \infty$ . Therefore, inequality (30) is a stronger statement, since it implies the limit (31).
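Before going through the proof, the statement can be checked numerically: `scipy.special.ive(nu, x)` returns $e^{-x} I_\nu(x)$ , so the ratio $\frac{\sqrt{2\pi x}}{e^x} I_\nu(x)$ is computed without overflow. The values of $\nu$ and $x$ below are arbitrary illustrative choices.

```python
# Numerical illustration of Theorem A.1 / Corollary A.1: sqrt(2 pi x) e^{-x} I_nu(x) -> 1.
# scipy.special.ive(nu, x) = I_nu(x) * exp(-x), so the ratio below never overflows.
import numpy as np
from scipy.special import ive

for nu in (0, 1):
    for x in (10.0, 100.0, 1000.0, 10000.0):
        ratio = np.sqrt(2.0 * np.pi * x) * ive(nu, x)
        print(f"nu = {nu}   x = {x:8.0f}   sqrt(2 pi x) e^-x I_nu(x) = {ratio:.8f}"
              f"   |ratio - 1| * x = {abs(ratio - 1.0) * x:.4f}")
```

The printed ratio tends to one, which is Corollary A.1, and the last column stays bounded, so the discrepancy is in fact $O(1/x)$ , comfortably within the $O(1/\sqrt{x})$ control that (28) provides.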
Proof of Theorem A.1: We make explicit the estimates needed to prove (30). The starting point is the following integral representation

$$
I _ {\nu} (x) = \frac {1}{\pi} \int_ {0} ^ {\pi} e ^ {x \cos t} \cos (\nu t) d t - \frac {\sin (\nu \pi)}{\pi} \int_ {0} ^ {\infty} e ^ {- x \cosh t - \nu t} d t, \tag {32}
$$

see [6, 7].

Exercise A.1 Show that, for $a > 0$ , $\int_{-\infty}^{\infty} e^{-at^2} dt = \sqrt{\frac{\pi}{a}}$ , and then use this to prove that, for $\nu \geq 0$ ,

$$
\int_ {0} ^ {\infty} e ^ {- x \cosh t - \nu t} d t \leq \sqrt {\frac {\pi}{2 x}} e ^ {- x}.
$$

From (32) and Exercise A.1, we get

$$
\left| I _ {\nu} (x) - \frac {1}{\pi} \int_ {0} ^ {\pi} e ^ {x \cos t} \cos (\nu t) d t \right| \leq \sqrt {\frac {1}{2 \pi x}} e ^ {- x}, \quad x > 0. \tag {33}
$$

Now we rewrite the integral on the lhs of (33) as a sum of two integrals, the second of which being

$$
\int_ {\pi / 2} ^ {\pi} e ^ {x \cos t} \cos (\nu t) d t = \int_ {0} ^ {\pi / 2} e ^ {- x \sin u} \cos (\nu (u + \pi / 2)) d u.
$$

Exercise A.2 Show that, if $u \in [0, \pi/2]$ , then $u \geq \sin u \geq (2u)/\pi$ , and use this to prove that

$$
\left| \int_ {\pi / 2} ^ {\pi} e ^ {x \cos t} \cos (\nu t) d t \right| \leq \pi \frac {1 - e ^ {- x}}{2 x}, \quad x > 0.
$$

From (33) and Exercise A.2, we get

$$
\left| I _ {\nu} (x) - \frac {1}{\pi} \int_ {0} ^ {\pi / 2} e ^ {x \cos t} \cos (\nu t) d t \right| \leq \pi \frac {1 - e ^ {- x}}{2 x} + \sqrt {\frac {1}{2 \pi x}} e ^ {- x}, \quad x > 0. \tag {34}
$$

To handle the integral on the left-hand side of (34), we decompose it into two parts:

$$
\frac {1}{\pi} \int_ {0} ^ {\pi / 2} e ^ {x \cos t} \cos (\nu t) d t = \frac {1}{\pi} \int_ {0} ^ {\delta} e ^ {x \cos t} \cos (\nu t) d t + \frac {1}{\pi} \int_ {\delta} ^ {\pi / 2} e ^ {x \cos t} \cos (\nu t) d t, \tag {35}
$$

where $0 < \delta < \pi / 2$ . We then choose $\delta$ sufficiently small so that, for $t \in [0, \delta]$ , the approximation $x \cos t \approx x\left(1 - \frac{t^2}{2}\right)$ is valid, which will allow us to estimate the first integral on the rhs of (35).

Exercise A.3 Show that the second integral on the rhs of (35) can be bounded above by $\pi e^{x\cos \delta} / 2$ .

Later on, we will choose $\delta$ as a function of $x$ so that the appropriate limits can be taken as $x\to \infty$ . Having this in mind, we use Exercise A.3 to replace (34) by

$$
\left| I _ {\nu} (x) - \frac {1}{\pi} \int_ {0} ^ {\delta} e ^ {x \cos t} \cos (\nu t) d t \right| \leq \frac {\pi}{2} e ^ {x \cos \delta} + \pi \frac {1 - e ^ {- x}}{2 x} + \sqrt {\frac {1}{2 \pi x}} e ^ {- x}, \quad x > 0. \tag {36}
$$

Exercise A.4 Let $t \in [0, \delta]$ , $0 < \delta < 1$ and $R(t) \equiv \cos t - (1 - t^2 / 2)$ . Show that

$$
| R (t) | \leq (\cosh 1) t ^ {4}.
$$

Replacing $\cos t$ by $1 - t^2 / 2 + R(t)$ in (36) and multiplying both sides by $(\sqrt{2\pi x})e^{-x}$ , we get

$$
\left| \frac {\sqrt {2 \pi x}}{e ^ {x}} I _ {\nu} (x) - \sqrt {\frac {2}{\pi}} \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right)} \cos \left(\nu \frac {u}{\sqrt {x}}\right) d u \right| \leq
$$

$$
\sqrt {\frac {\pi^ {3} x}{2}} e ^ {- x (1 - \cos \delta)} + \sqrt {\frac {\pi^ {3}}{2 x}} \left(e ^ {- x} - e ^ {- 2 x}\right) + e ^ {- 2 x}, \quad x > 0. \tag {37}
$$

Taking into account (37) and making use of the triangle inequality, to obtain the desired bound for $\left|\sqrt{2\pi x} e^{-x}I_{\nu}(x) - 1\right|$ in Theorem A.1, it remains to estimate the term

$$
\left| \sqrt {\frac {2}{\pi}} \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right)} \cos \left(\nu \frac {u}{\sqrt {x}}\right) d u - 1 \right| = \sqrt {\frac {2}{\pi}} \left| V _ {x} - \sqrt {\frac {\pi}{2}} \right|,
$$

where $V_{x} \equiv \int_{0}^{\delta \sqrt{x}} e^{-\frac{u^{2}}{2} + xR\left(\frac{u}{\sqrt{x}}\right)} \cos \left(\nu \frac{u}{\sqrt{x}}\right) du$ . Our goal from now on is to establish an upper bound for the term $|V_{x} - \sqrt{\pi / 2}|$ , which we will accomplish by bounding

$$
\left| V _ {x} - \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right)} d u \right| + \left| \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right)} d u - \sqrt {\frac {\pi}{2}} \right|. \tag {38}
$$
For this purpose, we shall employ the following two exercises.

Exercise A.5 Show that, for $t \geq 0$ , we have $|\cos t - 1| \leq t$ .

Exercise A.6 Show that, for $t \in [0, \delta]$ and $0 < \delta < 1$ , the inequality in Exercise A.4 can be improved to $0 \leq R(t) \leq (\cosh 1)t^4$ .

From Exercise A.6 we get that $R(t) \leq 2t^4 \leq 2t^2\delta^2$ . Therefore,

$$
- \frac {u ^ {2}}{2} \leq - \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right) \leq - \left(1 - 4 \delta^ {2}\right) \frac {u ^ {2}}{2},
$$

and combining this result with Exercise A.5, we can bound the first term in (38) as follows:

$$
\left| \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2} + x R \left(\frac {u}{\sqrt {x}}\right)} \left[ \cos \left(\nu \frac {u}{\sqrt {x}}\right) - 1 \right] d u \right| \leq \frac {| \nu |}{\sqrt {x}} \int_ {0} ^ {\infty} u e ^ {- \left(1 - 4 \delta^ {2}\right) \frac {u ^ {2}}{2}} d u. \tag {39}
$$

Before proceeding to the analysis of the above integral, we notice that, since

$$
\sqrt {\frac {\pi}{2}} = \int_ {0} ^ {\infty} e ^ {- \frac {u ^ {2}}{2}} d u = \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2}} d u + \int_ {\delta \sqrt {x}} ^ {\infty} e ^ {- \frac {u ^ {2}}{2}} d u,
$$

we can rewrite the second term in (38) as

$$
\left| \int_ {0} ^ {\delta \sqrt {x}} e ^ {- \frac {u ^ {2}}{2}} \left[ e ^ {x R \left(\frac {u}{\sqrt {x}}\right)} - 1 \right] d u - \int_ {\delta \sqrt {x}} ^ {\infty} e ^ {- \frac {u ^ {2}}{2}} d u \right|.
$$

Exercise A.7 Show that $e^t - 1 \leq te^t$ , for $t \geq 0$ .

Using Exercises A.6 and A.7, and since $0 \leq \int_{\delta \sqrt{x}}^{\infty} e^{-\frac{u^2}{2}} du \leq e^{-\frac{\delta^2 x}{4}} \int_0^{\infty} e^{-\frac{u^2}{4}} du = \sqrt{\pi} e^{-\frac{\delta^2 x}{4}}$ , we obtain the following upper bound for the second term in (38):

$$
\frac {2}{x} \int_ {0} ^ {\delta \sqrt {x}} u ^ {4} e ^ {- (1 - 4 \delta^ {2}) \frac {u ^ {2}}{2}} d u + \sqrt {\pi} e ^ {- \frac {\delta^ {2} x}{4}}. \tag {40}
$$

Exercise A.8 Show that, for fixed positive constants $\alpha, \sigma$ ,

$$
\int_ {0} ^ {\infty} u ^ {\alpha} e ^ {- \sigma \frac {u ^ {2}}{2}} d u \leq \left(\frac {2 \alpha}{\sigma e}\right) ^ {\alpha / 2} \sqrt {\frac {\pi}{\sigma}}.
$$

Notice that we can use Exercise A.8 to bound the integrals in (39) and (40), with $\alpha \in \{1,4\}$ , as long as $\delta$ is small enough so that $\sigma = 1 - 4\delta^2 > 0$ . Accordingly, our estimate for (38) takes the form

$$
\frac {\sqrt {2 \pi}}{e x} \frac {| \nu |}{\sigma} + \frac {128}{x e ^ {2}} \frac {\sqrt {\pi}}{\sigma^ {5 / 2}} + \sqrt {\pi} e ^ {- \frac {\delta^ {2} x}{4}}.
$$

Using the above estimate and (37), we finally obtain, as a bound for $\left|\sqrt{2\pi x}\, e^{-x}I_{\nu}(x) - 1\right|$ ,

$$
\sqrt {\frac {\pi^ {3} x}{2}} e ^ {- x (1 - \cos \delta)} + \sqrt {\frac {\pi^ {3}}{2 x}} (e ^ {- x} - e ^ {- 2 x}) + e ^ {- 2 x} + \frac {2 | \nu |}{e x \sigma} + \frac {128 \sqrt {2}}{x e ^ {2} \sigma^ {5 / 2}} + \sqrt {2} e ^ {- \frac {\delta^ {2} x}{4}},
$$

where $\sigma = 1 - 4\delta^2 > 0$ and $0 < \delta < 1/2$ . This is precisely $C(x)$ in (30), which completes the proof.
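The explicit form of $C(x)$ makes Corollary A.1 easy to visualize: for any fixed admissible $\delta$ , every term of $C(x)$ vanishes as $x \to \infty$ . The sketch below tabulates $C(x)$ for the arbitrary illustrative choices $\nu = 1$ and $\delta = 0.1$ , next to the actual error $\left|\frac{\sqrt{2\pi x}}{e^x} I_\nu(x) - 1\right|$ , which Theorem A.1 guarantees is dominated by $C(x)$ .

```python
# The bound C(x) of Theorem A.1 goes to zero as x -> infinity (here with the fixed,
# arbitrary choice delta = 0.1), which is how Corollary A.1 follows. The table also
# prints the actual error |sqrt(2 pi x) e^{-x} I_nu(x) - 1|, which C(x) dominates.
import numpy as np
from scipy.special import ive     # ive(nu, x) = I_nu(x) * exp(-x)

nu, delta = 1.0, 0.1
sigma = 1.0 - 4.0 * delta**2      # sigma = 0.96 > 0, as required

def C(x):
    return (np.sqrt(np.pi**3 * x / 2) * np.exp(-x * (1 - np.cos(delta)))
            + np.sqrt(np.pi**3 / (2 * x)) * (np.exp(-x) - np.exp(-2 * x))
            + np.exp(-2 * x)
            + 2 * abs(nu) / (np.e * x * sigma)
            + 128 * np.sqrt(2) / (x * np.e**2 * sigma**2.5)
            + np.sqrt(2) * np.exp(-delta**2 * x / 4))

for x in (1e2, 1e3, 1e4, 1e5, 1e6):
    err = abs(np.sqrt(2 * np.pi * x) * ive(nu, x) - 1.0)
    print(f"x = {x:9.0e}   error = {err:.3e}   C(x) = {C(x):.3e}")
```

Both columns decrease to zero, with the error staying well below the bound.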
{"title": "The Diffusive Behavior of Solutions to the Linear Damped Wave Equation: an Undergraduate D.I.Y. Classnote", "raw_content": "# The Diffusive Behavior of Solutions to the Linear Damped Wave Equation: an Undergraduate D.I.Y. Classnote\n\nGASTAO A. BRAGA\n\nDepartamento de Matematica\n\nUniversidade Federal de Minas Gerais\n\nCaixa Postal 1621, Belo Horizonte, 30161-970, Brazil\n\nANTONIO MARCOS DA SILVA\n\nDepartamento de Matemática\n\nUniversidade Federal de Ouro Preto\n\nR. Diogo de Vasconcelos, 122, Pilar, 35400-000, Brazil\n\nJUSSARA DE MATOS MOREIRA\n\nDepartamento de Matemática\n\nUniversidade Federal de Minas Gerais\n\nCaixa Postal 1621, Belo Horizonte, 30161-970, Brazil\n\n# Abstract\n\nDespite of the fact that the Damped Wave and the Heat equations describe phenomena of distinct nature, it is amazing that their solutions are related in the limit as $t \\to \\infty$ . The aim of this note is to explain to undergraduate students, with a good calculus background, how the relation between these solutions is established. We follow a \"do it yourself\" strategy and the students are invited to do the suggested exercises in order to understand the content of this note.\n\n# 1 Introduction\n\nConsider the following Partial Differential Equation (PDE)\n\n$$\n\\mu u _ {t} + u _ {t t} - u _ {x x} = 0, \\tag {1}\n$$\n\nwhere $u = u(x,t)$ , $x,t\\in \\mathbb{R}$ and $\\mu \\geq 0$ . When $\\mu = 0$ , this is the one-dimensional wave equation, a classical example of a hyperbolic equation. Its solutions are the superposition of two travelling waves, one to the right and one to the left, both propagating with velocity one, see [1]. Jump discontinuities at time zero will also propagate over characteristic curves with velocity one. When $\\mu > 0$ , the case we are interested in, the equation is still hyperbolic and its solutions retain the properties of the $\\mu = 0$ solutions. It is called the Damped Wave Equation (DWE), or the Telegraph Equation (TE), see [3] for instance.\n\nExercise 1.1 Multiplying both sides of (1) by $u_{t}$ , integrating over $\\mathbb{R}$ and assuming all operations are allowed, conclude that\n\n$$\n\\partial_ {t} \\left(\\int_ {\\mathbb {R}} \\frac {1}{2} [ u _ {t} ^ {2} + u _ {x} ^ {2} ] d x\\right) = - \\mu \\int_ {\\mathbb {R}} u _ {t} ^ {2} d x. \\tag {2}\n$$\n\nThe integral on the left hand side (lhs) of (2) is the wave's total energy. If $\\mu > 0$ and if $u_{t}$ is not identically zero then the right hand side (rhs) of (2) is negative implying that the wave's total energy decreases with time, not being conserved. As we will see later on, the solutions to the DWE are also a superposition of left and right travelling waves but, due to the damping term $\\mu u_{t}$ , their amplitudes will diminish with time.\n\nOn the other hand, the Heat or Diffusion Equation, (HE) or (DE),\n\n$$\n\\mu u _ {t} - u _ {x x} = 0, \\tag {3}\n$$\n\nwhere $u = u(x,t)$ , $t > 0$ , $x \\in \\mathbb{R}$ and $\\mu > 0$ , is a classical example of a parabolic PDE. 
In (3), $\\sigma = 1 / \\mu$ is the diffusion coefficient.\n\nExercise 1.2 For $t > 0$ and $x\\in \\mathbb{R}$ , show that\n\n$$\nK (x, t) = \\frac {1}{\\sqrt {t}} f _ {\\mu} ^ {*} \\left(\\frac {x}{\\sqrt {t}}\\right), \\tag {4}\n$$\n\nwhere\n\n$$\nf _ {\\mu} ^ {*} (x) = \\sqrt {\\frac {\\mu}{4 \\pi}} e ^ {- \\mu \\frac {x ^ {2}}{4}}, \\tag {5}\n$$\n\nis a solution to (3).\n\nWe observe that $f_{\\mu}^{*}(x)$ , defined by (5), is the probability density function of a zero mean Gaussian random variable with $\\sqrt{2 / \\mu}$ variance.\n\nExercise 1.3 Show that\n\n$$\n\\int_ {\\mathbb {R}} f _ {\\mu} ^ {*} (x) d x = 1, \\int_ {\\mathbb {R}} x f _ {\\mu} ^ {*} (x) d x = 0, \\int_ {\\mathbb {R}} x ^ {2} f _ {\\mu} ^ {*} (x) d x = \\frac {2}{\\mu}. \\tag {6}\n$$\n\n$K(x,t)$ , given by (4), is said to be a fundamental solution to (3). The following properties for $K(x,t)$ are easily verified:\n\nExercise 1.4 For $t > 0$ and $x \\in \\mathbb{R}$ , verify that: 1) $K(x,t)$ is a $C^\\infty$ function of $x$ ; 2) $K(x,t)$ is scaling invariant, i.e.,\n\n$$\n\\sqrt {t} K (\\sqrt {t} x, t) = f _ {\\mu} ^ {*} (x). \\tag {7}\n$$\n\nIt turns out that any solution to the Initial Value Problem (IVP), with $\\mu > 0$ and a continuous $f(x)$ ,\n\n$$\n\\left\\{ \\begin{array}{c} \\mu u _ {t} - u _ {x x} = 0, x \\in \\mathbb {R}, t > 0, \\\\ u (x, 0) = f (x), \\end{array} \\right. \\tag {8}\n$$\n\nretains the properties of the $K(x,t)$ , stated in Exercise 1.4, in the sense that: 1) for $t > 0$ , $u(\\cdot ,t)$ is a $C^\\infty (\\mathbb{R})$ function even if $u(x,t)$ has a jump discontinuity at time $t = 0$ . In this case, we say that the discontinuities at time $t = 0$ are instantaneously smoothed out at any later time $t > 0$ ; 2) solutions to the IVP (8), as a function of $t$ , decay and spread out at rates $1 / \\sqrt{t}$ and $\\sqrt{t}$ , respectively, i.e., identity (7)\n\nholds in the limit as $t \\to \\infty$ , see identity (10). The above two properties are a straightforward consequence of the well known integral representation formula [1].\n\n$$\nu (x, t) = \\int_ {\\mathbb {R}} K (x - y, t) f (y) d y \\tag {9}\n$$\n\nwhich holds for solutions to the IVP (8) where $f(x)$ is a bounded and continuous function except possibly for a finite number of jump discontinuities.\n\nExercise 1.5 Conclude, from (9), that the solution $u(x,t)$ of the IVP (8) with the above specified $f$ , is $C^\\infty$ as a function of $x$ .\n\nExercise 1.6 Conclude, from (9), that the solution $u(x,t)$ of the IVP (8) with the above specified $f$ such that $\\int_{\\mathbb{R}} |f(x)| dx < \\infty$ , satisfies\n\n$$\n\\lim _ {t \\rightarrow \\infty} \\sqrt {t} u (\\sqrt {t} x, t) = M f _ {\\mu} ^ {*} (x), \\tag {10}\n$$\n\nwhere\n\n$$\nM = \\int_ {\\mathbb {R}} f (x) d x \\tag {11}\n$$\n\nand $f_{\\mu}^{*}(x)$ is given by (5).\n\nThe limit (10) expresses the fact that identity (4), which is satisfied for the kernel $K(x,t)$ , holds asymptotically for the solution $u(x,t)$ of the IVP (8) so that $u(x,t)$ decays and spreads out at rates $1 / \\sqrt{t}$ and $\\sqrt{t}$ , respectively, having $f_{\\mu}^{*}(x)$ as its profile function, in the limit $t \\to \\infty$ . $M$ , given by (11), is the prefactor.\n\nFor large values of $t$ , (10) can be rephrased as\n\n$$\nu (x, t) \\approx \\frac {M}{\\sqrt {t}} f _ {\\mu} ^ {*} \\left(\\frac {x}{\\sqrt {t}}\\right). \\tag {12}\n$$\n\nThe above notation means that the two functions in (12) are asymptotically equivalent, that is, their ratio tends to one, when $t$ goes to infinity. 
Despite of the fact that equations (1) and (3) describe phenomena of distinct nature, it is amazing that their solutions are asymptotically related as $t \\to \\infty$ . The aim of this\n\nnote is to explain to undergraduate mathematics, science and engineering students how the solutions to the above two equations are connected.\n\nBefore presenting the main theorem and its proof, we provide a heuristic argument to highlight the intuition that distinct space and time scaling is behind the explanation for the changing mechanism from hyperbolic to parabolic behavior. To present this heuristic reasoning, let $u(x,t)$ be a solution to the following Cauchy Problem (CP)\n\n$$\n\\left\\{ \\begin{array}{c} \\mu u _ {t} + u _ {t t} - u _ {x x} = 0, x \\in \\mathbb {R}, t > 0, \\\\ u (x, 0) = f (x), \\\\ u _ {t} (x, 0) = g (x). \\end{array} \\right. \\tag {13}\n$$\n\nOur purpose here is to prove that, if $u(x,t)$ is a solution to (13), then the limit expressed in (10) holds, with the prefactor $M$ given by\n\n$$\nM = \\int_ {\\mathbb {R}} \\left[ f (x) + \\frac {1}{\\mu} g (x) \\right] d x. \\tag {14}\n$$\n\nNow define\n\n$$\nv (x, t) \\equiv L ^ {\\frac {1}{2}} u \\left(L ^ {\\frac {1}{2}} x, L t\\right), \\tag {15}\n$$\n\nwhere $L > 1$ . We say that the above $v(x,t)$ is a rescaling of $u(x,t)$ .\n\nExercise 1.7 Show that $v(x,t)$ , defined by (15), solves the equation\n\n$$\n\\mu v _ {t} + \\frac {1}{L} v _ {t t} - v _ {x x} = 0. \\tag {16}\n$$\n\nAssuming that $|v_{tt}(x,t)|$ is uniformly bounded for $x \\in \\mathbb{R}$ and $t > 0$ , the second term on the left hand side of (16) will be small if we choose $L$ large enough. Then, for large $L$ , it is reasonable to drop this term off thus generating the diffusion equation (3). More precisely, we conclude that, for large values of $t$ ,\n\n$$\nu (x, t) \\approx v (x, t), \\tag {17}\n$$\n\nwhere $v(x,t)$ is the solution to the IVP\n\n$$\n\\left\\{ \\begin{array}{c} \\mu u _ {t} - u _ {x x} = 0, x \\in \\mathbb {R}, t > 0, \\\\ u (x, 0) = f (x) + \\frac {1}{\\mu} g (x). \\end{array} \\right. \\tag {18}\n$$\n\nThe approximation (17) reflects the surprisingly fact that damped propagating waves decay and spread out with rates $1 / \\sqrt{t}$ and $\\sqrt{t}$ , respectively. This formal argument or, equivalently, the approximation (17), is rigorously translated into the following theorem which we will prove in the next sections.\n\nTheorem 1.1 If $u(x,t)$ is the solution to the Cauchy problem (13), with $\\mu > 0$ , $f \\in C_0^2(\\mathbb{R})$ and $g \\in C_0^1(\\mathbb{R})$ , then\n\n$$\n\\lim _ {t \\rightarrow \\infty} \\sqrt {t} u (\\sqrt {t} x, t) = M f _ {\\mu} ^ {*} (x), \\tag {19}\n$$\n\nwhere $M$ and $f_{\\mu}^{*}(x)$ are given by (14) and (5), respectively.\n\nRemark: Theorem 1.1 is a simpler version of more general theorems which require advanced mathematical methods to be proven. In the papers [8, 9, 10, 11, 12], the reader will be able to check in which directions the above theorem can be generalized.\n\n# 2 Integral representations of solutions to (13)\n\nThe DW Equation (1) is classified as hyperbolic and the pair of straight lines\n\n$$\n\\alpha = x + t \\mathrm {e} \\beta = x - t.\n$$\n\nform its characteristic curves, (see [5]). 
According to [5], the solution to the CP (13) can be expressed as follows\n\n$$\n\\begin{array}{l} u (x, t) = \\frac {e ^ {- \\frac {\\mu}{2} t}}{2} \\left[ f (x + t) + f (x - t) + \\int_ {x - t} ^ {x + t} f (\\alpha) \\frac {d}{d t} I _ {0} \\left(\\frac {\\mu}{2} \\sqrt {t ^ {2} - (x - \\alpha) ^ {2}}\\right) d \\alpha \\right. \\\\ \\left. + \\int_ {x - t} ^ {x + t} \\left(g (\\alpha) + \\frac {\\mu}{2} f (\\alpha)\\right) I _ {0} \\left(\\frac {\\mu}{2} \\sqrt {t ^ {2} - (x - \\alpha) ^ {2}}\\right) d \\alpha \\right], \\tag {20} \\\\ \\end{array}\n$$\n\nwhere $I_{n}(x)$ , for $n = 0,1,2,\\dots$ , represents the modified Bessel function of order $n$ , given by\n\n$$\nI _ {n} (x) = i ^ {- n} J _ {n} (i x) = \\sum_ {j = 0} ^ {\\infty} \\frac {1}{j ! (j + n) !} \\left(\\frac {x}{2}\\right) ^ {2 j + n}, \\tag {21}\n$$\n\nwhere $J_{n}(x)$ is the Bessel function of order $n$\n\n$$\nJ _ {n} (x) = \\sum_ {j = 0} ^ {\\infty} \\frac {(- 1) ^ {j}}{j ! (j + n) !} \\left(\\frac {x}{2}\\right) ^ {2 j + n}. \\tag {22}\n$$\n\nIf $\\mu = 0$ in (13) then we obtain the CP for the Wave Equation, whose solution is given by the D'Alembert's formula\n\n$$\nu (x, t) = \\frac {1}{2} [ f (x + t) + f (x - t) ] + \\frac {1}{2} \\int_ {x - t} ^ {x + t} g (\\alpha) d \\alpha . \\tag {23}\n$$\n\nExercise 2.1 Show that D'Alembert's formula can be recovered from (20) when $\\mu = 0$ .\n\nThe representation formula (20), for $\\mu > 0$ , inherits the left and right propagating waves structure of (23): at the point $P = (x_0, t_0)$ , the solution is the superposition of two waves, both travelling with velocity one along the characteristics and both being exponentially damped if $\\mu > 0$ . Besides that, the value $u(x_0, t_0)$ depends uniquely on the values of $f(\\alpha)$ e.g. $g(\\alpha)$ , for $\\alpha \\in [x_0 - t_0, x_0 + t_0]$ , the domain of dependence.\n\nExercise 2.2 Using, for $n \\in \\mathbb{Z}$ , that $\\frac{d}{d\\xi} (\\xi^n I_n(\\xi)) = \\xi^n I_{n-1}(\\xi)$ and $I_{-n}(\\xi) = I_n(\\xi)$ , show that $I_0'(\\xi) = I_{-1}(\\xi) = I_1(\\xi)$ , where $I_0'(z)$ means $\\frac{d}{dz} I_0(z)$ and conclude that the first integral on the right hand side of (20) can be rewritten as\n\n$$\n\\int_ {x - t} ^ {x + t} \\frac {\\mu}{2} f (\\alpha) \\frac {t}{\\sqrt {t ^ {2} - (x - \\alpha) ^ {2}}} I _ {1} \\left(\\frac {\\mu}{2} \\sqrt {t ^ {2} - (x - \\alpha) ^ {2}}\\right) d \\alpha . \\tag {24}\n$$\n\nNotice that the integration interval $|x - \\alpha| \\leq t$ , also expressed as $t^2 - (x - \\alpha)^2 \\geq 0$ , leads that the Bessel functions $I_0$ in (20) and $I_1$ in (24) are in fact real numbers.\n\n# 3 Rescaling\n\nFor fixed $x \\in \\mathbb{R}$ , $t > 0$ and $\\mu > 0$ , our aim is to verify that the rescaling (15), applied to the representation (20), yields the limit (19). For $L > 1$ , define\n\n$$\n\\xi = \\xi (\\alpha ; L, x) \\equiv \\frac {\\mu}{2} \\sqrt {L ^ {2} - (L ^ {\\frac {1}{2}} x - \\alpha) ^ {2}}. \\tag {25}\n$$\n\nExercise 3.1 Use (20) and exercise 2.2 to obtain a representation for the rescaling $v(x,t)$ given by (15) and show that, at $t = 1$ , this representation is given by\n\n$$\n\\begin{array}{l} L ^ {\\frac {1}{2}} u \\left(L ^ {\\frac {1}{2}} x, L\\right) = L ^ {\\frac {1}{2}} \\frac {e ^ {- \\frac {\\mu}{2} L}}{2} \\left[ f \\left(L ^ {\\frac {1}{2}} x + L\\right) + f \\left(L ^ {\\frac {1}{2}} x - L\\right) + \\int_ {L ^ {\\frac {1}{2}} x - L} ^ {L ^ {\\frac {1}{2}} x + L} \\frac {\\mu^ {2} L}{4 \\xi} f (\\alpha) I _ {1} (\\xi) d \\alpha \\right. \\\\ \\left. 
+ \\int_ {L ^ {\\frac {1}{2}} x - L} ^ {L ^ {\\frac {1}{2}} x + L} \\left(g (\\alpha) + \\frac {\\mu}{2} f (\\alpha)\\right) I _ {0} (\\xi) d \\alpha \\right]. \\tag {26} \\\\ \\end{array}\n$$\n\nNotice that we can replace $L$ by $L / t$ in the rescaled function $v(x,t)$ , so that the results will hold for $L^{\\frac{1}{2}}u(L^{\\frac{1}{2}}y,L)$ , being enough to replace $y$ by $x / \\sqrt{t}$ and multiply $u$ by $1 / \\sqrt{t}$ . That is why, from now on we will consider $t = 1$ and therefore, we will use (26) to analyze the behavior of $L^{\\frac{1}{2}}u(L^{\\frac{1}{2}}x,L)$ when $L\\gg 1$ . Furthermore, notice that the Bessel functions $I_0$ and $I_{1}$ in equation (26) are also real numbers.\n\nSince $f(x)$ is a compact support function, then, $f(x) = 0$ if $x \\notin I_f$ , where $I_f$ is its support. In particular, there exists $L_0 = L_0(x) > 1$ such that if $L > L_0$ then $(L^{\\frac{1}{2}}x \\pm L) \\notin I_f$ , i.e., $f(L^{\\frac{1}{2}}x + L) = 0 = f(L^{\\frac{1}{2}}x - L)$ . Therefore, if $L > L_0$ , then, the right hand side of (26) can be rewritten as\n\n$$\n\\frac {\\sqrt {L} e ^ {- \\frac {\\mu}{2} L}}{2} \\left[ \\int_ {\\sqrt {L} x - L} ^ {\\sqrt {L} x + L} \\frac {\\mu^ {2} L}{4 \\xi} f (\\alpha) I _ {1} (\\xi) d \\alpha + \\int_ {\\sqrt {L} x - L} ^ {\\sqrt {L} x + L} \\left[ g (\\alpha) + \\frac {\\mu}{2} f (\\alpha) \\right] I _ {0} (\\xi) d \\alpha \\right]. \\tag {27}\n$$\n\n# 4 Approximations for Bessel Functions\n\nIt follows from Theorem A.1 that, given $n \\in \\mathbb{Z}$ , there exist positive constants $C$ and $\\xi_0$ such that, for all $\\xi > \\xi_0$ ,\n\n$$\n\\left| I _ {n} (\\xi) - \\frac {1}{\\sqrt {2 \\pi}} \\frac {e ^ {\\xi}}{\\sqrt {\\xi}} \\right| \\leq \\frac {C}{\\sqrt {2 \\pi}} \\frac {e ^ {\\xi}}{\\xi}. \\tag {28}\n$$\n\nWe want to use (28) to estimate the integrals on (27). In order to do that, we must ensure that $\\xi$ , defined by (25) and which appears as the argument of $I_0$ and $I_1$ , will satisfy the condition $\\xi > \\xi_0$ .\n\nExercise 4.1 Define $L_{1} = 2\\xi_{0} / \\mu$ and $\\overline{L} = \\sqrt{L^2 - L_1^2}$ . Show that $\\xi > \\xi_{0}$ if and only if $L > L_{1}$ and $\\alpha \\in [L^{1/2}x - \\overline{L}, L^{1/2}x + \\overline{L}]$ .\n\nExercise 4.2 Show that, for $L > L_{1}$ , there exists $L_{2}(x)$ such that, if $L > L_{2}$ , then $f$ and $g$ vanish outside the interval $[L^{1/2}x - \\overline{L}, L^{1/2}x + \\overline{L}]$ .\n\nFrom exercises 4.1 and 4.2, if we take $L > \\max \\{L_0, L_1, L_2\\}$ then we are allowed to replace (28) on (27). 
In what follows, we prove the limit (19) by showing that, as $L \\to \\infty$ , the term\n\n$$\n\\left| \\frac {\\sqrt {L} e ^ {- \\frac {\\mu}{2} L}}{2} \\left\\{\\frac {C}{\\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2}} x - \\overline {{L}}} ^ {L ^ {\\frac {1}{2}} x + \\overline {{L}}} \\frac {\\mu^ {2} L e ^ {\\xi}}{4 \\xi^ {2}} f (\\alpha) d \\alpha + \\frac {C}{\\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2}} x - \\overline {{L}}} ^ {L ^ {\\frac {1}{2}} x + \\overline {{L}}} \\left[ g (\\alpha) + \\frac {\\mu}{2} f (\\alpha) \\right] \\frac {e ^ {\\xi}}{\\xi} d \\alpha \\right\\} \\right|\n$$\n\ngoes to zero, while the term\n\n$$\n\\frac {\\sqrt {L} e ^ {- \\frac {\\mu}{2} L}}{2} \\left\\{\\frac {1}{\\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2}} x - \\overline {{L}}} ^ {L ^ {\\frac {1}{2}} x + \\overline {{L}}} \\frac {\\mu^ {2} L e ^ {\\xi}}{4 \\xi^ {3 / 2}} f (\\alpha) d \\alpha + \\frac {1}{\\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2}} x - \\overline {{L}}} ^ {L ^ {\\frac {1}{2}} x + \\overline {{L}}} \\left[ g (\\alpha) + \\frac {\\mu}{2} f (\\alpha) \\right] \\frac {e ^ {\\xi}}{\\sqrt {\\xi}} d \\alpha \\right\\}\n$$\n\ngoes to $Mf_{\\mu}^{*}(x)$ , where $f_{\\mu}^{*}$ is the Gaussian distribution (5) and $M$ is given by (14).\n\nDefining\n\n$$\nT _ {\\beta} = \\frac {\\mu}{4 \\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2} x - \\overline {{L}}}} ^ {L ^ {\\frac {1}{2} x + \\overline {{L}}}} \\left(L ^ {3 / 2} e ^ {- \\frac {\\mu}{2} L} \\frac {e ^ {\\xi}}{\\xi^ {\\beta}}\\right) \\frac {\\mu}{2} f (\\alpha) d \\alpha\n$$\n\nand\n\n$$\nS _ {\\beta} = \\frac {1}{2 \\sqrt {2 \\pi}} \\int_ {L ^ {\\frac {1}{2}} x - \\overline {{L}}} ^ {L ^ {\\frac {1}{2}} x + \\overline {{L}}} \\left(L ^ {1 / 2} e ^ {- \\frac {\\mu}{2} L} \\frac {e ^ {\\xi}}{\\xi^ {\\beta}}\\right) \\left[ g (\\alpha) + \\frac {\\mu}{2} f (\\alpha) \\right] d \\alpha ,\n$$\n\nwe must prove that $|C(T_2 + S_1)| \\to 0$ and $T_{3/2} + S_{1/2} \\to Mf_\\mu^*$ as $L \\to \\infty$ . Notice that, if we rewrite $\\xi$ as\n\n$$\n\\xi = \\frac {\\mu L}{2} \\left[ \\sqrt {1 - \\left(\\frac {x}{L ^ {\\frac {1}{2}}} - \\frac {\\alpha}{L}\\right) ^ {2}} - 1 \\right] + \\frac {\\mu L}{2},\n$$\n\nthen, the $L$ -dependence of the preceding integrands can be expressed in the form\n\n$$\n\\left(\\frac {2}{\\mu}\\right) ^ {\\beta} L ^ {\\gamma - \\beta} \\left\\{\\frac {e ^ {\\frac {\\mu L}{2} \\left[ \\sqrt {1 - \\left(\\frac {x}{L ^ {\\frac {1}{2}}} - \\frac {\\alpha}{L}\\right) ^ {2}} - 1 \\right]}}{\\left[ 1 - \\left(\\frac {x}{L ^ {\\frac {1}{2}}} - \\frac {\\alpha}{L}\\right) ^ {2} \\right] ^ {\\beta / 2}} \\right\\}, \\tag {29}\n$$\n\nwhere $\\gamma = 3 / 2$ for $T_{\\beta}$ , $\\gamma = 1 / 2$ for $S_{\\beta}$ and $\\beta \\in \\{1 / 2, 1, 3 / 2, 2\\}$ .\n\nExercise 4.3 Using the Taylor expansion with integral reminder, show that\n\n$$\n\\left| \\sqrt {1 - x} - 1 + \\frac {1}{2} x \\right| \\leq \\max _ {[ - x, 0 ]} \\left\\{\\frac {1}{4} (1 + t) ^ {- 3 / 2} \\right\\} x ^ {2}, | x | < 1.\n$$\n\nExercise 4.4 Using Exercise 4.3, show that\n\n$$\n\\lim _ {L \\to \\infty} \\frac {\\mu L}{2} \\left[ \\sqrt {1 - \\left(\\frac {x}{L ^ {\\frac {1}{2}}} - \\frac {\\alpha}{L}\\right) ^ {2}} - 1 \\right] = - \\mu \\frac {x ^ {2}}{4}.\n$$\n\nFrom Exercise 4.4, it follows that the term enclosed in braces in (29) converges to $e^{-\\mu \\frac{x^2}{4}}$ as $L \\to \\infty$ . 
Applying the Dominated Convergence Theorem [2], one readily verifies that $|C(T_2 + S_1)| \\to 0$ , since the factor $L^{\\gamma - \\beta}$ reduces to $L^{3/2 - 2}$ for $T_2$ and $L^{1/2 - 1}$ for $S_1$ , both of which vanish as $L \\to \\infty$ . Moreover, for $T_{3/2}$ and $S_{1/2}$ , the factor simplifies to $L^{\\gamma - \\beta} = L^0 = 1$ , so that, in the limit $L \\to \\infty$ , $T_{3/2} + S_{1/2}$ converges to\n\n$$\n\\frac {1}{2 \\sqrt {2 \\pi}} \\left(\\frac {2}{\\mu}\\right) ^ {1 / 2} e ^ {- \\mu \\frac {x ^ {2}}{4}} \\left\\{\\int_ {- \\infty} ^ {\\infty} \\frac {\\mu}{2} f (\\alpha) d \\alpha + \\int_ {- \\infty} ^ {\\infty} \\left[ g (\\alpha) + \\frac {\\mu}{2} f (\\alpha) \\right] d \\alpha \\right\\} =\n$$\n\n$$\n\\sqrt {\\frac {\\mu}{4 \\pi}} e ^ {- \\mu \\frac {x ^ {2}}{4}} \\int_ {- \\infty} ^ {\\infty} \\left[ \\frac {1}{\\mu} g (\\alpha) + f (\\alpha) \\right] d \\alpha = M f _ {\\mu} ^ {*} (x).\n$$\n\n# A Bessel Functions\n\nFrom the series representation (21), one sees that $I_{n}(x)$ , $x \\in \\mathbb{R}$ , is an even or odd function depending if $n$ is even or odd, respectively. Also, for fixed $(x,t)$ and for $\\alpha \\in [x - t,x + t]$ , the argument of $I_0(x)$ and $I_{1}(x)$ , in the integral representation (20), is non-negative, see Remark 2 of Section 2. Said that, in the sequel we will prove the asymptotic behaviour (28):\n\nTheorem A.1 Consider the modified Bessel function $I_{\\nu}(x)$ , with $\\nu \\geq 0$ and $x > 0$ .\n\nThen\n\n$$\n\\left| \\frac {\\sqrt {2 \\pi x}}{e ^ {x}} I _ {\\nu} (x) - 1 \\right| \\leq C (x), \\tag {30}\n$$\n\nwhere\n\n$$\nC (x) \\equiv \\sqrt {\\frac {\\pi^ {3} x}{2}} e ^ {- x (1 - \\cos \\delta)} + \\sqrt {\\frac {\\pi^ {3}}{2 x}} (e ^ {- x} - e ^ {- 2 x}) + e ^ {- 2 x} + \\frac {2 | \\nu |}{e x \\sigma} + \\frac {1 2 8 \\sqrt {2}}{x e ^ {2} \\sigma^ {5 / 2}} + \\sqrt {2} e ^ {- \\frac {\\delta^ {2} x}{4}},\n$$\n\nwith $\\sigma = 1 - 4\\delta^2 >0$ and $0 < \\delta < 1 / 2$\n\n# Corollary A.1\n\n$$\n\\lim _ {x \\rightarrow \\infty} \\frac {\\sqrt {2 \\pi x}}{e ^ {x}} I _ {\\nu} (x) = 1. \\tag {31}\n$$\n\nRemark: Corollary A.1 says that $I_{\\nu}(x)$ and $\\frac{e^x}{\\sqrt{2\\pi x}}$ are asymptotically equivalent as $x \\to \\infty$ while Theorem A.1 says that the difference $I_{\\nu}(x) - \\frac{e^x}{\\sqrt{2\\pi x}}$ is a little order of $\\frac{e^x}{\\sqrt{2\\pi x}}$ as $x \\to \\infty$ . Therefore, inequality (30) is a stronger statement since it implies the limit (31).\n\nProof of Theorem A.1: We make it explicitly the necessary estimates needed to prove estimate (30). The starting point is the following integral representation\n\n$$\nI _ {\\nu} (x) = \\frac {1}{\\pi} \\int_ {0} ^ {\\pi} e ^ {x \\cos t} \\cos (\\nu t) d t - \\frac {\\sin (\\nu \\pi)}{\\pi} \\int_ {0} ^ {\\infty} e ^ {- x \\cosh t - \\nu t} d t, \\tag {32}\n$$\n\nsee [6, 7].\n\nExercise A.1 Show that, for $a > 0$ , $\\int_{-\\infty}^{\\infty} e^{-at^2} dt = \\sqrt{\\frac{\\pi}{a}}$ , then, use this to prove that, for $\\nu \\geq 0$\n\n$$\n\\int_ {0} ^ {\\infty} e ^ {- x \\cosh t - \\nu t} d t \\leq \\sqrt {\\frac {\\pi}{2 x}} e ^ {- x}.\n$$\n\nFrom (32) and exercise A.1, we get\n\n$$\n\\left| I _ {\\nu} (x) - \\frac {1}{\\pi} \\int_ {0} ^ {\\pi} e ^ {x \\cos t} \\cos (\\nu t) d t \\right| \\leq \\sqrt {\\frac {1}{2 \\pi x}} e ^ {- x}, \\quad x > 0. 
\\tag {33}\n$$\n\nNow we rewrite the integral on the lhs of (33) as a sum of two integrals, the second of which being\n\n$$\n\\int_ {\\pi / 2} ^ {\\pi} e ^ {x \\cos t} \\cos (\\nu t) d t = \\int_ {0} ^ {\\pi / 2} e ^ {- x \\sin u} \\cos (\\nu (u + \\pi / 2)) d u.\n$$\n\nExercise A.2 Show that, if $u \\in [0, \\pi/2]$ , then $u \\geq \\sin u \\geq (2u)/\\pi$ and use this to prove that\n\n$$\n\\left| \\int_ {\\pi / 2} ^ {\\pi} e ^ {x \\cos t} \\cos (\\nu t) d t \\right| \\leq \\pi \\frac {1 - e ^ {- x}}{2 x}, x > 0.\n$$\n\nFrom (33) and exercise A.2, we get\n\n$$\n\\left| I _ {\\nu} (x) - \\frac {1}{\\pi} \\int_ {0} ^ {\\pi / 2} e ^ {x \\cos t} \\cos (\\nu t) d t \\right| \\leq \\pi \\frac {1 - e ^ {- x}}{2 x} + \\sqrt {\\frac {1}{2 \\pi x}} e ^ {- x}, \\quad x > 0. \\tag {34}\n$$\n\nTo handle the integral on the left-hand side of (34), we decompose it into two parts:\n\n$$\n\\frac {1}{\\pi} \\int_ {0} ^ {\\pi / 2} e ^ {x \\cos t} \\cos (\\nu t) d t = \\frac {1}{\\pi} \\int_ {0} ^ {\\delta} e ^ {x \\cos t} \\cos (\\nu t) d t + \\frac {1}{\\pi} \\int_ {\\delta} ^ {\\pi / 2} e ^ {x \\cos t} \\cos (\\nu t) d t, (3 5)\n$$\n\nwhere $0 < \\delta < \\pi / 2$ . We then choose $\\delta$ sufficiently small so that, for $t \\in [0, \\delta]$ , the approximation $x \\cos t \\approx x\\left(1 - \\frac{t^2}{2}\\right)$ is valid, which will allow us to estimate the first integral in the $\\mathrm{rh}$ s of (35).\n\nExercise A.3 Show that the second integral in the rhs of (35) can be bounded above by $\\pi e^{x\\cos \\delta} / 2$ .\n\nLater on, we will choose $\\delta$ as a function of $x$ so that the appropriate limits can be taken as $x\\to \\infty$ . Having this in mind, we use exercise A.3 to replace (34) by\n\n$$\n\\left| I _ {\\nu} (x) - \\frac {1}{\\pi} \\int_ {0} ^ {\\delta} e ^ {x \\cos t} \\cos (\\nu t) d t \\right| \\leq \\frac {\\pi}{2} e ^ {x \\cos \\delta} + \\pi \\frac {1 - e ^ {- x}}{2 x} + \\sqrt {\\frac {1}{2 \\pi x}} e ^ {- x}, x > 0. \\tag {36}\n$$\n\nExercise A.4 Let $t \\in [0, \\delta]$ , $0 < \\delta < 1$ and $R(t) \\equiv \\cos t - (1 - t^2 / 2)$ . Show that\n\n$$\n| R (t) | \\leq (\\cosh 1) t ^ {4}.\n$$\n\nReplacing $\\cos t$ by $1 - t^2 / 2 + R(t)$ in (36) and multiplying both sides by $(\\sqrt{2\\pi x})e^{-x}$ , we get\n\n$$\n\\left| \\frac {\\sqrt {2 \\pi x}}{e ^ {x}} I _ {\\nu} (x) - \\sqrt {\\frac {2}{\\pi}} \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right)} \\cos \\left(\\nu \\frac {u}{\\sqrt {x}}\\right) d u \\right| \\leq\n$$\n\n$$\n\\sqrt {\\frac {\\pi^ {3} x}{2}} e ^ {- x (1 - \\cos \\delta)} + \\sqrt {\\frac {\\pi^ {3}}{2 x}} \\left(e ^ {- x} - e ^ {- 2 x}\\right) + e ^ {- 2 x}, x > 0. \\tag {37}\n$$\n\nTaking into account (37) and making use of the triangle inequality, to obtain the desired bound for $\\left|\\sqrt{2\\pi x} e^{-x}I_{\\nu}(x) - 1\\right|$ in Theorem A.1, it remains to estimate the term\n\n$$\n\\left| \\sqrt {\\frac {2}{\\pi}} \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right)} \\cos \\left(\\nu \\frac {u}{\\sqrt {x}}\\right) d u - 1 \\right| = \\sqrt {\\frac {2}{\\pi}} \\left| V _ {x} - \\sqrt {\\frac {\\pi}{2}} \\right|,\n$$\n\nwhere $V_{x} \\equiv \\int_{0}^{\\delta \\sqrt{x}} e^{-\\frac{u^{2}}{2} + xR\\left(\\frac{u}{\\sqrt{x}}\\right)} \\cos \\left(\\nu \\frac{u}{\\sqrt{x}}\\right) du$ . 
Our goal from now on is to establish an upper bound for the term $|V_{x} - \\sqrt{\\pi / 2}|$ , which we will accomplish by bounding\n\n$$\n\\left| V _ {x} - \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right)} d u \\right| + \\left| \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right)} d u - \\sqrt {\\frac {\\pi}{2}} \\right|. \\tag {38}\n$$\n\nFor this purpose, we shall employ the following two exercises.\n\nExercise A.5 Show that, for $t \\geq 0$ , we have $|\\cos t - 1| \\leq t$ .\n\nExercise A.6 Show that, for $t \\in [0, \\delta]$ and $0 < \\delta < 1$ , the inequality in exercise A.4 can be improved to $0 \\leq R(t) \\leq (\\cosh 1)t^4$ .\n\nFrom exercise A.6 we get that $R(t) \\leq 2t^4 \\leq 2t^2\\delta^2$ . Therefore,\n\n$$\n- \\frac {u ^ {2}}{2} \\leq - \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right) \\leq - \\left(1 - 4 \\delta^ {2}\\right) \\frac {u ^ {2}}{2}\n$$\n\nand combining this result with Exercise A.5, we can bound the first term in (38) as follows:\n\n$$\n\\left| \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2} + x R \\left(\\frac {u}{\\sqrt {x}}\\right)} \\left[ \\cos \\left(\\nu \\frac {u}{\\sqrt {x}}\\right) - 1 \\right] d u \\right| \\leq \\frac {| \\nu |}{\\sqrt {x}} \\int_ {0} ^ {\\infty} u e ^ {- \\left(1 - 4 \\delta^ {2}\\right) \\frac {u ^ {2}}{2}} d u. \\tag {39}\n$$\n\nBefore proceeding to the analysis of the above integral, we notice that, since\n\n$$\n\\sqrt {\\frac {\\pi}{2}} = \\int_ {0} ^ {\\infty} e ^ {- \\frac {u ^ {2}}{2}} d u = \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2}} d u + \\int_ {\\delta \\sqrt {x}} ^ {\\infty} e ^ {- \\frac {u ^ {2}}{2}} d u,\n$$\n\nwe can rewrite the second term in (38) as\n\n$$\n\\left| \\int_ {0} ^ {\\delta \\sqrt {x}} e ^ {- \\frac {u ^ {2}}{2}} \\left[ e ^ {x R \\left(\\frac {u}{\\sqrt {x}}\\right)} - 1 \\right] d u - \\int_ {\\delta \\sqrt {x}} ^ {\\infty} e ^ {- \\frac {u ^ {2}}{2}} d u \\right|.\n$$\n\nExercise A.7 Show that $e^t - 1 \\leq te^t$ , for $t \\geq 0$ .\n\nUsing exercises A.6 and A.7 and since $0 \\leq \\int_{\\delta \\sqrt{x}}^{\\infty} e^{-\\frac{u^2}{2}} du \\leq e^{-\\frac{\\delta^2 x}{4}} \\int_0^{\\infty} e^{-\\frac{u^2}{4}} du = \\sqrt{\\pi} e^{-\\frac{\\delta^2 x}{4}}$ , we obtain the following upper bound for the second term in (38):\n\n$$\n\\frac {2}{x} \\int_ {0} ^ {\\delta \\sqrt {x}} u ^ {4} e ^ {- (1 - 4 \\delta^ {2}) \\frac {u ^ {2}}{2}} d u + \\sqrt {\\pi} e ^ {- \\frac {\\delta^ {2} x}{4}}. \\tag {40}\n$$\n\nExercise A.8 Show that, for fixed positive constants $\\alpha, \\sigma$ ,\n\n$$\n\\int_ {0} ^ {\\infty} u ^ {\\alpha} e ^ {- \\sigma \\frac {u ^ {2}}{2}} d u \\leq \\left(\\frac {2 \\alpha}{\\sigma e}\\right) ^ {\\alpha / 2} \\sqrt {\\frac {\\pi}{\\sigma}}.\n$$\n\nNotice that we can use exercise A.8 to bound the integrals in (39) and (40), with $\\alpha \\in \\{1,4\\}$ , as long as $\\delta$ is small enough so that $\\sigma = 1 - 4\\delta^2 > 0$ . 
# References

[1] W. Strauss, *Partial Differential Equations: An Introduction*, second edition, John Wiley & Sons, Ltd., 2008.

[2] W. Rudin, *Principles of Mathematical Analysis*, third edition, International Editions, 1976.

[3] R. Courant and D. Hilbert, *Methods of Mathematical Physics*, volume 2, Wiley-VCH Verlag GmbH & Co. KGaA, 1962.

[4] F. John, *Partial Differential Equations*, Springer-Verlag, New York, 1981.

[5] A. G. Webster, *Partial Differential Equations of Mathematical Physics*, Dover Publications, Worcester, 1955.

[6] G. N. Watson, *A Treatise on the Theory of Bessel Functions*, second edition, Cambridge, 1966.

[7] F. Mainardi, *Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models*, second edition, World Scientific, New Jersey, 2022.

[8] G. Karch, Selfsimilar profiles in large time asymptotics of solutions to damped wave equations, Studia Mathematica 143(2), 175-197, 2000.

[9] L. Hsiao and T. P. Liu, Convergence to nonlinear diffusion waves for solutions of a system of hyperbolic conservation laws with damping, Comm. Math. Phys. 43, 599-605, 1992.

[10] L. Hsiao and T. P. Liu, Nonlinear diffusive phenomena of nonlinear hyperbolic systems, Chinese Ann. Math. Ser. B 14, 465-480, 1993.

[11] A. Matsumura, On the asymptotic behavior of solutions of semi-linear wave equations, Publ. Res. Inst. Math. Sci. 12(1), 169-189, 1976/77.

[12] K. Nishihara, Asymptotic behavior of solutions of quasilinear hyperbolic equations with linear damping, Journal of Differential Equations 137, 384-395, 1997.
# LANGUAGE, PARTICIPATION AND INCLUSIVITY IN URBAN PLANNING PROCESSES IN MZUZU CITY, MALAWI

ABSTRACT

Participation in urban planning is championed for entrenching democracy and development. Malawi passed the Local Government Act (1998) and the Decentralization Policy (1998) to facilitate community participation in decision-making processes. Several studies have been conducted on decentralization and local governance in relation to community participation, but little attention has been paid to examining the impact of the language used in planning processes on the democracy and inclusivity envisaged in the law and policy. Using communicative action theory, the study examined the challenges that the language used in planning processes poses to inclusivity in the approval processes of urban plans. Data were collected through semi-structured interviews, focus group discussions, observations and document review, and were analyzed using thematic and discourse analysis. The findings show that participation is high at community planning levels, where planners communicate in local languages, but is compromised in the service committees at city level, where final planning decisions are made, owing to a language barrier. Specifically, a lack of sincerity, truthfulness and comprehensibility, and therefore of legitimacy, is apparent. Planners are reluctant to simplify written language and to translate planning jargon into local languages for councillors to understand. The study concludes that community participation in the urban planning process in Mzuzu fails to entrench democracy due to a lack of inclusiveness owing to the language barrier at city level, where final planning decisions are made. The study proposes a framework for inclusive participation in urban planning, covering the motivation for participation, the conditions for effective participation and the outcomes of participation.

Key Words: Community participation, inclusivity, local governance, communicative action, urban planning.

Community participation in decision-making in the urban planning process has been championed for entrenching democratic ideals and development outcomes. Participation was defined by the United Nations Research Institute for Social Development (UNRISD) as: “the organized efforts to increase control over resources and regulative institutions in given situations, on the part of groups and movements hitherto excluded from such control” (Stiefel & Wolfe, 1994: 5). The Malawi Government passed the Local Government Act (1998, amended 2024) and the National Decentralization Policy (1998, amended 2024) to facilitate community participation in decision-making in planning and development. Several studies have been conducted on decentralization and local governance in relation to community participation. Little attention has been paid to examining the impact of the language used in planning processes on inclusivity, and hence on the democratic ideals and development outcomes envisaged in the law and policy. Using the Habermasian communicative action theory, the study examined the challenges that the language used in the planning process poses to inclusivity in the approval processes of urban development plans. Specifically, the study evaluated the influence of planning language on participatory democracy and inclusiveness by examining the extent to which Habermas's preconditions of communication, also known as validity claims, namely a) comprehensibility, b) sincerity, c) truthfulness and d) legitimacy, have been adequately met.
The failure to meet Habermas's validity claims implies a failure to entrench democracy and inclusiveness, because communication is itself a precondition of democracy (Taylor, 1998). The paper is structured as follows: Section 2 presents the literature review, Section 3 the methodology, Section 4 the results and discussion, and Section 5 the conclusion and the proposed framework for inclusive participation in urban planning.

Habermas developed a general theory that provides a platform to critique contemporary capitalist society while providing the preconditions for a more democratic society, and which later inspired the Habermasian theory of Communicative Action (Taylor, 1998). According to Habermas, if two or more people are to communicate effectively with each other, certain conditions have to be met, which he termed "general presuppositions of communication" (Habermas, 1979: 1). Habermas suggests that, when person A communicates with person B, A implicitly assumes or makes four validity claims. First, A assumes that what he is saying is comprehensible (i.e., understandable) to B. This is obviously a precondition of communication because, if what A is saying is incomprehensible to B, then clearly no communication is taking place between A and B (Habermas, 1979; 1987; Taylor, 1998). Secondly, for A to communicate to B, it must be A himself who communicates, from which Habermas infers that A must be "Sincere" in communicating to B. According to Habermas (1987; 1979), the validity claim of sincerity is that, for genuine communication to occur between two persons, the speaker must not deceive the listener. Thirdly, A must communicate 'something' to B, from which Habermas infers that A must assume or make the validity claim that the 'something' he is saying is factually "True" (Taylor, 1998: 123). Fourthly, in order for A to communicate to B, A must be seeking to come to an understanding with B. Thus, A must assume that what he is saying is legitimate within the context of moral norms and conventions shared by both A and B (Habermas, 1979: 2-3).

Habermas (1979) argues that the theory of communicative action enables us to envisage the four preconditions of communication as comprehensibility, truthfulness, sincerity and legitimacy (Taylor, 1998). If these four preconditions cannot be met, then no genuine communication will take place. Since communication is itself a precondition of real democracy, and hence of any democratic participation in planning, without genuine communication there can be no genuine participation in urban planning and decision-making (Taylor, 1998).

The leading pioneer of communicative planning theory has been the American John Forester, who has drawn extensively on Habermasian theory as a vehicle for evaluating planning practice in terms of the ideals of good communication and democratic participation (Taylor, 1998). In his 1989 book, Planning in the Face of Power, Forester begins from the premise that "planning is for the people" and that, in Western liberal democracies, planning practice is constrained by the political realities of a capitalist society (p. 3). His aim is to explore the skills that planners need to maximize their effectiveness in planning for people in the face of power (Forester, 1989). He asserts that, in order to get things done, planners have to be effective communicators and negotiators, because in planning talk and argument matter, and the daily business of a planner is basically communicative (Forester, 1989: 5 & 11).
He insists that, in getting things done, urban planning should aspire to the ideals of democratic decision-making over development proposals (Forester, 1989). While planners will be negotiating with powerful developers, they should also be active in protecting the interests of all groups in society, including the less powerful or marginalized communities (Forester, 1989). Drawing from Habermas, Forester emphasizes the duty of planners to facilitate participatory democracy in planning. By emphasizing the planner's duty to involve the less powerful groups and to expose distorted communication and misinformation, Forester sees planning as a communicative process carrying with it a communicative ethos (Forester, 1989: 22-24).

A qualitative research approach was employed for this study in order to investigate social interactions and explore the meanings that communities ascribed to their participation in the urban planning process. As the philosophical stance of this study is interpretive, qualitative methods enabled the researcher to explore the multiple constructions of reality from diverse opinions in order to understand the phenomenon (Bryman, 2004; Myers, 2008). The study used multi-stage cluster sampling to divide the study population into smaller groups, starting from members at block level, through the neighbourhood and ward committees, up to the Council service committees at Mzuzu City Council. Purposive sampling was used to select members of these groups for in-depth interviews, key informant interviews (KIIs), focus group discussions (FGDs) and observation.

Data were collected through semi-structured interviews, FGDs, observations of planning engagements and document review. Qualitative data from community members, such as block leaders and members of the neighbourhood and ward committees, were collected through in-depth interviews and FGDs. Qualitative data from key informants, such as councillors, planners and planning officials at Mzuzu City Council, were collected through key informant interviews (KIIs) and observation. Qualitative secondary data were also collected from key planning documents such as the urban development plan (UDP), the urban structure plan (USP) and planning and service committee minutes. Data were analyzed using deductive thematic analysis and discourse analysis. Unlike inductive thematic analysis, which derives themes from the data themselves, deductive thematic analysis codes the data against a pre-determined analytical framework.
# LANGUAGE, PARTICIPATION AND INCLUSIVITY IN URBAN PLANNING PROCESSES IN MZUZU CITY, MALAWI

Francis Engwayo Mgawadere (Dept of Humanities, University of Livingstonia) and Mtafu Manda (Department of the Built Environment, Mzuzu University)

# ABSTRACT

Participation in urban planning is championed for entrenching democracy and development. Malawi passed the Local Government Act (1998) and the Decentralization Policy (1998) to facilitate community participation in decision-making processes. Several studies have been conducted on decentralization and local governance in relation to community participation, but little attention has been paid to examining the impact of the language used in planning processes on the democracy and inclusivity envisaged in the law and policy. Using communicative action theory, the study examined the challenges posed by the language used in planning processes for inclusivity in the approval processes of urban plans. Data were collected through semi-structured interviews, focus group discussions, observations and document review, and analyzed using thematic and discourse analysis. The findings show that while there is high participation at the community planning levels, because planners communicate in local languages, participation is compromised in the service committees at city level, where final planning decisions are made, owing to the language barrier. Specifically, a lack of sincerity, truthfulness and comprehensibility, and therefore of legitimacy, is apparent. Planners are reluctant to simplify written language and to translate planning jargon into local languages for councillors to understand. The study concludes that community participation in the urban planning process in Mzuzu fails to entrench democracy due to a lack of inclusiveness owing to the language barrier at city level, where final planning decisions are made. The study proposes a framework for inclusive participation in urban planning covering the motivation for, conditions of, and outcomes of effective participation.

Key Words: Community participation, inclusivity, local governance, communicative action, urban planning.

Community participation in decision-making in the urban planning process has been championed for entrenching democratic ideals and development outcomes. Participation was defined by the United Nations Research Institute for Social Development (UNRISD) as: “the organized efforts to increase control over resources and regulative institutions in given situations, on the part of groups and movements hitherto excluded from such control” (Stiefel & Wolfe, 1994: 5). The Malawi Government passed the Local Government Act (1998, amended 2024) and the National Decentralization Policy (1998, amended 2024) to facilitate community participation in decision-making in planning and development. Several studies have been conducted on decentralization and local governance in relation to community participation, but little attention has been paid to examining the impact of the language used in planning processes on inclusivity in realizing the democratic ideals and development outcomes envisaged in the law and policy. Using the Habermasian communicative action theory, the study examined the challenges posed by the language used in the planning process for inclusivity in the approval processes of urban development plans. Specifically, the study evaluated the influence of planning language on participatory democracy and inclusiveness by examining the extent to which Habermas's preconditions of communication, also known as validity claims, namely a) comprehensibility, b) sincerity, c) truthfulness and d) legitimacy, have been adequately met.
The failure to meet Habermas's validity claims implies a failure to entrench democracy and inclusiveness, because communication is itself a precondition of democracy (Taylor, 1998). The paper is structured as follows: section 2 presents the literature review; section 3 presents the methodology; section 4 presents the results and discussion; section 5 presents the conclusion and the proposed framework for inclusive participation in urban planning.

Habermas developed a general theory that provides a platform for critiquing contemporary capitalist society while setting out the preconditions for a more democratic society, which later inspired the Habermasian theory of Communicative Action (Taylor, 1998). According to Habermas, if two or more people are to communicate effectively with each other, certain conditions have to be met, which he termed the "general presuppositions of communication" (Habermas, 1979: 1). Habermas suggests that when person A communicates with person B, A implicitly assumes or makes four validity claims. First, A assumes that what he is saying is comprehensible (i.e., understandable) to B. This is obviously a precondition of communication because, if what A is saying is incomprehensible to B, then clearly no communication is taking place between A and B (Habermas, 1979; 1987; Taylor, 1998). Secondly, for A to communicate to B, it must be A himself who communicates, from which Habermas infers that A must be "sincere" in communicating to B. According to Habermas (1979; 1987), the validity claim of sincerity holds that for genuine communication to occur between two persons, the speaker must not deceive the listener. Thirdly, A must communicate 'something' to B, from which Habermas infers that A must assume or make the validity claim that the 'something' he is saying is factually "true" (Taylor, 1998: 123). Fourthly, in order for A to communicate to B, A must be seeking to come to an understanding with B; thus, A must assume that what he is saying is legitimate within the context of moral norms and conventions shared by both A and B (Habermas, 1979: 2-3). The communicative action theory therefore enables us to envisage the four preconditions of communication as comprehensibility, truthfulness, sincerity, and legitimacy (Habermas, 1979; Taylor, 1998). If these four preconditions cannot be met, then no genuine communication will take place. And since communication is itself a precondition of real democracy, and hence of any democratic participation in planning, without genuine communication there can be no genuine participation in urban planning and decision-making (Taylor, 1998).

The leading pioneer of communicative planning theory has been the American planner John Forester, who has drawn extensively on Habermasian theory as a vehicle for evaluating planning practice in terms of the ideals of good communication and democratic participation (Taylor, 1998). In his 1989 book, Planning in the Face of Power, Forester begins from the premises that "planning is for the people" and that, in Western liberal democracies, planning practice is constrained by the political realities of a capitalist society (p. 3). His aim is to explore the skills that planners need to maximize their effectiveness in planning for people in the face of power (Forester, 1989). He asserts that in order to get things done, planners have to be effective communicators and negotiators, because in planning, talk and argument matter, and the daily business of a planner is basically communicative (Forester, 1989: 5 & 11).
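To make the logical structure of this argument concrete, the following minimal sketch (an illustration added here, not part of Habermas's or Forester's texts) encodes the four validity claims as a simple checklist in Python, where genuine communication, and hence genuine participation, requires all four claims to hold at once. The example values are illustrative only.

```python
# Illustrative only: a minimal formalization of Habermas's four validity claims
# as a checklist; genuine communication requires all four to hold simultaneously.
from dataclasses import dataclass


@dataclass
class ValidityClaims:
    comprehensibility: bool  # what is said is understandable to the hearer
    sincerity: bool          # the speaker does not deceive the hearer
    truthfulness: bool       # what is said is factually accurate
    legitimacy: bool         # what is said conforms to shared norms and laws

    def genuine_communication(self) -> bool:
        # If any one precondition fails, no genuine communication takes place.
        return all([self.comprehensibility, self.sincerity,
                    self.truthfulness, self.legitimacy])


# Hypothetical example in the spirit of the study's city-level finding:
# English-only deliberations fail comprehensibility, so genuine communication
# (and hence genuine participation) fails even if legitimacy holds on paper.
city_level = ValidityClaims(comprehensibility=False, sincerity=False,
                            truthfulness=False, legitimacy=True)
print(city_level.genuine_communication())  # False
```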
He insists that in getting things done, urban planning should aspire to the ideals of democratic decision-making over development proposals (Forester, 1989). While planners will be negotiating with powerful developers, they should also be active in protecting the interests of all groups in society, including the less powerful or marginalized communities (Forester, 1989). Drawing from Habermas, Forester emphasizes the duty of planners to facilitate participatory democracy in planning. By emphasizing planners' duty to involve the less powerful groups and to expose distorted communication and misinformation, Forester sees planning as a communicative process carrying with it a communicative ethos (Forester, 1989: 22-24).

[A passage of the literature review is illegible in the source. Its recoverable fragments refer to an aim that "the plans should be righteous", cite Dados & Cornell (2012: 13) on the Global South ("It references an entire history of colonialism... and opens new possibilities in politics and social sciences"), note the origin of the prevailing communicative planning theory, and introduce Flyvbjerg's study 'The Dark Side of Planning: Rationality and Real Rationalitat' in Aalborg, which is discussed further below.]

A qualitative research approach was employed for this study in order to investigate social interactions and explore the meanings that communities ascribed to their participation in the urban planning process. As the philosophical stance of this study is interpretive, the qualitative methods enabled the researcher to explore the multiple constructions of reality from diverse opinions in order to understand the phenomenon (Bryman, 2004; Myers, 2008). The study used multi-stage cluster sampling to divide the study population into smaller groups, starting from members at block level, through the neighbourhood and ward committees, and all the way up to the Council Service Committees at Mzuzu City Council. Purposive sampling was used to select members of these groups for in-depth interviews, key informant interviews (KIIs), focus group discussions (FGDs) and observation. Data were collected through semi-structured interviews, FGDs, observations of planning engagements and document review. Qualitative data from community members, such as block leaders and members of the neighbourhood and ward committees, were gathered through in-depth interviews and FGDs. Qualitative data from key informants, such as councillors, planners and planning officials at Mzuzu City Council, were collected through key informant interviews (KIIs) and observation. Qualitative secondary data were also collected from key planning documents such as the urban development plan (UDP), the urban structure plan (USP) and planning and service committee minutes. Data were analyzed using deductive thematic analysis and discourse analysis. Unlike inductive thematic analysis (a qualitative method in which themes emerge directly from the data), the researcher used pre-defined themes derived from Habermas's validity claims of comprehensibility, sincerity, truthfulness and legitimacy, and sought to confirm or refute them. The researcher also used discourse analysis to analyse planning language within its social context, in terms of how planners speak to communities. Discourse analysis enabled the researcher to identify how power dynamics between planners and communities influence the meanings of planning concepts.
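As an illustration of the deductive coding step described above, the sketch below is a hypothetical simplification, not the study's actual procedure, which was interpretive and manual. It assigns interview excerpts to the four pre-defined themes using simple keyword matching; the keyword lists and the sample excerpt are assumptions made for the example.

```python
# Illustrative only: deductive coding against pre-defined themes, approximated
# here by keyword lookup. The real analysis was a manual, interpretive exercise;
# the keyword lists and the sample excerpt below are hypothetical.
PREDEFINED_THEMES = {
    "comprehensibility": ["understand", "language", "jargon", "english", "translate"],
    "sincerity": ["sincere", "deceit", "deception", "honest"],
    "truthfulness": ["truth", "cheat", "accurate", "lie"],
    "legitimacy": ["law", "legal", "policy", "by-law", "constitution"],
}


def code_excerpt(excerpt: str) -> list[str]:
    """Return the pre-defined themes whose keywords appear in an excerpt."""
    text = excerpt.lower()
    return [theme for theme, keywords in PREDEFINED_THEMES.items()
            if any(keyword in text for keyword in keywords)]


# Hypothetical excerpt in the style of the interview quotes reported below.
sample = "Not all of us understand English; the jargon needs to be simplified."
print(code_excerpt(sample))  # ['comprehensibility']
```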
The data enabled the researcher to explore the extent to which planning language influences the entrenchment of democracy and inclusiveness in decision-making processes in urban planning in Mzuzu City. The study focused on confirming or refuting the pre-defined themes drawn from Habermas's validity claims of comprehensibility, sincerity, truthfulness and legitimacy.

The first theme, comprehensibility, investigated the extent to which planning language is understandable to communities when planners communicate advice, information and knowledge orally and in writing. The findings show that the Habermas validity claim of comprehensibility has not been met. There is a higher level of understanding between planners and communities at the community planning level than in the Council Service Committees at city level, because at the community level planners speak the local language and all communities understand the communication:

"When the conversation is done in local languages, many people speak a lot of sense..." (KII-CLLR1/18-11-24). "We understand them 100% because they speak to us in local languages" (KII-CLLR2/14-11-24).

These quotes reveal that community participation is higher at the block, neighbourhood and ward development committee levels. At the community planning level, communities are required to identify, select and prioritize the development projects that they need. It is the first level of planning; final planning and funding decisions are not made here.

However, participants said that community representatives (councillors) struggle to follow the deliberations in the Council Service Committees at city level, due to the language barrier:

...the language is English ...councilors do not understand the language... using jargons which are not of their area. Attempts are there to orient them on the operations, they know what to put, but you find that by the end of the day the language still needs to be simplified for them (KII-P1/14-10-24).

Not all of us [councillors] understand English. Most of the information about planning and development is spoken in English. So, most people do not understand... The deliberations in all service committees are conducted in English (KII-CLLR1/18-11-24).

The extracts reveal that the level of understanding is low because councillors who represent communities in the Council Service Committees fail to follow the deliberations, as many of them do not fully comprehend English. The problem gets worse when technocrats use technical language and planning and financial jargon. The study revealed evidence of reluctance to simplify spoken language for community representatives' ease of understanding:

"And that's why, I have a fight with the Secretariat, because... [They] don't want a learned person .... They want these councilors with lower education so that when they present the reports, councilors do not understand...." (KII-P4/22-11-24).

These findings do not accord with Habermas (1979; 1987), who stresses the need for planning language to be understandable to hearers. The failure of councillors to understand and follow the deliberations in the Physical Planning, Finance and other committees implies that grassroots communities are not participating and certainly not being included in the decision-making processes that lead to final decisions. Evidence from the minutes of the Physical Planning and Finance Committees also indicates that councillors are just passive listeners.
Further, the study found that key planning documents like the urban development plan (UDP) and the urban structure plan (USP) have not been translated into the local languages for the majority to understand. There was a perceived reluctance to translate technical language, planning jargon and documents into the local language:

"...during community outreach programmes, we present ourselves in local languages... but to date the Five Year-Urban Development Plan has not yet been translated... [Of course] translating the whole document would not be meaningful for me because not every information... is relevant to a local person" (KII-P3/29-10-24).

However, communicative planning requires planners to use easy-to-understand language in planning texts, to simplify planning and technical jargon, and even to translate it into vernacular languages. For instance, Forester (1989: 149) advises planners to present their ideas and information in a manner that is less obscure, comprehensible and easy to understand. Habermas (1979; 1987) emphasizes the need for language to always meet the validity claim of comprehensibility, meaning that the information presented by speakers (planners) should be understandable to hearers (communities). The failure of communities to understand what planners are communicating implies that communities are manipulated. It is in the Council Service Committee meetings that crucial planning and funding decisions are made before being forwarded to the Full Council for approval. The fact that inputs from communities are lacking means that the policies and city by-laws do not reflect the will of the majority or the common people. Thus, planning is not democratic and inclusive in Mzuzu City.

The results of this investigation do not align with the Habermas (1979; 1987) validity claim of comprehensibility. According to Habermas (1979), if A communicates to B, A must assume that what he is saying is comprehensible to B. In this study, however, what A (planners) were saying could only be understood by B (communities) at the block and ward level, where communities merely identify, select and prioritize development projects, because the local language is used for communication. When the prioritized projects are sent to the service committees, councillors (B) fail to participate effectively in decision-making about planning and funding allocation due to the language barrier, as A uses English that is full of technical and planning jargon, and the planning texts are not translated into the local language for communities to understand easily.

The next two overlapping and closely related themes, sincerity and truthfulness, investigated the extent to which planners' oral and written communication is honest, non-deceptive and truthful, in order to determine whether planning language enhances participation and inclusivity in the decision-making process in urban planning.

First, the validity claim of sincerity investigated the levels of honesty or deception in planners' communication. The findings indicate that while the language planners use to communicate with communities appears superficially honest and non-deceptive during planning meetings, many interviewees complained that planners are deceptive and dishonest during project implementation:

They sound sincere ...but deceptions arise during budgeting and resource allocation. Selection and identification of contractors is done by themselves. Councillors are not involved... the Internal Procurement Committee (IPC) sits down to discuss bids on their own.
There are no community representatives in the IPC... This is where deceit comes in because they select a contractor who promises kickbacks (KII-CLLR2/14-11-24).

They speak to us very sincerely during planning meetings. The problems arise during implementation. This is when I think they indulge in fishy businesses. The process gets messed up during the identification and awarding of contracts (KII-CLLR1/18-11-24).

No. they are not sincere. There is a kind of deception. Mostly about 30% of the language is deceitful. Largely, deceit comes in so that they can easily convince and walk through. They use deception to advance their own ulterior motives (KII-P1/14-10-24).

These extracts reveal how deceitful planners can be. There is no transparency and accountability when it comes to selecting project contractors; this is done in the Internal Procurement Committee (IPC) in the absence of community representatives. So, while planners sound very sincere during planning meetings, they afterwards implement things that are contrary to what they communicated publicly. Further, the study revealed evidence of reluctance on the part of the Secretariat to simplify spoken language to enable councillors in service committees to understand when financial reports and statements are being presented. In an interview, the City Mayor revealed that he sometimes fights with the Secretariat for resisting concerns about councillors' failure to follow deliberations in the service committees because their level of education is low. He said that the Secretariat is aware that councillors do not understand presentations of the financial reports and statements and that the use of technical language and planning jargon is a deliberate ploy to conceal crucial information from public scrutiny. The key planning documents, such as the Urban Development Plan (UDP) and the Urban Structure Plan (USP), are professionally written, and there is no evidence of insincerity and untruthfulness in them, but problems arise during implementation. This is when what has been communicated according to Habermas's criteria gets distorted in implementation.

Second, the theme of truthfulness investigated the extent to which planners' oral and written communication is factually accurate or truthful. Participants revealed that planners' language is generally truthful when they speak during planning meetings and in planning texts, but there are instances when planners cheat communities:

... [planners] always sound very truthful when they speak to us... but the problems ...arise during budgeting, resource allocation ... [when] members of the Internal Procurement Committee sit down to discuss bids, there is no community representative ....what is implemented is different from what they agreed with people...." (KII-CLLR2/14-11-24)

Decentralization is just a myth. The real powers are still at the Council level... Usually if it is coming with resources they withhold information from the grassroots communities because they would want to hide financial resources. They don't want to diverge more information to communities because it will make it so difficult to play fishy businesses. The Council gets more meat and give bones to the communities (KII-P4/22-11-24).

These results are consistent with Healey's (1995: 259) assertion that verbal agreements reached according to Habermas's validity claims can still be distorted in writing by planners in their offices.
Flyvbjerg (1996: 392) questions the idea of viewing 'planners as noble individuals' owing to their failure to 'speak truthfully'. As further evidence of a lack of sincerity and truthfulness, the researcher was not allowed to observe deliberations in the Council service committees, including the Physical Planning Committee, the Finance Committee and the Internal Procurement Committee, despite prior approval of the letter requesting consent and repeated requests to observe these committees. The lack of both sincerity and truthfulness impedes a genuine flow of communication, which prevents communities from participating in the decision-making process and thus compromises the goal of entrenching democracy and inclusive urban planning as envisaged in the Local Government Act (1998, amended 2024) and the Malawi decentralization policy (UNDP, 2000; Malawi Government, 1998). This is also contrary to Habermas's (1979; 1987) emphasis that language should always meet the validity claim of comprehensibility, meaning that information presented by speakers should be understandable to hearers; the validity claim of sincerity, which requires high levels of honesty; and the validity claim of truthfulness, which demands high levels of factual accuracy. This apparent lack of honesty and truthfulness impedes a genuine flow of communication between planners and communities, and prevents inclusive community participation in decision-making over budgeting, resource allocation, the determination of planning applications for development permission, and the selection of project contractors in the IPC. It also means that the final budgeting and funding decisions are devoid of inputs from the grassroots communities, thereby rendering the planning process less participatory and not inclusive.

On the one hand, the findings agree with Flyvbjerg's (1996) study, 'The Dark Side of Planning: Rationality and Real Rationalitat', conducted in Aalborg, Denmark, a European country in the Global North. Flyvbjerg (1996) found the idea of viewing planners as noble creatures to be a myth: that planners are unethical crooks, liars, deceivers and corrupt professionals; that the reality in which planning takes place is false, cruel, contradictory and seductive; and that planners, as human beings, need lies to survive (Flyvbjerg, 1996: 391). He insists that most observers would agree that deception is part and parcel of many everyday decisions in government, because the social incentives for deception are at present very powerful while controls are often weak, and that deception is part and parcel of the decisions planners are involved in, with strong incentives for planners to deceive others (Flyvbjerg, 1996: 392). He concludes that the idea that planners are noble individuals with good manners is a plain lie of the planning theorists (Flyvbjerg, 1996: 391).

On the other hand, these results run counter to Henri Lefebvre's (1996) influential ideas about the 'right to the city', in which he states that cities should be understood as common goods benefiting all residents, rather than just the rich and powerful oligarchs, and that cities should be inclusive, where every street, every building and every corner belongs to the people who live there, not just the rich, the planners and the entitled, but all of us.
He criticized technocratic planning, arguing that common people should have the power to shape the cities they live in, and that cities are not just spaces for power and control but living spaces that we create together, yet too often they are controlled by capitalist oligarchs, profit-hungry developers and apathetic governments (Lefebvre, 1996).

The final theme investigated the extent to which planners' oral and written communication meets Habermas's (1979; 1987) validity claim of legitimacy. Legitimacy is concerned with how communication complies with normative values and conventions (Forester, 1993). The findings from a review of the planning language in the three key planning texts, the urban structure plan (USP), the urban development plan (UDP) and the urban socio-economic profile (SEP) for Mzuzu City, indicate that communication complies with the normative values, conventions and laws outlined in the legal provisions of the Constitution of the Republic of Malawi (1995), the Local Government Act (1998, amended 2024), the National Decentralization Policy (1998, amended 2024) and the Town and Country Planning Act (1988) (Mzuzu City Council, 2023-2030). The language in the three key planning documents also complies with the provisions of the Constitution of the Republic of Malawi of 1995, which requires the full participation of the grassroots communities in decision-making processes in order to entrench democracy in Malawi. The language also emphasizes the decentralization of powers from the central government to local authorities and from the top half to the lower half of the Council:

The Grassroots Participation Process (GPP) is the bottom-up process, which involves consultation with the communities that aims to gather information on their needs. A GPP task force is formed whose members work in collaboration with Ward Development Committee; block leaders perform the GPP (NAP process). The output of this is the prioritized list of projects. The resultant output of the urban socio-economic profile and the GPP is the formulation of the Urban Development Planning Framework, which highlights major issues, potentials and development objectives and strategies. The framework forms the basis for the formulation of projects and programmes (Mzuzu City (UDP), 2023-2030: p. VIII).

The excerpt underscores that community participation is central to the urban planning process in Mzuzu City. The urban profile document assessed the current situation and identified the available developmental potential for the Council, and using these findings, planners developed the urban development strategy, programmes and projects to be implemented over the next 10 to 15 years. The formulation of the UDP and USP was based on the findings of a process of dialogue, inclusive democracy and discourse, with an equal distribution of power for argumentation with communities. Thus, the planning texts are legitimate because they are based on the outcome of the communicative process and comply with the legal provisions of Malawi. A review of the key planning texts reveals that written communication complies with Habermas's (1979) validity claim of legitimacy, which requires that the language planners use to communicate planning ideas, knowledge and information conforms to normative values, moral conventions and legislation. The three planning texts were written professionally and comply with the legal provisions outlined in the Malawi Constitution, the Local Government Act and the national decentralization policy documents outlined above.
All three key planning documents emphasize participation and representation of the grassroots communities as a way of entrenching participatory democracy and inclusive urban planning. However, in spite of this compliance, participants lamented that problems still arise in the service committees and the IPC at city level, where final decisions are made in the absence of community representatives, as alluded to earlier, and during the implementation phase. The study revealed that while planners may communicate orally that a particular area has been designated as a residential area, in conformity with what is indicated in the urban structure plan (USP), during implementation the area designated as residential turns out to be a mixed-use zone of residential and commercial structures, and a high-density area becomes a mixed zone of high-, medium- and low-density housing:

The structure plan – they approved it, to say, by law this is zoned for residential and the like, but when it comes to implementation, it becomes different – a mixture of residential, commercial and etc... So the documentation itself is legitimate, but here, you are talking of oral – they are communicating orally legitimate things, but when implementing, it's different - it's illegitimate. Thus, legitimacy is really there. They communicate according to the law – but when it comes to implementation, this legitimacy ceases (KII-P1/14-10-24).

This tendency implies that the actual physical infrastructure development is not in line with what is indicated in the Urban Structure Plan (USP) and the Urban Development Plan (UDP), resulting in a kind of development that reflects the ulterior motives of the rich and powerful oligarchs instead of the will of the majority of Mzuzu City residents. It also means that the development of the City is illegitimate because it fails to comply with the legitimate urban plans, which were formulated in compliance with the legal provisions that emphasize participation and representation of communities in planning and decision-making. These findings are consistent with Healey (1993), who reveals that verbal agreements reached according to Habermas's (1979; 1987) validity claims can still be distorted in writing by planners in their offices. Thus, in theory, planners communicate in accordance with the provisions of the laws, which require the participation and representation of communities in all decision-making processes and the decentralization of powers to grassroots communities (Malawi Government, 1995; 1998; UNDP, 2000). In practice, however, all decisions are made by technocrats in service committees at city level, and planners resist delegating these powers to community planning committees despite the decentralization policy requiring them to do so. The fact that councillors who represent their electorate are passive listeners in Council service committees implies that their communities are not actively participating in and influencing planning decisions. They do not actively participate in the deliberations that lead to final planning decisions; they are thus placated and manipulated.

On the one hand, these findings are consistent with those of Yiftachel (1998). In his article, 'Planning and Social Control: Exploring the Dark Side', Yiftachel (1998) argues that urban planning, despite its potential for positive change, has a hidden dark side, where it functions as a tool for social control and oppression, especially of marginalized groups.
Influenced by Michel Foucault, Yiftachel (1998) highlights how planners can reinforce existing power structures by manipulating space and socioeconomic conditions to benefit certain groups while excluding others. In the same vein, the results reveal a hidden dark side, in which planners exclude community representatives from decision-making about planning, budgeting, resource allocation and the selection of project contractors. On the other hand, the findings are contrary to the Habermasian theories of communicative rationality and action. Habermas asserts that if two or more people are to communicate effectively, certain conditions have to be met (Habermas, 1979: 1). One of these preconditions is legitimacy. According to Habermas, if A communicates to B, A should assume that what he is saying is legitimate, that is, that it complies with moral norms, conventions and laws (Habermas, 1979: 1; Taylor, 1998: 123). The failure to meet the validity claim of legitimacy implies that no genuine communication is taking place, and thus a lack of community participation in urban planning. As genuine communication is a precondition for participatory democracy, its absence means the absence of participatory democracy and inclusivity in urban planning.

A limitation of the findings is that the researcher was not granted access to observe deliberations in service committee meetings. The researcher regards this as further proof of a lack of sincerity and truthfulness, because access was denied despite the Council consenting to it earlier (see the Request for Consent Letter in the Appendix). Nevertheless, the researcher managed to access the minutes of previous service committee meetings and was able to draw conclusions.

# Conclusion

The study concludes that planning language in the decision-making process in urban planning in Mzuzu City fails to enhance community participation and inclusivity, owing to the language barrier at city level where final planning decisions are made, resulting in a failure to entrench participatory democracy. The study found that Habermas's (1979; 1987) validity claims have not been met because planning language is incomprehensible, insincere, untruthful and illegitimate, thereby compromising participation and inclusive urban planning in Mzuzu City. First, the study revealed that the validity claim of comprehensibility was met at the block and ward level, where community participation is high because planners use the local language to communicate, but it was not met in the service committees at city level, where participation is compromised because planners use English, fraught with technical language and planning jargon, as the official language of communication. Second, the study revealed that the validity claim of sincerity was not met, because the levels of planners' deception, which impede a genuine flow of communication in the service committees at city level, were high. While planners speak and write in a manner that appears to meet Habermas's (1979; 1987) validity claim of sincerity, participants revealed episodes of deception and dishonesty during budgeting, resource allocation and the selection of project contractors in the absence of community representatives.
Third, the study found that the validity claim of truthfulness was not met, because the levels of cheating and factual inaccuracy in planners' oral and written communication, which impede a genuine flow of planning ideas, knowledge and information, were high. While on the surface planners' oral and written communication sounds as though it meets Habermas's validity claim of truthfulness, participants narrated episodes of lies, cheating and inaccuracies arising in the service committees at city level, especially during budgeting, resource allocation, the determination of planning applications for development permission and the writing of the certificate of escalation. Fourth, while both the spoken and written language during planning meetings and in the UDP and USP documents meets Habermas's (1979; 1987) validity claim of legitimacy, in that it complies with the legal provisions outlined in the Malawi Constitution (1995), the Local Government Act (1998, amended 2024) and the National Decentralization Policy (1998, amended 2024), participants complained that problems arise in the service committees and during the implementation phase. They revealed that what is legitimately communicated is not what usually gets implemented. It was also found that although the key planning texts (UDP and USP) are sincerely, truthfully and legitimately written, these documents have not been translated into the local language for everyone to read, and many communities are not aware of their existence. Therefore, planning language fails to enhance community participation and inclusivity in the decision-making process in urban planning, owing to the language barrier at city level where final planning decisions are made, resulting in a failure to entrench participatory democracy.

# Framework for Inclusive Participation in Urban Planning

In order to realize the intentions of the policy and law in local governance, a framework that enables easy communication and understanding is proposed. Habermas (1979; 1987) requires the meeting of the validity claims of comprehensibility, sincerity, truthfulness and legitimacy. As communication is itself a precondition for democracy, poor communication implies a lack of participation in the democratic decision-making process. Therefore, to achieve full participation for effective inclusion in planning processes, certain conditions have to be met in Mzuzu City. These include the motivations and conditions for inclusive participation for the purposes of realizing the Sustainable Development Goals (SDG 11) as well as regional (Africa's Agenda 2063) and national (MW2063) aspirations.

The study proposes a framework for inclusive community participation in planning. This framework has four tiers of community participation in urban planning. The first tier begins at the block level, where grassroots communities directly participate in the planning process. This is the first stage in the planning process. The motive of participation is that grassroots communities should identify and select community development projects and send them to the neighbourhood committees. The condition for inclusive participation is that block leaders ensure that the identified projects truly and genuinely reflect the needs of the grassroots communities, rather than the ulterior motives of community leaders. The result of grassroots communities' participation must be projects that communities really need.
The second tier of participation is the community planning committees, split into two: the neighbourhood and ward committees. The form of participation in this tier is indirect participation by elected members who participate on behalf of their people. The motive of participation must be to prepare area action plans which reflect the needs of communities at block level. The conditions for inclusive community participation include that whatever the members say and do should always reflect the true and genuine aspirations, needs and will of the grassroots communities. The results of participation should be area action plans which truly reflect the will of the grassroots communities, rather than the selfish needs of community leaders and representatives.

The third tier of participation is the Council Service Committees. This category should have both direct (planners/technocrats) and indirect (councillors representing communities and other stakeholders) participation. Participants must include planners, councillors, other government officials and representatives of other interest groups. The motives of participation for planners should be to make planning, budgeting and funding decisions that advance the best interests of the grassroots communities; to provide technical advice and orientation to councillors to enable them to ably represent their communities; and to provide a conducive environment for councillors to participate fully in all decision-making processes of the service committees. The motives for the councillors should be to represent and amplify the voices of the grassroots communities, to participate in decision-making processes on behalf of the communities, to participate in budgeting and funding allocation, and to play a significant role in the selection of project contractors in the IPC. The conditions for effective participation should include: planners providing good technical advice and adequate orientation to councillors; planners sharing decision-making powers with community representatives; communities from all 15 wards being represented by their councillors, unlike at present, where there are only three councillors in the service committees; and councillors participating fully in the selection of project contractors, to ensure transparency and accountability. The results of participation must show that the final decisions regarding plans and budgets reflect the will of the communities, rather than the will of the technocrats and the councillors.
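As a compact restatement of the tiers described above, the following sketch (an added illustration, not part of the proposed framework itself) records each tier's form, motive, condition and expected result as plain data. The wording is paraphrased from the text, and the fourth tier mentioned in the framework is not detailed in this section, so it is omitted here.

```python
# Illustrative only: the participation tiers described above, paraphrased as
# plain records. The framework names four tiers; only the three detailed in
# this section are encoded.
FRAMEWORK = [
    {
        "tier": "Block level",
        "form": "direct participation by grassroots communities",
        "motive": "identify and select community development projects",
        "condition": "projects genuinely reflect grassroots needs",
        "result": "projects that communities really need",
    },
    {
        "tier": "Neighbourhood and ward committees",
        "form": "indirect participation by elected members",
        "motive": "prepare area action plans reflecting block-level needs",
        "condition": "members reflect the true aspirations of communities",
        "result": "area action plans that reflect the will of communities",
    },
    {
        "tier": "Council Service Committees",
        "form": "direct (planners) and indirect (councillors) participation",
        "motive": "planning, budgeting and funding in communities' interest",
        "condition": "shared decision-making power; all 15 wards represented",
        "result": "final plans and budgets reflect the will of communities",
    },
]

# Print a one-line summary per tier.
for tier in FRAMEWORK:
    print(f"{tier['tier']}: {tier['result']}")
```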
arxiv_physics
2025-12-10T00:00:00Z
https://arxiv.org/pdf/2512.14730
{"title": "Language, participation and inclusivity in the urban planning process in Mzuzu City", "raw_content": "# LANGUAGE, PARTICIPATION AND INCLUSIVITY IN URBAN PLANNING PROCESSES IN MZUZU CITY, MALAWI\n\nFrancis Engwayo Mgawadere<sup>1</sup> and Mtafu Manda<sup>2</sup>\n\n$^{1}$ Dept of Humanities, University of Livingstonia, P.O. Box 37, Laws Campus, Livingstonia, Malawi. Email: fmgawadere@unilia.ac.mw Cell: +265 992014094\n\n$^{2}$ Department of the Built Environment, Mzuzu University, Private Bag 201, Mzuzu, Malawi. Email: manda.ma@mzuni.ac.mw Cell: +265 991457275\n\n# ABSTRACT\n\nParticipation in urban planning is championed for entrenching democracy and development. Malawi passed the Local Government Act (1998) and Decentralization Policy (1998) to facilitate community participation in decision-making processes. Several studies have been conducted on decentralization and local governance on community participation. Little attention has been paid to examining the impact of the language used in planning processes on democracy and inclusivity envisaged in the law and policy. Using communicative action theory, the study examined challenges posed by language used in planning processes on inclusivity in the approval processes of urban plans. Data were collected through semi-structured interviews, focus group discussions, observations and document review and analyzed using thematic and discourse analysis. The findings show that while there is high participation at community planning levels, because planners communicate using local languages, participation is compromised in the service committees at city level where final planning decisions are made due to language barrier. Specifically, lack of sincerity, truthfulness, comprehensibility and therefore legitimacy are apparent. Planners are reluctant to simplify written language and translate planning jargon into local languages for councillors to understand. The study concludes that community participation in the urban planning process in Mzuzu fails to entrench democracy due to lack of inclusiveness owing to the language barrier at city level where final planning decisions are made. The study proposes a framework for inclusive participation in urban planning including the motivation, conditions for effective participation and outcomes of participation.\n\nKey Words: Community participation, inclusivity, local governance, communicative action, urban planning.\n\nCommunity participation in decision-making in the urban planning process has been championed for entrenching democratic ideals and development outcomes. Participation was defined by the United Nations Research Institute for Social Development (UNRISD) as: “the organized efforts to increase control over resources and regulative institutions in given situations, on the part of groups and movements hitherto excluded from such control” (Stiefel & Wolfe, 1994: 5). The Malawi Government passed the Local Government Act (1998 amended 2024) and the National Decentralization Policy (1998 amended 2024) to facilitate community participation in decision-making in planning and development. Several studies have been conducted on decentralization and local governance in relation to community participation. Little attention has been paid to examining the impact of the language used in planning processes on inclusivity to realize the democratic ideals and development outcomes envisaged in the law and policy. 
Using the Habermasian communicative action theory, the study examined the challenges posed by the language used in the planning process on inclusivity in the approval processes of urban development plans. Specifically, the study evaluated the influence of planning language on participatory democracy and inclusiveness, by examining the extent to which Habermas precondition of communication, also known as validity claims, namely: a) comprehensibility, b) sincerity, c) truthfulness and d) legitimacy have been met adequately. The failure to meet Habermas validity claims implies failure to entrench democracy and inclusiveness, because communication is itself a precondition of democracy (Taylor, 1998). The paper is structured as follows: section 2 presents literature review. Section 3 presents methodology. Section 4 presents results and discussion. Section 5 presents conclusion and the proposed framework for inclusive participation in urban planning.\n\nHabermas developed a general theory that provides a platform to critique the contemporary capitalist society, while providing the preconditions for a more democratic society, which later inspired the Habermasian theory of Communicative Action (Taylor, 1998). According to Habermas, if two or more people are to communicate effectively with each other, certain conditions have to be met, which he termed; \"general presuppositions of communication\" (Habermas, 1979: 1). Habermas suggests that, when person A communicates with person B, A\n\nimplicitly assumes or makes four validity claims; first, A assumes that what he is saying is, comprehensible (i.e., understandable) to B. This is obviously a precondition of communication because, if what A is saying is incomprehensible to B, then clearly no communication is taking place between A and B (Habermas, 1979; 1987; Taylor, 1998). Secondly, for A to communicate to B, it must be A, himself who communicates, from which Habermas infers that A must be \"Sincere\" in communicating to B. According to Habermas (1987; 1979), the validity claim of sincerity is that for genuine communication to occur between two persons, the speaker must not deceive the listener. Thirdly, A must communicate 'something' to B, from which Habermas infers that A must assume or make the validity claim that the 'something' he is saying is factually \"True\" (Taylor, 1998: 123). Fourthly, in order for A to communicate to B, A must be seeking to come to an understanding with B. Thus, A must assume that what he is saying is legitimate, within the context of moral norms and conventions shared by both A and B (Habermas, 1979: 2-3). Habermas (1979) argues that the communicative action theory enables us to envisage the four preconditions of communication as: comprehensibility, truthfulness, sincerity, and legitimacy (Taylor, 1998). If these four preconditions cannot be met, then no genuine communication will take place. As communication is itself a preconditions for real democracy, and hence, of any democratic participation in planning, and without genuine communication, there can be no genuine participation in urban planning and decision-making (Taylor, 1998).\n\nThe leading pioneer of the communicative planning theory has been an American, known as John Forester, who has drawn extensively on Habermasian theory as a vehicle for evaluating planning practice in terms of the ideals of good communication and democratic participation (Taylor, 1998). 
In his 1989 book; Planning in the Face of Power, Forester begins from the premises that \"planning is for the people\" and in Western liberal democracies, the planning practice is constrained by the political realities of a capitalist society (p. 3). His aim is to explore the skills that planners need to maximize their effectiveness in planning for people in the face of power (Forester, 1989). He asserts that in order to get things done, planners have to be effective communicators and negotiators, because in planning, talk and argument matter and that the daily business of a planner is basically communicative (Forester, 1989: 5 & 11). He insists that in getting things done, urban planning should aspire to the ideals of democratic decision-making over the development proposals (Forester, 1989). While planners will be negotiating with powerful developers, they should also be active in protecting the interests of all groups in the society,\n\nincluding the less powerful or marginalized communities (Forester, 1989). Drawing from Habermas, Forester emphasizes the duty of planners to facilitate participatory democracy in planning. By emphasizing planner's duty to involve the less powerful groups, by exposing distorted communication and misinformation, Forester sees planning as a communicative process carrying with it, a communicative ethos (Forester, 1989: 22-24).\n\nF \nU \nO \nt \nh \naim number 3: the plans should be righteous - meaning that the plans should be right when \ni According to Dados & Cornell (2012; 13), \"It references an entire history of colonialism, neo- standards, life expectancy and access to resources are maintained; and opens new possibilities in politics and social sciences\". The origin of the prevailing communicative planning theory is the \np \ne However, in his study; 'The Dark Side of Planning: Rationality and Real Rationalitat', in \nA \na \nh \nb \no \nb \ng \ne\n\nA qualitative research approach was employed for this study in order to investigate social interactions and explore meanings that communities ascribed to their participation in the urban planning process. As the philosophical stance of this study is interpretive, the qualitative methods enabled the researcher to explore the multiple constructions of reality from the diverse opinions in order to understand the phenomenon (Bryman, 2004; Myers, 2008).\n\nThe study used the multi-stage cluster sampling, to divide the study population into smaller groups starting from the members at block level, to neighbourhood and ward committees and all the way up to Council Service committees at Mzuzu City Council. Purposive sampling was used to select members of these groups for an in-depth and key informant interviews (KIIs), focus group discussions (FGDs) and observation.\n\nData was collected through semi-structured interviews, FGDs, observations of planning engagements and document review. Qualitative data from community members, such as, block leaders, members of the neighbourhood and ward committees, were extracted through in-depth interviews and focus group discussions (FGDs). Qualitative data from key informants such as councillors, planners and planning officials at Mzuzu City Council, were collected by administering key informant interviews (KIIIs) and observation. 
Qualitative secondary data was also collected from key planning documents such as urban development plan (UDP), urban structure plan (USP) and planning and service committee minutes.\n\nData was analyzed using deductive thematic analysis and discourse analysis. Unlike inductive thematic analysis (a qualitative research methods where themes emerge directly from the data), the researcher used pre-defined themes derived from Habermas validity claims of comprehensibility, sincerity, truthfulness and legitimacy, to confirm or refute them. The researcher also used discourse analysis to analyse planning language within its social context, in terms of how planners speak to communities. By using the discourse analysis, the researcher was able to identify how power dynamics between communities influence meanings of the planning concepts.\n\nThe data enabled the researcher to explore the extent to which planning language influences the entrenchment of democracy and inclusiveness in the decision-making processes in urban planning in Mzuzu City. The study focused on confirming or refuting the pre-defined themes drawn from Habermas validity claims of comprehensibility, sincerity, truthfulness and legitimacy.\n\nThis theme investigated the extent to which planning language is understandable to communities when planners communicate advice, information and knowledge orally and in\n\nwriting. The findings show that the Habermas validity claim of comprehensibility has not been met. There is a higher level of understanding between planners and communities at the community planning level than in the Council Service Committees at City level, because at the community level, planners speak the local language and all communities understand the communication:\n\n\"When the conversation is done in local languages, many people speak a lot of sense...\" (KII-CLLR1/18-11-24). \"We understand them $100\\%$ because they speak to us in local languages\" (KII-CLLR2/14-11-24).\n\nThese quotes reveal that community participation is higher at the block level, neighbourhood and Ward development committee levels. At the community planning level, communities are required to identify, select and prioritize development projects that they need. It is the first level of planning. Final planning and funding decisions not are made here.\n\nHowever, participants said that community representatives (councillors) struggle to follow the deliberations in the council service committees at City level, due to language barrier:\n\n...the language is English ...councilors do not understand the language... using jargons which are not of their area. Attempts are there to orient them on the operations, they know what to put, but you find that by the end of the day the language still needs to be simplified for them (KII-P1/14-10-24).\n\nNot all of us [councillors] understand English. Most of the information about planning and development is spoken in English. So, most people do not understand... The deliberations in all service committees are conducted in English (KII-CLLR1/18-11-24).\n\nThe extracts reveal that the level of understanding is low because councillors who represent communities in the Council Service Committees fail to follow the deliberations because many of them do not fully comprehend English. The problem gets worse when technocrats use technical language, planning and financial jargons. 
The study revealed evidence of reluctance to simplify spoken language for community representatives' ease of understanding:\n\n\"And that's why, I have a fight with the Secretariat, because... [They] don't want a learned person .... They want these councilors with lower education so that when they present the reports, councilors do not understand....\" (KII-P4/22-11-24).\n\nThese findings do not agree with Habermas (1979; 1987) who stresses the need for planning language to be understandable to hearers. The failure of councillors to understand and follow the deliberations in the Physical Planning, Finance and other committees implies that grassroots communities are not participating and certainly not being included in the decision-making processes that lead to final decisions. Evidence from the minutes of the Physical Planning and Finance Committees, also indicate that councillors are just passive listeners.\n\nFurther, the study also found that key planning documents like the urban development plans (UDP) and the urban structure plans (USP) have not been translated into the local languages for the majority to understand. There was a perceived reluctance to translate technical language, planning jargons and documents into the local language:\n\n\"...during community outreach programmes, we present ourselves in local languages... but to date the Five Year-Urban Development Plan has not yet been translated... [Of course] translating the whole document would not be meaningful for me because not every information... is relevant to a local person\" (KII-P3/29-10-24).\n\nHowever, communicative planning requires planners to use an easy-to-understand language in planning texts, to simplify planning and technical jargons and even translate them into vernacular languages. For instance, Forester, (1989: 149) advises planners to present their ideas and information in a manner that is less obscure, comprehensible or easy to understand. Habermas (1979; 1987) emphasizes the need for language to always meet the validity claim of comprehensibility, meaning that information being presented by speakers (planners) should be understandable to hearers (communities). The failure of communities to understand what planners are communicating implies that communities are manipulated. It is in the Council Service Committee meetings that crucial planning and funding decisions are made before being forwarded to the Full Council for approval. The fact that inputs from communities are lacking means that the policies and city by-laws do not reflect the will of the majority or the common people. Thus, planning is not democratic and inclusive in Mzuzu City.\n\nThe results of this investigation do not align with the Habermas (1979; 1987) validity claim of comprehensibility. According to Habermas (1979), if A communicates to B, he must assume that what A is saying is comprehensible to B. However, in this study, what A (planners) were saying could only be understood by B (communities) at the block and ward level, where communities only identify, select and prioritize development projects, because the local language is used for communication. 
However, when the prioritized projects are sent to the service committees, councillors (B) fail to participate effectively in the decision-making process about planning and funding allocation, due to the language barrier, as A uses English that is full of technical and planning jargon and the planning texts are not translated into the local language for communities to understand easily.

These two overlapping and similar themes, sincerity and truthfulness, investigated the extent to which planners' oral and written communication is honest, less deceptive and truthful, in order to determine whether planning language enhances participation and inclusivity in the decision-making process in urban planning.

First, the validity claim of sincerity investigated levels of honesty or deception in planners' communication. The findings indicate that while the language planners use to communicate with communities appears superficially honest and less deceptive during planning meetings, many interviewees complained that planners are deceptive and dishonest during project implementation:

They sound sincere ...but deceptions arise during budgeting and resource allocation. Selection and identification of contractors is done by themselves. Councillors are not involved... the Internal Procurement Committee (IPC) sits down to discuss bids on their own. There are no community representatives in the IPC... This is where deceit comes in because they select a contractor who promises kickbacks (KII-CLLR2/14-11-24).

They speak to us very sincerely during planning meetings. The problems arise during implementation. This is when I think they indulge in fishy businesses. The process gets messed up during the identification and awarding of contracts (KII-CLLR1/18-11-24).

No. they are not sincere. There is a kind of deception. Mostly about $30\%$ of the language is deceitful. Largely, deceit comes in so that they can easily convince and walk through. They use deception to advance their own ulterior motives (KII-P1/14-10-24).

These extracts reveal how deceitful planners are. There is no transparency and accountability when it comes to selecting project contractors. This is done in the Internal Procurement Committee (IPC) in the absence of community representatives. So, while planners sound very sincere during planning meetings, they afterwards implement things that are contrary to what they communicated publicly. Further, the study revealed evidence of reluctance on the part of the Secretariat to simplify spoken language to enable councillors in service committees to understand financial reports and statements when they are presented. In an interview, the City Mayor revealed that he sometimes fights with the Secretariat for resisting concerns about councillors' failure to follow deliberations in the service committees because their level of education is low. He said that the Secretariat is aware that councillors do not understand presentations of the financial reports and statements and that the use of technical language and planning jargon is a deliberate ploy to conceal crucial information from public scrutiny. Also, key planning documents such as the Urban Development Plan (UDP) and the Urban Structure Plan (USP) are professionally written and show no evidence of insincerity and untruthfulness, but problems arise during implementation.
This is when what has been communicated in line with the Habermas criteria gets distorted during implementation.

Second, the theme of truthfulness investigated the extent to which planners' oral and written communication is factually accurate or truthful. Participants revealed that planners' language is generally truthful when they speak during planning meetings and in planning texts, but there are instances when planners cheat communities:

... [planners] always sound very truthful when they speak to us... but the problems ...arise during budgeting, resource allocation ... [when] members of the Internal Procurement Committee sit down to discuss bids, there is no community representative ....what is implemented is different from what they agreed with people...." (KII-CLLR2/14-11-24)

Decentralization is just a myth. The real powers are still at the Council level... Usually if it is coming with resources they withhold information from the grassroots communities because they would want to hide financial resources. They don't want to diverge more information to communities because it will make it so difficult to play fishy businesses. The Council gets more meat and give bones to the communities (KII-P4/22-11-24).

These results are consistent with Healey's (1995: 259) assertion that verbal agreements reached according to the Habermas validity claims can still be distorted in writing by planners in their offices. Flyvbjerg (1996: 392) questions the idea of viewing 'planners as noble individuals' due to their failure to 'speak truthfully'. As further proof of the lack of sincerity and truthfulness, the researcher was not allowed to observe deliberations in the Council Service Committees, including the Physical Planning Committee, the Finance Committee and the Internal Procurement Committee, despite prior approval of the letter requesting consent and repeated requests to observe these committees.

The lack of both sincerity and truthfulness impedes a genuine flow of communication, which prevents communities from participating in the decision-making process and thus compromises the goal of entrenching democracy and inclusive urban planning as envisaged in the Local Government Act (1998 amended 2024) and the Malawi decentralization policy (UNDP, 2000; Malawi Government, 1998). This is also contrary to Habermas's (1979; 1987) emphasis that language should always meet the validity claim of comprehensibility, meaning that information presented by speakers should be understandable to hearers, the validity claim of sincerity, which requires higher levels of honesty, and the validity claim of truthfulness, which demands higher levels of factual accuracy. This apparent lack of honesty and truthfulness prevents inclusive community participation in decision-making processes over budgeting, resource allocation, the determination of planning applications for development permission and the selection of project contractors in the IPC. It also means that the final budgeting and funding decisions are devoid of inputs from the grassroots communities, thereby rendering the planning process less participatory and not inclusive.

On the one hand, the findings agree with Flyvbjerg's (1996) study entitled 'The Dark Side of Planning: Rationality and Real Rationalitat', conducted in Aalborg, Denmark, a European country in the Global North.
Flyvbjerg (1996) found the idea of viewing planners as noble creatures to be a myth: that planners are unethical crooks, liars, deceivers and corrupt professionals, that the reality in which planning takes place is false, cruel, contradictory and seductive, and that planners, as human beings, need lies to survive (Flyvbjerg, 1996: 391). He insists that most observers would agree that deception is part and parcel of many everyday decisions in government, because the social incentives for deception are at present very powerful while controls are often weak, and that the incentives for planners to deceive others are strong (Flyvbjerg, 1996: 392). He concludes that the idea that planners are noble individuals with good manners is a plain lie of the planning theorists (Flyvbjerg, 1996: 391).

On the other hand, these results do not align with, and indeed run counter to, Henri Lefebvre's (1996) influential ideas about the 'right to the city', in which he states that cities should be understood as common goods benefiting all residents, rather than just the rich and powerful oligarchs, and that cities should be inclusive, where every street, every building and every corner belongs to the people who live there, not just the rich, the planners and the entitled, but all of us. He criticized technocratic planning, arguing that the common people should have the power to shape the cities they live in and that cities are not just spaces for power and control, but living spaces that we create together, yet too often they are controlled by capitalist oligarchs, profit-hungry developers and apathetic governments (Lefebvre, 1996).

This theme investigated the extent to which planners' oral and written communication meets the Habermas (1979; 1987) validity claim of legitimacy. Legitimacy is concerned with how communication complies with normative values and conventions (Forester, 1993).

The findings from a review of the planning language in the three key planning texts for Mzuzu City, namely the urban structure plan (USP), the urban development plan (UDP) and the urban socio-economic profile (SEP), indicate that communication complies with the normative values, conventions and laws outlined in the legal provisions of the Constitution of the Republic of Malawi (1995), the Local Government Act (1998 amended 2024), the National Decentralization Policy (1998 amended 2024) and the Town and Country Planning Act (1988) (Mzuzu City Council, 2023 – 2030). The language in the three key planning documents also complies with the provisions of the Constitution of the Republic of Malawi of 1995, which requires the full participation of grassroots communities in decision-making processes in order to entrench democracy in Malawi. The language also emphasizes the decentralization of powers from central to local authorities and from the top half to the lower half of the Council:

The Grassroots Participation Process (GPP) is the bottom-up process, which involves consultation with the communities that aims to gather information on their needs. A GPP task force is formed whose members work in collaboration with Ward Development Committee; block leaders perform the GPP (NAP process). The output of this is the prioritized list of projects. The resultant output of the urban socio-economic profile and the GPP is the formulation of the Urban Development Planning Framework, which highlights major issues, potentials and development objectives and strategies.
The framework forms the basis for the formulation of projects and programmes (Mzuzu City (UDP), 2023-2030: p. VIII).

The excerpt underscores that community participation is central to the urban planning process in Mzuzu City. The urban profile document assessed the current situation and identified the available developmental potential for the Council and, using these findings, planners developed the urban development strategy, programmes and projects to be implemented in the next 10 to 15 years. The formulation of the UDP and USP was based on the findings of a process of dialogue, inclusive democracy and discourse with an equal distribution of power for argumentation with communities. Thus, the planning texts are legitimate because they are based on the outcome of the communicative process and comply with the legal provisions of Malawi.

A review of the key planning texts reveals that written communication complies with the Habermas (1979) validity claim of legitimacy, which requires that the language planners use to communicate planning ideas, knowledge and information conform to normative values, moral conventions and legislation. The three planning texts were written professionally and comply with the legal provisions outlined in the Malawi Constitution, the Local Government Act and the national decentralization policy documents outlined above. All three key planning documents emphasize participation and representation of the grassroots communities as a way of entrenching participatory democracy and inclusive urban planning.

However, in spite of this compliance, participants lamented that problems still arise in the service committees and in the IPC at City level, where final decisions are made in the absence of community representatives, as alluded to earlier, and during the implementation phase. The study revealed that while planners may communicate orally that a particular area has been designated as a residential area, in conformity with what is indicated in the urban structure plan (USP), during implementation the area designated as residential turns out to be a mixed-use zone of residential and commercial structures, and a high-density area becomes a mixed zone of high-, medium- and low-density housing structures:

The structure plan – they approved it, to say, by law this is zoned for residential and the like, but when it comes to implementation, it becomes different – a mixture of residential, commercial and etc... So the documentation itself is legitimate, but here, you are talking of oral – they are communicating orally legitimate things, but when implementing, it's different - it's illegitimate. Thus, legitimacy is really there. They communicate according to the law – but when it comes to implementation, this legitimacy ceases (KII-P1/14-10-24).

This tendency implies that the actual physical infrastructure development is not in line with what is indicated in the Urban Structure Plan (USP) and the Urban Development Plan (UDP), resulting in a kind of development that reflects the ulterior motives of the rich and powerful oligarchs instead of the will of the majority of Mzuzu City residents.
This also means that the development of the City is illegitimate, because it fails to comply with the legitimate urban plans which were formulated in compliance with the legal provisions that emphasize participation and representation of communities in planning and decision-making.

These findings are consistent with Healey (1993), who reveals that verbal agreements reached according to the Habermas (1979; 1987) validity claims can still be distorted in writing by planners in their offices. Thus, in theory, planners communicate in accordance with the provisions of the laws, which require the participation and representation of communities in all decision-making processes and the decentralization of powers to grassroots communities (Malawi Government, 1995; 1998; UNDP, 2000). However, in practice, all decisions are made by technocrats in service committees at City level and planners resist delegating these powers to community planning committees despite the decentralization policy requiring them to do so. The fact that councillors who represent their electorate are passive listeners in Council service committees implies that their communities are not actively participating in and influencing planning decisions. They do not actively participate in the deliberations that lead to final planning decisions. They are thus placated and manipulated.

On the one hand, these findings are consistent with those of Yiftachel (1998). In his article 'Planning and Social Control: Exploring the Dark Side', Yiftachel (1998) argues that urban planning, despite its potential for positive change, has a hidden dark side, where it functions as a tool for social control and oppression, especially for marginalized groups. Influenced by Michel Foucault, Yiftachel (1998) highlights how planners can reinforce existing power structures by manipulating space and socioeconomic conditions to benefit certain groups while excluding others. In the same vein, the results reveal a hidden dark side, where planners exclude community representatives from the decision-making processes about planning, budgeting, resource allocation and the selection of project contractors.

On the other hand, the findings are contrary to Habermasian theories of communicative rationality and action. Habermas asserts that if two or more people are to communicate effectively, certain conditions have to be met (Habermas, 1979: 1). One of these preconditions is legitimacy. According to Habermas, if A communicates to B, A should assume that what A is saying is legitimate, that is, that it complies with moral norms, conventions and laws (Habermas, 1979: 1; Taylor, 1998: 123). The failure to meet the validity claim of legitimacy implies that no genuine communication is taking place, and thus a lack of community participation in urban planning. As genuine communication is a precondition for participatory democracy, its absence means the absence of participatory democracy and inclusivity in urban planning.

The limitation of the findings is that the researcher was not granted access to observe deliberations in service committee meetings. The researcher concludes that this refusal was itself further proof of the lack of sincerity and truthfulness, since access had been consented to earlier on (see the Request for Consent Letter in the Appendix Section).
Nevertheless, the researcher managed to access the minutes of previous service committee meetings and was able to draw conclusions.

# Conclusion

The study concludes that planning language in the decision-making process in urban planning in Mzuzu City fails to enhance community participation and inclusivity, owing to the language barrier at City level where final planning decisions are made, resulting in a failure to entrench participatory democracy. The study found that the Habermas (1979; 1987) validity claims have not been met because planning language is incomprehensible, insincere, untruthful and illegitimate, thereby compromising participation and inclusive urban planning in Mzuzu City.

First, the study revealed that the validity claim of comprehensibility was met at the block and ward level, where community participation is high because planners use the local language to communicate; it was not met in the service committees at City level, where participation was compromised because planners use English, fraught with technical language and planning jargon, as the official language of communication. Second, the study revealed that the validity claim of sincerity was not met because the levels of planners' deception, which impede a genuine flow of communication in service committees at City level, were high. While planners speak and write in a manner that appears to meet the Habermas (1979; 1987) validity claim of sincerity, participants revealed episodes of deception and dishonesty during budgeting, resource allocation and the selection of project contractors in the absence of community representatives. Third, the study unveiled that the validity claim of truthfulness was not met because the levels of cheating and factual inaccuracy in planners' oral and written communication, which impede a genuine flow of planning ideas, knowledge and information, were high. While on the surface planners' oral and written communication sounds as though it meets the Habermas validity claim of truthfulness, participants narrated episodes of lies, cheating and inaccuracies that arise in the service committees at City level, especially during budgeting, resource allocation, the determination of planning applications for development permission and the writing of the certificate of escalation.

Fourth, while both the spoken and written language during planning meetings and in the UDP and USP documents meets the Habermas (1979; 1987) validity claim of legitimacy, in that it complies with the legal provisions outlined in the Malawi Constitution (1995), the Local Government Act (1998 amended 2024) and the National Decentralization Policy (1998 amended 2024), participants complained that problems arise in the service committees and during the implementation phase. They revealed that what is legitimately communicated is not what usually gets implemented. It was also found that although the key planning texts (UDP and USP) are sincerely, truthfully and legitimately written, these documents have not been translated into the local language for everyone to read.
Many communities are not aware of their existence.

Therefore, planning language fails to enhance community participation and inclusivity in the decision-making process in urban planning, owing to the language barrier at City level where final planning decisions are made, resulting in a failure to entrench participatory democracy.

# Framework for Inclusive Participation in Urban Planning

In order to realize the intentions of the policy and law in local governance, a framework that enables easy communication and understanding is proposed. Habermas (1979; 1987) requires the meeting of the validity claims of comprehensibility, sincerity, truthfulness and legitimacy. As communication is itself a precondition for democracy, poor communication implies a lack of participation in the democratic decision-making process.

Therefore, to achieve full participation for effective inclusion in planning processes, certain conditions have to be met in Mzuzu City. These include the motivations and conditions for inclusive participation for the purposes of realizing the Sustainable Development Goals (SDG 11) as well as regional (Africa 2063) and national aspirations (MW2063). The study proposes a framework for inclusive community participation in planning. This framework has four tiers of community participation in urban planning.

The first tier begins at the block level, the first stage in the planning process, where grassroots communities participate directly. The motive of participation is that grassroots communities should identify and select community development projects and send them to the Neighbourhood Committees. The condition for inclusive participation should be that block leaders ensure that the identified projects truly and genuinely reflect the needs of the grassroots communities, rather than the ulterior motives of the community leaders. The result of participation of the grassroots communities must be projects that communities really need.

The second tier of participation is the community planning committees, split into two: Neighbourhood and Ward committees. The form of participation in this tier is indirect participation by elected members who participate on behalf of their people. The motive of participation must be to prepare area action plans which reflect the needs of communities at block level. The conditions for inclusive community participation include that whatever the members say and do should always reflect the true and genuine aspirations, needs and will of the grassroots communities. The results of participation should be area action plans which truly reflect the will of the grassroots communities, rather than the selfish needs of the community leaders and representatives.

The third tier of participation is the Council Service Committees. This category should have both direct (planners/technocrats) and indirect (councillors representing communities and other stakeholders) participation. Participants must include planners, councillors, other government officials and representatives of other interest groups.
The motives of participation for planners should be to make planning, budgeting and funding decisions that advance the best interests of the grassroots communities, to provide technical advice and orientation to councillors to enable them to ably represent their communities, and to provide a conducive environment for councillors to participate fully in all decision-making processes of the service committees. The motives for the councillors should be to represent and amplify the voices of the grassroots communities, to participate in the decision-making processes on behalf of the communities, to participate in budgeting and funding allocation and to play a significant role in the selection of project contractors in the IPC. The conditions for effective participation should include planners providing good technical advice and adequate orientation to councillors; planners sharing decision-making powers with community representatives; communities from all 15 wards being represented by their councillors, unlike the current situation where there are only three councillors in the service committees; and councillors participating fully in the selection of project contractors, to ensure transparency and accountability. The results of participation must be final decisions regarding plans and budgets that reflect the will of the communities, rather than the will of the technocrats and the councillors.

# 6.0 References

Bryman, A. (2004). Social Science Research Methods (2nd ed.). Oxford University Press.
Chasukwa, M., Chiweza, A. L., & Chikapa-Jamali, M. (2014). Public Participation in Local Councils in Malawi in the Absence of Local Elected Representatives: Political Eliticism or Pluralism? Journal of Asian and African Studies, 49(6), 705-720.
Dados, N., & Connell, R. (2012). The Global South. Contexts, 11(1), 12-13.
Flyvbjerg, B. (1996). The dark side of planning: rationality and 'real rationalitat'. University of Oxford Press.
Forester, J. (1982). Planning in the Face of Power. Journal of the American Planning Association, 48(1), 67-80. https://doi.org/10.1080/01944368208976167
Forester, J. (1989). Planning in the Face of Power. Berkeley, CA: University of California Press.
Forester, J. (1993). Critical Theory, Public Policy, and Planning Practice: Toward a Critical Pragmatism. State University of New York Press.
Habermas, J. (1979). What is universal pragmatics? Communication and the Evolution of Society, 1, 2-4.
Habermas, J. (1987). The Theory of Communicative Action. Vol. 2: Lifeworld and System: A Critique of Functionalist Reason. Boston, MA: Beacon Press.
Healey, P. (1996). The communicative turn in planning theory and its implications for spatial strategy formation. Environment and Planning B: Planning and Design, 23(2), 217-234.
Healey, P. (1997). Collaborative planning in a stakeholder society. Town Planning Review, 69(1), 1.
Hussein, M. K. (2003). The role of Malawian local government in community development. Development Southern Africa, 20(2), 271-282.
Innes, J. E. (1995). Planning theory's emerging paradigm: Communicative action and interactive practice. Journal of Planning Education and Research, 14(3), 183-189.
Miraftab, F. (2004). Invited and Invented Spaces of Participation: Neoliberal Citizenship and Feminists' Expanded Notion of Politics. Wagadu: A Journal of Transnational Women's & Gender Studies, 1(1), 3.
Miraftab, F. (2009). Insurgent planning: Situating radical planning in the global south. Planning Theory, 8(1), 32-50.
Myers, M. D. (2008). Qualitative Research in Business & Management. Sage Publications, Thousand Oaks. Retrieved from https://research-methodology.net/research-philosophy/interpretivism/#_ftn1
Mzuzu City Council (2015 - 2030). Urban Structure Plan. Mzuzu.
Mzuzu City Council (2022 - 2030). Mzuzu Urban Profile. Mzuzu.
Mzuzu City Council (2023 - 2030). Urban Development Plan. Mzuzu.
Sager, T. (1994). Communicative Planning Theory.
Sandercock, L. (Ed.). (1998). Making the Invisible Visible: A Multicultural Planning History (Vol. 2). University of California Press.
Tambulasi, R. I. (2009). Decentralization as a breeding ground for conflicts: An analysis of institutional conflicts in Malawi's decentralized system. JOAAG, 4(2).
Tambulasi, R. I. (2011). Local government without governance: A new institutional perspective of local governance policy paralysis in Malawi. Public Policy and Administration, 26(3), 333-352.
Taylor, N. (1998). Urban Planning Theory since 1945.
The Local Government Act (1998 amended 2024). Lilongwe, Malawi.
The National Decentralization Policy (1998 amended 2024). Lilongwe, Malawi.
Watson, V. (2011). Inclusive urban planning for the working poor: Planning education trends and potential shifts. WIEGO Working Paper 21.
Watson, V. (2013). Planning and the 'stubborn realities' of global south-east cities: Some emerging ideas. Planning Theory, 12(1), 81-100.
Watson, V. (2016). Shifting approaches to planning theory: Global North and South. Urban Planning, 1(4), 32-41.
Yassin, I. (2022). Community Participation in Local Governance in Malawi: A Case of Blantyre District. International Research Journal of Modernization in Engineering Technology and Science, 4(5).
Yiftachel, O. (2009). Critical theory and 'gray space': Mobilization of the colonized. City, 13(2-3), 246-263.
# Evolving the Loeb Scale

Abstract

We develop a differential formulation of the Loeb Scale that extends the original static framework into a radially evolving, real-time classification scheme for interstellar objects. By promoting each anomaly metric to a function of heliocentric distance and introducing a relaxation equation for the effective score, our method incorporates memory, hysteresis and predictive capability. This allows us to have early, stable forecasts of an object's eventual Loeb level based on sparse data obtained at large distances, which is more helpful to quantify its true nature when near Earth.

# 1 Introduction

Over the past decade, the discovery of interstellar objects (ISOs) has transformed our understanding of the diversity of bodies that traverse the Solar System. The identification of 1I/'Oumuamua in 2017, 2I/Borisov in 2019 and most recently 3I/ATLAS in 2025 has opened an entirely new window into the study of extrasolar planetesimals. While 2I/Borisov behaved as a conventional comet, the anomalous characteristics of 1I/'Oumuamua and 3I/ATLAS show the possibility that the Solar System may occasionally be visited by objects that deviate significantly from the physical and dynamical properties of familiar comets and asteroids. With the imminent operations of the Vera C. Rubin Observatory, detection rates of ISOs are expected to rise by one to two orders of magnitude, making it imperative to develop quantitative tools that can rapidly assess the nature of newly discovered objects and discriminate between ordinary interstellar debris and bodies exhibiting potentially technological signatures. As the catalog of ISOs grows, so too does the need to evaluate not only their scientific significance but also the extent to which they may pose a hazard to Earth.

Motivated by this challenge, the Loeb Scale was introduced as a structured ten-level classification scheme that ranks objects according to the degree of anomaly they exhibit relative to natural icy rocks. Much like the Kardashev scale provides us with a classification for the energy capacities of civilizations, the Loeb Scale offers a unified language for characterizing potential interstellar artifacts. It considers a wide variety of objects, ranging from objects entirely consistent with natural origins (Level 0) to those whose behavior may indicate artificial construction or even constitute a technological threat (Levels 8-10). While this framework has provided an essential conceptual foundation, the increasing pace of ISO discoveries demands methods that can evaluate their Loeb classification continuously as new observations accumulate during their passage through the Solar System.

A full mathematical formulation of the Loeb Scale was established in Ref., providing a quantitative mapping from observed anomalies to a continuous score and subsequent discrete level assignment. One key observation though is that this formulation remains fundamentally static, depending on measurements obtained at a single epoch and offering no means to incorporate the evolving physical and dynamical characteristics of an ISO as it approaches the inner Solar System. Because most ISOs are detected at large heliocentric distances where observational uncertainties are substantial, a static evaluation may poorly reflect the eventual classification once richer datasets become available near Earth.
This motivates the development of a differential, radially evolving version of the Loeb Scale that updates continuously with incoming data, naturally incorporates memory of sustained anomalies and forecasts the likely Loeb level by the time the object reaches Earth's vicinity. This is what we aim to achieve in this work and we organize it as follows. In section 2 we review the mathematical structure of the Loeb Scale and in section 3 we introduce the differential evolution equation governing its radial behavior. In section 4 we discuss caveats and limitations of this formulation and in section 5 we summarize our conclusions.

# 2 Mathematical Foundations of the Loeb Scale

In order to formulate a differential generalization of the Loeb scale, it is useful to summarize the mathematical framework of the scale itself as developed in our previous work. The Loeb scale is a ten-level classification scheme for the technosignature significance of interstellar objects, ranging from fully natural bodies at level 0 to confirmed existential threats at level 10. The purpose of the scale is to provide a reproducible and quantitative mapping from observational anomalies to a well defined integer level that reflects both the physical character of the object and its potential technological implications.

To achieve this, one begins by defining a set of normalized anomaly metrics that encode the degree to which a given observable departs from expectations for natural Solar System populations. Note that here each metric is constructed from raw measurements and is transformed into a normalized variable $m_i \in [0, 1]$, where $m_i = 0$ indicates full consistency with natural behavior and $m_i = 1$ represents a maximally anomalous, technologically suggestive or extreme value. The metrics include the non-gravitational acceleration anomaly $A$, spectral or compositional anomaly $B$, shape or lightcurve anomaly $C$, albedo or surface-weathering anomaly $D$, trajectory or targeting improbability $E$, electromagnetic signal significance $F$, operational or behavioral indicators $G$ and optionally an impact-risk factor $H$ for differentiating between upper levels. Each metric is computed from a raw observable and mapped into the normalized range via monotonic transforms and calibrated clamping functions.

To begin, we briefly summarize the various metrics considered, starting with the non-gravitational acceleration anomaly, which begins with the raw value $$ A_{\mathrm{raw}} = \log_{10}\left(\frac{a_{\mathrm{obs}}}{a_{\mathrm{ref}}}\right) \tag{1} $$ where $a_{\mathrm{obs}}$ denotes the measured non-gravitational acceleration and $a_{\mathrm{ref}}$ is a reference value chosen to represent nominal cometary behavior, and the raw quantity is mapped into $[0, 1]$ through $$ A = \operatorname{clamp}\left(\frac{A_{\mathrm{raw}} + 2}{4}, 0, 1\right) \tag{2} $$ where the constants shift and scale the logarithmic range so that typical cometary accelerations yield values near $A \approx 0.5$. The spectral anomaly metric $B$ compares observed spectra, gas production rates and line ratios to empirical population distributions of cometary species.
If $\chi_{\mathrm{mismatch}}^2$ denotes a measure of deviation between the observed spectrum and the best-fit natural template, then one may define the continuous mapping $$ B = \operatorname{clamp}\left(\frac{\chi_{\mathrm{mismatch}}^{2}}{\chi_{\mathrm{mismatch}}^{2} + K_{B}}, 0, 1\right) \tag{3} $$ where $K_{B}$ is a tunable constant controlling the sensitivity. A more refined construction uses the population percentile of each measured quantity, which can be defined as $$ s_{x} = 1 - 2\min\left(F_{\mathrm{pop}}(x_{\star}), 1 - F_{\mathrm{pop}}(x_{\star})\right) \tag{4} $$ where we note that $F_{\mathrm{pop}}$ is the cumulative distribution function for the relevant cometary dataset. For censored measurements with upper limits, one replaces $F_{\mathrm{pop}}$ by the corresponding survival function. Multiple indicators are used here, which are combined as a weighted sum of their rarity scores and mapped to $[0, 1]$ by a rational transform of the form $$ B = \operatorname{clamp}\left(\frac{\sum_{k}\alpha_{k} s_{x_{k}}}{\sum_{k}\alpha_{k} s_{x_{k}} + K_{B}}, 0, 1\right) \tag{5} $$

The shape anomaly $C$ is derived from the inferred aspect ratio $R$ of the body via $$ C = \operatorname{clamp}\left(\frac{\log_{10}(R)}{\log_{10}(R_{\max})}, 0, 1\right) \tag{6} $$ where $R_{\max}$ is a maximum reference ratio chosen to encapsulate the upper tail of plausible natural shapes.

The albedo anomaly $D$ is constructed relative to the two-Rayleigh mixture distribution that describes the empirical albedo distribution of small Solar System bodies. If the mixture probability density is denoted by $$ p_{2R}(p_{V}) = f_{D}\,\operatorname{Ray}(p_{V}; d) + (1 - f_{D})\,\operatorname{Ray}(p_{V}; b) \tag{7} $$ with the Rayleigh components $$ \operatorname{Ray}(x; \sigma) = \frac{x}{\sigma^{2}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right) \tag{8} $$ and empirically fitted parameters $f_{D}$, $d$, and $b$, then the rarity of an observed albedo $p_V^\star$ is quantified through the two-sided tail probability $$ p_{\mathrm{tail}}(p_{V}^{\star}) = \min\left(\int_{0}^{p_{V}^{\star}} p_{2R}(p_{V})\,dp_{V},\ \int_{p_{V}^{\star}}^{\infty} p_{2R}(p_{V})\,dp_{V}\right) \tag{9} $$ and the normalized albedo anomaly is $$ s_{\mathrm{albedo}} = 1 - 2\,p_{\mathrm{tail}}(p_{V}^{\star}) \tag{10} $$ This is then converted to $D$ by $$ D = \operatorname{clamp}\left(\frac{s_{\mathrm{albedo}}}{s_{\mathrm{albedo}} + K_{D}}, 0, 1\right) \tag{11} $$

The trajectory anomaly $E$ is based on the improbability of the arrival geometry under an isotropic flux of incoming interstellar objects and, if $p$ denotes this probability, one defines $$ E = \operatorname{clamp}\left(\frac{-\log_{10}(p)}{X}, 0, 1\right) \tag{12} $$ where $X$ is a scaling parameter that tunes the sensitivity of the metric to rare arrival trajectories. One can also define the electromagnetic signal score $F$ and operational behavior metric $G$, which are constructed from monotonic transforms of narrowband signal-to-noise ratios, modulation properties, maneuvering residuals or sub-object detections.
Each of these is mapped into the unit interval using logistic or rational functions, and the impact-risk factor $H$ is defined in terms of impact probability and kinetic energy, normalized so that objects posing negligible risk satisfy $H \approx 0$ and impactors with catastrophic energy yield $H \approx 1$.

Having defined all metrics $m_i$, the Loeb scale consolidates them into a single composite anomaly score $S \in [0, 1]$ by combining linear contributions and pairwise synergies. The linear contribution is given as $$ S_{\mathrm{lin}} = \sum_{i} w_{i} m_{i} \tag{13} $$ where the weights $w_{i}$ satisfy $w_{i} \geq 0$ and $\sum_{i} w_{i} = 1$, and to capture the fact that distinct anomalies reinforce each other, one includes interaction terms $$ S = \sum_{i} w_{i} m_{i} + \sum_{i < j} w_{ij} m_{i} m_{j} \tag{14} $$ where $w_{ij}$ are small, tunable coefficients restricted to physically motivated pairs. Note that this composite score increases more strongly when multiple independent anomalies co-occur, reflecting heightened suspicion. The composite score is mapped to the integer Loeb levels via calibrated thresholds where, for example, one may assign level 0 for $S < 0.20$, level 1 for $0.20 \leq S < 0.35$, level 2 for $0.35 \leq S < 0.50$ and so forth, with the critical threshold for formal technosignature consideration placed at $S \approx 0.60$, corresponding to level 4. At the upper end, scores $S \geq 0.995$ correspond to level 10, which indicates a confirmed artificial object on an Earth-impact trajectory with globally catastrophic consequences.

Uncertainty propagation follows directly from the measurement uncertainties of each metric. In a first-order approximation, if $\sigma_{m_i}$ denotes the error of metric $m_i$, then the variance of $S$ is $$ \sigma_{S}^{2} \approx \sum_{i}\left(w_{i} + \sum_{j \neq i} w_{ij} m_{j}\right)^{2}\sigma_{m_{i}}^{2} \tag{15} $$ although a full Monte Carlo propagation of the metric distributions is recommended for robust communication.

By treating the metrics as functions of heliocentric distance and introducing a dynamical evolution equation for the effective score, one can extend the static mapping into a continuous, real-time classification scheme that evolves with observational data. The remainder of this work develops such a differential extension by promoting the composite score $S$ to a radially evolving quantity and then coupling it to parametric models for the radial dependence of each anomaly metric.

# 3 Evolving the Loeb Scale

In order to develop a dynamical formulation of the Loeb scale that updates continuously as observational data evolve, it is natural to promote the composite anomaly score into a radially dependent quantity. The instantaneous Loeb score already possesses an explicit mathematical definition in terms of the anomaly metrics evaluated at a single epoch and, if $m_{i}(r)$ denotes the normalized value of metric $i$ at heliocentric distance $r$, then the instantaneous score is given by $$ S_{\mathrm{inst}}(r) = \sum_{i} w_{i} m_{i}(r) + \sum_{i < j} w_{ij} m_{i}(r) m_{j}(r) $$ where the constants $w_{i}$ and $w_{ij}$ denote the linear and pairwise interaction weights introduced earlier.
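To make the scoring concrete, the following is a minimal Python sketch of the composite score of Eq. (14) (equivalently $S_{\mathrm{inst}}$ evaluated at a single epoch) together with its mapping to an integer level. The specific weights, synergy pairs and the thresholds above level 4 are illustrative placeholders rather than calibrated values; only the boundaries quoted above (0.20, 0.35, 0.50, roughly 0.60 for level 4 and 0.995 for level 10) come from the text.

```python
# Illustrative sketch of the composite Loeb score (Eqs. 13-14) and level mapping.
# Weights, synergy pairs and the intermediate thresholds are placeholder values;
# only 0.20, 0.35, 0.50, ~0.60 and 0.995 are quoted in the text.

def composite_score(m, w, w_pair):
    """m, w: dicts of metric values/weights; w_pair: {(i, j): w_ij} synergy terms."""
    s_lin = sum(w[k] * m[k] for k in w)                            # Eq. (13)
    s_syn = sum(wij * m[i] * m[j] for (i, j), wij in w_pair.items())
    return min(s_lin + s_syn, 1.0)                                 # Eq. (14), capped at 1

def loeb_level(score, thresholds=(0.20, 0.35, 0.50, 0.60, 0.70, 0.80,
                                  0.88, 0.94, 0.98, 0.995)):
    """Map the composite score to an integer level 0-10 via calibrated thresholds."""
    return sum(score >= t for t in thresholds)

# Example with hypothetical measurements of a mildly anomalous object.
metrics = {"A": 0.6, "B": 0.3, "C": 0.7, "D": 0.2, "E": 0.4, "F": 0.0, "G": 0.0}
weights = {"A": 0.20, "B": 0.15, "C": 0.15, "D": 0.10, "E": 0.15, "F": 0.15, "G": 0.10}
synergies = {("A", "C"): 0.05, ("A", "E"): 0.05, ("F", "G"): 0.10}  # physically motivated pairs

S = composite_score(metrics, weights, synergies)
print(f"S = {S:.3f} -> Loeb level {loeb_level(S)}")
```

With these placeholder numbers the example evaluates to $S \approx 0.38$, which falls in the quoted level-2 band, illustrating how co-occurring anomalies push the score above the purely linear sum.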
This expression reduces to the static Loeb score when evaluated at a single fixed value of $r$, but for an object whose properties evolve through its solar encounter, one generally obtains different values of $S_{\mathrm{inst}}$ as more data accumulate at different distances. The quantity $S_{\mathrm{inst}}(r)$ therefore represents the raw anomaly score inferred directly from the most recent measurements, without any smoothing, averaging or persistence of past information.

A fundamental limitation of using $S_{\mathrm{inst}}(r)$ directly is that real observational data are uneven in quality and cadence, and brief anomalous measurements at isolated radii may spuriously elevate or depress the Loeb classification. To provide a mathematically stable and physically interpretable alternative, one may define an effective anomaly score $S_{\mathrm{eff}}(r)$ that evolves gradually toward the instantaneous value without matching it immediately. This prescription introduces a form of dynamical memory into the classification and reflects the expectation that sustained anomalies are more significant than transient ones. A simple formulation achieving this goal is a radial first order relaxation equation, $$ \frac{dS_{\mathrm{eff}}}{dr} = \frac{S_{\mathrm{inst}}(r) - S_{\mathrm{eff}}(r)}{L} \tag{16} $$ where $L$ is a characteristic relaxation length scale measured in astronomical units. The parameter $L$ determines the responsiveness of the dynamical score: if $L$ is small then $S_{\mathrm{eff}}(r)$ closely traces $S_{\mathrm{inst}}(r)$ and the classification reacts rapidly to new information. For larger $L$, the evolution becomes more inertial and significant changes in heliocentric distance are required before the effective score moves appreciably toward the instantaneous value. This structure induces a natural hysteresis, as short lived fluctuations in individual anomaly metrics do not immediately alter the classification and only sustained departures from natural expectation generate long term changes in the effective level.

To evaluate $S_{\mathrm{eff}}(r)$ one must first specify the functional dependence of each anomaly metric $m_i(r)$ on heliocentric distance and, because new data typically arrive sparsely and with significant uncertainty, it is useful to describe these metrics using simple parametric forms that can be continuously updated as improved constraints become available. A representative example is provided by the non-gravitational acceleration anomaly. If $a_{\mathrm{obs}}(r)$ is the measured non-gravitational acceleration and $a_{\mathrm{nat}}(r)$ a reference value predicted by a natural sublimation model, then one may write $$ A(r) = \operatorname{clamp}\left(\frac{\log_{10}[a_{\mathrm{obs}}(r)/a_{\mathrm{nat}}(r)] + C_{A}}{D_{A}}, 0, 1\right) \tag{17} $$ with the reference model parametrized as $$ a_{\mathrm{nat}}(r) = a_{0}\left(\frac{1}{r}\right)^{n}\Theta(r - r_{\mathrm{ice}}) \tag{18} $$ where $n$ controls the steepness of the sublimation response, $r_{\mathrm{ice}}$ defines the characteristic activation radius of relevant volatiles, and $\Theta$ is a smoothed step function that switches on activity as $r$ decreases. This formulation keeps the anomaly metric well defined even when only a few acceleration measurements exist and the parameters $(a_0, n, r_{\mathrm{ice}})$ may be refined as the object is observed over a wider range of distances.
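A minimal numerical sketch of how Eq. (16) can be integrated inward over a sequence of observation radii is given below. It uses the exact exponential update for a piecewise-constant $S_{\mathrm{inst}}$ between samples; the sample values and the choice $L = 3$ au are invented purely for illustration.

```python
import math

def relax_effective_score(radii_au, s_inst, s_init, L=1.0):
    """Integrate dS_eff/dr = (S_inst - S_eff)/L inward along decreasing radii.

    radii_au : heliocentric distances in au, ordered from detection inward.
    s_inst   : instantaneous scores sampled at those radii (held constant per step).
    Uses the exact solution of Eq. (16) over each step, so the update is stable
    even for coarse, uneven sampling.
    """
    s_eff = s_init
    history = [(radii_au[0], s_eff)]
    for k in range(1, len(radii_au)):
        dr = radii_au[k - 1] - radii_au[k]      # positive step toward the Sun
        decay = math.exp(-dr / L)               # memory of the previous effective score
        s_eff = s_inst[k] + (s_eff - s_inst[k]) * decay
        history.append((radii_au[k], s_eff))
    return history

# Hypothetical sparse track: a transient spike at 18 au barely moves S_eff,
# while a sustained rise inside 10 au drags the effective score upward.
radii = [30.0, 25.0, 18.0, 12.0, 8.0, 4.0, 1.0]
s_inst = [0.25, 0.25, 0.60, 0.30, 0.45, 0.50, 0.55]

for r, s in relax_effective_score(radii, s_inst, s_init=0.25, L=3.0):
    print(f"r = {r:5.1f} au   S_eff = {s:.3f}")
```

Smaller values of $L$ make the effective score track the instantaneous one closely, while larger values suppress the isolated spike at 18 au, mirroring the hysteresis described above.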
The spectral anomaly metric may be expressed as a sigmoid function in $r$, reflecting the onset of gas emission or unusual chemical signatures as the object receives greater insolation. A convenient representation in this case is $$ B(r) = B_{\max}\left[1 + e^{(r - r_{\mathrm{crit}})/\Delta r}\right]^{-1} \tag{19} $$ where $r_{\mathrm{crit}}$ denotes a characteristic activation radius and $\Delta r$ measures the sharpness of the transition. This captures early or delayed onset behavior and naturally accommodates nondetections at large heliocentric distances.

There are also some metrics which vary only weakly with $r$ but have uncertainties that shrink as additional measurements become available. The shape anomaly $C(r)$ and albedo anomaly $D(r)$ fall into this category and a simple model for the radial dependence of their uncertainties is $$ \sigma_{C}(r) = \sigma_{C,0}\exp\left[-N_{\mathrm{LC}}(r)/N_{0}\right] \tag{20} $$ where $N_{\mathrm{LC}}(r)$ is the cumulative number of lightcurve measurements obtained up to distance $r$ and $N_0$ is a scale factor controlling the reduction rate. The mean values of $C(r)$ and $D(r)$ may be treated as approximately constant, but the shrinking uncertainty allows the instantaneous score $S_{\mathrm{inst}}(r)$ to become more accurate as the object approaches the inner Solar System.

The trajectory anomaly metric is, as one would expect, particularly sensitive to improved orbit determination, so if $p(r)$ is the isotropic arrival probability based on the best fit orbit solution at distance $r$, one may express it as $$ p(r) = p_{\mathrm{iso}}\exp[-\kappa Q(r)] \tag{21} $$ where $Q(r)$ quantifies how geometrically unusual the fitted orbit is relative to the isotropic assumption, and the anomaly metric is then $$ E(r) = \operatorname{clamp}\left(-\frac{\log_{10} p(r)}{X}, 0, 1\right) \tag{22} $$ with $X$ the scaling parameter introduced earlier. As astrometric uncertainties shrink with additional observations, the value of $Q(r)$ may rise rapidly, producing a corresponding radial growth in $E(r)$ if the object's trajectory is unexpectedly close to a significant Solar System target.

Once the radial dependence of all metrics has been specified, the instantaneous score follows from $$ S_{\mathrm{inst}}(r) = \sum_{i} w_{i} m_{i}(r) + \sum_{i < j} w_{ij} m_{i}(r) m_{j}(r) \tag{23} $$ and the effective score is obtained by integrating (16). The resulting function $S_{\mathrm{eff}}(r)$ may then be compared against the level thresholds, with hysteresis introduced by requiring that the effective score remain above (or below) a threshold over a finite radial interval before a change in classification is adopted. This ensures that transitions between levels arise from persistent radial trends rather than isolated data points or transient anomalies.

The differential formulation naturally incorporates the evolving quality and quantity of observational data into the Loeb score and introduces a principled notion of memory, enabling the classification to reflect sustained evidence of anomalous behavior. This makes the framework well suited for real time tracking of newly discovered interstellar objects and suggests practical applications such as automated monitoring pipelines that update the object's effective Loeb score as new observations become available.
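As an illustration of how a radial metric model and the hysteresis requirement might be wired together, the sketch below implements the sigmoid spectral model of Eq. (19) and a simple rule that only changes the reported level once the effective score has stayed on the new side of a threshold for a chosen radial interval. All parameter values here ($B_{\max}$, $r_{\mathrm{crit}}$, $\Delta r$, the 0.5 au persistence window) are assumptions made for the example, not calibrated quantities; only the lower level thresholds are taken from the text.

```python
import math

def spectral_anomaly(r_au, B_max=0.8, r_crit=5.0, delta_r=1.0):
    """Sigmoid onset of the spectral anomaly as r decreases, Eq. (19)."""
    return B_max / (1.0 + math.exp((r_au - r_crit) / delta_r))

def hysteretic_levels(track, thresholds=(0.20, 0.35, 0.50, 0.60), hold_au=0.5):
    """Assign levels along a (radius, S_eff) track, requiring the score to stay
    on the new side of a threshold for at least `hold_au` before switching."""
    def raw_level(s):
        return sum(s >= t for t in thresholds)

    levels = []
    current = raw_level(track[0][1])
    pending, pending_since = None, None
    for r, s in track:
        candidate = raw_level(s)
        if candidate == current:
            pending, pending_since = None, None          # no change requested
        elif pending != candidate:
            pending, pending_since = candidate, r        # start the persistence clock
        elif pending_since - r >= hold_au:               # r decreases along the track
            current, pending, pending_since = candidate, None, None
        levels.append((r, current))
    return levels

# Hypothetical S_eff track: a brief excursion above 0.35 near 9.8 au is ignored,
# while the sustained rise below 6 au eventually changes the level.
track = [(10.0, 0.30), (9.8, 0.37), (9.7, 0.31), (6.0, 0.38), (5.4, 0.40), (5.0, 0.41)]
print(hysteretic_levels(track))
```

The transient excursion leaves the reported level unchanged, whereas the persistent rise switches it only after the 0.5 au hold has elapsed, which is the behaviour the radial-interval requirement above is meant to enforce.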
# 4 Operational Directions and Caveats

The differential formulation of the Loeb scale developed above admits a concrete operational implementation once an interstellar object is detected at large heliocentric distance, for example near the Kuiper Belt. At the detection radius $r_{\mathrm{det}}$, the anomaly metrics $m_i(r)$ are only weakly constrained, but one can already construct parametric models $m_i(r;\theta_i)$ that incorporate both the limited data and prior expectations for natural objects. In this representation, the instantaneous anomaly score becomes a function of the heliocentric distance and the parameter set,

$$
S_{\mathrm{inst}}(r;\theta) = \sum_i w_i m_i(r;\theta_i) + \sum_{i<j} w_{ij} m_i(r;\theta_i) m_j(r;\theta_j) \tag{24}
$$

where $\theta \equiv \{\theta_i\}$ denotes the full set of metric parameters. The effective Loeb score $S_{\mathrm{eff}}(r)$ is then governed by the evolution equation (16), subject to an initial condition $S_{\mathrm{eff}}(r_{\mathrm{det}}) = S_{\mathrm{det}}$, where $S_{\mathrm{det}}$ is obtained from the initial data. For a given choice of parameters $\theta$, one can write the formal solution of the first-order evolution as

$$
S_{\mathrm{eff}}(r;\theta) = \mathcal{K}\left(r, r_{\mathrm{det}}\right) S_{\mathrm{det}} + \int_{r_{\mathrm{det}}}^{r} \mathcal{K}\left(r, r^{\prime}\right) \mathcal{F}\left(r^{\prime};\theta\right) dr^{\prime} \tag{25}
$$

where $\mathcal{K}(r,r^{\prime})$ is the Green's function associated with the relaxation equation (16) and $\mathcal{F}(r;\theta)$ is a source term proportional to $S_{\mathrm{inst}}(r;\theta)$; the explicit forms of $\mathcal{K}$ and $\mathcal{F}$ follow straightforwardly from (16). This expression shows that the effective score at any future radius $r$ is a weighted combination of the initial score and a radial integral of the instantaneous anomaly, with recent values of $S_{\mathrm{inst}}$ contributing more strongly than distant ones along the trajectory.

To forecast the Loeb scale near Earth, one can evaluate the distribution of $S_{\mathrm{eff}}(r_{\oplus};\theta)$ at the heliocentric distance $r_{\oplus} \approx 1$ au. At early times, when the observational constraints are weak, the parameters $\theta$ are described by broad priors or posteriors $P(\theta \mid \mathcal{D}_{\mathrm{det}})$ conditioned on the initial dataset $\mathcal{D}_{\mathrm{det}}$ at $r_{\mathrm{det}}$. One may then define the predictive distribution

$$
P\left(S_{\mathrm{eff}}(r_{\oplus}) = s \mid \mathcal{D}_{\mathrm{det}}\right) = \int d\theta\, P(\theta \mid \mathcal{D}_{\mathrm{det}})\, \delta\left(S_{\mathrm{eff}}(r_{\oplus};\theta) - s\right) \tag{26}
$$

which can be estimated in practice by Monte Carlo sampling of $\theta$, propagating each realization forward in radius and recording the resulting $S_{\mathrm{eff}}(r_{\oplus};\theta)$. The mean and credible intervals of this distribution provide an operational forecast for the effective Loeb score at Earth long before the object reaches the inner Solar System. As additional data $\mathcal{D}_r$ are collected at intermediate radii $r$, the parameter distribution is updated to $P(\theta \mid \mathcal{D}_{\mathrm{det}}, \mathcal{D}_r)$ and the forecast for $S_{\mathrm{eff}}(r_{\oplus})$ is recomputed. The continuous dependence on $r$ ensures that these updates can be performed at any stage of the approach without discontinuities.
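A minimal Monte Carlo sketch of the predictive distribution in eq. (26) might look as follows: parameters are drawn from broad stand-in priors, each realization is propagated inward through eq. (16), and percentiles summarize the forecast at 1 au. The parametric form of $S_{\mathrm{inst}}$, the prior ranges and the convention of relaxing along the radial distance traversed are assumptions made for this illustration, not part of the formalism itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def S_inst(r, theta):
    """Toy parametric instantaneous score: a sigmoid activation term on top of a constant baseline.

    theta = (amplitude, activation radius [au], transition width [au], baseline); all illustrative.
    """
    amp, r_crit, width, base = theta
    return np.clip(base + amp / (1.0 + np.exp((r - r_crit) / width)), 0.0, 1.0)

def propagate(theta, r_det=30.0, r_earth=1.0, S_det=0.2, L=3.0, n_steps=300):
    """Euler pass over eq. (16) from r_det inward to r_earth, relaxing along |dr| (assumed reading)."""
    r_grid = np.linspace(r_det, r_earth, n_steps)
    S = S_det
    for k in range(1, n_steps):
        ds = abs(r_grid[k] - r_grid[k - 1])
        S += ds * (S_inst(r_grid[k - 1], theta) - S) / L
    return S

def sample_theta(n):
    """Broad stand-in priors playing the role of P(theta | D_det) at the detection epoch."""
    return np.column_stack([
        rng.uniform(0.2, 0.8, n),   # amplitude
        rng.uniform(2.0, 6.0, n),   # activation radius [au]
        rng.uniform(0.2, 1.0, n),   # transition width [au]
        rng.uniform(0.1, 0.3, n),   # baseline anomaly
    ])

# Monte Carlo estimate of the predictive distribution in eq. (26) at r = 1 au.
samples = np.array([propagate(th) for th in sample_theta(1000)])
lo, med, hi = np.percentile(samples, [5, 50, 95])
print(f"forecast S_eff(1 au): median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```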
In a simplified deterministic implementation, one may use a best-fit parameter set $\hat{\theta}$ obtained from the current data and compute the corresponding trajectory $S_{\mathrm{eff}}(r;\hat{\theta})$ as a function of radius. The predicted Loeb score at Earth is then simply

$$
S_{\oplus} \equiv S_{\mathrm{eff}}\left(r_{\oplus}; \hat{\theta}\right) \tag{27}
$$

while the uncertainty can be approximated by linear error propagation or, more robustly, by sampling around $\hat{\theta}$ within its covariance matrix. This procedure yields an evolving forecast $S_{\oplus}$ that is updated each time new measurements refine the metrics, for example when improved astrometry tightens $E(r)$ or when new spectra constrain $B(r)$. Because the effective score depends on a radial integral of $S_{\mathrm{inst}}(r)$, transient spikes in individual metrics at isolated radii contribute only modest corrections, preserving the stability of the forecast unless persistent anomalies develop.

Operationally, an automated "Loeb monitor" for an ISO would therefore consist of a sequence of steps that can be summarized as follows. At each new observation epoch, the metrics $m_i(r)$ are updated and their parametric forms $m_i(r;\theta_i)$ refitted; the instantaneous score $S_{\mathrm{inst}}(r;\theta)$ is recomputed along the future trajectory; equation (16) is integrated from the current radius to $r_{\oplus}$ to obtain a new prediction for $S_{\mathrm{eff}}(r_{\oplus};\theta)$; and the resulting distribution or median value is mapped to an anticipated Loeb level via the same thresholding scheme used for static objects. This process can be repeated as frequently as new data become available, ensuring that the classification forecast incorporates the latest measurements while preserving the hysteresis and smoothing inherent in the differential formulation.
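A compact sketch of the deterministic variant in eq. (27), with the uncertainty estimated by resampling around $\hat{\theta}$, is given below. The parametric form of $S_{\mathrm{inst}}$, the best-fit values, the covariance matrix and the relaxation-along-$|dr|$ convention are all hypothetical choices made for the example rather than outputs of any real fit.

```python
import numpy as np

rng = np.random.default_rng(1)

def S_inst(r, theta):
    """Toy parametric instantaneous score controlled by theta = (r_crit [au], width [au], baseline)."""
    r_crit, width, base = theta
    return np.clip(base + 0.5 / (1.0 + np.exp((r - r_crit) / width)), 0.0, 1.0)

def S_eff_at_earth(theta, r_det=30.0, S_det=0.25, L=3.0, r_earth=1.0, n_steps=300):
    """S_eff(r_earth; theta): Euler pass over eq. (16), relaxing along |dr| (assumed reading)."""
    r = np.linspace(r_det, r_earth, n_steps)
    S = S_det
    for k in range(1, n_steps):
        S += abs(r[k] - r[k - 1]) * (S_inst(r[k - 1], theta) - S) / L
    return S

# Hypothetical best-fit parameters and covariance, standing in for the output of a metric fit.
theta_hat = np.array([4.0, 0.5, 0.20])
cov_hat = np.diag([0.5, 0.05, 0.02]) ** 2

S_earth = S_eff_at_earth(theta_hat)                        # point forecast, eq. (27)
draws = rng.multivariate_normal(theta_hat, cov_hat, size=500)
spread = np.std([S_eff_at_earth(th) for th in draws])      # spread from resampling around theta_hat
print(f"S_earth = {S_earth:.2f} +/- {spread:.2f}")
```

In an operational monitor, the refit of $\hat{\theta}$ and this forecast step would simply be repeated at each new observation epoch.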
An important feature of this framework is its adaptability as the global population of interstellar objects becomes better characterized. As more ISOs are discovered and analyzed, the empirical distributions underlying the anomaly metrics will narrow and the priors on $\theta$ will become more informative; for example, the reference distributions for non-gravitational accelerations, spectral ratios and albedo values will be derived from a larger and more diverse sample of natural interstellar bodies. In this regime, the mapping from observables to rarity scores, and hence to anomaly metrics, will sharpen, and the same measured properties may lead to higher or lower anomaly scores than in the early days of ISO science. This implies that for any given object, its inferred anomaly profile $m_i(r;\theta_i)$, and consequently its effective score $S_{\mathrm{eff}}(r)$, may acquire a mild time dependence in retrospect as the community's understanding of "normal" interstellar behavior improves. An object that initially appeared anomalous under broad, poorly constrained distributions might later be recognized as typical; conversely, it may move further into the tails of a better-measured population, revealing its anomalous nature more starkly.

Another conceptual caveat arises when applying the Loeb scale, or the differential generalization developed here, to objects detected near Earth for which no interstellar origin has been established. The Loeb scale is designed to quantify anomalies relative to natural interstellar bodies and thus uses ISO-based distributions as its baseline. For objects in Earth orbit or cis-lunar space that might represent non-human technologies, the appropriate comparison class is not natural comets or asteroids but rather human-made spacecraft and debris. In such cases, the anomaly metrics must be redefined so that $m_i = 0$ corresponds to full consistency with known human technologies and $m_i = 1$ corresponds to behavior incompatible with any catalogued class of artificial objects. The mathematical structure of the composite score and the differential evolution described in (16) can be retained, but the underlying probability distributions and normalization conventions must be replaced by those derived from the ensemble of human-made devices. This adaptation ensures that the interpretation of "anomaly" remains meaningful: what is highly unusual for a natural ISO may be entirely mundane for a human satellite, and vice versa.

# 5 Conclusions

The differential formulation of the Loeb scale presented in this work extends the original static framework into a continuously evolving system capable of assimilating observational data across a full heliocentric trajectory. By promoting each anomaly metric to a radial function and introducing an effective score that responds gradually to new information, the Loeb scale can now produce stable and predictive classifications long before an interstellar object reaches the inner Solar System. The formal structure developed here establishes a clear mathematical relationship between metric evolution, instantaneous anomaly significance, and the smoothed effective score, thereby providing a natural and physically motivated means of incorporating memory, hysteresis and forecast capability into the classification scheme.

This reformulation also enables practical forward modeling once an object is detected at large heliocentric distance. Through parametric representations of the metrics and predictive evaluations of the instantaneous anomaly score, one can integrate the Loeb evolution equation inward and estimate the future classification at Earth's distance, complete with uncertainty quantification arising from the parameter posteriors. The operational pipeline developed in this framework is flexible enough to accommodate sparse early data, increasingly precise measurements in the inner Solar System, and retrospective updates as the empirical understanding of natural interstellar populations improves.

Looking ahead, the dynamical formulation opens several possibilities. As large surveys discover growing numbers of interstellar objects, the anomaly metrics themselves will sharpen, enabling the Loeb scale to become increasingly predictive and discriminating. One may envisage automated monitoring systems that continuously ingest astrometric, photometric, spectroscopic, and radar data to produce evolving anomaly forecasts for every newly detected ISO. More ambitiously, the same mathematical structure could be adapted into an early warning system for potential technological objects, guiding observational priorities and even future intercept missions.
As the Solar System becomes a richer laboratory of interstellar visitors and as the frontier of anomalous-object science expands, the differential Loeb scale may serve as a unifying, quantitative language for identifying natural outliers, evaluating technosignatures and, ultimately, perhaps informing the search for extraterrestrial technologies across multiple observational domains.
# Analysis of airport runway pavement reliability considering temperature variation: The case of São Paulo-Congonhas international airport

ABSTRACT: Airport pavement design methods typically rely on standard documents, such as those provided by the FAA, which assume general climatic conditions. Nonetheless, temperature differences between regions tend to influence the behavior of pavements, affecting how stresses and strains are distributed within the pavement structure and, in turn, pavement performance and reliability. Furthermore, global warming has required specific analyses of the behavior of infrastructure such as pavements, regarding the choice of materials and the performance needed to make pavements more resilient. This study performs a reliability analysis for a pavement designed by traditional methods combined with temperature variation, considering the case of São Paulo-Congonhas International Airport (CGH). Temperature variations were analyzed across the four seasons. The procedure includes designing the pavement for the airport's traffic mix and performing a Monte Carlo Simulation (MCS) to verify the pavement structure's reliability under temperature variation. The study shows that the total cumulative damage factor (CDF) computed through MCS is $79\%$ lower than the value obtained using the FAA method. Considering the pavement temperatures at CGH, all aircraft tend to cause less damage than expected. Furthermore, the pavement designed could withstand traffic 2.5 times greater at $95\%$ reliability and 5.0 times greater at $50\%$ reliability when temperature variation is considered. These numbers indicate that in Brazilian airports where fatigue is the primary design criterion, FAARFIELD overestimates the damage and consequently increases pavement construction costs. These results suggest that the airport pavement design method requires calibration for Brazilian climatic conditions to improve fatigue damage prediction, especially for airports where fatigue is the primary failure criterion.

The limitations of this study should be acknowledged to inform future research. The pavement temperature equation applied is deterministic, assuming fixed values for albedo, wind speed, and atmospheric transmission. Future research should assess the suitability of this equation for Brazilian regions, particularly in relation to temperatures actually measured at pavement depth. In this study, pavement reliability was evaluated considering only temperature variations; factors such as precipitation and variability in pavement thickness were not included, although they may affect pavement performance. Additionally, fatigue tests under different asphalt temperatures were not conducted, and a standard stiffness value for P-401 was used to assess fatigue behavior. Future studies by the authors will aim to calibrate the performance equations and address these limitations.

AUTHOR KEYWORDS: airfield pavement design, climatic conditions, airfield pavement reliability, resilient pavements.

# INTRODUCTION:

The reliability of airport pavements is important for ensuring safety. Some airports require particular attention due to their high traffic volume and environmental conditions. Various climatic factors may influence pavement performance, such as surface energy balance, moisture, climate change, frost heaving, and temperature (Alavi, Pouranian and Hajj, 2014).
Thus, among all these factors, temperature is considered the critical variable, directly affecting material behavior and performance (Qiao et al., 2013; Cheng et al., 2020; Zhang et al., 2023). In countries with continental dimensions, such as Brazil, the temperature differences between the northern and southern states tend to influence the behavior of the pavements. These differences in asphalt layer temperature impact how stresses and strains are distributed within the pavement structure and influence pavement performance and reliability (Hasan, Hiller and You, 2015; Kodipilly et al., 2018; Zhang et al., 2023; Luo et al., 2023; Zhuang et al., 2024). Furthermore, global warming has required specific analyses of the behavior of infrastructure, such as pavements, regarding the choice of materials and the performance needed to make pavements more resilient. Studies in several regions have been carried out to verify the impacts of climate change on asphalt pavements (Zhang et al., 2022; Liu et al., 2023; Barbi Tavassoti and Tighe, 2023; Zhang, Yang and Chen, 2024; Yang et al., 2024; Hosseini et al., 2024). All these studies agree that global warming tends to reduce pavement lifespan and increase pavement reconstruction costs.

Nonetheless, the FAA pavement design method considers a constant asphalt field temperature, assumed to be equal to $32^{\circ}\mathrm{C}$ over the design period (FAA, 2021). That is, the airport pavement design method does not differentiate the damage caused by aircraft across different seasons. Furthermore, the fatigue equation used in the pavement design method was not calibrated for field conditions (Shen and Carpenter, 2005; Shen and Carpenter, 2007), which tends to be problematic for airports where fatigue is the primary failure criterion.

In this scenario, the objective of this study is to perform a reliability analysis, accounting for temperature variation, of the pavement designed using FAARFIELD software for a specific airport in Brazil. Climatic data from the past 10 years were collected to analyze the thermal gradient in the asphalt layers of the pavement structure. These data were also used to perform a sensitivity analysis on how temperature affects pavement strains, in order to understand the impact of climatic conditions on pavements. Subsequently, a reliability analysis was conducted for CGH by comparing the results from FAARFIELD with those obtained from the MCS analysis. It was observed that the thermal gradient present in the pavement structure increases the stiffness of the asphalt layers, reducing the stresses and strains acting along the depth. Additionally, for CGH, it was noted that the current pavement design method overestimates the stresses acting on runways.

# METHODS

The methodology is divided into four parts. In the first part, as shown in Figure 1, traffic and air temperature data were collected and analyzed for São Paulo-Congonhas International Airport (CGH). This airport was chosen because it is one of the busiest in Brazil. Infraero provided the traffic data for 2017, 2018 and 2022, corresponding to every landing and takeoff performed by each aircraft on the airport's runway. Data were provided separately for each year and were then processed to determine each aircraft's average annual takeoffs and landings. The air temperature was obtained from meteorological data provided by the BDMEP-INMET website. The application requires entering the time period and station type to retrieve meteorological data.
This study utilized air temperature data from 2014 to 2024 from the Mirante de Santana meteorological station, the closest meteorological station to CGH. The temperature of the asphalt layer was obtained through the method developed by Huber (1994), which is also used in the SUPERPAVE software to select the asphalt binder according to the weather conditions. For this study, Equations 1 and 2 present the model considering the CGH latitude.

$$
T_{\mathrm{surf}} = T_{\mathrm{air}} + 15.5403 \tag{1}
$$

$$
T_{d} = \left[ T_{\mathrm{surf}} \cdot \left(1 - 0.0630 \cdot \frac{d}{25.4} + 0.0070 \cdot \left(\frac{d}{25.4}\right)^{2} - 0.0004 \cdot \left(\frac{d}{25.4}\right)^{3}\right) - 32 \right] \cdot \frac{5}{9} \tag{2}
$$

Equation 1 gives the temperature at the pavement surface $T_{\mathrm{surf}}$ at CGH, calculated from the air temperature $T_{\mathrm{air}}$ in degrees Celsius. Equation 2 computes the pavement temperature profile $T_{d}$ using the layer depth $d$ in millimeters and the pavement surface temperature in degrees Celsius. These equations consider an albedo of $10\%$, transmission through air of $81\%$, atmospheric radiation of $70\%$, and a wind speed of 4.5 meters per second. They were developed using data from more than 6,000 weather stations (Huber, 1994).

The second part of the method consists of the pavement design for the traffic data using the FAARFIELD software. The pavement is designed considering the materials from the FAARFIELD bibliography (FAA, 2021), which are used as a reference for the following analysis. After obtaining the pavement structure, a sensitivity analysis was performed in the third part, varying only the Hot Mix Asphalt (HMA) modulus based on the asphalt layer temperature. Equations 3 to 5, based on laboratory tests conducted by Kuchiishi, Vasconcelos, and Bernucci (2019) on asphalt mixtures, provide this correlation.

$$
\log \alpha_{T} = 0.00116 \cdot T^{2} + 0.19600 \cdot T + 3.45 \tag{3}
$$

$$
f_{r} = f \cdot \alpha_{T} \tag{4}
$$

$$
\log \left| E_{\mathrm{HMA}} \right| = 1.690 + \frac{2.780}{1 + e^{-1.410 - 0.719 \cdot \log f_{r}}} \tag{5}
$$

Equation 3 gives the shift factor $\alpha_{T}$ of the model, calculated for each asphalt temperature $T$ in degrees Celsius. Equation 4 calculates the reduced frequency $f_{r}$, representing the effect of the loading frequency $f$ in the dynamic modulus test; this study considers a frequency of 1 Hz, the same used to compute the resilient modulus in Brazil. Equation 5 gives the HMA dynamic modulus $E_{\mathrm{HMA}}$ in MPa. Equations 3 to 5 correspond to the dynamic modulus sigmoidal model, with coefficients obtained by Kuchiishi, Vasconcelos, and Bernucci (2019) for asphalt mixture samples.

The third part involves a sensitivity analysis that considers the traffic, climatic data, and pavement structure obtained in the first and second parts of the method. The method performs a sensitivity analysis to obtain the pavement
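As a concrete illustration of how Equations 1 and 2 translate air temperature into a temperature profile through the asphalt layers, the short Python sketch below evaluates the model for a single air temperature and a few depths. It is not the authors' code; in particular, the handling of units (converting the surface temperature to Fahrenheit before applying Equation 2 and back to Celsius at the end) is an assumption made so that the depth-zero value reproduces the surface temperature.

```python
import numpy as np

def pavement_temperature_profile(t_air_c, depths_mm):
    """Pavement temperature at depth for CGH following Equations 1 and 2 (Huber, 1994).

    Assumptions made for this sketch: Equation 1 gives the surface temperature in
    degrees Celsius; it is converted to Fahrenheit before entering Equation 2, whose
    depth polynomial (with d in mm / 25.4) and final (x - 32) * 5/9 step follow
    Huber's original inch/Fahrenheit formulation. At d = 0 this recovers the surface
    temperature in Celsius, which serves as a basic sanity check.
    """
    t_surf_c = t_air_c + 15.5403                  # Equation 1 (CGH latitude)
    t_surf_f = t_surf_c * 9.0 / 5.0 + 32.0        # assumed unit conversion
    d_in = np.asarray(depths_mm, dtype=float) / 25.4
    poly = 1.0 - 0.0630 * d_in + 0.0070 * d_in**2 - 0.0004 * d_in**3
    return (t_surf_f * poly - 32.0) * 5.0 / 9.0   # Equation 2, result in Celsius

# Example: temperature at the surface and at 50, 100 and 150 mm depth for a 28 C summer day.
print(pavement_temperature_profile(28.0, [0.0, 50.0, 100.0, 150.0]))
```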
# Analysis of airport runway pavement reliability considering temperature variation: The case of São Paulo-Congonhas international airport

Felipe H. Cava, M. Sc.,¹ Dimas B. Ribeiro, D. Sc.,² Claudia A. Pereira, D. Sc.,³ Mauro Caetano, D. Sc.⁴ and Evandro José da Silva, D. Sc.⁵

¹ Researcher, Civil Engineering Division, Aeronautics Institute of Technology, São José dos Campos, Brazil, 12228-900; email: felipe.cava.101745@ga.ita.br (Corresponding author)
² Researcher, Civil Engineering Division, Aeronautics Institute of Technology, São José dos Campos, Brazil, 12228-900; email: dimas@ita.br
³ Researcher, Civil Engineering Division, Aeronautics Institute of Technology, São José dos Campos, Brazil, 12228-900; email: claudia.azevedo@ita.br
⁴ Researcher, Air Transport Laboratory, Aeronautics Institute of Technology, São José dos Campos, Brazil, 12228-900; email: caetano@ita.br
⁵ Researcher, Civil Engineering Division, Aeronautics Institute of Technology, São José dos Campos, Brazil, 12228-900; email: evandro@ita.br

# ABSTRACT:

Airport pavement design methods typically rely on standard documents, such as those provided by the FAA, which assume general climatic conditions. Nonetheless, temperature differences between regions tend to influence the behavior of pavements, affecting how stresses and strains are distributed within the pavement structure and, consequently, pavement performance and reliability. Furthermore, global warming has required specific analyses of the behavior of infrastructures, such as pavements, regarding the choice of materials and the performance needed to make pavements more resilient. This study aims to perform a reliability analysis for a pavement designed by traditional methods combined with temperature variation, considering the case of São Paulo-Congonhas International Airport (CGH). It analyzed temperature variations across the four seasons. The procedure includes designing the pavement for the airport's traffic mix and performing a Monte Carlo Simulation (MCS) to verify the pavement structure's reliability under temperature variation. The study shows that the total cumulative damage factor (CDF) computed through the MCS is $79\%$ lower than the value obtained using the FAA method. Considering the pavement temperatures at CGH, all aircraft tend to cause less damage than expected. Furthermore, the pavement designed could withstand traffic 2.5 times greater at $95\%$ reliability and 5.0 times greater at $50\%$ reliability when considering temperature variation. These numbers indicate that at Brazilian airports where fatigue is the primary design criterion, FAARFIELD overestimates the damage and consequently increases pavement construction costs. These results suggest that the airport pavement design method requires calibration for Brazilian climatic conditions to improve fatigue damage prediction, especially for airports where fatigue is the primary failure criterion. The limitations of this study should be acknowledged to inform future research. The pavement temperature equation applied is deterministic, assuming fixed values for albedo, wind speed, and atmospheric transmission. Future research should assess the suitability of this equation for Brazilian regions, particularly in relation to actual measured temperatures at pavement depth. In this study, pavement reliability was evaluated considering only temperature variations; factors such as precipitation and variability in pavement thickness were not included, although they may affect pavement performance. Additionally, fatigue tests under different asphalt temperatures were not conducted, and a standard stiffness value for P-401 was used to assess fatigue behavior. Future studies by the authors will aim to calibrate the performance equations and address these limitations.

AUTHOR KEYWORDS: airfield pavement design, climatic conditions, airfield pavement reliability, resilient pavements.

# INTRODUCTION:

The reliability of airport pavements is important for ensuring safety. Some airports require particular attention due to the high volume of traffic and environmental conditions. Various climatic factors may influence pavement performance, such as surface energy balance, moisture, climate change, frost heaving, and temperature (Alavi, Pouranian and Hajj, 2014).
Thus, among all these factors, temperature is considered the critical variable, directly affecting material behavior and performance (Qiao et al., 2013; Cheng et al., 2020; Zhang et al., 2023).

In countries with continental dimensions, such as Brazil, the difference in temperature between the northern and southern states tends to influence the behavior of the pavements. These differences in asphalt layer temperature affect how stresses and strains are distributed within the pavement structure and influence pavement performance and reliability (Hasan, Hiller and You, 2015; Kodippilly et al., 2018; Zhang et al., 2023; Luo et al., 2023; Zhuang et al., 2024).

Furthermore, global warming has required specific analyses of the behavior of infrastructures, such as pavements, regarding the choice of materials and the performance needed to make pavements more resilient. Studies in several regions have been carried out to verify the impacts of climate change on asphalt pavements (Zhang et al., 2022; Liu et al., 2023; Barbi, Tavassoti and Tighe, 2023; Zhang, Yang and Chen, 2024; Yang et al., 2024; Hosseini et al., 2024). All these studies agree that global warming tends to reduce pavement lifespan and increase pavement reconstruction costs.

Nonetheless, the FAA pavement design method considers a constant asphalt field temperature, assumed to be equal to $32^{\circ}\mathrm{C}$ over the design period (FAA, 2021). That is, the airport pavement design method does not differentiate the damage caused by aircraft in different weather seasons. Furthermore, the fatigue equation used in the pavement design method was not calibrated for field conditions (Shen and Carpenter, 2005; Shen and Carpenter, 2007), which tends to be problematic for airports where fatigue is the primary failure criterion.

In this scenario, the objective of this study is to perform a reliability analysis of the pavement designed using the FAARFIELD software for a specific airport in Brazil, considering temperature variation. Climatic data from the past 10 years were collected to analyze the thermal gradient in the asphalt layers of the pavement structure. These data were also used to perform a sensitivity analysis on how temperature affects pavement strains, to understand the impact of climatic conditions on pavements. Subsequently, a reliability analysis was conducted for CGH by comparing the results from FAARFIELD with those obtained from the MCS analysis. It was observed that the thermal gradient present in the pavement structure increases the stiffness of the asphalt layers, reducing the stresses and strains acting along the depth. Additionally, for CGH, it was noted that the current pavement design method overestimates the stresses acting on runways.

# METHODS

The methodology is divided into four parts.

In the first part, according to Figure 1, traffic and air temperature data were collected and analyzed for São Paulo-Congonhas International Airport (CGH). This study chose this airport because it is one of the busiest in Brazil. Infraero provided the traffic data for 2017, 2018 and 2022, corresponding to every landing and takeoff performed by each aircraft on the airport's runway. Data were provided separately for each year and were then processed to determine each aircraft's average annual takeoffs and landings.
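As a minimal illustration of this data-preparation step (not the authors' code), the sketch below averages per-aircraft movements over the three available years and keeps the aircraft that together represent roughly 90% of the traffic. The file names and column labels ("aircraft", "operation") are hypothetical placeholders.

```python
# Illustrative sketch: averaging annual takeoffs and landings per aircraft.
import pandas as pd

YEARS = [2017, 2018, 2022]

frames = []
for year in YEARS:
    df = pd.read_csv(f"cgh_operations_{year}.csv")    # one row per landing/takeoff (hypothetical file)
    counts = (df.groupby(["aircraft", "operation"])   # operation: "takeoff" or "landing"
                .size()
                .rename(year))
    frames.append(counts)

# Average annual movements per aircraft over the three years
movements = pd.concat(frames, axis=1).fillna(0)
avg_annual = movements.mean(axis=1).unstack("operation")

# Keep the leading aircraft that together account for about 90% of the movements
share = avg_annual.sum(axis=1).sort_values(ascending=False)
leading = share[share.cumsum() / share.sum() <= 0.90].index
print(avg_annual.loc[leading])
```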
The air temperature was obtained from meteorological data provided by the BDMEP-INMET website. The application requires entering the time period and station type to retrieve the meteorological data. This study utilized air temperature data from 2014 to 2024 from the Mirante de Santana meteorological station, which is the closest meteorological station to CGH. The temperature of the asphalt layer was obtained through the method developed by Huber (1994), which is also used in the SUPERPAVE software to select the asphalt binder according to the weather conditions. For this study, Equations 1 and 2 present the model considering the CGH latitude.

$$
T_{surf} = T_{air} + 15.5403 \tag{1}
$$

$$
T_{d} = \left[ T_{surf} \cdot \left(1 - 0.0630 \cdot \frac{d}{25.4} + 0.0070 \cdot \left(\frac{d}{25.4}\right)^{2} - 0.0004 \cdot \left(\frac{d}{25.4}\right)^{3}\right) - 32 \right] \cdot \frac{5}{9} \tag{2}
$$

Equation 1 corresponds to the temperature at the pavement surface $T_{surf}$ at CGH, calculated from the air temperature $T_{air}$ in degrees Celsius. Equation 2 computes the pavement temperature profile $T_{d}$ using the layer depth $d$ in millimeters and the pavement surface temperature. These equations consider an albedo of $10\%$, transmission through air of $81\%$, atmospheric radiation of $70\%$, and a wind speed of 4.5 meters per second. Huber (1994) developed them using data from more than 6,000 weather stations.

The second part of the method consists of the pavement design for the traffic data using the FAARFIELD software. The pavement is designed considering the materials from the FAARFIELD library (FAA, 2021), which are used as a reference for the following analysis. After obtaining the pavement structure, a sensitivity analysis was performed in the third part, varying only the Hot Mix Asphalt (HMA) modulus based on the asphalt layer temperature. Equations 3 to 5 are based on laboratory tests conducted by Kuchiishi, Vasconcelos, and Bernucci (2019) on asphalt mixtures.

$$
\log \alpha_{T} = 0.00116 \cdot T^{2} - 0.19600 \cdot T + 3.45 \tag{3}
$$

$$
f_{r} = f \cdot \alpha_{T} \tag{4}
$$

$$
\log \left| E_{HMA} \right| = 1.690 + \frac{2.780}{1 + e^{-1.410 - 0.719 \cdot \log f_{r}}} \tag{5}
$$

Equation 3 corresponds to the shift factor $\alpha_{T}$ of the model, calculated for each asphalt temperature $T$ in degrees Celsius. Equation 4 calculates the reduced frequency $f_{r}$, representing the effect of the loading frequency $f$ in the dynamic modulus test. This study considers a frequency of 1 Hz, the same frequency used to compute the resilient modulus in Brazil. Equation 5 gives the HMA dynamic modulus $E_{HMA}$ in MPa. Equations 3 to 5 correspond to the dynamic modulus sigmoidal model, with coefficients obtained by Kuchiishi, Vasconcelos, and Bernucci (2019) for asphalt mixture samples.

The third part involves a sensitivity analysis that considers the traffic, climatic data, and pavement structure obtained in the first and second parts of the method. This sensitivity analysis characterizes the pavement behavior under temperature variation, which is then used in the last part.
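As an illustration only, the sketch below implements Equations 1 to 5 in Python. It is not the authors' implementation: the function names are ours, Equation 2 is assumed to operate on the surface temperature expressed in degrees Fahrenheit (consistent with its -32 and 5/9 factors), and the computed values will not exactly reproduce the figures reported later in the paper.

```python
# Illustrative sketch of Equations 1-5 (not the authors' code).
import math

def pavement_temperature(t_air_c: float, depth_mm: float) -> float:
    """Huber (1994) temperature-at-depth model, Equations 1-2.
    Assumes the bracketed term of Equation 2 works in degrees Fahrenheit
    (hence the -32 and 5/9 factors); this unit handling is our assumption."""
    t_surf_c = t_air_c + 15.5403                      # Equation 1 (Celsius)
    t_surf_f = t_surf_c * 9.0 / 5.0 + 32.0            # assumed conversion before Equation 2
    d_in = depth_mm / 25.4                            # depth converted to inches
    poly = 1 - 0.0630 * d_in + 0.0070 * d_in**2 - 0.0004 * d_in**3
    return (t_surf_f * poly - 32.0) * 5.0 / 9.0       # Equation 2, back to Celsius

def hma_modulus(t_pav_c: float, freq_hz: float = 1.0) -> float:
    """HMA dynamic modulus in MPa from the sigmoidal model, Equations 3-5."""
    log_shift = 0.00116 * t_pav_c**2 - 0.19600 * t_pav_c + 3.45                     # Equation 3
    f_r = freq_hz * 10.0 ** log_shift                                               # Equation 4
    log_e = 1.690 + 2.780 / (1.0 + math.exp(-1.410 - 0.719 * math.log10(f_r)))      # Equation 5
    return 10.0 ** log_e

# Example: temperature and modulus at the centre of a 25 mm sub-layer (depth 12.5 mm)
t_d = pavement_temperature(t_air_c=22.0, depth_mm=12.5)
print(f"T_d = {t_d:.1f} C, E_HMA = {hma_modulus(t_d):.0f} MPa")
```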
Finally, this work employs a Monte Carlo Simulation (MCS) to analyze the reliability of pavement structures designed using the FAARFIELD software for Brazilian climatic conditions. The MCS is a computerized technique that generates random values for the independent variable (Toan et al., 2022). Based on these values, the method obtains the reliability $R$ by counting the number of variable combinations that satisfy the design criteria relative to the total number of random samples generated, according to Equations 6 to 8. Many authors have used this technique to analyze pavement reliability (Maji & Das, 2008; Dilip & Babu, 2013; Ioannides & Tingle, 2021; Norouzi et al., 2022; Dinegdae, Ahmed and Erlingsson, 2022).

$$
f(\mathrm{CDF} > 1) = 1 \tag{6}
$$

$$
f(\mathrm{CDF} \leq 1) = 0 \tag{7}
$$

$$
R = \frac{\text{samples with } \mathrm{CDF} \leq 1}{\text{total number of samples}} \tag{8}
$$

This study used the MCS to generate values of asphalt temperature per season. A convergence analysis determined the number of random values adequate for the MCS. These values were then employed to calculate the tensile strains in the pavement structure. Subsequently, the reliability was calculated considering the number of samples that met the design criteria for airport pavement structures. A Microsoft Excel® spreadsheet developed by the authors performed the pavement reliability calculations.

The following section discusses the results of this analysis and their implications for enhancing airport pavement projects and maintenance.

# RESULTS

This study divides the results into four parts: traffic and climatic data, pavement design, sensitivity analysis, and pavement reliability.

# Traffic and climatic data

The traffic data analysis considered the most common aircraft at CGH. The B738 is the primary aircraft, performing an average of approximately 26,000 takeoffs and landings per year. Figure 2 illustrates the airport's leading aircraft, representing $90\%$ of the traffic data. The remaining $10\%$ consists of smaller aircraft that are not considered relevant for pavement design and analysis. Furthermore, the differences in operations over the years are less than $0.5\%$.

Figure 2 shows that the numbers of takeoffs and landings at CGH are similar. Only takeoffs were considered in the pavement design, as they represent the critical loading condition.

Regarding the climatic data, Figure 3 illustrates the dispersion of the collected temperature data and shows that the air temperature presents a seasonal variation, decreasing and increasing over time. Thus, the data were analyzed considering the probability density of air temperature for each season in the Southern Hemisphere, according to Figure 4.

According to Figure 4, January to March typically exhibits the highest temperatures, corresponding to summer in Brazil, while winter generally experiences the lowest. Furthermore, winter is the season with the largest coefficient of variation (COV), i.e., there is more air temperature dispersion among this season's months. These differences in air temperature and pavement temperature also change the HMA modulus due to the viscoelastic behavior of asphalt mixtures.

For reliability purposes, the Kolmogorov-Smirnov test was performed on the air temperature data of each season to verify normality. Figure 5 illustrates the histogram of air temperature per season. According to Figure 5 and the Kolmogorov-Smirnov test, temperature follows a normal distribution, with a p-value of 0.8044 in the worst case. The reliability analysis conducted in this study therefore considers a normal temperature distribution for each season.
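For illustration, the seasonal normality check can be reproduced with a few lines of Python using scipy. The per-season input files below are hypothetical placeholders; only the test itself mirrors the procedure described above.

```python
# Sketch of the seasonal normality check (Kolmogorov-Smirnov test against a fitted normal).
import numpy as np
from scipy import stats

seasons = {
    "summer": np.loadtxt("t_air_summer.txt"),  # hypothetical per-season air temperature files (degrees C)
    "fall":   np.loadtxt("t_air_fall.txt"),
    "winter": np.loadtxt("t_air_winter.txt"),
    "spring": np.loadtxt("t_air_spring.txt"),
}

for name, t_air in seasons.items():
    mu, sigma = t_air.mean(), t_air.std(ddof=1)
    stat, p_value = stats.kstest(t_air, "norm", args=(mu, sigma))
    print(f"{name}: mean={mu:.1f} C, sd={sigma:.1f} C, KS p-value={p_value:.3f}")
```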
# Pavement design using FAARFIELD

The airport pavement was designed as the minimum pavement structure capable of sustaining the projected traffic load for 20 years. Table 1 presents the pavement structure obtained and each layer's modulus (E), corresponding to a CDF of 1.00 and 0.00 for fatigue and rutting, respectively. The CDF corresponds to the ratio between the actual number of load applications and the allowable number of coverages for a given failure model, and it is a dimensionless parameter.

According to Table 1, the pavement structure requires a total thickness of 1,070 millimeters, with 250 millimeters of asphalt. The pavement design method considers an HMA modulus equal to 1,378.95 MPa, obtained for $32^{\circ}\mathrm{C}$ according to the FAA (2021). Table 2 presents the CDF for each aircraft.

Comparing the CDF contributions for fatigue and subgrade leads to the conclusion that fatigue is the critical failure criterion for pavement design at CGH. The type of aircraft operating at the airport explains this conclusion, as they have similar main gears, MTOW, and high-intensity takeoffs, which cause the pavement to be consistently stressed in the same areas. The B738 is the aircraft that causes the most damage to the pavement, representing $43\%$ of the total damage, while the E195 causes the least, representing just $1\%$ of the total damage during the design period.

Based on the structure presented in Table 1, the method discretizes the asphalt layer into ten parts and calculates the pavement temperature at the center of each sub-layer. A convergence analysis was conducted, varying the discretization of the asphalt layer between 2 and 15 parts; based on this analysis, a discretization of 10 parts was adopted. Applying Equations 1 and 2, an average difference of three degrees Celsius is observed between the centers of the asphalt sub-layers. The HMA modulus was then computed using Equations 3 to 5. Table 3 presents the result.

According to Table 3, the average HMA modulus changes by more than a factor of six between the centers of asphalt sub-layers one and ten. Considering the first asphalt sub-layer in the fall and winter, the HMA modulus is similar to that considered by FAARFIELD for the material P-401. Nonetheless, as the depth increases, the HMA modulus also increases due to the temperature profile. This difference shows that considering only one HMA modulus for thick asphalt layers, as FAARFIELD does, is unsuitable.

Furthermore, Table 3 shows differences in the HMA modulus across the seasons: there is an average variation of $76\%$ in the HMA modulus between winter and summer. Additionally, there are similarities between the average temperatures of summer and spring, and of winter and fall.

# Sensitivity Analysis

The study's next step was a sensitivity analysis, varying the temperature of the first asphalt sub-layer over a range from $22^{\circ}\mathrm{C}$ to $41^{\circ}\mathrm{C}$. The upper value was defined based on the maximum temperature calculated at the center of the first sub-layer at CGH. The lower value was defined based on the probability distribution function, ensuring that only $2.5\%$ of the probability lies below it. Equations 3-5 led to an HMA modulus variation from 5,636 MPa at $22^{\circ}\mathrm{C}$ to 455 MPa at $41^{\circ}\mathrm{C}$. The procedure obtains the pavement strains using the mePADS software (Maina et al., 2008), which is based on layered elastic theory.
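To make the sensitivity-analysis input concrete, the short sketch below sweeps the first sub-layer temperature over the 22-41 °C range and evaluates the HMA modulus with the sigmoidal model of Equations 3-5 (the function is repeated from the earlier sketch so the block stands alone). The strains themselves come from mePADS and are not reproduced here, and the values printed are illustrative rather than the paper's exact numbers.

```python
# Sweep of the first sub-layer temperature used as input to the sensitivity analysis.
import math

def hma_modulus(t_pav_c: float, freq_hz: float = 1.0) -> float:
    # Equations 3-5 as printed in the Methods section
    log_shift = 0.00116 * t_pav_c**2 - 0.19600 * t_pav_c + 3.45
    f_r = freq_hz * 10.0 ** log_shift
    return 10.0 ** (1.690 + 2.780 / (1.0 + math.exp(-1.410 - 0.719 * math.log10(f_r))))

for t in range(22, 42):                      # 22 C to 41 C in 1 C steps
    print(f"T = {t:2d} C  ->  E_HMA = {hma_modulus(t):7.0f} MPa")
```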
Figure 6 illustrates the influence of pavement temperature on the tensile strain at the bottom of the asphalt layer. According to Figure 6, tensile strains increase by $122\%$ on average between $22^{\circ}\mathrm{C}$ and $41^{\circ}\mathrm{C}$, indicating that the variation in temperature increases the pavement strains. In Brazil, laboratories usually measure the HMA modulus at a temperature of $25^{\circ}\mathrm{C}$. Considering the average pavement temperature in summer, i.e., around $37^{\circ}\mathrm{C}$ at the center of the first asphalt sub-layer, the tensile strain at the bottom of the HMA layer (εht) is, on average, $47\%$ higher than that observed at $25^{\circ}\mathrm{C}$ for all aircraft. This demonstrates that the temperature used in Brazil for HMA modulus tests is not consistent with the average pavement temperatures, which could lead to adopting a modulus higher than the field condition during pavement design. When considering the FAA model (FAA, 2021), this difference in pavement strains results, on average, in a reduction of $92\%$ in fatigue life if the pavement is designed considering $25^{\circ}\mathrm{C}$.

With the reduction of the HMA modulus due to the pavement temperature, the compressive strains at the top of the subgrade (εvc) also increase, on average by 48% between $22^{\circ}\mathrm{C}$ and $41^{\circ}\mathrm{C}$, according to Figure 7. Rutting is not the main design criterion in this work, as the strains leading to fatigue are more critical, as previously presented; therefore, this temperature variation does not reduce the life of the pavement structure analyzed here. Nonetheless, designers should consider this variation for airports where rutting is the main failure criterion, since the increment in compressive strain at the top of the subgrade may reduce the pavement life for the rutting criterion, as mentioned by Zhang et al. (2023).

As fatigue is the primary design criterion in this study, a regression analysis was performed based on the data from Figure 6 for each aircraft used in the reliability analysis. Equation 9 shows the general form of the regression.

$$
\varepsilon_{ht_{i}} = \beta_{0} \cdot E_{HMA}^{\beta_{1}} \tag{9}
$$

Table 4 summarizes the regression coefficients for each aircraft, obtained using the least squares method. According to Table 4 and Equation 9, the regression analysis resulted in high coefficients of determination $(R^2)$ for expressing the tensile strains at the bottom of the asphalt layer. This equation gives the tensile strain at the bottom of the asphalt layer as a function of the HMA modulus.

# Reliability Analysis

This study performs a reliability analysis considering the normal distribution of temperature for each season and its influence on the tensile strain at the bottom of the HMA layer. The design period was divided into the four seasons of the year, analyzing each season's and each aircraft's contribution. A convergence analysis concluded that 5,000 samples are sufficient for the MCS, as shown in Figure 8.

According to Figure 8, the MCS reliability converges at approximately 1,500 samples, and the standard deviation of the MCS is less than $0.1\%$ for more than 2,000 samples. In this study, 5,000 samples were used for the reliability analysis to reduce the standard deviation (SD) of the random sample generation, which is approximately $0.09\%$ for 5,000 samples. Table 5 presents the average damage per aircraft and season obtained using the MCS.
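The sketch below strings these pieces together: it fits the power-law form of Equation 9 to hypothetical (modulus, strain) pairs and then runs a small Monte Carlo loop in the spirit of Equations 6-8, sampling seasonal pavement temperatures from normal distributions and counting the samples whose cumulative damage stays at or below one. The seasonal means are loosely based on Table 3's first sub-layer, while the standard deviations, the regression data, the traffic level, and the fatigue transfer function are placeholders; the actual study evaluates damage per aircraft with the FAA fatigue model.

```python
# Sketch of the reliability computation (Equations 6-9); numeric inputs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def fit_power_law(e_hma, strain):
    """Least-squares fit of Equation 9 (strain = b0 * E^b1) in log-log space."""
    b1, log_b0 = np.polyfit(np.log10(e_hma), np.log10(strain), 1)
    return 10.0 ** log_b0, b1

def hma_modulus(t_pav_c, freq_hz=1.0):
    """Equations 3-5 (see the earlier sketch)."""
    log_shift = 0.00116 * t_pav_c**2 - 0.19600 * t_pav_c + 3.45
    f_r = freq_hz * 10.0 ** log_shift
    return 10.0 ** (1.690 + 2.780 / (1.0 + np.exp(-1.410 - 0.719 * np.log10(f_r))))

def allowable_coverages(strain):
    """Placeholder fatigue transfer function; the study uses the FAA fatigue model instead."""
    return 1e-9 * strain ** -4.0

# Hypothetical regression data and seasonal pavement-temperature statistics (mean, SD in C)
beta0, beta1 = fit_power_law(np.array([500.0, 1500.0, 3000.0, 5500.0]),
                             np.array([4e-4, 2.2e-4, 1.4e-4, 9e-5]))
seasons = {"summer": (37.0, 2.0), "fall": (33.0, 2.5), "winter": (33.0, 3.0), "spring": (36.0, 2.0)}
coverages_per_season = 100_000        # placeholder traffic over the design period

n_samples, passed = 5_000, 0
for _ in range(n_samples):
    cdf = 0.0
    for mean_t, sd_t in seasons.values():
        t = rng.normal(mean_t, sd_t)                  # sampled seasonal pavement temperature
        strain = beta0 * hma_modulus(t) ** beta1      # Equation 9
        cdf += coverages_per_season / allowable_coverages(strain)
    passed += (cdf <= 1.0)                            # Equations 6-7
reliability = passed / n_samples                      # Equation 8
print(f"Reliability R = {reliability:.3f}")
```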
According to Table 5, due to the high temperatures and the reduced HMA modulus in summer, aircraft tend to cause more damage to the pavement structure; conversely, they tend to cause the least damage in winter. Thus, the total CDF computed through the MCS is $79\%$ lower than the value obtained using the FAARFIELD software. Considering the pavement temperature at CGH, all aircraft tend to cause less damage than expected. This occurs because of the thermal differential within the asphalt layer, i.e., the temperature decreases with the depth of the asphalt layer. Due to this reduction in pavement temperature, the bottom of the asphalt layer has a higher modulus and consequently experiences lower tensile strains, which are responsible for fatigue damage. According to White (2018), fatigue cracking is rarely encountered in airport pavements, which is consistent with these findings. The observations of Barbi, Tavassoti, and Tighe (2023) corroborate this result, even for regions without a pavement freezing period.

Considering the damage level observed using the MCS, the pavement designed using FAARFIELD could withstand traffic 2.5 times greater at $95\%$ reliability and 5.0 times greater at $50\%$ reliability, according to Figure 9. Clearly, traffic levels five times higher are unrealistic compared to field observations, which may indicate that the fatigue model is not suitable for Brazilian climate conditions and requires a laboratory-to-field calibration. Due to the lower damage obtained when considering the Brazilian climatic conditions at CGH, the design period could be approximately 50 years for $95\%$ reliability and 102 years for $50\%$ reliability. These design periods are much longer than those typically considered in airport pavement projects, which is corroborated by the statement of White (2018) that fatigue cracks are not common in these pavements. On the other hand, the fact that they are not common also indicates a failure in predicting the structural behavior, possibly increasing construction costs, which again points to the need for a laboratory-to-field calibration. Furthermore, the need to calibrate the fatigue equation used in the design method was also mentioned by Shen and Carpenter (2007).

These differences between the CDF from the FAA method and the values calculated via MCS in this study indicate that the fatigue model requires calibration for different regions. The fixed pavement temperature of $32^{\circ}\mathrm{C}$ throughout the design period is unrealistic because it does not account for the temperature differences between seasons.

# SUMMARY AND CONCLUSIONS

This study performed a pavement reliability analysis for CGH, the second busiest airport in Brazil. Furthermore, a sensitivity analysis was performed considering pavement temperature variation. This study showed that an increase in temperature also increases the pavement strains, both the tensile strain at the bottom of the asphalt layer and the compressive strain at the top of the subgrade. The reliability analysis showed that the pavement designed using FAARFIELD might result in an unrealistic design period for reliability levels below $95\%$, which may not be suitable for accurately predicting asphalt fatigue life and could increase pavement construction costs.

These unrealistic values may occur due to the absence of calibration of the fatigue performance prediction model used in the design software, indicating the need to calibrate the fatigue performance model for field conditions. Furthermore, Brazil exhibits significant climatic differences from north to south, and depending on the region analyzed, the pavement may experience different strains, which are not considered in the airport pavement design method.
The limitations of this study should be acknowledged to inform future research. The equation used in this study for pavement temperature is deterministic, always considering the same albedo, wind speed, and transmission through air. Future studies must verify whether this equation is adequate for Brazilian regions, especially considering actual temperature values at pavement depth. This study evaluates pavement reliability by considering only temperature variations; precipitation and the variability in pavement thickness were not considered, although they may influence pavement performance. Furthermore, this study did not perform fatigue tests varying the asphalt temperature and considered the standard stiffness value for P-401 for fatigue performance.

In future studies, the authors will focus on calibrating the fatigue equation for Brazilian climatic conditions. Laboratory tests are currently being conducted to evaluate the performance of asphalt mixtures under different temperature variations. In the next phase, a full-scale airport pavement structure will be constructed and monitored across different seasons. A digital twin, combining finite element analysis (FEA) and machine learning, will then be used to support the calibration process. Additionally, future studies can focus on verifying the temperature equation for Brazilian airports and incorporating actual temperature values at pavement depth. Furthermore, fatigue tests at different asphalt temperatures are recommended to verify fatigue behavior and enhance pavement design methods for Brazilian airports.

# DATA AVAILABILITY STATEMENT:

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

# NOTATION LIST

The following symbols are used in this paper:

$E$ = layer modulus, MPa
$E_{HMA}$ = asphalt layer modulus, MPa
$R$ = reliability
$T$ = temperature, °C
$T_{air}$ = air temperature, °C
$T_{d}$ = pavement temperature at depth $d$, °C
$T_{surf}$ = temperature at the pavement surface, °C
$d$ = depth, mm
$f$ = test frequency, Hz
$f_{r}$ = reduced frequency, Hz
$\alpha_{T}$ = shift factor at temperature $T$
$\beta_{0}, \beta_{1}$ = regression coefficients
$\varepsilon_{ht_{i}}$ = tensile strain at the bottom of the asphalt layer for aircraft $i$
Furthermore, fatigue tests at different asphalt temperatures are recommended to verify fatigue behavior and enhance pavement design methods for Brazilian airports.\n\n# DATA AVAILABILITY STATEMENT:\n\nSome or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.\n\n# NOTATION LIST\n\nThe following symbols are used in this paper:\n\n$$\nE = \\text {l a y e r m o d u l u s}, \\mathrm {M P a}\n$$\n\n$$\nE _ {H M A} = \\text {a s p h a l t l a y e r m o d u l u s}, \\mathrm {M P a}\n$$\n\n$$\nR = \\text {R e l i a b i l i t y}\n$$\n\n$$\nT = \\text {t e m p e r a t u r e}, ^ {\\circ} \\mathrm {C}\n$$\n\n$$\nT _ {A i r} = \\text {A i r t e m p e r a t u r e}, ^ {\\circ} \\mathrm {C}\n$$\n\n$$\nT _ {d} = \\text {P a v e m e n t} ^ {\\circ}\n$$\n\n$$\nT _ {s u r f} = \\text {T e m p e r a t u r e}\n$$\n\n$$\nd = \\text {d e p t h}, \\mathrm {m m}\n$$\n\n$$\nf = \\text {t e s t f r e q u e n c y , H z}\n$$\n\n$$\nf _ {r} = \\text {r e d u c e d f r e q u e n c y , H z}\n$$\n\n$$\n\\alpha_ {T} = \\text {s h i f t f a c t o r a t t h e t e m p e r a t u r e T}.\n$$\n\n$$\n\\beta_ {0}, \\beta_ {1} = \\text {R e g r e s s i o n c o e f f i c i e n t s}\n$$\n\n$$\n\\varepsilon_ {h t _ {i}} = \\text {t e n s i l e}\n$$\n\n# REFERENCE:\n\nAASHTO 2008. \"Mechanistic-empirical pavement design guide: A manual of practice\". Washington. \nAlavi, M.Z; Pouranian, M.R; Hajj, E.Y 2014. \"Prediction of Asphalt Pavement Temperature Profile with Finite Control Volume Method\". Transportation Research Record: Journal of the Transportation Research Board. DOI: 10.3141/2456-10. \nBarbi, P. S. R.; Tavassoti, P.; Tighe, S 2023. \"Enhanced Pavement Design and Analysis Framework to Improve the Resiliency of Flexible Airfield Pavements\". Transportation Research Record: Journal of the Transportation Research Board. DOI: 10.1177/03611981231155909. \nCheng, H; Liu, J; Sun, L; Liu, L; Zhang, Y 2020. \"Fatigue behaviors of asphalt mixture at different temperatures in four-point bending and indirect tensile fatigue tests\". Construction and Building Materials. DOI: https://doi.org/10.1016/j.conbuildmat.2020.121675 \nDilip, D.M; P; Babu, G.L.S 2013. \"Methodology for Pavement Design Reliability and Back Analysis Using Markov Chain Monte Carlo Simulation\". Journal of Transportation Engineering, ASCE. DOI: https://doi.org/10.1061/(ASCE)TE.1943-5436.0000455. \nDinegdae, T; Ahmed, A; Erlingsson, S 2022. \"Toward a comprehensive pavement reliability analysis approach\". Transportation Research Record: Journal of the Transportation Research Board. DOI: https://doi.org/10.1177/03611981231155179. \nFAA 2021. \"AC 150/5320 6G - Airport Pavement Design and Evaluation\". Washington.\n\nHasan, M.R.M; Hiller, J.E; You, Z 2015. \"Effects of mean annual temperature and mean annual precipitation on the performance of flexible pavement using ME design\". International Journal of Pavement Engineering. DOI: https://doi.org/10.1080/10298436.2015.1019504. \nHosseini, F; Nasimifar, M; Sivaneswaran, N; Golalipour, A 2024. \"Mutual impacts of changing climate and flexible pavement performance considering resilience and sustainable aspects\". Elsevier: Journal of Cleaner Production. DOI: https://doi.org/10.1016/j.jclepro.2024.142594. \nHuber, G.A 1994. \"SHRP-A-648A – Weather database for the SUPERPAVE mix design system\". Strategic Highway Research Program. Washington, DC. \nIoannides, A.M; Tingle, J.S 2021. \"Monte Carlo Simulation for flexible pavement reliability\". 
Airfield and Highway Pavements, ASCE. DOI: 10.1061/9780784483503.002.
Kodippilly, S.; Yeaman, J.; Henning, T.; Tighe, S. 2018. "Effects of extreme climatic conditions on pavement response". Road Materials and Pavement Design. DOI: 10.1080/14680629.2018.1552620.
Kuchiishi, A.K.; Vasconcelos, K.; Bernucci, L.B. 2019. "Effect of mixture composition on the mechanical behaviour of cold recycled asphalt mixtures". International Journal of Pavement Engineering. DOI: 10.1080/10298436.2019.1655564.
Liu, T.; Yang, S.; Jiang, X.; Liao, B.; Castillo-Camarena, E. 2023. "Adaptation measures for asphalt pavements to climate change in China". Journal of Cleaner Production. DOI: 10.1016/j.jclepro.2023.137861.
Luo, Y.; Wu, H.; Song, W.; Yin, J.; Zhan, Y.; Yu, J.; Wada, S.A. 2023. "Thermal fatigue and cracking behaviors of asphalt mixtures under different temperature variations". Construction and Building Materials. DOI: 10.1016/j.conbuildmat.2023.130623.
Maina, J.W.; Denneman, E.; De Beer, M. 2008. "Introduction of new road pavement response modelling software by means of benchmarking". Partnership for Research and Progress in Transportation, 27th Southern African Transport Conference (SATC), Pretoria, South Africa, July 7-11, 2008, pp. 1-14.
Maji, A.; Das, A. 2008. "Reliability considerations of bituminous pavement design by mechanistic-empirical approach". International Journal of Pavement Engineering. DOI: 10.1080/10298430600997240.
Norouzi, Y.; Ghasemi, S.H.; Nowak, A.S.; Jalayer, M.; Mehta, Y.; Chmielewski, J. 2022. "Performance-based design of asphalt pavements concerning the reliability analysis". Construction and Building Materials, Vol. 332. DOI: 10.1016/j.conbuildmat.2022.127393.
Qiao, Y.; Flintsch, G.; Dawson, A.; Parry, T. 2013. "Examining effects of climatic factors on flexible pavement performance and service life". Transportation Research Record: Journal of the Transportation Research Board, pp. 100-107. DOI: 10.3141/2349-12.
Shen, S.; Carpenter, S.H. 2005. "Application of the Dissipated Energy Concept in Fatigue Endurance Limit Testing". Transportation Research Record: Journal of the Transportation Research Board. DOI: 10.1177/0361198105192900120.
Shen, S.; Carpenter, S.H. 2007. "Development of an asphalt fatigue model based on energy principles". Asphalt Paving Technology.
Toan, T.D.; Long, N.H.; Wong, Y.D.; Nguyen, T. 2022. "Effects of variability in thickness and elastic modulus on the reliability of flexible pavement structural performance". International Journal of Pavement Engineering. DOI: 10.1080/10298436.2022.2039923.
White, G. 2018. "State of the art: asphalt for airport pavement surfacing". International Journal of Pavement Research and Technology. DOI: 10.1016/j.ijprt.2017.07.008.
Yang, Q.; Cao, Z.; Shen, L.; Gu, F.; Santos, J.; Qiao, Y.; Wang, H.; Li, J.; Zhang, Y.; Chu, C. 2024. "Impacts of climate change on environmental and economic sustainability of flexible pavements across China". Resources, Conservation & Recycling. DOI: 10.1016/j.resconrec.2024.107589.
Zhang, C.; Tan, Y.; Gao, Y.; Fu, Y.; Li, J.; Li, S.; Zhou, X. 2022. "Resilience assessment of asphalt pavement rutting under climate change". Transportation Research Part D. DOI: 10.1016/j.trd.2022.103395.
Zhang, K.; Wang, S.; Yang, W.; Zhong, X.; Liang, S.; Tang, Z.; Quan, W. 2023.
\"Influence of temperature and humidity coupling on rutting deformation of asphalt pavement\". Science and Engineering of composite materials. DOI: https://doi.org/10.1515/secm-2022-0232. \nZhang, Q; Yang, S; Chen, G 2024. \"Regional variations of climate change impact on asphalt pavement rutting distress\". Elsevier: Transportation Research Part D. DOI: https://doi.org/10.1016/j.trd.2023.103968. \nZhuang, C; Guo, H; Zhao, S; Shu, S; Ye, Y; Xing, B 2024. \"Study on fatigue performance of asphalt mixture in service life based on accelerated loading test\". Construction and Building Materials. DOI: https://doi.org/10.1016/j.cscm.2024.e03055.\n\n# TABLES:\n\nTable 1. Pavement thickness and modulus \n\n<table><tr><td>Layer</td><td>FAARFIELD Material</td><td>Thickness (mm)</td><td>E (MPa)</td></tr><tr><td>HMA</td><td>P-401 / P403</td><td>250</td><td>1,378.95</td></tr><tr><td>Crushed Aggregate</td><td>P-209</td><td>400</td><td>508.87</td></tr><tr><td>Uncrushed Aggregate</td><td>P-154</td><td>420</td><td>148.92</td></tr><tr><td>Soil</td><td>Subgrade</td><td>-</td><td>103.42</td></tr></table>\n\nTable 2. CDF Contribution for each aircraft \n\n<table><tr><td>Aircraft</td><td>Fatigue CDF Contribution</td><td>Subgrade CDF Contribution</td><td>P/C Ratio</td></tr><tr><td>B738</td><td>0.43</td><td>0.00</td><td>2.01</td></tr><tr><td>A320</td><td>0.18</td><td>0.00</td><td>2.06</td></tr><tr><td>A319</td><td>0.24</td><td>0.00</td><td>2.04</td></tr><tr><td>B737</td><td>0.14</td><td>0.00</td><td>2.04</td></tr><tr><td>E195</td><td>0.01</td><td>0.00</td><td>2.12</td></tr></table>\n\nTable 3. HMA modulus for each season and layers \n\n<table><tr><td rowspan=\"2\">Asphalt Layer</td><td rowspan=\"2\">Variable</td><td colspan=\"4\">Season</td></tr><tr><td>Summer</td><td>Fall</td><td>Winter</td><td>Spring</td></tr><tr><td rowspan=\"2\">1</td><td>Average Pavement Temperature (°C)</td><td>37</td><td>33</td><td>33</td><td>36</td></tr><tr><td>HMA modulus (MPa)</td><td>720</td><td>1217</td><td>1217</td><td>817</td></tr><tr><td rowspan=\"2\">2</td><td>Average Pavement Temperature (°C)</td><td>35</td><td>30</td><td>30</td><td>33</td></tr><tr><td>HMA modulus (MPa)</td><td>930</td><td>1855</td><td>1855</td><td>1217</td></tr><tr><td rowspan=\"2\">3</td><td>Average Pavement Temperature (°C)</td><td>32</td><td>28</td><td>28</td><td>31</td></tr><tr><td>HMA modulus (MPa)</td><td>1398</td><td>2470</td><td>2470</td><td>1609</td></tr><tr><td rowspan=\"2\">4</td><td>Average Pavement Temperature (°C)</td><td>31</td><td>27</td><td>26</td><td>29</td></tr><tr><td>HMA modulus (MPa)</td><td>1609</td><td>2849</td><td>3282</td><td>2140</td></tr><tr><td rowspan=\"2\">5</td><td>Average Pavement Temperature (°C)</td><td>29</td><td>25</td><td>25</td><td>28</td></tr><tr><td>HMA modulus (MPa)</td><td>2140</td><td>3774</td><td>3774</td><td>2470</td></tr><tr><td rowspan=\"2\">6</td><td>Average Pavement Temperature (°C)</td><td>28</td><td>24</td><td>24</td><td>26</td></tr><tr><td>HMA modulus (MPa)</td><td>2470</td><td>4328</td><td>4328</td><td>3282</td></tr><tr><td rowspan=\"2\">7</td><td>Average Pavement Temperature (°C)</td><td>26</td><td>23</td><td>23</td><td>25</td></tr><tr><td>HMA modulus (MPa)</td><td>3282</td><td>4948</td><td>4948</td><td>3774</td></tr><tr><td rowspan=\"2\">8</td><td>Average Pavement Temperature (°C)</td><td>25</td><td>22</td><td>21</td><td>24</td></tr><tr><td>HMA modulus (MPa)</td><td>3774</td><td>5636</td><td>6390</td><td>4328</td></tr><tr><td rowspan=\"2\">9</td><td>Average Pavement Temperature 
(°C)</td><td>24</td><td>20</td><td>20</td><td>22</td></tr><tr><td>HMA modulus (MPa)</td><td>4328</td><td>7211</td><td>7211</td><td>5636</td></tr><tr><td rowspan=\"2\">10</td><td>Average Pavement Temperature (°C)</td><td>22</td><td>19</td><td>18</td><td>21</td></tr><tr><td>HMA modulus (MPa)</td><td>5636</td><td>8093</td><td>9034</td><td>6390</td></tr></table>\n\nTable 4. Regression analysis \n\n<table><tr><td>Aircraft</td><td>β0</td><td>β1</td><td>R²</td></tr><tr><td>B738</td><td>0.00209</td><td>-0.31399</td><td>0.997</td></tr><tr><td>A320</td><td>0.00183</td><td>-0.31498</td><td>0.997</td></tr><tr><td>A319</td><td>0.00196</td><td>-0.31428</td><td>0.997</td></tr><tr><td>B737</td><td>0.00207</td><td>-0.32418</td><td>0.997</td></tr><tr><td>E195</td><td>0.001653</td><td>-0.33706</td><td>0.996</td></tr></table>\n\nTable 5. Average damage per season using MCS. \n\n<table><tr><td rowspan=\"2\">Aircraft</td><td colspan=\"4\">Damage per season</td><td rowspan=\"2\">CDF MCS</td><td rowspan=\"2\">CDF FAARFIELD</td></tr><tr><td>Summer</td><td>Fall</td><td>Winter</td><td>Spring</td></tr><tr><td>B738</td><td>0.028</td><td>0.016</td><td>0.019</td><td>0.027</td><td>0.090</td><td>0.430</td></tr><tr><td>B737</td><td>0.009</td><td>0.005</td><td>0.006</td><td>0.009</td><td>0.029</td><td>0.140</td></tr><tr><td>A320</td><td>0.012</td><td>0.007</td><td>0.008</td><td>0.012</td><td>0.039</td><td>0.180</td></tr><tr><td>A319</td><td>0.016</td><td>0.009</td><td>0.010</td><td>0.015</td><td>0.050</td><td>0.240</td></tr><tr><td>E195</td><td>0.001</td><td>0.000</td><td>0.000</td><td>0.001</td><td>0.002</td><td>0.010</td></tr><tr><td>Total</td><td>0.066</td><td>0.037</td><td>0.043</td><td>0.064</td><td>0.210</td><td>1.000</td></tr></table>\n\n# FIGURE CAPTION LIST:\n\nFig. 1. Procedure framework of this study. \nFig. 2. Annual operations at the airport. \nFig. 3. Air temperature variation over time. \nFig. 4. Air temperatures throughout the seasons in the Southern Hemisphere. \nFig. 5. Kolmogorov-Smirnov test for air temperature data across seasons In the Southern Hemisphere. \nFig. 6. Influence of pavement temperature on tensile strains in the asphalt layer. \nFig. 7. Influence of pavement temperature on compressive strains in the subgrade. \nFig. 8a. Convergence analysis for the MCS. \nFig. 8b. Standard deviation for the MCS. \nFig. 9. Reliability and design period extension.\n\n![](images/006f2a5cb922ee2815359164044b0a8c442a99fcd170a36d754e931eac2bb20f.jpg) \nFIGURE FILES: \nFig. 1. Procedure framework of this study.\n\n![](images/28bf43293a7614b1e0a0ed4783a680972c7993b297ff32abd3b0c0b0a13487d8.jpg) \nFig. 2. Annual operations at the airport.\n\n![](images/e78cf25e78155c18f83cc6262cce15420624444575b835114d14902744b15b39.jpg) \nFig. 3. Air temperature variation over time\n\n![](images/04fff87584d7b46cee900c1ad091421774794b51f04ac863dcdc1261fd3a91b5.jpg) \nFig. 4. Air temperatures throughout the seasons in the Southern Hemisphere\n\n![](images/5ca60d23c7fd4d7d193744a5a00a0a29d63710ac6c4a875b535407778f57e622.jpg)\n\n![](images/0f6ad45126a08663d97762f48c2fd7da92328a31e27afdac80940396a9c07d21.jpg)\n\n![](images/86809e75280ee23bd0dd46fdc53af815573ee443d006d01bfc5284dcd5437fbc.jpg) \nFig. 5. Kolmogorov-Smirnov test for air temperature data across seasons in the Southern Hemisphere\n\n![](images/efdcbd89698ab86bdd95e87e2fc0cef688e2f936f6d8438e5a854259aef5f910.jpg)\n\n![](images/8f262f1f55c9adba0c1951596c687e597a28f70670160b7ca4c05f7cef3b5f56.jpg) \nFig. 6. 
Influence of pavement temperature on tensile strains in the asphalt layer.\n\n![](images/3508a4c290ca62d0855cf03827e5b952df00961082c649c9b60f423f1c5af53e.jpg) \nFig. 7. Influence of pavement temperature on compressive strains in the subgrade\n\n![](images/c144f34164db8cb5aa7893fa0ba3f1b17ff0735c609b7a52caffed6b272d395b.jpg) \n(a)\n\n![](images/0a4c24c23f2230c01577e84bbf13feb5ad48ceb781ce45bdb838767cc7772958.jpg) \n(b) \nFig. 8. Convergence analysis (a) and standard deviation (b) for the MCS\n\n![](images/286ca195af03820da1c6df14270c41b76d7c9b9c0c66ff33a210bb849557b20a.jpg) \nFig. 9. Reliability and design period extension"}
# Information-Theoretic Constraints on Variational Quantum Optimization: Efficiency Transitions and the Dynamical Lie Algebra

Tan Jun Liang$^{1,*}$

$^{1}$School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072, Australia

(Dated: December 18, 2025)

Variational quantum algorithms are the leading candidates for near-term quantum advantage, yet their scalability is limited by the "Barren Plateau" phenomenon. While traditionally attributed to geometric vanishing gradients, we propose an information-theoretic perspective. Using ancilla-mediated coherent feedback, we demonstrate an empirical constitutive relation $\Delta E\leq \eta I(S:A)$ linking work extraction to mutual information, with quantum entanglement providing a factor-of-2 advantage over classical Landauer bounds. By scaling the system size, we identify a distinct efficiency transition governed by the dimension of the Dynamical Lie Algebra. Systems with polynomial algebraic complexity exhibit sustained positive efficiency, whereas systems with exponential complexity undergo an "efficiency collapse" $(\eta \rightarrow 0)$ at $N\approx 6$ qubits. These results suggest that the trainability boundary in variational algorithms correlates with information-theoretic limits of quantum feedback control.

# INTRODUCTION

Variational Quantum Algorithms (VQAs) represent the primary strategy for achieving near-term quantum advantage, aiming to solve optimization problems by encoding them into the ground state of a Hamiltonian. However, their scalability is fundamentally limited by the "Barren Plateau" phenomenon, where gradients vanish exponentially with system size. While standard analyses characterize this barrier as a geometric concentration of measure in high-dimensional Hilbert spaces, these descriptions lack a unifying physical mechanism that explains the transition from trainability to intractability.

Recent theoretical advances have identified the Dynamical Lie Algebra (DLA) as the key structural predictor of trainability. Specifically, circuits generating a polynomially scaling DLA are proven to escape barren plateaus, while those generating exponentially scaling DLAs succumb to them. We leverage this algebraic classification to probe the thermodynamic stability of these distinct regimes.

Here, we propose that computational hardness in quantum circuits correlates with information-theoretic constraints on feedback control. By treating the ancilla interaction as an effective environment, we recover nonlinear control dynamics via information erasure, consistent with open-system decoherence approaches to measurement.

We reframe the variational optimizer not as a mathematical function, but as a quantum Maxwell's Demon. To isolate the thermodynamic contribution, we implement a decoupled 'Coherent Feedback' protocol that fixes the actuation strength while varying the sensing duration, thereby proving that the optimization work is causally driven by the mutual information channel capacity.

In this framework, the feedback loop acts as a thermodynamic engine: it extracts entropy from the system to lower its Hamiltonian expectation value $\langle H\rangle$ (Work), fueled by the mutual information established between the system and the control ancilla. This approach builds upon the theoretical framework of "Daemonic Ergotropy", which established that quantum correlations (discord and entanglement) can enhance work extraction beyond classical limits.
However, while previous studies focused on single-qubit engines or small thermal baths, the thermodynamic implications for algorithmic complexity remain unexplored. Here, we extend this principle to the thermodynamic limit of many-body optimization. We ask: Does the capacity to extract work scale indefinitely, or does it face a critical complexity barrier? By comparing systems with polynomial vs. exponential algebraic complexity, we demonstrate that information-driven work extraction is not a guaranteed resource but is strictly bounded by the algebraic structure of the problem Hamiltonian.

Crucially, this ancilla-mediated feedback induces an effective non-linearity in the parameter update trajectory. We demonstrate this via a controlled experiment comparing interacting vs. non-interacting Hamiltonians (see Methods). For a separable Hamiltonian $H = Z_0 + Z_1$, where the phase factors as $e^{-i(E_0 + E_1)t} = e^{-iE_0t} \cdot e^{-iE_1t}$, we observe zero mutual information between parameter qubits:

$$
\left. I(P_0 : P_1)\right|_{H = Z_0 + Z_1} = 0 \quad (\text{Linear / Separable}) \tag{1}
$$

In contrast, for an interacting Hamiltonian $H = Z_0Z_1$ where the phase depends on the product $E_0\cdot E_1$, we observe significant entanglement:

$$
\left. I(P_0 : P_1)\right|_{H = Z_0 Z_1} = 1.74 \text{ bits} \quad (\text{Non-Linear / Entangled}) \tag{2}
$$

This result demonstrates that non-linearity arises specifically from interaction terms in the Hamiltonian that couple different parameter subspaces. The W-gate protocol (see Methods) successfully transfers this phase information back to the parameter register, enabling coherent interference between parameter configurations.

By analyzing the scaling of this feedback mechanism, we identify a distinct efficiency transition governed by the dimension of the Dynamical Lie Algebra (DLA). We observe that systems with polynomial algebraic complexity (Ordered phase) exhibit "constructive information scaling," where efficiency increases with system size. In contrast, systems with exponential complexity (Chaotic phase) undergo an "Efficiency Collapse." This suggests that the trainability boundary in variational algorithms correlates with the information channel capacity of the feedback controller.

Alternative strategies, such as the Quantum Walk-based Optimization Algorithm (QWOA), have sought to overcome local minima by exploiting coherent tunneling and variable-time evolution. Our framework unifies the QWOA paradigm with variational optimization by showing that the ancilla acts as a quantum walk "coin qubit" controlling trap-diffusion dynamics. While these dynamic approaches improve exploration, the fundamental thermodynamic bounds governing their efficiency in the many-body limit—specifically the transition from coherent flow to information scrambling—remain uncharacterized.

# THERMODYNAMIC CONSTITUTIVE RELATIONS

The mechanism by which the ancilla-mediated optimizer extracts work is not through explicit gradient direction sensing, but through a trap-diffusion mechanism analogous to discrete-time quantum walks. In a quantum walk, a "coin qubit" controls whether amplitude moves forward or backward; the Hadamard coin creates superposition enabling quadratic speedup over classical diffusion ( $\sigma^2 \propto T^2$ vs. $\sigma^2 \propto T$ ).
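As a sanity check on this scaling claim (not part of the paper's protocol), a few lines of NumPy simulate a Hadamard-coin walk on a line and compare its spread with the classical $\sqrt{T}$ behaviour; the initial coin state and the shift convention are arbitrary illustrative choices.

```python
import numpy as np

def walk_sigma(steps):
    """Position spread of a 1D discrete-time quantum walk with a Hadamard coin."""
    npos = 2 * steps + 1                       # reachable sites: -steps ... +steps
    amp = np.zeros((2, npos), dtype=complex)   # amp[coin, position]
    amp[0, steps] = 1 / np.sqrt(2)             # symmetric start (|0> + i|1>)/sqrt(2) at x = 0
    amp[1, steps] = 1j / np.sqrt(2)
    coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = coin @ amp                       # toss the coin at every site
        nxt = np.zeros_like(amp)
        nxt[0, :-1] = amp[0, 1:]               # coin |0>: step left
        nxt[1, 1:] = amp[1, :-1]               # coin |1>: step right
        amp = nxt
    prob = (np.abs(amp) ** 2).sum(axis=0)
    x = np.arange(-steps, steps + 1)
    return np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)

for T in (10, 20, 40, 80):
    print(f"T={T:3d}  quantum sigma = {walk_sigma(T):6.2f}   classical sigma = {np.sqrt(T):5.2f}")
```

Doubling $T$ doubles the quantum spread but multiplies the classical one only by $\sqrt{2}$, i.e., ballistic $\sigma^2 \propto T^2$ versus diffusive $\sigma^2 \propto T$.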
We leverage this principle for optimization: the ancilla acts as a coin qubit controlling whether amplitude diffuses (explores parameter space) or traps (concentrates at low-energy configurations).

We first establish the source of non-linearity via a minimal 4-qubit experiment. For a non-interacting Hamiltonian $H = Z_{0} + Z_{1}$, the evolution phase factors separately, yielding zero correlation between parameter qubits (Eq. 1). For an interacting Hamiltonian $H = Z_{0}Z_{1}$, the coupled phase creates entanglement, yielding $I(P_{0}:P_{1}) = 1.74$ bits (Eq. 2). This demonstrates that non-linearity requires interaction terms that couple parameter subspaces—a necessary condition for the trap-diffusion mechanism to operate.

# The Trap-Diffusion Mechanism

The core insight connecting our protocol to quantum walk speedup lies in the controlled-operation structure. In our W-gate protocol (see Methods), the circuit applies:

$$
U_{\text{step}} = U_{\text{mixer}}^{(c)} \cdot U_{\text{drift}}^{(c)} \tag{3}
$$

where both the mixer (amplitude diffusion across parameter space) and drift (Hamiltonian evolution) are controlled by the ancilla state. This creates the following dynamics:

- Ancilla $|0\rangle$ branch: Receives mixer operation $\rightarrow$ amplitude diffuses
- Ancilla $|1\rangle$ branch: Receives drift operation $\rightarrow$ amplitude traps at current energy

The sensing phase entangles the ancilla state with the system energy: low-energy configurations bias the ancilla toward states that receive more mixing, while high-energy configurations bias toward trapping. Critically, the ancilla does not measure "which direction is downhill"—it measures energy magnitude via the Hadamard test $\langle \cos (E\tau)\rangle$. The gradient direction emerges statistically from the asymmetric survival probability: amplitude at low-energy configurations diffuses freely, while amplitude at high-energy configurations is trapped and eventually decoheres upon ancilla reset.

This mechanism is mathematically equivalent to Grover's amplitude amplification, where the oracle marks "good" states (low energy) and the diffuser redistributes amplitude. Our protocol realizes this as a continuous process: the controlled-mixer acts as the diffuser, and the energy-dependent sensing acts as a soft oracle that preferentially marks high-energy states for trapping.

Crucially, we observe that the work extraction correlates with Logarithmic Negativity ($R \approx 0.9$), a strict entanglement monotone. A purely classical feedback loop (zero entanglement) would yield zero work in this protocol, confirming the quantum nature of the mechanism.

To rigorously characterize the thermodynamic cost of this non-linearity, we implemented a controlled "Coherent Feedback" protocol (see Methods) designed to decouple the information gathering phase from the feedback actuation. Unlike standard variational updates where parameters are modified classically, our protocol utilizes a coherent conditional rotation $(CR_X)$ acting directly on the system, triggered by the ancilla's state. Crucially, we fixed the feedback actuation strength (the "kick") to a constant value and varied only the sensing duration $dt$. This isolates the information content as the sole variable driving the performance difference.
We observe that the extracted work $(-\Delta \langle H \rangle)$ scales linearly with the mutual information $I(S:A)$ established between the system and the ancilla, yielding a strict constitutive Equation of State:

$$
\Delta E \leq \eta(\mathcal{H}, \mathcal{A}) \cdot I(S:A) \tag{4}
$$

Here, the proportionality constant $\eta$ represents the "Algorithmic Efficiency" or informational conductance of the ansatz-Hamiltonian pair. Our data indicates a robust linear correlation ($R^2\approx 0.90$), confirming that the optimization rate is fundamentally limited by the information channel capacity of the probe, rather than the driving force of the optimizer.

FIG. 1. The Thermodynamic Constitutive Law. Extracted Work $(-\Delta \langle H\rangle)$ vs. Mutual Information $I(S:A)$ for a 4-qubit transverse Ising system. The robust linear relationship ($R^2\approx 0.90$, slope $\eta = 0.247$ energy/bit) confirms the constitutive law $\Delta E\leq \eta I$. The sensing time $\tau$ was varied from 0 to 1.5 while the feedback strength was held constant at $\theta_{\text{gain}} = 0.5$ rad, isolating information as the sole variable driving work extraction.

To determine the physical nature of these correlations, we measured the Logarithmic Negativity $E_N$, a strict monotone of quantum entanglement which is zero for all separable (classical) states. We find that the work extraction correlates directly with the generated entanglement:

$$
\Delta E \propto E_{N}\left(\rho_{S:A}\right) = \log_{2}\left\|\rho^{\Gamma_{A}}\right\|_{1} > 0 \tag{5}
$$

The persistence of this correlation ($R \approx 0.90$) and the strictly non-zero value of $E_N$ confirms that the mechanism relies specifically on quantum interference effects. This distinguishes the VQA optimizer from a classical Szilard engine, establishing that quantum entanglement is the essential thermodynamic resource driving the cooling process.

To verify that this mechanism provides genuine thermodynamic advantage, we computed the Landauer erasure cost. Remarkably, we observe a constant ratio $I(S:A) / S(A) = 2.00$ across all sensing times (Fig. 2). This ratio of exactly 2 is the hallmark of pure bipartite entanglement, where $I(S:A) = 2S(A)$. Since the Landauer cost for erasing the ancilla is $W_{cost} = k_{B}T\cdot S(A)$, while the extractable work scales as $W_{ext} \propto I(S:A)$, the quantum correlations provide a factor-of-2 advantage over the classical limit, yielding net positive work after accounting for erasure.

FIG. 2. Quantum Advantage via Landauer Analysis. Extracted work (blue) vs. Landauer erasure cost (red dashed) as a function of sensing time $\tau$. The green shaded region indicates net positive work after accounting for information erasure. The constant ratio $I(S:A)/S(A)=2.00$ confirms that quantum entanglement (not classical correlation) fuels the engine.

The mutual information acquired by the ancilla is not arbitrary; it corresponds to a direct measurement of the local Fubini-Study metric tensor $g_{ij}$. The sensing protocol projects the local curvature of the quantum state manifold onto the ancilla's Z-basis, establishing a rigorous link between the abstract information-theoretic fuel and the concrete geometry of the Hilbert space.
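The decoupled protocol can be mocked up end to end on a toy instance. The sketch below is our own construction, not the repository code: it uses 2 system qubits plus 1 ancilla, an arbitrary transverse-field Ising Hamiltonian and initial product state, and aims the $CR_X$ kick at the first system qubit, so the fitted slope will not reproduce the reported $\eta = 0.247$. What it shows operationally is how $W$, $I(S:A)$, $S(A)$, $E_N$ and the $\eta$ of Eq. (4) are defined, and that $I(S:A) = 2S(A)$ holds identically for a globally pure state, consistent with the ratio in Fig. 2.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H_GATE = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def entropy_bits(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Illustrative 2-qubit system: couplings, fields, and initial state are assumptions.
J, h = 1.0, 0.5
H_S = -J * np.kron(Z, Z) + h * (np.kron(X, I2) + np.kron(I2, X))
psi_S = np.kron(ry(0.7) @ [1, 0], ry(1.1) @ [1, 0])      # fixed product state
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def demon_cycle(tau, theta_gain=0.5):                    # 0.5 rad as in the Fig. 1 caption
    """One sensing -> correlation -> coherent-feedback cycle (ancilla (x) system)."""
    psi = np.kron(plus, psi_S)
    # 1) Sensing: U = |0><0| (x) I + |1><1| (x) exp(-i H_S tau)
    U_sense = np.kron(P0, np.eye(4)) + np.kron(P1, expm(-1j * H_S * tau))
    # 2) Correlation: Hadamard on the ancilla stores phase info in populations
    psi = np.kron(H_GATE, np.eye(4)) @ (U_sense @ psi)
    rho = np.outer(psi, psi.conj())
    R = rho.reshape(2, 4, 2, 4)                          # indices (a, s, a', s')
    rho_S, rho_A = np.einsum('asat->st', R), np.einsum('asbs->ab', R)
    I_SA = entropy_bits(rho_S) + entropy_bits(rho_A) - entropy_bits(rho)
    E_N = np.log2(np.abs(np.linalg.eigvalsh(
        R.transpose(2, 1, 0, 3).reshape(8, 8))).sum())   # log-negativity, Eq. (5)
    # 3) Feedback: CR_X(theta_gain) from the ancilla onto system qubit 0 (assumed target)
    U_kick = np.kron(P0, np.eye(4)) + np.kron(P1, np.kron(rx(theta_gain), I2))
    psi_f = U_kick @ psi
    rho_Sf = np.einsum('asat->st', np.outer(psi_f, psi_f.conj()).reshape(2, 4, 2, 4))
    W = np.trace(H_S @ rho_S).real - np.trace(H_S @ rho_Sf).real
    return I_SA, entropy_bits(rho_A), E_N, W

taus = np.linspace(0.0, 1.5, 7)
data = np.array([demon_cycle(t) for t in taus])
I_vals, S_A, E_N, W_vals = data.T
eta = np.polyfit(I_vals, W_vals, 1)[0]                   # slope of W vs I, cf. Eq. (4)
print("I(S:A)/S(A):", np.round(I_vals[1:] / S_A[1:], 2))  # = 2 for a globally pure state
print(f"fitted efficiency eta = {eta:.3f} (energy per bit)")
```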
# THE COMPLEXITY-DEPENDENT EFFICIENCY TRANSITION

To investigate the limits of this feedback mechanism, we analyzed the scaling of the algorithmic efficiency $\eta$ across two distinct topological classes of Hamiltonians as a function of system size $N$.

We compared an "Ordered" system (Complete Graph $K_{n}$) characterized by a polynomial Dynamical Lie Algebra (DLA) dimension $(O(N^{3}))$ against a "Chaotic" system (Sherrington-Kirkpatrick Spin Glass) characterized by an exponential DLA dimension $(O(4^{N}))$. While the system sizes $(N \leq 8)$ are limited by the exponential cost of classical simulation, the observed efficiency collapse in chaotic systems is consistent with the ancilla channel capacity becoming insufficient to resolve the scrambled gradient information.

The Lie algebraic classification of trainability was recently unified by Ragone et al., who proved the exact variance formula $\operatorname{Var}[\ell] = \mathcal{P}_{\mathfrak{g}}(\rho) \cdot \mathcal{P}_{\mathfrak{g}}(O) / \dim(\mathfrak{g})$, establishing that $\dim(\mathfrak{g}) \in \Omega(b^n)$ with $b > 2$ implies a barren plateau. Our efficiency metric $\eta$ provides a complementary thermodynamic diagnostic: while Ragone et al. predict trainability from gradient variance, we directly measure the information-to-work conversion capacity of the feedback loop. Both observables collapse when DLA dimension grows exponentially, suggesting a unified information-theoretic origin for barren plateaus.

FIG. 3. The Complexity-Dependent Efficiency Transition. Normalized algorithmic efficiency $(\eta / N^2)$ vs. system size $N$ for Ordered (Ferromagnet, blue) and Chaotic (Spin Glass, red) Hamiltonians. The Ordered phase maintains positive efficiency, while the Chaotic phase undergoes efficiency collapse, with efficiency approaching zero at $N \geq 6$. Error bars represent $\pm 1\sigma$ over 5 random seeds. The transition is consistent with the ancilla channel capacity (1 bit) becoming insufficient to resolve the scrambled gradient information.

We observe a sharp bifurcation in thermodynamic behavior (Fig. 3). For the Ordered system, we observe a regime of "Constructive Information Scaling," where the efficiency $\eta$ increases with system size, indicating that the computational resistance to information flow decreases as the Hilbert space grows. In this regime, the growing algebraic structure provides additional pathways for the Demon to navigate the Hilbert space without exceeding its information channel capacity. This counter-intuitive scaling arises because the polynomial DLA provides a dense network of symmetry-protected pathways through the Hilbert space, effectively reducing the ergodic search volume.

In stark contrast, the Chaotic system undergoes an "Efficiency Collapse." As the system size increases, the efficiency drops precipitously, crossing zero at a critical size $N_{c} \approx 6.4$. At this point, the rate of operator spreading (scrambling) generates new independent gradient directions faster than the single-qubit ancilla (1 bit/cycle) can interrogate them. Beyond $N_{c}$, the information generation rate exceeds the channel capacity, and the ancilla cannot gather sufficient information to guide the trajectory.

To quantify this transition, we introduce the Complexity Specific Heat, defined as the susceptibility of algorithmic efficiency to system size increase:

$$
\chi_{\mathrm{comp}} \equiv \frac{\partial \eta}{\partial N} \tag{6}
$$

This quantity is analogous to thermal specific heat $C = \partial \langle E \rangle / \partial T$, measuring the "response" of the optimization engine to changes in problem complexity. In the Ordered phase, $\chi_{\mathrm{comp}} > 0$ (efficiency increases with $N$), while in the Chaotic phase, $\chi_{\mathrm{comp}} < 0$ and diverges at $N_c$, signaling a thermodynamic instability.
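Both quantities are easy to evaluate numerically. The sketch below uses made-up placeholder values of $\eta(N)$ (not the paper's data) purely to show how $\chi_{\mathrm{comp}}$ of Eq. (6) is estimated by finite differences and how $N_c$ is located by the linear interpolation described in the Methods.

```python
import numpy as np

# Hypothetical efficiency-vs-size curve for a chaotic instance (placeholder values,
# NOT the paper's data): eta turns negative somewhere between N = 6 and N = 7.
N   = np.array([3, 4, 5, 6, 7, 8], dtype=float)
eta = np.array([0.30, 0.24, 0.15, 0.03, -0.05, -0.11])

# Complexity specific heat, Eq. (6): finite-difference estimate of d(eta)/dN.
chi_comp = np.gradient(eta, N)

# Critical size N_c: linear interpolation between the last positive and the
# first negative efficiency value, as in the "Efficiency Calculation" subsection.
i = np.where(eta > 0)[0][-1]
N_c = N[i] + eta[i] * (N[i + 1] - N[i]) / (eta[i] - eta[i + 1])

for n_, c in zip(N.astype(int), chi_comp):
    print(f"N = {n_}: chi_comp ~ {c:+.3f}")
print(f"estimated N_c = {N_c:.2f}")
```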
This efficiency collapse provides a complementary diagnostic to the Quantum Fisher Information approach of Abbas et al., who showed that the effective dimension $d_{\mathrm{eff}}$ (derived from QFIM eigenvalues) correlates with model trainability. While their metric captures parameter redundancy, our efficiency $\eta$ directly measures the information-to-work conversion capacity of the feedback loop.

The Ordered phase typically corresponds to Hamiltonians generating a Classical Lie Algebra (e.g., types $\mathfrak{so}_n, \mathfrak{sp}_n$). These algebras possess a rigid root system structure that restricts the dimension to scale polynomially with system size, $d \sim O(n^2)$. The 'Information Superconductivity' in the Ordered phase arises because the optimization trajectory is dynamically constrained to low-dimensional Coadjoint Orbits of the polynomial algebra. Unlike the full Hilbert space, these symplectic submanifolds have polynomial volume, ensuring that the ergodic coverage time—and thus the thermodynamic search cost—remains finite.

For the Ordered phase, we utilized the Complete Graph Hamiltonian $(K_{n})$. Recent algebraic analysis by Allcock et al. proves the exact dimension $\dim (\mathfrak{g}_{K_n}) = \frac{1}{12} (n^3 + 6n^2 + O(n))$, placing it firmly within the polynomially tractable regime.

# MICROSCOPIC ORIGIN

To determine the microscopic driver of this thermodynamic collapse, we analyzed the structural statistics of the operators comprising the Dynamical Lie Algebra for both topological classes. Specifically, we computed the distribution of the Pauli weights (Hamming weights) for the basis operators generated during the Lie closure process.

We observe that the macroscopic efficiency crash in the chaotic phase corresponds to a microscopic regime of maximal operator scrambling. In the Ordered $(K_{n})$ phase, the DLA operators remain sparsely supported, preserving a low average Pauli weight even as the system size scales. This sparsity implies that the information relevant to the optimization gradient remains localized in the Hilbert space, accessible to the finite-bandwidth probe of the ancilla.

In contrast, the Chaotic phase exhibits an explosive growth in operator density. The average Pauli weight of the DLA generators converges rapidly to $\sim N / 2$, indicating that the gradient information is delocalized or "scrambled" across non-local multi-body correlations.

FIG. 4. DLA Efficiency Scaling. Comparison of thermodynamic efficiency for Ordered (Complete Graph, polynomial DLA $\sim O(N^3)$) vs. Chaotic (Spin Glass, exponential DLA $\sim O(4^N)$) systems. The polynomial scaling of the algebra ensures sustained positive efficiency, while the exponential algebra exhibits the efficiency crash observed in Fig. 3.

This scrambling imposes a fundamental limit on the feedback cycle. The single-qubit ancilla acts as a low-rank probe with a channel capacity of 1 bit per measurement cycle. In the Ordered regime, the relevant information is compressed into a polynomial subspace, allowing efficient extraction. In the Chaotic regime, the information is delocalized across complex, high-weight correlations. The controller, limited by its channel capacity, cannot resolve this delocalized information, resulting in vanishing mutual information $I(S:A) \to 0$ and consequent cessation of work extraction. Thus, the efficiency transition is driven by the physical scrambling rate exceeding the information extraction rate of the controller.
The scrambling dynamics can be rigorously defined via the adjoint representation of the Lie algebra. The time-evolution of a precursor operator $O_0$ is given by the adjoint action $O(t) = e^{\mathrm{ad}_H t} O_0$. In the chaotic phase, the repeated application of the Lie bracket $[H, \cdot]$ rapidly maps simple Pauli strings into the bulk of the operator space, maximizing the weight of the adjoint vector.
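The growth of operator weight under the adjoint action can be made concrete with a small Heisenberg-picture simulation. The sketch below is illustrative only: the random couplings, the system size $n = 4$, and the choice of precursor $O_0 = Z_0$ are our own, not the paper's sampled instances. It expands $O(t)$ in the Pauli basis and tracks its mean Pauli weight.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULIS = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def kron_all(labels):
    out = np.array([[1.0 + 0j]])
    for s in labels:
        out = np.kron(out, PAULIS[s])
    return out

n = 4
labels = [''.join(p) for p in product('IXYZ', repeat=n)]
basis = {lab: kron_all(lab) for lab in labels}
weight = {lab: sum(c != 'I' for c in lab) for lab in labels}

# An illustrative chaotic instance: random all-to-all ZZ couplings + transverse fields.
rng = np.random.default_rng(0)
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        H += rng.normal() * basis[''.join('Z' if k in (i, j) else 'I' for k in range(n))]
for i in range(n):
    H += rng.normal() * basis[''.join('X' if k == i else 'I' for k in range(n))]

O0 = basis['Z' + 'I' * (n - 1)]            # simple precursor operator Z_0 (weight 1)

def mean_weight(t):
    """Average Pauli weight of O(t) = e^{iHt} O_0 e^{-iHt} (adjoint action of H)."""
    U = expm(-1j * H * t)
    Ot = U.conj().T @ O0 @ U
    # Expansion coefficients c_P = Tr(P O(t)) / 2^n; sum of |c_P|^2 is conserved.
    w2 = {lab: abs(np.trace(basis[lab] @ Ot)) ** 2 / 4**n for lab in labels}
    norm = sum(w2.values())
    return sum(weight[lab] * v for lab, v in w2.items()) / norm

for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:3.1f}   mean Pauli weight of O(t) = {mean_weight(t):.2f}")
```

Starting from weight 1, the operator support spreads toward multi-body strings, which is the microscopic picture behind the delocalization described above.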
# DISCUSSION

The observed thermodynamic crash can be understood as a bandwidth limitation. The single-ancilla probe functions as a communication channel with a maximum capacity of 1 bit per measurement cycle. In the polynomial DLA regime (Ordered), the relevant control information is compressed into a subspace accessible to this finite bandwidth. However, in the exponential DLA regime (Chaotic), the rate of information generation by algebraic scrambling ($O(N)$ bits per step) exceeds the channel capacity of the controller (1 bit). This information bottleneck $(I_{\mathrm{req}}\gg I_{\mathrm{cap}})$ forces the demon to operate blindly, resulting in the decoherence of the control trajectory and the collapse of thermodynamic efficiency.

Our experimental findings suggest that the limitations of variational quantum optimization are not merely algorithmic artifacts, but manifestations of fundamental thermodynamic bounds. By treating the optimizer as a heat engine, we have identified a Carnot-like bound for quantum optimization efficiency. Analogous to how Carnot efficiency is bounded by temperature ratios, the algorithmic efficiency is bounded by the information channel capacity:

$$
\eta \leq \theta_{\text{gain}} \cdot \chi_{\text{Holevo}} \tag{7}
$$

where $\theta_{\mathrm{gain}}$ is the feedback coupling strength and $\chi_{\mathrm{Holevo}} \leq 1$ bit is the Holevo capacity of the single-ancilla channel. Numerically, we observe $\eta = 0.456 \cdot \theta_{\mathrm{gain}}$ in the Ordered phase ($R^2 = 0.9945$), indicating operation at approximately $50\%$ of the theoretical maximum—likely attributable to the linear response approximation inherent in gradient-based feedback and measurement back-action from the projective ancilla readout.

This linear scaling with coupling strength—rather than an arbitrary empirical constant—establishes a universal limit derivable from first principles. The "Ancilla Bandwidth" restricts the rate of optimization based on information flow, just as the Carnot limit restricts the efficiency of heat engines based on temperature. Attempting to drive the system faster than this limit (e.g., via excessive learning rates or unconstrained ansatz expressivity) results in a regime of negative efficiency, where the entropy generation rate exceeds the information extraction rate, leading to algorithmic "overheating" rather than ground-state cooling.

This framework offers a physical perspective on the hardness of variational optimization. We propose that the barrier between tractable and intractable problems is observable as the transition from finite to diverging information cost. In this view, hard problems are those where the thermodynamic cost of the solution grows without bound, requiring an exponentially increasing information flux to maintain a finite cooling rate.

While recent work by Ragone et al. established the Dynamical Lie Algebra as the geometric predictor of trainability, the physical mechanism driving this transition remains an open question. Here, we provide evidence that the geometric 'concentration of measure' correlates with an information-theoretic efficiency transition, characterized by the collapse of information-to-work conversion efficiency when DLA dimension scales exponentially.

Consequently, the design of scalable Quantum Machine Learning models must be reframed as "Thermodynamic Engineering." To avoid Barren Plateaus, ansatz architectures must be constrained not just to limit parameter counts, but to confine the Dynamical Lie Algebra within the polynomial "Goldilocks Zone"—sufficiently complex to express the solution, yet sufficiently structured to maintain information superconductivity.

The 'Control Authority' in the polynomial regime arises from the well-defined Root Space Decomposition of the algebra. The existence of specific root vectors $E_{\alpha}$ acts as a network of 'ladder operators,' allowing the optimizer to navigate the Hilbert space along protected symmetry paths, avoiding the trap of the exponentially large bulk.

Geometrically, the 'Complexity Crash' in the chaotic phase corresponds to a pathological scaling of the Fubini-Study metric volume. In the exponential DLA regime, the volume of reachable states expands faster than the demon's metric sampling rate, leading to an effective horizon beyond which the landscape geometry becomes unresolvable.

Our results provide a quantum-mechanical quantification of the "Computational Irreducibility" hypothesis, demonstrating that the thermodynamic cost of optimization diverges exactly when the system dynamics become algebraically irreducible. In this framework, the Second Law of Thermodynamics arises from the limitations of a computationally bounded observer. The Carnot-like bound $\eta \leq \theta_{\mathrm{gain}}$ (Eq. 7) can thus be interpreted as the channel capacity limit of a single-ancilla quantum observer, with the normalized efficiency $\eta / \theta_{\mathrm{gain}} \approx 0.5$ measuring how close the optimization operates to this fundamental limit.

# Implications for Quantum Learning

Based on the observed scaling of the information cost, we suggest that the tractability of variational learning is constrained by the thermodynamic stability of information flow. We define a Thermo-Efficient regime as the set of Hamiltonians for which the algorithmic efficiency remains positive ($\eta > 0$) and the information susceptibility remains finite in the large-$N$ limit. Our results indicate that systems with Polynomial DLA growth (e.g., the Complete Graph $K_{n}$) belong to this regime.

Conversely, chaotic systems (e.g., Spin Glasses) with Exponential DLA growth exhibit an "Efficiency Collapse" $(\eta \rightarrow 0)$. Here, the information scrambling rate exceeds the controller's channel capacity.

Our results suggest that the efficiency coefficient $\eta$ provides a practical diagnostic for variational algorithm trainability, complementary to geometric metrics like the Quantum Fisher Information. While QFIM-based approaches require computing the full parameter Hessian, our protocol directly measures the information-work conversion capacity through a single-ancilla probe.
This information bottleneck can be formalized in terms of the Holevo-Schumacher-Westmoreland (HSW) theorem: the single-ancilla probe constitutes a quantum channel $\Phi$ with a classical capacity $\chi (\Phi)\leq 1$ bit, while in the chaotic phase the required information scaling $I_{\mathrm{req}}\propto N$ exceeds this capacity.

# Outlook: Beyond the Barrier

Our identification of the thermodynamic limit points toward four distinct pathways to extend the tractable regime of quantum optimization:

1. Multi-Demon Bandwidth: Since the crash is driven by a channel capacity bottleneck ($I_{req} \gg 1$ bit), employing a $k$-ancilla probe could linearly increase the information bandwidth. Future work will investigate whether a "Demon Ensemble" can delay the thermodynamic crash in chaotic systems by matching the scrambling rate with parallel information extraction.

2. Symplectic Shortcuts (Structure Learning): The efficiency of Ordered systems suggests that intractability is a function of the full algebra's volume. A "smart" optimizer might dynamically prune the DLA, identifying a Symplectic Shortcut—a polynomial subalgebra that contains the ground state—thereby artificially inducing an Ordered phase within a Chaotic landscape.

3. The Temperature of Complexity: We have analyzed the zero-temperature limit of the optimizer. Introducing finite thermal noise may reveal a critical "Complexity Temperature" $T_{c}$, above which the Demon's information gathering is erased by thermal fluctuations, establishing a hard physical bound on the operating temperature of quantum computers solving NP-hard problems.

4. The QFT Continuum Limit: Our analysis utilized discretized parameters, analogous to a Lattice Gauge Theory formulation. In the limit of infinite bit precision $(B\to \infty)$, the ansatz trajectory approaches a continuous field. Future work will investigate whether the thermodynamic bounds derived here imply a fundamental computational renormalization group flow, where the tractability of finding the vacuum state of a Quantum Field Theory depends on the information scaling of its effective lattice action.

# METHODS

# Non-Linearity Test

To establish the source of non-linear dynamics, we constructed a minimal 4-qubit circuit comparing interacting vs. non-interacting Hamiltonians. Two parameter qubits $(P_0, P_1)$ control rotations on two system qubits $(S_0, S_1)$:

1. Initialization: Parameter qubits prepared in superposition $|+\rangle$.
2. W-Gate (Encode): Controlled-$R_{Y}(\pi)$ maps $|1\rangle_{P}\rightarrow |1\rangle_{S}$.
3. Evolution: Apply $e^{-iHt}$ with $t = 1.0$.
4. Inverse W-Gate: Disentangle system, transferring phase to parameters.
5. Measure: Compute $I(P_0 : P_1)$ from the parameter register.

For $H = Z_0 + Z_1$, the phase factors as $e^{-i(E_0 + E_1)t}$, yielding $I = 0$. For $H = Z_0Z_1$, the coupled phase creates entanglement, yielding $I = 1.74$ bits. This demonstrates that non-linearity requires interaction terms.

# Experimental Protocol: The Coherent Demon

To rigorously quantify the thermodynamic cost of optimization, we constructed a single-ancilla probe designed to isolate the information-work exchange. The protocol consists of three distinct unitary phases applied to the joint state $\rho_{SA} = \rho_S\otimes |0\rangle \langle 0|_A$.

1. Sensing (Interaction): The ancilla is prepared in superposition $|+\rangle_A$. The system and ancilla interact via a controlled-unitary evolution $U_{sense}(\tau) = |0\rangle \langle 0|_A\otimes I_S + |1\rangle \langle 1|_A\otimes e^{-iH_S\tau}$. This maps the local energy gradient into the relative phase of the ancilla, effectively realizing a short-time Phase Estimation routine or a weak measurement of the operator $H_{S}$.
2. Correlation (Information Storage): A Hadamard gate on the ancilla converts the phase information into population differences (Z-basis). At this stage, we measure the Mutual Information $I(S:A)$ and Logarithmic Negativity $E_N$ to quantify the "Fuel" available for extraction. Crucially, no projective measurement is performed yet; the information is stored in quantum correlations.

3. Feedback (Actuation): We apply a Coherent Feedback operation $U_{\text{kick}} = CR_X(\theta_{\text{gain}})$, where the system undergoes a rotation conditioned on the ancilla state. Rather than "detecting" the gradient direction, this controlled operation creates asymmetric dynamics: amplitude at low-energy configurations receives different actuation than amplitude at high-energy configurations. The net effect is analogous to a quantum walk coin flip—one branch diffuses while the other traps.

The effective non-linear dynamics emerge when the ancilla is reset (traced out) after the feedback step, exporting entropy $S_{anc}$ to the environment to pay for the work extracted $\Delta E$.

# Ansatz Architecture

The variational ansatz $U(\pmb{\theta})$ was constructed using the EfficientSU2 architecture with linear entanglement, consisting of $L$ layers of parameterized $R_{Y}$ and $R_{Z}$ rotations interleaved with CNOT entanglers:

$$
U(\boldsymbol{\theta}) = \prod_{l=1}^{L}\left[\bigotimes_{i=1}^{N} R_{Y}\left(\theta_{l,i}^{(y)}\right) R_{Z}\left(\theta_{l,i}^{(z)}\right)\right]\cdot U_{\text{ent}} \tag{8}
$$

where $U_{\mathrm{ent}} = \prod_{i=1}^{N-1} \mathrm{CNOT}_{i,i+1}$ implements the entangling layer.

# The W-Gate Protocol

To realize the trap-diffusion mechanism, we implemented a custom "W-Gate" protocol that treats the variational parameters as quantum degrees of freedom. The key insight is that the ancilla controls which operations are applied, creating asymmetric dynamics for different energy configurations:

1. Parameter Superposition: Parameters $\theta$ are encoded as quantum states in a secondary register, initialized in superposition via Hadamard gates.
2. Controlled Encoding: Each parameter qubit controls a rotation on the system: $|b\rangle_P|\psi \rangle_S\rightarrow |b\rangle_P R_Y(b\cdot \delta)|\psi \rangle_S$, where $\delta = \pi$ for maximal distinguishability.
3. Controlled Drift (Oracle): The Hamiltonian evolution is applied controlled by the ancilla: $U_{\mathrm{drift}} = e^{-iH\tau}$ acts only on the $|1\rangle_A$ branch. This accumulates parameter-dependent phases that encode energy information.
4. Controlled Mixer (Diffusion): The mixer operation (parameterized rotations) is similarly controlled, enabling amplitude to diffuse in parameter space only for configurations where the ancilla is in the appropriate state.
5. Inverse Decoding $(W^{\dagger})$: The inverse controlled rotations disentangle the system, transferring the phase information back to the parameter register.

This protocol realizes the "Sandwich" operator $W^{\dagger}U(H)W$ that enables coherent interference between parameter configurations. The controlled structure ensures that low-energy configurations (which induce phases closer to unity) receive preferential mixing, while high-energy configurations are effectively trapped.
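The 4-qubit non-linearity test is small enough to simulate directly. The sketch below is our own statevector implementation of the $W^{\dagger}U(H)W$ sandwich (the qubit ordering and helper functions are ours; the controlled-$R_Y(\pi)$ encoding and $t = 1.0$ follow the Methods) and should reproduce $I \approx 0$ for $H = Z_0 + Z_1$ and $I \approx 1.74$ bits for $H = Z_0Z_1$, as quoted in Eqs. (1)-(2).

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])      # |0><0|, |1><1|

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def op(ops):
    """Kronecker product over the 4 qubits [P0, P1, S0, S1]."""
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def cry(ctrl, targ, theta):
    """Controlled-RY(theta) on the 4-qubit register."""
    a, b = [I2] * 4, [I2] * 4
    a[ctrl] = P0
    b[ctrl] = P1
    b[targ] = ry(theta)
    return op(a) + op(b)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def mutual_info_params(H_sys, t=1.0):
    """I(P0:P1) after the W-gate sandwich W^dag U(H) W of the Methods."""
    plus, ket0 = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0])
    psi = np.kron(np.kron(plus, plus), np.kron(ket0, ket0))   # |+>|+>|0>|0>
    W     = cry(0, 2, np.pi) @ cry(1, 3, np.pi)               # controlled encode
    U_ev  = np.kron(np.eye(4), expm(-1j * H_sys * t))         # evolve system only
    W_inv = cry(0, 2, -np.pi) @ cry(1, 3, -np.pi)             # decode
    psi = W_inv @ (U_ev @ (W @ psi))
    M = psi.reshape(4, 4)                                     # rows: P register, cols: S register
    rho_P = M @ M.conj().T
    R = rho_P.reshape(2, 2, 2, 2)                             # (p0, p1, p0', p1')
    rho_P0 = np.einsum('ijkj->ik', R)
    rho_P1 = np.einsum('ijik->jk', R)
    return entropy(rho_P0) + entropy(rho_P1) - entropy(rho_P)

ZS0, ZS1 = np.kron(Z, I2), np.kron(I2, Z)                     # Z on S0 / S1
print("H = Z0 + Z1 :", f"{mutual_info_params(ZS0 + ZS1):.3f} bits")
print("H = Z0 Z1   :", f"{mutual_info_params(ZS0 @ ZS1):.3f} bits")
```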
The effective non-unitary dynamics of the system $\rho_{S}$ are mathematically guaranteed by the Stinespring Dilation Theorem. The interaction with the ancilla followed by the partial trace realizes a quantum channel $\mathcal{E}(\rho_S) = \mathrm{Tr}_A\!\left(U(\rho_S\otimes |0\rangle\langle 0|_A)U^\dagger\right)$, which allows for entropy changes $(\Delta S \neq 0)$ that are impossible under strictly unitary evolution.

The ancilla-system interaction can be geometrically interpreted as a coherent measurement of the Momentum Map $J: \mathcal{H} \to \mathfrak{g}^*$ associated with the Hamiltonian action. The gradient signal corresponds to the projection of $J$ onto the tangent space of the variational manifold.

For the Ordered phase, we utilized a Complete Graph Hamiltonian $(K_{n})$ with uniform couplings, which is known to generate a polynomially bounded DLA with dimension $O(N^{3})$.

# Simulation Hyperparameters

All numerical experiments were performed with the parameters listed in Table I. The absolute efficiency value $\eta_{max}$ scales linearly with the coupling strength $J$; however, the critical exponent $\gamma$ and the crash threshold $N_{c}$ are invariant under rescaling.

TABLE I. Simulation Hyperparameters.

<table><tr><td>Parameter</td><td>Value</td><td>Description</td></tr><tr><td>N (System Size)</td><td>3 – 8</td><td>Qubits in system</td></tr><tr><td>J (Coupling)</td><td>1.0 / ±1.0</td><td>Ordered / Chaotic</td></tr><tr><td>θ_gain (Kick)</td><td>0.2 rad</td><td>Feedback angle</td></tr><tr><td>τ (Sensing)</td><td>0.0 – 1.5</td><td>Sensing duration</td></tr><tr><td>L (Depth)</td><td>1</td><td>Ansatz layers</td></tr><tr><td>Trials</td><td>5</td><td>Per data point</td></tr><tr><td>Method</td><td>SV / MPS</td><td>N ≤ 6 / N > 6</td></tr></table>

# DLA Analysis

Dynamical Lie Algebra (DLA) dimensions were computed using the Lie Closure algorithm. We iteratively computed the nested commutators of the generating set $\mathcal{G} = \{iH_S\} \cup \{iP_k\}_{k=1}^M$, where $P_k$ are the Pauli generators of the ansatz, until the set closed under commutation.
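When every generator is a single Pauli string, this closure can be carried out combinatorially: two strings either commute or anticommute, and in the latter case their commutator is, up to a phase, their product. The sketch below exploits this to count closure dimensions and the mean Pauli weight discussed under Microscopic Origin. The generating sets here are illustrative textbook choices (a free-fermion-type chain set versus the same set with extra $ZZ$ strings), not the paper's $K_n$ and Sherrington-Kirkpatrick Hamiltonians, whose weighted-sum generators would instead require a vector-space closure with orthogonalization.

```python
import numpy as np

def pauli(n, x=(), z=()):
    """Pauli string on n qubits in symplectic form (x-bits, z-bits); phases ignored."""
    xb, zb = [0] * n, [0] * n
    for i in x: xb[i] = 1
    for i in z: zb[i] = 1
    return (tuple(xb), tuple(zb))

def anticommute(p, q):
    """Pauli strings either commute or anticommute; test the symplectic form."""
    (x1, z1), (x2, z2) = p, q
    return sum(a * d + b * c for a, b, c, d in zip(x1, z1, x2, z2)) % 2 == 1

def product(p, q):
    """Product of two Pauli strings up to phase: XOR of the symplectic vectors."""
    (x1, z1), (x2, z2) = p, q
    return (tuple(a ^ c for a, c in zip(x1, x2)),
            tuple(b ^ d for b, d in zip(z1, z2)))

def lie_closure(gens):
    """Close a set of Pauli strings under commutation ([P, Q] = 2PQ when they anticommute)."""
    basis, frontier = set(gens), set(gens)
    while frontier:
        new = set()
        for p in frontier:
            for q in basis:
                if anticommute(p, q):
                    r = product(p, q)
                    if r not in basis:
                        new.add(r)
        basis |= new
        frontier = new
    return basis

def report(label, gens, n):
    alg = lie_closure(gens)
    weights = [sum(a | b for a, b in zip(x, z)) for (x, z) in alg]
    print(f"{label:>9s} n={n}: dim = {len(alg):4d}, mean Pauli weight = {np.mean(weights):.2f}")

for n in range(2, 6):
    chain = [pauli(n, x=(i, i + 1)) for i in range(n - 1)] + \
            [pauli(n, z=(i,)) for i in range(n)]                       # free-fermion-type set
    scrambled = chain + [pauli(n, z=(i, i + 1)) for i in range(n - 1)]  # extra ZZ strings
    report("ordered", chain, n)
    report("scrambled", scrambled, n)
```

The first set closes at $2n^2 - n$ strings (the free-fermion algebra $\mathfrak{so}(2n)$) with low average weight, while the augmented set typically keeps growing toward the exponential regime, mirroring the Ordered/Chaotic contrast described above.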
# Work and Information Definitions

We define the thermodynamic quantities as follows:

- Extracted Work: $W = E_{\text{before}} - E_{\text{after}} = \langle H \rangle_{\rho_S^{(0)}} - \langle H \rangle_{\rho_S^{(f)}}$, where $\rho_S^{(0)}$ is the reduced system state after sensing (before feedback) and $\rho_S^{(f)}$ is the state after feedback.
- Mutual Information: $I(S:A) = S(\rho_S) + S(\rho_A) - S(\rho_{SA})$, computed in base 2 (bits) using von Neumann entropy.
- Algorithmic Efficiency: $\eta = dW/dI$, the slope of the linear regression between extracted work and mutual information across the sensing time sweep.

Thermodynamic Definitions: We explicitly define the thermodynamic system boundaries to enclose only the quantum information processing degrees of freedom (qubits). Our efficiency metric $\eta$ quantifies the differential algorithmic work extracted per unit of information entropy generated within the Hilbert space, distinct from the constant macroscopic control overhead.

Simulation Rigor: All thermodynamic data was generated using exact statevector simulation to isolate the fundamental information-theoretic bounds free from hardware-specific noise (e.g., gate error, readout error). This approach allows for the precise calculation of entropic quantities $(S, I, E_N)$ that would require exponentially many measurements in experimental setups, thereby establishing the theoretical upper bounds of the architecture.

Code Availability: The simulation code for reproducing all numerical experiments is available at https://github.com/poig/self-research/tree/main/Quantum_AI/QLTO/theory_test.
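For readers who want a self-contained reference for the definitions above without consulting the repository, the sketch below runs a single sensing-correlation-feedback cycle with dense matrices and evaluates $W$ and $I(S:A)$. The two-qubit transverse-field Ising $H_S$, the initial $|+\rangle^{\otimes 2}$ system state, and the choice to apply the conditional kick to the first system qubit are assumptions made for illustration only.

```python
# One coherent-feedback cycle on 1 ancilla + 2 system qubits (illustrative parameters).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
HAD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Assumed system Hamiltonian: 2-qubit transverse-field Ising
J, h, tau, theta_gain = 1.0, 0.5, 1.0, 0.2
H_S = -J * np.kron(Z, Z) + h * (np.kron(X, I2) + np.kron(I2, X))
d_S = 4

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi0 = np.kron(plus, np.kron(plus, plus))        # |+>_A (x) |++>_S, ancilla first

# 1. Sensing: U_sense = |0><0|_A (x) I_S + |1><1|_A (x) exp(-i H_S tau)
U_sense = np.kron(P0, np.eye(d_S)) + np.kron(P1, expm(-1j * H_S * tau))
# 2. Correlation: Hadamard on the ancilla moves phase information into populations
U_corr = np.kron(HAD, np.eye(d_S))
# 3. Feedback: CR_X(theta_gain), an RX kick on the first system qubit if the ancilla is |1>
RX = expm(-1j * (theta_gain / 2) * X)
U_kick = np.kron(P0, np.eye(d_S)) + np.kron(P1, np.kron(RX, I2))

psi_corr = U_corr @ (U_sense @ psi0)     # state at which I(S:A) and E_before are evaluated
psi_final = U_kick @ psi_corr            # state after the coherent kick

def reduced(psi):
    """Reduced density matrices (rho_A, rho_S) of a pure ancilla (x) system state."""
    C = psi.reshape(2, d_S)
    return C @ C.conj().T, C.T @ C.conj()

rho_A, rho_S0 = reduced(psi_corr)
_, rho_Sf = reduced(psi_final)

E_before = float(np.real(np.trace(rho_S0 @ H_S)))
E_after = float(np.real(np.trace(rho_Sf @ H_S)))
W = E_before - E_after                               # extracted work
I_SA = vn_entropy(rho_S0) + vn_entropy(rho_A)        # joint state is pure, so S(rho_SA) = 0
print(f"W = {W:+.4f}   I(S:A) = {I_SA:.4f} bits")
```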
How you feel about a mechanism doesn't change whether it governs you.

# Abstract

Why do organizations comprised of intelligent individuals converge on collective delusion? This paper introduces dysmemic pressure as a formal mechanism explaining organizational epistemic failure. Synthesizing strategic communication theory (Crawford & Sobel, 1982), agency theory (Prendergast, 1993), and cultural evolution (Boyd & Richerson, 1985), I demonstrate how preference divergence between organizational agents generates stable equilibria where communication becomes statistically independent of reality, while transmission biases lock dysfunction into self-reinforcing states. The mechanism operates through identifiable dynamics: as the bias between sender and receiver preferences increases, communication precision degrades through progressively coarser partitions until reaching 'babbling equilibrium' where messages carry no information; simultaneously, transmission biases (content, prestige, conformity) ensure that dysfunctional signals outcompete accurate ones in the organizational meme pool. Three detailed case studies—Nokia's smartphone collapse, NASA's Challenger disaster, and Wells Fargo's account fraud scandal—illustrate the mechanism's operation across industries and failure modes. I derive five testable propositions and evaluate potential countermeasures through a mechanism design lens. The analysis reframes organizational dysfunction from moral failure to physics problem, explaining why standard interventions (culture change, leadership development, values alignment) so often fail: they treat equilibrium outcomes as behavioral problems rather than altering the selection environment that produces them.

Keywords: organizational behavior, information economics, cultural evolution, strategic communication, cheap talk, agency theory, organizational failure, epistemic dysfunction

# 1. Introduction

The modern organization is supposed to be an information-processing machine. Distributed knowledge flows upward through reporting structures, gets aggregated by managers, and emerges as coordinated action. The hierarchy exists, in theory, to make the whole smarter than any individual part. This premise underlies nearly a century of organizational theory, from Weber's bureaucratic rationality through Simon's bounded rationality to contemporary work on organizational learning and knowledge management.

The premise does not survive contact with observation. Nokia's middle managers knew Symbian was failing years before the company's collapse. They sent optimistic reports anyway (Vuori & Huy, 2016). NASA's engineers knew O-rings eroded in cold weather; they signed off on the Challenger launch anyway (Vaughan, 1996). Wells Fargo's branch employees knew they were opening fraudulent accounts; they hit their quotas anyway (Independent Directors of the Board of Wells Fargo, 2017). In each case, the organization was not starved for information. It was drowning in false signals it had selected for.

The conventional explanations invoke psychology: hubris, greed, groupthink, cognitive bias. These explanations are not wrong. They are incomplete. They fail to account for the systematic nature of the failure—that is, why the same pattern recurs across industries, cultures, decades, and organizational forms. They fail to explain why organizations comprised of individually rational actors produce collectively irrational outcomes with such regularity that we can predict it.
This paper proposes a structural explanation grounded in the intersection of three literatures that have not been adequately connected: strategic communication in economics, agency theory, and cultural evolution in biology. The core argument is that organizational dysfunction of this type is the equilibrium itself, not a deviation from it. Organizations are selection environments. Ideas, reports, signals, and cultural norms compete for transmission within those environments. What survives is what is fit—regardless of whether it maps reality. And fitness, in organizational contexts, is often negatively correlated with truth.

I call this selection force dysmemic pressure: the systematic favoring of cultural variants that increase individual payoff while decreasing collective adaptability. The term 'dysmemic' has appeared in informal discourse on memetics (e.g., Glendinning, 2001–present) to describe the phenomenon of harmful ideas spreading rapidly. This paper provides the first formal definition of dysmemic pressure as an organizational selection mechanism, specifying its components and deriving testable propositions from the synthesis of strategic communication and cultural evolution theory. The terminology follows the established eu-/dys- pattern in memetics; 'eumemics' (improving meme pool quality) appears in the standard literature, while 'dysmemic' represents its logical complement, paralleling 'dysgenic' in genetics.

The contribution is not discovery, per se, but synthesis and precision—connecting literatures that have remained separate and defining the mechanism in terms that permit testable prediction. By integrating game-theoretic models of strategic communication with agency theory and cultural evolutionary models of transmission dynamics, we can describe the mechanism with sufficient clarity to identify what conditions produce it, why it is stable, why standard interventions fail, and what alternative architectures might resist it. That precision matters because it shifts the frame. Organizational dysfunction stops being a moral failing and becomes a physics problem. Physics problems do not respond to exhortation. They respond to engineering.

The paper proceeds as follows. Section 2 reviews the relevant literatures in strategic communication, agency theory, cultural evolution, and organizational failure. Section 3 develops the theoretical framework, formally defining dysmemic pressure and deriving its properties. Section 4 presents three detailed case studies illustrating the mechanism's operation. Section 5 derives testable propositions. Section 6 evaluates potential countermeasures through a mechanism design lens. Section 7 discusses implications and limitations. Section 8 concludes.

# 2. Theoretical Foundations

# 2.1 Strategic Communication and the Partition Theorem

The game-theoretic foundation comes from Crawford and Sobel's (1982) seminal paper on strategic information transmission, which formalized the conditions under which communication conveys information in the presence of conflicting interests. In their model, a Sender observes the true state of the world $t$ and transmits a costless message $m$ to a Receiver, who then takes an action $y$ affecting both parties. The crucial variable is bias—the divergence between what the Sender wants and what the Receiver wants.

Their central result transformed how economists think about communication: as bias increases, the precision of information transmission decreases. When interests are perfectly aligned, full revelation is possible. As they diverge, communication becomes increasingly coarse, partitioning the state space into ever-larger bins. When bias becomes sufficiently large, the system collapses into what they call a 'babbling equilibrium'—a state where the Sender's messages are statistically independent of the true state, and the Receiver rationally ignores them entirely.

The babbling equilibrium is not a failure of the model. It is a prediction. Given sufficient preference divergence, silence or deception becomes the rational choice. The equilibrium is stable because no player can profitably deviate: the Sender gains nothing from truthful revelation (since the Receiver ignores messages anyway), and the Receiver gains nothing from attending to messages (since they contain no information).

Subsequent work has extended this framework considerably. Kamenica and Gentzkow (2011) developed Bayesian persuasion, showing how Senders can strategically design information structures to influence Receivers even when both parties are fully rational. Their key insight is that the Sender benefits from commitment—the ability to pre-specify a signal structure before observing the state. This has direct organizational applications: formal reporting systems, metrics, and dashboards can be understood as commitment devices that constrain the information structures available to agents.

The organizational application is immediate. Consider a manager who needs the project status report to be positive (to avoid hard conversations, protect headcount, preserve their own position) and an engineer who needs to deliver an honest assessment. Their preferences diverge. The engineer learns that accurate reports invite unwanted attention, while vague optimism satisfies the manager. Communication becomes ritual. 'We're on track' means nothing. Status meetings become noise. The babbling equilibrium has been achieved.
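The partition result lends itself to a short numerical illustration. The sketch below (Python, written for this paper's exposition rather than taken from Crawford and Sobel) uses the standard closed-form conditions of their uniform-quadratic specification: with the state uniform on $[0, 1]$, quadratic losses, and Sender bias $b > 0$, an equilibrium with $N$ cells exists only if $2N(N-1)b < 1$, and the boundaries of the most informative partition satisfy the arbitrage condition $t_{i+1} - t_i = t_i - t_{i-1} + 4b$. Function and variable names are my own; treat the code as an illustrative aid, not part of the formal argument.

```python
import math

def max_partition_cells(b: float) -> int:
    """Largest N with 2*N*(N-1)*b < 1: the cell count of the most informative
    equilibrium in the uniform-quadratic Crawford-Sobel model."""
    if b <= 0:
        raise ValueError("bias must be positive; b = 0 permits full revelation")
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2.0 / b))

def partition_boundaries(b: float) -> list[float]:
    """Boundaries t_0 < t_1 < ... < t_N implied by the arbitrage condition
    t_{i+1} - t_i = t_i - t_{i-1} + 4b with t_0 = 0 and t_N = 1."""
    n = max_partition_cells(b)
    first_cell = (1 - 2 * n * (n - 1) * b) / n
    return [i * first_cell + 2 * i * (i - 1) * b for i in range(n + 1)]

if __name__ == "__main__":
    for b in (0.001, 0.01, 0.05, 0.1, 0.3):
        cells = max_partition_cells(b)
        note = "  <- only the babbling equilibrium survives" if cells == 1 else ""
        print(f"bias b = {b:<5}: at most {cells} partition cell(s){note}")
```

Running the script shows the familiar collapse: a nearly aligned Sender ($b = 0.001$) can sustain a fine partition, while any $b \geq 1/4$ leaves only the one-cell, zero-information outcome.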
# 2.2 Agency Theory and the Yes Man Problem

Prendergast (1993) extended this logic specifically to internal labor markets, providing the microeconomic foundation for why executives are systematically deceived by their subordinates. His 'Theory of Yes Men' demonstrates that when organizations rely on subjective performance evaluations—where a principal assesses an agent based on judgment rather than objective metrics—the agent faces powerful incentives to conform to the principal's prior beliefs.

The mechanism is straightforward. A subordinate discovers information relevant to a strategic question. If that information contradicts what the principal believes, reporting it creates two risks: the principal may doubt the subordinate's competence (since their conclusions differ from the 'correct' view), and the principal may audit the subordinate's reasoning, consuming political capital and creating friction. The rational subordinate, anticipating these dynamics, skews reporting toward the principal's priors.

Prendergast proves that this dynamic emerges not from sycophancy but from the structure of subjective evaluation itself. The principal relies on the subordinate for information but can also conduct an independent assessment. If audit costs are high—and in complex organizations, they almost always are—the principal will rely on priors. A rational subordinate, anticipating this, will match their reports to those priors. The 'Yes Man' is not a character flaw. He is the equilibrium output of a poorly designed incentive structure.
This creates a feedback loop of confirmation bias. The principal is surrounded by apparent agreement not because they hire sycophants, but because the incentive structure converts honest agents into sycophants. The result is an organization that becomes progressively detached from reality, as the feedback mechanisms meant to correct leadership's errors are repurposed to reinforce them.

# 2.3 The Lemon Problem of Ideas

Akerlof's (1970) 'Market for Lemons' describes how information asymmetry can cause market collapse through adverse selection. In his canonical example, used car buyers cannot distinguish good cars from bad ones ('lemons'). Sellers of good cars, unable to credibly signal quality, withdraw from the market. Only lemons remain.

The same dynamic operates in the organizational idea marketplace. Proposals, assessments, and strategic recommendations are 'goods' whose quality is costly to verify. An optimistic projection and an accurate one look identical at the moment of presentation. The principal (leadership) faces the same information asymmetry as the used car buyer: they cannot easily distinguish dysmemes from eumemes.

The Akerlof logic then follows. Accurate assessments require work—data gathering, analysis, acknowledgment of uncertainty. Optimistic dysmemes require only confidence. As the market floods with cheap-to-produce dysmemes, accurate assessments become relatively more costly. Truth-tellers, unable to credibly distinguish themselves and facing higher production costs, reduce their output. The market equilibrates at a pooling equilibrium where the typical signal is uninformative—a mixture of charlatans and the genuinely deluded, with the careful analyst having exited.

# 2.4 Cultural Evolution and Transmission Bias

Game theory explains why individuals choose to transmit false signals. It does not explain why certain falsehoods dominate, why they spread, or why they prove so resistant to correction. For that, we need cultural evolution.

Boyd and Richerson's (1985, 2005) dual-inheritance theory treats cultural information—ideas, practices, norms—as replicating units subject to selection pressures analogous to (though not identical with) those operating on genes. The critical insight is that the fitness of a cultural variant is not determined by its truth value. It is determined by its transmission properties. An idea spreads because it is easy to remember, emotionally resonant, or associated with high-status individuals—not because it accurately maps reality.

Three transmission biases are particularly relevant to organizational contexts:

Content bias refers to the differential transmission of ideas based on their intrinsic properties. Simple, emotionally satisfying ideas outcompete complex, ambiguous ones. 'We just need to execute better' is lighter cognitive load than 'Our architecture has accumulated technical debt requiring a multi-quarter remediation effort with uncertain ROI.' The former spreads; the latter dies in the meeting where it was born. Henrich and Gil-White (2001) demonstrate that content biases operate largely below conscious awareness—people do not choose to prefer simpler explanations; they simply find them more memorable and transmissible.

Prestige bias refers to the preferential copying of ideas associated with high-status individuals. If the CEO believes the competitor is irrelevant, that belief cascades downward regardless of evidence.
Subordinates adopt it not because they believe it, but because imitating the leader is a dominant strategy for advancement (Henrich & McElreath, 2003). The mechanism is self-reinforcing: those who adopt the prestige figure's beliefs advance, becoming prestige figures themselves and further propagating the belief.

Conformity bias refers to the disproportionate adoption of common beliefs. Once a belief reaches critical mass, deviation becomes costly. If everyone reports green, reporting red marks you as the problem. The pressure to conform locks in whatever belief happened to reach threshold first (Boyd & Richerson, 1985). Importantly, conformity bias operates on perceived rather than actual consensus—if people believe everyone else believes X, they will adopt X even if private dissent is widespread. This creates informational cascades where public consensus increasingly diverges from private belief (Bikhchandani, Hirshleifer, & Welch, 1992).

Blackmore (1999) and Dennett (1995) emphasize that meme fitness is orthogonal to truth—a point frequently misunderstood. The claim is not that false ideas spread better than true ones (though sometimes they do). The claim is that transmission properties and truth value are independent variables. An idea can be true and highly transmissible, true and poorly transmissible, false and highly transmissible, or false and poorly transmissible. Selection operates on transmissibility. Truth is along for the ride, or not.

# 2.5 Organizational Failure and Epistemic Dysfunction

A substantial literature documents organizational failures attributable to information pathologies, though rarely with the formal mechanism proposed here.

Vaughan's (1996) study of the Challenger disaster introduced 'normalization of deviance'--the gradual desensitization to risk as small deviations become routine. Each successful launch with O-ring erosion made the next launch with erosion more acceptable, until catastrophic failure became, in retrospect, inevitable. Vaughan explicitly frames this as a structural rather than individual phenomenon: 'No fundamental decision was made at NASA to do evil... Rather, a loss of insight was actually facilitated by... the institutional structures that monitored risk.'

Janis's (1972) groupthink model identifies conditions under which cohesive groups make defective decisions: illusions of invulnerability, collective rationalization, stereotypes of out-groups, self-censorship, illusions of unanimity, and direct pressure on dissenters. While influential, the model operates at the group level and does not explain why these dynamics persist or why they prove resistant to intervention.

Nguyen's (2020) work on epistemic bubbles and echo chambers provides a useful distinction. Epistemic bubbles arise from omission—relevant voices are simply absent. Echo chambers arise from active discrediting—outside voices are present but dismissed as untrustworthy. Organizations can exhibit both: some information never reaches decision-makers (bubble), while information that does reach them is filtered through loyalty tests that dismiss uncomfortable sources (chamber).

Levy (2022) argues that 'bad beliefs' are often rational responses to corrupted epistemic environments rather than individual cognitive failures. This perspective aligns with the mechanism proposed here: if the selection environment rewards certain beliefs regardless of truth, holding those beliefs is individually rational even when collectively catastrophic.
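Before turning to the synthesis, the conformity dynamics from Section 2.4 are worth one concrete illustration. The simulation below is a stylized sketch in the spirit of the informational-cascade model of Bikhchandani, Hirshleifer, and Welch (1992), not a reproduction of their equilibrium analysis: agents report sequentially on a project that is in fact failing, each holds a private signal that is individually informative, and each copies the visible majority once it leads by two reports. The threshold, the tie-breaking rule, and every parameter value are simplifications of my own.

```python
import random

def run_meeting(n_agents=30, signal_accuracy=0.7, rng=None):
    """One sequential reporting round. The project's true status is 'red'
    (failing); each agent sees a noisy private signal plus all earlier
    public reports, then reports in turn."""
    rng = rng or random.Random()
    public_reports = []
    for _ in range(n_agents):
        private = "red" if rng.random() < signal_accuracy else "green"
        lead = public_reports.count("green") - public_reports.count("red")
        if lead >= 2:            # visible consensus: copy the majority
            report = "green"
        elif lead <= -2:
            report = "red"
        else:                    # no clear consensus yet: follow own signal
            report = private
        public_reports.append(report)
    return public_reports

def wrong_consensus_rate(trials=10_000, seed=1):
    """Fraction of rounds in which the public majority says 'green'
    even though the project is failing."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        reports = run_meeting(rng=rng)
        if reports.count("green") > reports.count("red"):
            wrong += 1
    return wrong / trials

if __name__ == "__main__":
    print(f"wrong public consensus in {wrong_consensus_rate():.1%} of rounds")
```

With these defaults the printed figure comes out around 15%: even though every private signal is 70% accurate and most agents privately see 'red', a 'green' consensus locks in whenever the first few public reports happen to break that way.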
# 3. Theoretical Framework

# 3.1 The Synthesis

The synthesis connects these literatures through a simple observation: organizations are both strategic environments (where agents with divergent preferences communicate) and cultural environments (where ideas compete for transmission). The game theory explains stability—why no individual can profitably deviate from the dysfunctional equilibrium. The cultural evolution explains spread—why the dysfunction saturates the organization and proves resistant to correction.

Consider an organization where accurate risk reporting would benefit the collective (avoiding disasters, enabling adaptation) but harm individual reporters (inviting scrutiny, creating conflict, marking oneself as a problem). The strategic communication literature tells us this preference divergence will degrade information quality. The cultural evolution literature tells us that whatever false signals emerge will spread through content bias (simple narratives over complex ones), prestige bias (adopting the beliefs of successful people, who succeeded partly by not reporting risks), and conformity bias (reporting what everyone else reports).

The combination produces a ratchet effect. False signals crowd out true ones because they are individually advantageous. Once established, they become culturally dominant through transmission biases. Cultural dominance makes deviation even more costly (now you're fighting consensus, not just reporting bad news). The equilibrium is self-reinforcing and self-protecting.

# 3.2 Formal Definition

I define dysmemic pressure as follows:

Definition: Dysmemic pressure is the selection force in an organization that favors cultural variants (ideas, signals, practices) which increase individual payoff while decreasing collective adaptability—that is, where internal fitness is negatively correlated with external fitness.

The relationship among these forces can be expressed as a conceptual schema:

$$\text{Dysmemic Pressure} \propto f(\text{Incentive Divergence},\ \text{Transmission Ease},\ \text{Verification Cost})$$

This is not a derived formula but a heuristic summarizing the mechanism's structure: pressure intensifies as any component increases. The three terms capture the distinct contributions of each theoretical foundation.

Incentive Divergence is the Crawford-Sobel bias parameter $b$: the degree to which internal fitness (what advances individual careers) diverges from external fitness (what benefits the organization). When these are negatively correlated—when the behaviors that help the individual harm the collective—incentive divergence is high and communication degrades toward babbling equilibrium. As $b$ increases, communication precision decreases.

Transmission Ease captures content bias from cultural evolution: simpler, more emotionally satisfying signals spread faster regardless of accuracy.

Verification Cost captures the Akerlof asymmetry: the higher the cost to verify signal quality, the greater the adverse selection pressure on the organizational idea market.

Several components require elaboration. Internal fitness refers to the expected benefit to the carrier within the organization: promotion probability, conflict avoidance, status maintenance, resource allocation, job security. These are the payoffs that shape individual behavior in the Crawford-Sobel framework. External fitness refers to the expected benefit to the organization as a whole in its external environment: market adaptation, risk management, competitive positioning, survival.
These are the outcomes that organizational design is nominally meant to optimize.

Negative correlation is the key condition. Internal and external fitness can be positively correlated (when rewarding truth-telling), uncorrelated (when rewards are orthogonal to information quality), or negatively correlated (when rewards punish truth-telling). Dysmemic pressure exists when the correlation is negative—when the behaviors that benefit individuals harm the organization.

'Dysmemes' are cultural variants that satisfy this condition: they help the carrier (get promoted, avoid conflict, maintain status) while harming the host organization (misleading strategy, masking risk, preventing adaptation). They outcompete 'eumemes'—truth-tracking variants—because the selection environment rewards the former and punishes the latter.

The result is not merely that individuals lie. The organizational ecology shifts. Truth-tellers exit or go silent. The meme pool becomes saturated with dysmemes. The organization loses the capacity to perceive reality accurately—not because individuals are stupid, but because the smart move is to participate in the collective hallucination.

# 3.3 Conditions and Dynamics

Dysmemic pressure intensifies under identifiable conditions:

Preference divergence: The greater the gap between what agents want and what principals want, the greater the pressure. In Crawford-Sobel terms, larger bias produces coarser communication partitions and increases the probability of babbling equilibria. Organizations where managers are evaluated on metrics that diverge from organizational health (quarterly numbers vs. long-term value, headcount vs. capability, activity vs. outcomes) exhibit greater preference divergence.

Evaluation coupling: When the consumer of information is also the evaluator of the producer, pressure intensifies. The engineer reporting to the manager who controls their performance review faces different incentives than the engineer reporting to an independent quality function. Decoupling evaluation from information consumption reduces bias in the Crawford-Sobel sense.

Transmission structure: Steep hierarchies with few horizontal connections amplify prestige bias (information flows through few high-status nodes) and reduce the error-correction capacity of distributed networks. Flat structures with many horizontal connections reduce prestige concentration but may amplify conformity bias if consensus norms are strong.

External feedback delay: When consequences of dysfunction are distant in time or attribution, dysmemic equilibria persist longer. Organizations in fast-feedback environments (trading desks, emergency services) exhibit less dysmemic pressure than those in slow-feedback environments (strategic planning, R&D) because reality corrections arrive before dysfunction saturates the culture.

Exit costs: When truth-tellers cannot easily leave, they face a choice between silence and punishment. High exit costs (specialized skills, geographic constraints, unvested compensation) increase the proportion of the population that chooses silence, accelerating dysmemic saturation. Industries with high mobility exhibit less dysmemic pressure than those with golden handcuffs.
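The ratchet described in Section 3.1, operating under the conditions listed above, can be compressed into a deliberately minimal toy model. The sketch below runs discrete replicator dynamics on the share of honest reporting in an organization's meme pool: optimistic reporting earns a fixed individual premium, while honest reporting pays a conformity cost that grows as honest reporters become rarer. Every parameter value is invented for illustration; the model is a caricature meant to make the direction of the selection pressure visible, not a calibrated claim about any real organization.

```python
def honest_share_trajectory(x0=0.9, rounds=40, baseline=1.0,
                            optimism_premium=0.25, dissent_penalty=0.6):
    """Discrete replicator dynamics on the share x of honest reporters.
    Optimistic reporting always earns `optimism_premium` on top of the
    baseline payoff; honest reporting loses `dissent_penalty` scaled by
    the share of peers who already report optimistically."""
    trajectory = [x0]
    x = x0
    for _ in range(rounds):
        fitness_honest = baseline - dissent_penalty * (1 - x)
        fitness_optimistic = baseline + optimism_premium
        mean_fitness = x * fitness_honest + (1 - x) * fitness_optimistic
        x = x * fitness_honest / mean_fitness   # payoff-biased transmission
        trajectory.append(x)
    return trajectory

if __name__ == "__main__":
    path = honest_share_trajectory()
    for r in (0, 10, 20, 30, 40):
        print(f"round {r:>2}: honest share = {path[r]:.2f}")
```

Starting from a 90% honest population, the honest share decays toward zero within a few dozen rounds. Shrinking the premium or the penalty slows the decay, but as long as the optimistic variant out-pays the honest one at every mixture the endpoint is the same; that is the sense in which the equilibrium is a property of the selection environment rather than of bad individuals.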
# 4. Case Studies

Three cases illustrate dysmemic pressure operating across different industries, failure modes, and organizational contexts. Each demonstrates the mechanism's key features: preference divergence generating babbling equilibria, transmission biases locking in dysfunction, and the self-reinforcing nature of the resulting state.

# 4.1 Nokia: Fear-Induced Babbling

Nokia's collapse from mobile phone dominance to irrelevance between 2007 and 2013 represents one of the most studied failures in business history. Vuori and Huy's (2016) detailed study, based on 76 interviews with Nokia managers and engineers, documents a systematic information pathology consistent with dysmemic pressure.

The technical facts were known. Engineers understood that Symbian, Nokia's operating system, could not compete with iOS and Android. Internal assessments documented the gaps. Middle managers were aware that development timelines were unrealistic and that the organization was falling further behind with each quarter.

The information existed within the organization. It did not reach decision-makers in usable form. Vuori and Huy document a fear-based communication breakdown: 'Top managers were afraid of the external environment and made middle managers afraid of them. Middle managers were afraid of top managers and made their subordinates afraid of them.' The result was systematic upward distortion. Bad news was softened, delayed, reframed, or omitted entirely. Status reports remained optimistic long after the situation had become dire.

The mechanism fits the dysmemic framework precisely. Preference divergence was severe: top managers needed reassurance that their strategy was working; middle managers needed to avoid being identified with failure; engineers needed to avoid the scrutiny that honest assessments would invite. The Crawford-Sobel prediction follows: communication precision collapsed.

Transmission biases amplified the dysfunction. Prestige bias meant that optimistic framings endorsed by senior leaders propagated downward while pessimistic assessments from junior engineers died in the hierarchy. Conformity bias meant that once green status reports became the norm, deviation marked the deviator rather than the problem. Content bias meant that simple narratives ('we just need to execute faster') outcompeted complex ones ('our architectural assumptions are fundamentally wrong').

The equilibrium was stable. No individual could profitably deviate. An engineer who reported accurately faced career consequences without changing the outcome—the organization could not act on the information because the same dynamics suppressed it elsewhere. The rational strategy was participation in the collective delusion.

Nokia's board received consistent reassurance until the crisis was terminal. The organization did not fail for lack of information. It failed because the selection environment had eliminated the information's transmission path.

# 4.2 NASA Challenger: Normalized Deviance

The Space Shuttle Challenger disaster of January 28, 1986, killed seven astronauts when an O-ring seal failed during launch. Diane Vaughan's (1996) landmark study documented the organizational dynamics that made the disaster, in her phrase, 'an accident waiting to happen.'

Engineers at Morton Thiokol, the contractor responsible for the solid rocket boosters, knew O-rings were vulnerable to cold temperatures. They had documented erosion on previous flights. The night before launch, engineers formally recommended against launching in the forecast cold conditions. They were overruled.

The dysmemic mechanism is visible in Vaughan's account of 'normalization of deviance.'
Each successful flight with O-ring anomalies made the next anomaly more acceptable. The baseline shifted. What began as a concerning deviation became expected variation, then normal operation. The cultural transmission path is clear: interpretations that permitted continued launches were adopted (content bias toward launch-supporting narratives), endorsed by program leadership (prestige bias), and became consensus (conformity bias). Interpretations that would have grounded the fleet faced the opposite selection environment.

Preference divergence was structural. NASA operated under intense schedule pressure from Congress, the White House, and institutional competition. Program managers were evaluated on launch cadence. Engineers were embedded in organizations that needed launches to continue. The communication channel between technical assessment and launch decision was systematically biased toward launch.

The night before launch, when Thiokol engineers recommended delay, NASA's response is instructive. Larry Mulloy, the solid rocket booster project manager, asked Thiokol to reconsider. Thiokol's management held an off-line caucus, during which senior vice president Jerry Mason reportedly said to Robert Lund, the VP of Engineering: 'Take off your engineering hat and put on your management hat.' Lund reversed his position. The launch proceeded.

The phrase captures the dysmemic dynamic: engineering truth (O-rings fail in cold) versus management fitness (launches maintain schedule, budget, and careers). When forced to choose, the individual chose internal fitness over external accuracy. This was not a moral failure. It was a predictable response to a selection environment that had been selecting for exactly this behavior for years.

The Rogers Commission that investigated the disaster famously concluded that NASA's 'decision-making culture' had become a causal factor. The language obscures the mechanism. 'Culture' suggests something atmospheric and diffuse. The reality was selection: systematic, structural, and predictable. The organization had built an environment where launch-supporting signals were fit and launch-delaying signals were not. The culture was the output of that selection, not its cause.

# 4.3 Wells Fargo: Institutionalized Fraud

Between 2002 and 2016, Wells Fargo employees opened approximately 3.5 million accounts without customer authorization. The fraud was not hidden—it was incentivized, measured, and managed. The Independent Directors' Report (2017) documents an organizational system that selected for exactly the behavior it nominally prohibited.

The 'cross-sell' strategy required employees to sell multiple products to each customer. Performance was measured by accounts opened, with aggressive quotas tied to compensation and job security. Employees who met quotas were rewarded; those who did not were terminated. The system created a simple optimization problem: open accounts or lose your job.

The preference divergence is stark. Wells Fargo's stated objective was customer relationships generating legitimate revenue. Individual employees' objective was survival, which required meeting quotas regardless of customer consent. The gap between stated and revealed preferences was the selection environment.

The cultural transmission followed predictable patterns. New employees learned quickly what actually mattered. Training materials emphasized ethics; peer behavior demonstrated that ethics was subordinate to numbers.
Managers who met quotas were promoted, becoming prestige figures whose methods were copied. Conformity pressure reinforced the behavior—teams that opened unauthorized accounts created norms that made non-participation conspicuous. Complaints existed at every level. The company's internal ethics hotline received reports. Regional managers raised concerns. The pattern was documented in HR files and legal settlements. The information was not absent; it was systematically discounted, attributed to bad actors rather than bad systems, treated as implementation failure rather than design consequence. The equilibrium persisted for over a decade. No individual could profitably deviate—an employee who refused to meet quotas was terminated; a manager who reduced quotas faced performance reviews based on team numbers. The organization optimized for the measured objective (accounts opened) at the cost of the stated objective (customer relationships). The resulting scandal cost Wells Fargo billions in fines and settlements, executive careers, and reputational damage that persists years later. The case illustrates dysmemic pressure in its most explicit form. The selection environment was not subtle—it was written into job descriptions, compensation plans, and termination criteria. The organization built a machine for generating fraud and then expressed surprise when fraud emerged. # 5. Propositions The theoretical framework generates testable propositions about organizational information environments. These are stated as directional predictions that could, in principle, be evaluated against organizational data. Proposition 1 (Preference Divergence): The greater the divergence between what advances an individual's career and what benefits the organization, the lower the information content of upward communication. As the gap between internal and external fitness incentives widens, communication precision degrades toward babbling equilibrium. This follows directly from Crawford and Sobel (1982). Testable implications include: organizations with stronger 'up or out' cultures should exhibit less accurate upward communication; roles with high job security should produce more accurate assessments than roles with precarious employment; communication from employees with outside options should be more informative than communication from those without. Proposition 2 (Evaluation Coupling): When the recipient of information is also responsible for evaluating the sender, information quality decreases. Decoupling evaluation from information consumption improves signal accuracy. This explains why organizations with independent audit functions, ombudsmen, or protected reporting channels often detect problems earlier than those without. The prediction is that organizations that structurally separate 'who needs to know' from 'who controls your career' will exhibit less dysmemic pressure in those domains. Proposition 3 (Process Capture): Any organizational process whose outputs are used to evaluate participants will, over time, optimize for evaluation success rather than process purpose. The process becomes dysmemic theater. This is a generalization of Goodhart's Law to cultural selection. 
Testable implications include: OKR processes that affect compensation should exhibit less strategic information than those that do not; performance reviews that determine promotion should contain less accurate information than developmental feedback with no career consequences; planning processes should become less predictive over time as participants learn to optimize for planning metrics rather than planning accuracy. Proposition 4 (Intervention Decay): Interventions that change expressed norms without changing payoff structures will exhibit initial improvement followed by regression to the pre-intervention equilibrium. The rate of regression depends on the strength of the unchanged selection pressure. This explains the consistent failure of culture change initiatives. Meta-analyses of organizational change efforts consistently report failure rates between 60 and 80 percent (Beer & Nohria, 2000). Testable implications include: values training should produce temporary behavioral changes that decay unless reinforced by incentive changes; leadership messaging should affect behavior only when accompanied by visible changes in reward and punishment; organizational culture should resist copying—transplanting practices without transplanting selection environments should produce decay toward the host environment's equilibrium. Proposition 5 (External Correction): Organizations under strong dysmemic pressure can only be corrected by external shock—information or consequences from outside the selection environment. Internal reform attempts will be absorbed into the dysmemic equilibrium. This follows from the self-reinforcing nature of dysmemic equilibria. Testable implications include: organizations that experience market corrections, regulatory interventions, or public scandals should exhibit temporary increases in information accuracy; the magnitude and duration of improvement should correlate with the severity of the shock; internal 'transformation' initiatives without external pressure should fail at higher rates than those accompanied by external forcing functions. # 6. Countermeasures: A Mechanism Design Perspective If dysmemic pressure is structural rather than behavioral, effective countermeasures must alter the selection environment itself rather than exhorting different behavior within the existing environment. This section evaluates potential interventions through a mechanism design lens, asking: what structures might shift the fitness landscape such that truth-tracking variants outcompete dysmemes? # 6.1 The Failure of Exhortation The standard intervention portfolio—culture change initiatives, leadership development, values training, psychological safety programs—treats dysmemic outcomes as behavioral problems susceptible to education and example. The framework developed here explains why these interventions consistently fail. Consider the typical culture change initiative. Leadership announces new values. Posters appear. Training sessions explain expected behaviors. For a period, employees perform the new norms. Then, imperceptibly, old patterns reassert. The employees who most visibly adopted the new culture often turn out to be the same ones who were best at performing the old one—they simply shifted their performance to the new script. The initiative failed not because employees are cynical. It failed because it did not alter the selection environment. The rewards still flowed to those who satisfied superiors rather than challenged them. 
The penalties still fell on those who surfaced problems rather than buried them. The new values were absorbed into the dysmemic ecosystem, becoming another vocabulary for signaling compliance. This explains why you cannot copy another organization's culture. The visible artifacts—open floor plans, all-hands meetings, mission statements—can be replicated. Without changing the underlying selection environment, the transplanted forms decay on contact with the host organization's incentive structure. Amazon's 'disagree and commit' becomes 'disagree and get fired.' Google's '20% time' becomes time spent after finishing 'real' work. Netflix's 'freedom and responsibility' becomes freedom to comply with unwritten expectations. # 6.2 Structural Countermeasures Effective countermeasures share a common feature: they alter the payoff matrix such that truth-telling becomes a dominant or at least viable strategy. Several structural approaches merit consideration: Evaluation decoupling: Separating the recipient of information from the evaluator of its source reduces the bias in the Crawford-Sobel sense. Examples include independent audit functions that report to boards rather than management, ombudsman offices with protected status, and anonymous reporting channels with credible confidentiality. The key is structural independence—not merely policy statements that can be overridden, but governance architecture that makes the independence durable. Prediction markets and scoring rules: Internal prediction markets on project outcomes, market events, or organizational metrics can elicit private beliefs with proper incentives (Hanson, 2003). Proper scoring rules reward accurate probability assessments regardless of the outcome, decoupling the payoff from what the predictor wants to be true. Implementation challenges are substantial (liquidity, manipulation, interpretation), but the mechanism directly addresses the preference divergence at the core of dysmemic pressure. Red teams and adversarial processes: Institutionalized devil's advocacy, where designated teams are rewarded for finding flaws, can create protected niches for truth-telling. The key is ensuring the red team's incentives genuinely align with finding problems rather than performing opposition. Red teams that are captured by the processes they're meant to challenge become dysmemic theater themselves. External validation requirements: Requiring external review of key assessments (customer advisory boards for product decisions, independent technical review for engineering claims, third-party audit for financial projections) introduces information from outside the internal selection environment. External validators face different fitness landscapes and thus different selection pressures. # 6.3 The Maintenance Problem Any structure that counterweights dysmemic pressure faces continuous pressure toward absorption back into the dysmemic equilibrium. The red team that becomes too influential will be defunded or captured. The independent audit function that creates too much friction will see its mandate narrowed. The prediction market that surfaces too much inconvenient truth will be discontinued or gamed. This is not paranoia; it is the selection dynamic operating on the countermeasures themselves. Ideas and structures that threaten the dysmemic equilibrium face the same fitness disadvantages as individual truth-tellers. The countermeasures must be designed not only to work initially but to resist absorption over time. 
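The scoring-rule idea from Section 6.2 is concrete enough to spell out before turning to the independence requirements below. Under a proper scoring rule, of which the quadratic (Brier) score is the simplest example, reporting your actual probability minimizes your expected loss; the snippet works through a hypothetical forecaster who privately assigns a 30% chance to an on-time ship date. The scenario and numbers are invented, but the property being demonstrated is the standard one.

```python
def brier_loss(forecast: float, outcome: int) -> float:
    """Quadratic (Brier) loss for a probability forecast of a binary event,
    where outcome is 1 if the event occurred and 0 otherwise."""
    return (forecast - outcome) ** 2

def expected_loss(reported: float, private_belief: float) -> float:
    """Expected Brier loss when the forecaster believes the event occurs
    with probability `private_belief` but reports `reported`."""
    return (private_belief * brier_loss(reported, 1)
            + (1 - private_belief) * brier_loss(reported, 0))

if __name__ == "__main__":
    belief = 0.3  # privately: 30% chance the release ships on time
    for reported in (0.3, 0.6, 0.9):
        print(f"report {reported:.1f} -> expected loss "
              f"{expected_loss(reported, belief):.3f}")
    # Output rises from 0.210 to 0.300 to 0.570: inflating the number to
    # please the room strictly increases expected loss under a proper rule.
```

This is what makes the commitment described in Section 6.2 credible: once compensation is tied to a proper score rather than to the reaction of the person in the room, the preference divergence that drives the babbling equilibrium is, for that one channel, narrowed.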
Durable countermeasures typically require three forms of independence: Governance independence: Reporting lines that do not run through the functions being assessed. The audit committee reports to the board, not the CFO. The red team reports to an executive not responsible for the project being evaluated. Resource independence: Budgets and staffing that cannot be reduced as retaliation for uncomfortable findings. Multi-year commitments, protected funding sources, or external support can provide this. Evaluation independence: Career consequences for the countermeasure staff that do not depend on the satisfaction of those they assess. Rotating assignments, external career paths, or tenure-like protections can provide this. Without all three, the countermeasure will likely be absorbed. With all three, maintenance is still an ongoing effort rather than a solved problem. The physics does not disappear; it can only be counterweighted. # 7. Discussion # 7.1 Implications If dysmemic pressure is structural rather than personal, several implications follow for organizational theory and practice. First, organizational dysfunction is not evidence of bad actors. The same people, in a different selection environment, would behave differently. Blaming individuals for systemic outcomes is not only unfair—it prevents diagnosis. The person who speaks up and gets punished is not more virtuous than the person who stays silent; they merely miscalculated the payoff structure. Attributing organizational failure to individual moral failure is itself a dysmeme—it spreads because it protects the system from examination. Second, the framework explains the stubborn failure of organizational change efforts. Meta-analyses consistently find that most change initiatives fail to achieve their stated objectives. The dysmemic lens suggests this is not implementation failure but design failure: the initiatives target behavior without targeting the selection environment that produces the behavior. They are, in effect, trying to change the output without changing the function. Third, some organizations may be beyond internal repair. When dysmemic pressure has saturated the meme pool sufficiently, the truth-tellers have already exited. The remaining population cannot recognize dysfunction because dysfunction is all they know. The culture is not mistaken about reality; it has constructed an alternative reality that is internally consistent and externally fatal. Correction requires external shock—market failure, regulatory intervention, scandal, or replacement of the organization entirely. Fourth, external perspective is structurally necessary rather than merely helpful. An organization trapped in dysmemic equilibrium cannot validate its own outputs. The same biases that distort the information also distort the assessment of whether information is distorted. Outside observers—consultants, boards, investors, regulators—are not luxuries but requirements for organizations that wish to maintain contact with reality. This provides a functional justification for governance structures that might otherwise appear as mere overhead. Fifth, understanding dysmemic pressure does not exempt you from it. Awareness is necessary but not sufficient. The forces remain operative. The question is whether sufficient counterweight has been built—structures that protect variance, mechanisms that surface truth, governance that maintains independence from the drift toward comfortable consensus. 
# 7.2 Limitations and Boundary Conditions The framework has important limitations that bound its applicability. First, the mechanism operates most powerfully at scale. Small organizations with direct observation and tight feedback loops may not develop strong dysmemic pressure because the information pathologies are quickly corrected by reality. The framework applies primarily to organizations large enough that information must flow through multiple nodes and slow enough that consequences are temporally distant from actions. Second, the framework does not address all forms of organizational failure. Failures due to external shocks, technological disruption, resource constraints, or genuinely unforeseeable events are not explained by dysmemic pressure. The mechanism applies specifically to failures where relevant information existed within the organization but was not transmitted, processed, or acted upon. Third, the propositions are stated directionally rather than precisely. Quantifying dysmemic pressure, predicting thresholds for babbling equilibria, or specifying the functional form of intervention decay would require empirical work beyond the scope of this paper. The framework generates predictions but does not, at this stage, generate point estimates. Fourth, the mechanism design countermeasures are evaluated conceptually rather than empirically. While the logic suggests they should be effective, real-world implementation faces challenges not addressed here: political resistance, cost constraints, unintended consequences, and the possibility that novel interventions create novel dysmemic adaptations. Fifth, the framework takes the existence of preference divergence as given. It does not address why organizations develop structures that create such divergence in the first place, or why some organizations maintain alignment better than others. A complete theory would need to explain the origins of dysmemic selection environments, not merely their consequences. # 7.3 Future Research Several directions for future research emerge from the framework. Empirical measurement of dysmemic pressure is the most pressing need. This might involve surveys measuring perceived preference divergence, content analysis of organizational communications over time, comparison of internal assessments with external outcomes, or experimental manipulation of selection environments in organizational settings. The propositions generate testable predictions; testing them requires operationalization. Comparative organizational analysis could identify structural features associated with resistance to dysmemic pressure. Are there industries, governance forms, or organizational designs that exhibit systematically better information environments? What do they have in common? Case selection focusing on variation rather than failure might illuminate protective factors. Intervention studies, ideally randomized or quasi-experimental, could evaluate the countermeasures proposed here. Does evaluation decoupling actually improve information quality? Do prediction markets elicit more accurate assessments than traditional reporting? How long do interventions persist before absorption? The mechanism design literature provides tools for such evaluation, but organizational contexts present implementation challenges that merit study in their own right. Integration with adjacent literatures could enrich the framework. 
The psychological safety literature (Edmondson, 1999) addresses similar phenomena at the team level; connection might reveal how micro-level dynamics aggregate to organizational-level equilibria. The institutional theory literature addresses how organizational forms spread and persist; connection might explain how dysmemic selection environments themselves propagate across organizations. # 8. Conclusion Organizations fail not because they lack information but because they select against it. The selection is not random. It follows predictable dynamics: strategic incentives that make truth costly, transmission biases that make comfortable falsehoods sticky, conformity pressures that lock in whatever dysfunction reaches critical mass first. I have called this selection force dysmemic pressure. The name is new. The phenomenon is ancient. Every organization that has ever collapsed while its members privately knew the collapse was coming has experienced it. Every reform that decayed back into the dysfunction it was meant to address has fallen victim to it. Every leader who has asked 'why didn't anyone tell me?' after a preventable disaster has discovered, too late, what it produces. The contribution here is synthesis and precision. By connecting the game-theoretic literature on strategic communication to agency theory and the cultural evolution literature on transmission dynamics, we can describe the mechanism with sufficient clarity to identify what conditions produce it, why it is stable, why standard interventions fail, and what alternative architectures might resist it. That precision matters because it shifts the frame. Organizational dysfunction stops being a moral failure and becomes a physics problem. Physics problems do not respond to exhortation. They respond to engineering. You do not convince gravity to behave differently. You build structures that account for its operation. The question for any organization is not whether dysmemic pressure exists—it does, always, at scale. The question is whether anything counterweights it. Whether the selection environment has been deliberately designed to protect truth. Whether structures exist that reward accuracy over performance, dissent over consensus, reality over comfort. Where such structures exist and are defended, organizations retain the capacity to adapt. Where they do not, the drift continues—imperceptible, comfortable, and ultimately fatal.
arxiv_physics
2025-12-09T00:00:00Z
https://arxiv.org/pdf/2512.14716
{"title": "Dysmemic Pressure: Selection Dynamics in Organizational Information Environments", "raw_content": "How you feel about a mechanism doesn't change whether it governs you.\n\n# Dysmemic Pressure\n\nSelection Dynamics in Organizational Information Environments\n\nJeremy McEntire\n\nDecember 2024\n\n# Abstract\n\nWhy do organizations comprised of intelligent individuals converge on collective delusion? This paper introduces dysmemic pressure as a formal mechanism explaining organizational epistemic failure. Synthesizing strategic communication theory (Crawford & Sobel, 1982), agency theory (Prendergast, 1993), and cultural evolution (Boyd & Richerson, 1985), I demonstrate how preference divergence between organizational agents generates stable equilibria where communication becomes statistically independent of reality, while transmission biases lock dysfunction into self-reinforcing states. The mechanism operates through identifiable dynamics: as the bias between sender and receiver preferences increases, communication precision degrades through progressively coarser partitions until reaching 'babbling equilibrium' where messages carry no information; simultaneously, transmission biases (content, prestige, conformity) ensure that dysfunctional signals outcompete accurate ones in the organizational meme pool. Three detailed case studies—Nokia's smartphone collapse, NASA's Challenger disaster, and Wells Fargo's account fraud scandal—illustrate the mechanism's operation across industries and failure modes. I derive five testable propositions and evaluate potential countermeasures through a mechanism design lens. The analysis reframes organizational dysfunction from moral failure to physics problem, explaining why standard interventions (culture change, leadership development, values alignment) so often fail: they treat equilibrium outcomes as behavioral problems rather than altering the selection environment that produces them.\n\nKeywords: organizational behavior, information economics, cultural evolution, strategic communication, cheap talk, agency theory, organizational failure, epistemic dysfunction\n\n# 1. Introduction\n\nThe modern organization is supposed to be an information-processing machine. Distributed knowledge flows upward through reporting structures, gets aggregated by managers, and emerges as coordinated action. The hierarchy exists, in theory, to make the whole smarter than any individual part. This premise underlies nearly a century of organizational theory, from Weber's bureaucratic rationality through Simon's bounded rationality to contemporary work on organizational learning and knowledge management.\n\nThe premise does not survive contact with observation.\n\nNokia's middle managers knew Symbian was failing years before the company's collapse. They sent optimistic reports anyway (Vuori & Huy, 2016). NASA's engineers knew O-rings eroded in cold weather; they signed off on the Challenger launch anyway (Vaughan, 1996). Wells Fargo's branch employees knew they were opening fraudulent accounts; they hit their quotas anyway (Independent Directors of the Board of Wells Fargo, 2017). In each case, the\n\norganization was not starved for information. It was drowning in false signals it had selected for.\n\nThe conventional explanations invoke psychology: hubris, greed, groupthink, cognitive bias. These explanations are not wrong. They are incomplete. 
They fail to account for the systematic nature of the failure—that is, why the same pattern recurs across industries, cultures, decades, and organizational forms. They fail to explain why organizations comprised of individually rational actors produce collectively irrational outcomes with such regularity that we can predict it.\n\nThis paper proposes a structural explanation grounded in the intersection of three literatures that have not been adequately connected: strategic communication in economics, agency theory, and cultural evolution in biology. The core argument is that organizational dysfunction of this type is the equilibrium itself, not a deviation from it. Organizations are selection environments. Ideas, reports, signals, and cultural norms compete for transmission within those environments. What survives is what is fit—regardless of whether it maps reality. And fitness, in organizational contexts, is often negatively correlated with truth.\n\nI call this selection force dysmemic pressure: the systematic favoring of cultural variants that increase individual payoff while decreasing collective adaptability. The term 'dysmemic' has appeared in informal discourse on memetics (e.g., Glendinning, 2001–present) to describe the phenomenon of harmful ideas spreading rapidly. This paper provides the first formal definition of dysmemic pressure as an organizational selection mechanism, specifying its components and deriving testable propositions from the synthesis of strategic communication and cultural evolution theory. The terminology follows the established eu-/dys- pattern in memetics; 'eumemics' (improving meme pool quality) appears in the standard literature, while 'dysmemic' represents its logical complement, paralleling 'dysgenic' in genetics.\n\nThe contribution is not discovery, per se, but synthesis and precision—connecting literatures that have remained separate and defining the mechanism in terms that permit testable prediction. By integrating game-theoretic models of strategic communication with agency theory and cultural evolutionary models of transmission dynamics, we can describe the mechanism with sufficient clarity to identify what conditions produce it, why it is stable, why standard interventions fail, and what alternative architectures might resist it. That precision matters because it shifts the frame. Organizational dysfunction stops being a moral failing and becomes a physics problem. Physics problems do not respond to exhortation. They respond to engineering.\n\nThe paper proceeds as follows. Section 2 reviews the relevant literatures in strategic communication, agency theory, cultural evolution, and organizational failure. Section 3 develops the theoretical framework, formally defining dysmemic pressure and deriving its properties. Section 4 presents three detailed case studies illustrating the mechanism's operation. Section 5 derives testable propositions. Section 6 evaluates potential countermeasures through a mechanism design lens. Section 7 discusses implications and limitations. Section 8 concludes.\n\n# 2. Theoretical Foundations\n\n# 2.1 Strategic Communication and the Partition Theorem\n\nThe game-theoretic foundation comes from Crawford and Sobel's (1982) seminal paper on strategic information transmission, which formalized the conditions under which communication conveys information in the presence of conflicting interests. 
In their model, a Sender observes the true state of the world $t$ and transmits a costless message $m$ to a\n\nReceiver, who then takes an action $y$ affecting both parties. The crucial variable is bias—the divergence between what the Sender wants and what the Receiver wants.\n\nTheir central result transformed how economists think about communication: as bias increases, the precision of information transmission decreases. When interests are perfectly aligned, full revelation is possible. As they diverge, communication becomes increasingly coarse, partitioning the state space into ever-larger bins. When bias becomes sufficiently large, the system collapses into what they call a 'babbling equilibrium'—a state where the Sender's messages are statistically independent of the true state, and the Receiver rationally ignores them entirely.\n\nThe babbling equilibrium is not a failure of the model. It is a prediction. Given sufficient preference divergence, silence or deception becomes the rational choice. The equilibrium is stable because no player can profitably deviate: the Sender gains nothing from truthful revelation (since the Receiver ignores messages anyway), and the Receiver gains nothing from attending to messages (since they contain no information).\n\nSubsequent work has extended this framework considerably. Kamenica and Gentzkow (2011) developed Bayesian persuasion, showing how Senders can strategically design information structures to influence Receivers even when both parties are fully rational. Their key insight is that the Sender benefits from commitment—the ability to pre-specify a signal structure before observing the state. This has direct organizational applications: formal reporting systems, metrics, and dashboards can be understood as commitment devices that constrain the information structures available to agents.\n\nThe organizational application is immediate. Consider a manager who needs the project status report to be positive (to avoid hard conversations, protect headcount, preserve their own position) and an engineer who needs to deliver an honest assessment. Their preferences diverge. The engineer learns that accurate reports invite unwanted attention, while vague optimism satisfies the manager. Communication becomes ritual. 'We're on track' means nothing. Status meetings become noise. The babbling equilibrium has been achieved.\n\n# 2.2 Agency Theory and the Yes Man Problem\n\nPrendergast (1993) extended this logic specifically to internal labor markets, providing the microeconomic foundation for why executives are systematically deceived by their subordinates. His 'Theory of Yes Men' demonstrates that when organizations rely on subjective performance evaluations—where a principal assesses an agent based on judgment rather than objective metrics—the agent faces powerful incentives to conform to the principal's prior beliefs.\n\nThe mechanism is straightforward. A subordinate discovers information relevant to a strategic question. If that information contradicts what the principal believes, reporting it creates two risks: the principal may doubt the subordinate's competence (since their conclusions differ from the 'correct' view), and the principal may audit the subordinate's reasoning, consuming political capital and creating friction. The rational subordinate, anticipating these dynamics, skews reporting toward the principal's priors.\n\nPrendergast proves that this dynamic emerges not from sycophancy but from the structure of subjective evaluation itself. 
# 2.2 Agency Theory and the Yes Man Problem

Prendergast (1993) extended this logic specifically to internal labor markets, providing the microeconomic foundation for why executives are systematically deceived by their subordinates. His 'Theory of Yes Men' demonstrates that when organizations rely on subjective performance evaluations—where a principal assesses an agent based on judgment rather than objective metrics—the agent faces powerful incentives to conform to the principal's prior beliefs.

The mechanism is straightforward. A subordinate discovers information relevant to a strategic question. If that information contradicts what the principal believes, reporting it creates two risks: the principal may doubt the subordinate's competence (since their conclusions differ from the 'correct' view), and the principal may audit the subordinate's reasoning, consuming political capital and creating friction. The rational subordinate, anticipating these dynamics, skews reporting toward the principal's priors.

Prendergast proves that this dynamic emerges not from sycophancy but from the structure of subjective evaluation itself. The principal relies on the subordinate for information but can also conduct an independent assessment. If audit costs are high—and in complex organizations, they almost always are—the principal will rely on priors. A rational subordinate, anticipating this, will match their reports to those priors. The 'Yes Man' is not a character flaw. He is the equilibrium output of a poorly designed incentive structure.

This creates a feedback loop of confirmation bias. The principal is surrounded by apparent agreement not because they hire sycophants, but because the incentive structure converts honest agents into sycophants. The result is an organization that becomes progressively detached from reality, as the feedback mechanisms meant to correct leadership's errors are repurposed to reinforce them.

# 2.3 The Lemon Problem of Ideas

Akerlof's (1970) 'Market for Lemons' describes how information asymmetry can cause market collapse through adverse selection. In his canonical example, used car buyers cannot distinguish good cars from bad ones ('lemons'). Sellers of good cars, unable to credibly signal quality, withdraw from the market. Only lemons remain.

The same dynamic operates in the organizational idea marketplace. Proposals, assessments, and strategic recommendations are 'goods' whose quality is costly to verify. An optimistic projection and an accurate one look identical at the moment of presentation. The principal (leadership) faces the same information asymmetry as the used car buyer: they cannot easily distinguish dysmemes from eumemes.

The Akerlof logic then follows. Accurate assessments require work—data gathering, analysis, acknowledgment of uncertainty. Optimistic dysmemes require only confidence. As the market floods with cheap-to-produce dysmemes, accurate assessments become relatively more costly. Truth-tellers, unable to credibly distinguish themselves and facing higher production costs, reduce their output. The market equilibrates at a pooling equilibrium where the typical signal is uninformative—a mixture of charlatans and the genuinely deluded, with the careful analyst having exited.
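The unraveling can be illustrated with a deliberately crude toy loop (my own construction, not Akerlof's model or anything from this paper): the 'credit' an assessment earns is the average value of assessments currently circulating, accurate work costs more to produce, and careful analysts withdraw whenever the pooled credit no longer covers their effort. All numbers are arbitrary assumptions chosen only to show the direction of the dynamic.

```python
# Toy adverse-selection loop for the organizational idea market.
# Values and costs are illustrative assumptions, not estimates.
V_ACCURATE, V_DYSMEME = 1.0, 0.1   # value of each kind of assessment to the organization
C_ACCURATE, C_DYSMEME = 0.6, 0.05  # effort cost borne by the producer

share_accurate = 0.5               # initial share of accurate assessments in circulation
for step in range(20):
    pooled_credit = share_accurate * V_ACCURATE + (1 - share_accurate) * V_DYSMEME
    dysmeme_survives = pooled_credit >= C_DYSMEME  # always true here: optimism is nearly free to produce
    print(f"step {step}: pooled credit = {pooled_credit:.2f}, accurate share = {share_accurate:.2f}")
    if pooled_credit >= C_ACCURATE:
        break  # the pooled credit still covers careful work: no unraveling at these numbers
    share_accurate = max(0.0, share_accurate - 0.1)  # some truth-tellers exit each round
    if share_accurate == 0.0:
        print("accurate assessments have left the market; only dysmemes remain")
        break
```

Because confident optimism is nearly free to produce, it never exits; the careful analyst does, which is the pooling outcome described above.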
# 2.4 Cultural Evolution and Transmission Bias

Game theory explains why individuals choose to transmit false signals. It does not explain why certain falsehoods dominate, why they spread, or why they prove so resistant to correction. For that, we need cultural evolution.

Boyd and Richerson's (1985, 2005) dual-inheritance theory treats cultural information—ideas, practices, norms—as replicating units subject to selection pressures analogous to (though not identical with) those operating on genes. The critical insight is that the fitness of a cultural variant is not determined by its truth value. It is determined by its transmission properties. An idea spreads because it is easy to remember, emotionally resonant, or associated with high-status individuals—not because it accurately maps reality.

Three transmission biases are particularly relevant to organizational contexts:

Content bias refers to the differential transmission of ideas based on their intrinsic properties. Simple, emotionally satisfying ideas outcompete complex, ambiguous ones. 'We just need to execute better' is lighter cognitive load than 'Our architecture has accumulated technical debt requiring a multi-quarter remediation effort with uncertain ROI.' The former spreads; the latter dies in the meeting where it was born. Henrich and Gil-White (2001) demonstrate that content biases operate largely below conscious awareness—people do not choose to prefer simpler explanations; they simply find them more memorable and transmissible.

Prestige bias refers to the preferential copying of ideas associated with high-status individuals. If the CEO believes the competitor is irrelevant, that belief cascades downward regardless of evidence. Subordinates adopt it not because they believe it, but because imitating the leader is a dominant strategy for advancement (Henrich & McElreath, 2003). The mechanism is self-reinforcing: those who adopt the prestige figure's beliefs advance, becoming prestige figures themselves and further propagating the belief.

Conformity bias refers to the disproportionate adoption of common beliefs. Once a belief reaches critical mass, deviation becomes costly. If everyone reports green, reporting red marks you as the problem. The pressure to conform locks in whatever belief happened to reach threshold first (Boyd & Richerson, 1985). Importantly, conformity bias operates on perceived rather than actual consensus—if people believe everyone else believes X, they will adopt X even if private dissent is widespread. This creates informational cascades where public consensus increasingly diverges from private belief (Bikhchandani, Hirshleifer, & Welch, 1992).

Blackmore (1999) and Dennett (1995) emphasize that meme fitness is orthogonal to truth—a point frequently misunderstood. The claim is not that false ideas spread better than true ones (though sometimes they do). The claim is that transmission properties and truth value are independent variables. An idea can be true and highly transmissible, true and poorly transmissible, false and highly transmissible, or false and poorly transmissible. Selection operates on transmissibility. Truth is along for the ride, or not.

# 2.5 Organizational Failure and Epistemic Dysfunction

A substantial literature documents organizational failures attributable to information pathologies, though rarely with the formal mechanism proposed here.

Vaughan's (1996) study of the Challenger disaster introduced 'normalization of deviance'—the gradual desensitization to risk as small deviations become routine. Each successful launch with O-ring erosion made the next launch with erosion more acceptable, until catastrophic failure became, in retrospect, inevitable. Vaughan explicitly frames this as a structural rather than individual phenomenon: 'No fundamental decision was made at NASA to do evil... Rather, a loss of insight was actually facilitated by... the institutional structures that monitored risk.'

Janis's (1972) groupthink model identifies conditions under which cohesive groups make defective decisions: illusions of invulnerability, collective rationalization, stereotypes of out-groups, self-censorship, illusions of unanimity, and direct pressure on dissenters. While influential, the model operates at the group level and does not explain why these dynamics persist or why they prove resistant to intervention.

Nguyen's (2020) work on epistemic bubbles and echo chambers provides a useful distinction. Epistemic bubbles arise from omission—relevant voices are simply absent. Echo chambers arise from active discrediting—outside voices are present but dismissed as untrustworthy. Organizations can exhibit both: some information never reaches decision-makers (bubble), while information that does reach them is filtered through loyalty tests that dismiss uncomfortable sources (chamber).

Levy (2022) argues that 'bad beliefs' are often rational responses to corrupted epistemic environments rather than individual cognitive failures. This perspective aligns with the mechanism proposed here: if the selection environment rewards certain beliefs regardless of truth, holding those beliefs is individually rational even when collectively catastrophic.

# 3. Theoretical Framework

# 3.1 The Synthesis

The synthesis connects these literatures through a simple observation: organizations are both strategic environments (where agents with divergent preferences communicate) and cultural environments (where ideas compete for transmission). The game theory explains stability—why no individual can profitably deviate from the dysfunctional equilibrium. The cultural evolution explains spread—why the dysfunction saturates the organization and proves resistant to correction.

Consider an organization where accurate risk reporting would benefit the collective (avoiding disasters, enabling adaptation) but harm individual reporters (inviting scrutiny, creating conflict, marking oneself as a problem). The strategic communication literature tells us this preference divergence will degrade information quality. The cultural evolution literature tells us that whatever false signals emerge will spread through content bias (simple narratives over complex ones), prestige bias (adopting the beliefs of successful people, who succeeded partly by not reporting risks), and conformity bias (reporting what everyone else reports).

The combination produces a ratchet effect. False signals crowd out true ones because they are individually advantageous. Once established, they become culturally dominant through transmission biases. Cultural dominance makes deviation even more costly (now you're fighting consensus, not just reporting bad news). The equilibrium is self-reinforcing and self-protecting.
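The ratchet can be made concrete with a small agent-based sketch of my own (not part of the paper's formal apparatus): each agent either reports accurately or repeats the comfortable dysmeme, accurate reporting carries an individual penalty (scrutiny), deviating from the majority carries a conformity penalty, and agents imitate colleagues who did better, a crude stand-in for prestige-biased transmission. Payoff numbers are arbitrary assumptions.

```python
import random

random.seed(0)

N, ROUNDS = 200, 12
HONESTY_PENALTY = 1.0     # individual cost of accurate reporting (scrutiny, friction)
CONFORMITY_WEIGHT = 2.0   # cost of deviating from what most colleagues report

def payoff(is_honest: bool, honest_share: float) -> float:
    if is_honest:
        return -HONESTY_PENALTY - CONFORMITY_WEIGHT * (1.0 - honest_share)
    return -CONFORMITY_WEIGHT * honest_share

honest = [random.random() < 0.65 for _ in range(N)]   # start with a mostly honest workforce

for t in range(ROUNDS):
    share = sum(honest) / N
    print(f"round {t:2d}: honest share = {share:.2f}")
    pay = [payoff(h, share) for h in honest]
    # Prestige-biased imitation: each agent copies a randomly observed colleague
    # whose payoff this round was strictly higher than their own.
    observed = [random.randrange(N) for _ in range(N)]
    honest = [honest[j] if pay[j] > pay[i] else honest[i] for i, j in enumerate(observed)]
```

With these (arbitrary) payoffs, honesty is only self-sustaining above roughly a 75% honest share; starting below that threshold, the honest share decays toward zero within a few rounds, and once the dysmeme is the norm the conformity term makes any individual return to accuracy even more expensive, which is the lock-in described above.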
# 3.2 Formal Definition

I define dysmemic pressure as follows:

Definition: Dysmemic pressure is the selection force in an organization that favors cultural variants (ideas, signals, practices) which increase individual payoff while decreasing collective adaptability—that is, where internal fitness is negatively correlated with external fitness.

The relationship among these forces can be expressed as a conceptual schema:

Dysmemic Pressure $\propto f(\text{Incentive Divergence},\ \text{Transmission Ease},\ \text{Verification Cost})$

This is not a derived formula but a heuristic summarizing the mechanism's structure: pressure intensifies as any component increases. The three terms capture the distinct contributions of each theoretical foundation. Incentive Divergence is the Crawford-Sobel bias parameter $b$: the degree to which internal fitness (what advances individual careers) diverges from external fitness (what benefits the organization). When these are negatively correlated—when the behaviors that help the individual harm the collective—incentive divergence is high and communication degrades toward babbling equilibrium. As $b$ increases, communication precision decreases. Transmission Ease captures content bias from cultural evolution: simpler, more emotionally satisfying signals spread faster regardless of accuracy. Verification Cost captures the Akerlof asymmetry: the higher the cost to verify signal quality, the greater the adverse selection pressure on the organizational idea market.

Several components require elaboration. Internal fitness refers to the expected benefit to the carrier within the organization: promotion probability, conflict avoidance, status maintenance, resource allocation, job security. These are the payoffs that shape individual behavior in the Crawford-Sobel framework. External fitness refers to the expected benefit to the organization as a whole in its external environment: market adaptation, risk management, competitive positioning, survival. These are the outcomes that organizational design is nominally meant to optimize.

Negative correlation is the key condition. Internal and external fitness can be positively correlated (when rewarding truth-telling), uncorrelated (when rewards are orthogonal to information quality), or negatively correlated (when rewards punish truth-telling). Dysmemic pressure exists when the correlation is negative—when the behaviors that benefit individuals harm the organization.

'Dysmemes' are cultural variants that satisfy this condition: they help the carrier (get promoted, avoid conflict, maintain status) while harming the host organization (misleading strategy, masking risk, preventing adaptation). They outcompete 'eumemes'—truth-tracking variants—because the selection environment rewards the former and punishes the latter.

The result is not merely that individuals lie. The organizational ecology shifts. Truth-tellers exit or go silent. The meme pool becomes saturated with dysmemes. The organization loses the capacity to perceive reality accurately—not because individuals are stupid, but because the smart move is to participate in the collective hallucination.

# 3.3 Conditions and Dynamics

Dysmemic pressure intensifies under identifiable conditions:

Preference divergence: The greater the gap between what agents want and what principals want, the greater the pressure. In Crawford-Sobel terms, larger bias produces coarser communication partitions and increases the probability of babbling equilibria. Organizations where managers are evaluated on metrics that diverge from organizational health (quarterly numbers vs. long-term value, headcount vs. capability, activity vs. outcomes) exhibit greater preference divergence.

Evaluation coupling: When the consumer of information is also the evaluator of the producer, pressure intensifies. The engineer reporting to the manager who controls their performance review faces different incentives than the engineer reporting to an independent quality function. Decoupling evaluation from information consumption reduces bias in the Crawford-Sobel sense.

Transmission structure: Steep hierarchies with few horizontal connections amplify prestige bias (information flows through few high-status nodes) and reduce the error-correction capacity of distributed networks. Flat structures with many horizontal connections reduce prestige concentration but may amplify conformity bias if consensus norms are strong.

External feedback delay: When consequences of dysfunction are distant in time or attribution, dysmemic equilibria persist longer. Organizations in fast-feedback environments (trading desks, emergency services) exhibit less dysmemic pressure than those in slow-feedback environments (strategic planning, R&D) because reality corrections arrive before dysfunction saturates the culture.

Exit costs: When truth-tellers cannot easily leave, they face a choice between silence and punishment. High exit costs (specialized skills, geographic constraints, unvested compensation) increase the proportion of the population that chooses silence, accelerating dysmemic saturation. Industries with high mobility exhibit less dysmemic pressure than those with golden handcuffs.

# 4. Case Studies

Three cases illustrate dysmemic pressure operating across different industries, failure modes, and organizational contexts. Each demonstrates the mechanism's key features: preference divergence generating babbling equilibria, transmission biases locking in dysfunction, and the self-reinforcing nature of the resulting state.

# 4.1 Nokia: Fear-Induced Babbling

Nokia's collapse from mobile phone dominance to irrelevance between 2007 and 2013 represents one of the most studied failures in business history. Vuori and Huy's (2016) detailed study, based on 76 interviews with Nokia managers and engineers, documents a systematic information pathology consistent with dysmemic pressure.

The technical facts were known. Engineers understood that Symbian, Nokia's operating system, could not compete with iOS and Android. Internal assessments documented the gaps. Middle managers were aware that development timelines were unrealistic and that the organization was falling further behind with each quarter. The information existed within the organization.

It did not reach decision-makers in usable form. Vuori and Huy document a fear-based communication breakdown: 'Top managers were afraid of the external environment and made middle managers afraid of them. Middle managers were afraid of top managers and made their subordinates afraid of them.' The result was systematic upward distortion. Bad news was softened, delayed, reframed, or omitted entirely. Status reports remained optimistic long after the situation had become dire.

The mechanism fits the dysmemic framework precisely. Preference divergence was severe: top managers needed reassurance that their strategy was working; middle managers needed to avoid being identified with failure; engineers needed to avoid the scrutiny that honest assessments would invite. The Crawford-Sobel prediction follows: communication precision collapsed.

Transmission biases amplified the dysfunction. Prestige bias meant that optimistic framings endorsed by senior leaders propagated downward while pessimistic assessments from junior engineers died in the hierarchy. Conformity bias meant that once green status reports became the norm, deviation marked the deviator rather than the problem. Content bias meant that simple narratives ('we just need to execute faster') outcompeted complex ones ('our architectural assumptions are fundamentally wrong').

The equilibrium was stable. No individual could profitably deviate. An engineer who reported accurately faced career consequences without changing the outcome—the organization could not act on the information because the same dynamics suppressed it elsewhere. The rational strategy was participation in the collective delusion.

Nokia's board received consistent reassurance until the crisis was terminal. The organization did not fail for lack of information. It failed because the selection environment had eliminated the information's transmission path.

# 4.2 NASA Challenger: Normalized Deviance

The Space Shuttle Challenger disaster of January 28, 1986, killed seven astronauts when an O-ring seal failed during launch. Diane Vaughan's (1996) landmark study documented the organizational dynamics that made the disaster, in her phrase, 'an accident waiting to happen.' Engineers at Morton Thiokol, the contractor responsible for the solid rocket boosters, knew O-rings were vulnerable to cold temperatures. They had documented erosion on previous flights. The night before launch, engineers formally recommended against launching in the forecast cold conditions. They were overruled.

The dysmemic mechanism is visible in Vaughan's account of 'normalization of deviance.' Each successful flight with O-ring anomalies made the next anomaly more acceptable. The baseline shifted. What began as a concerning deviation became expected variation, then normal operation. The cultural transmission path is clear: interpretations that permitted continued launches were adopted (content bias toward launch-supporting narratives), endorsed by program leadership (prestige bias), and became consensus (conformity bias). Interpretations that would have grounded the fleet faced the opposite selection environment.

Preference divergence was structural. NASA operated under intense schedule pressure from Congress, the White House, and institutional competition. Program managers were evaluated on launch cadence. Engineers were embedded in organizations that needed launches to continue. The communication channel between technical assessment and launch decision was systematically biased toward launch.

The night before launch, when Thiokol engineers recommended delay, NASA's response is instructive. Larry Mulloy, the solid rocket booster project manager, asked Thiokol to reconsider. Thiokol's management held an off-line caucus, during which senior vice president Jerry Mason reportedly said to Robert Lund, the VP of Engineering: 'Take off your engineering hat and put on your management hat.' Lund reversed his position. The launch proceeded.

The phrase captures the dysmemic dynamic: engineering truth (O-rings fail in cold) versus management fitness (launches maintain schedule, budget, and careers). When forced to choose, the individual chose internal fitness over external accuracy. This was not a moral failure. It was a predictable response to a selection environment that had been selecting for exactly this behavior for years.

The Rogers Commission that investigated the disaster famously concluded that NASA's 'decision-making culture' had become a causal factor. The language obscures the mechanism. 'Culture' suggests something atmospheric and diffuse. The reality was selection: systematic, structural, and predictable. The organization had built an environment where launch-supporting signals were fit and launch-delaying signals were not. The culture was the output of that selection, not its cause.

# 4.3 Wells Fargo: Institutionalized Fraud

Between 2002 and 2016, Wells Fargo employees opened approximately 3.5 million accounts without customer authorization. The fraud was not hidden—it was incentivized, measured, and managed. The Independent Directors' Report (2017) documents an organizational system that selected for exactly the behavior it nominally prohibited.

The 'cross-sell' strategy required employees to sell multiple products to each customer. Performance was measured by accounts opened, with aggressive quotas tied to compensation and job security. Employees who met quotas were rewarded; those who did not were terminated. The system created a simple optimization problem: open accounts or lose your job.

The preference divergence is stark. Wells Fargo's stated objective was customer relationships generating legitimate revenue. Individual employees' objective was survival, which required meeting quotas regardless of customer consent. The gap between stated and revealed preferences was the selection environment.

The cultural transmission followed predictable patterns. New employees learned quickly what actually mattered. Training materials emphasized ethics; peer behavior demonstrated that ethics was subordinate to numbers. Managers who met quotas were promoted, becoming prestige figures whose methods were copied. Conformity pressure reinforced the behavior—teams that opened unauthorized accounts created norms that made non-participation conspicuous.

Complaints existed at every level. The company's internal ethics hotline received reports. Regional managers raised concerns. The pattern was documented in HR files and legal settlements. The information was not absent; it was systematically discounted, attributed to bad actors rather than bad systems, treated as implementation failure rather than design consequence.

The equilibrium persisted for over a decade. No individual could profitably deviate—an employee who refused to meet quotas was terminated; a manager who reduced quotas faced performance reviews based on team numbers. The organization optimized for the measured objective (accounts opened) at the cost of the stated objective (customer relationships). The resulting scandal cost Wells Fargo billions in fines and settlements, executive careers, and reputational damage that persists years later.

The case illustrates dysmemic pressure in its most explicit form. The selection environment was not subtle—it was written into job descriptions, compensation plans, and termination criteria. The organization built a machine for generating fraud and then expressed surprise when fraud emerged.

# 5. Propositions

The theoretical framework generates testable propositions about organizational information environments. These are stated as directional predictions that could, in principle, be evaluated against organizational data.

Proposition 1 (Preference Divergence): The greater the divergence between what advances an individual's career and what benefits the organization, the lower the information content of upward communication. As the gap between internal and external fitness incentives widens, communication precision degrades toward babbling equilibrium.

This follows directly from Crawford and Sobel (1982). Testable implications include: organizations with stronger 'up or out' cultures should exhibit less accurate upward communication; roles with high job security should produce more accurate assessments than roles with precarious employment; communication from employees with outside options should be more informative than communication from those without.

Proposition 2 (Evaluation Coupling): When the recipient of information is also responsible for evaluating the sender, information quality decreases. Decoupling evaluation from information consumption improves signal accuracy.

This explains why organizations with independent audit functions, ombudsmen, or protected reporting channels often detect problems earlier than those without. The prediction is that organizations that structurally separate 'who needs to know' from 'who controls your career' will exhibit less dysmemic pressure in those domains.

Proposition 3 (Process Capture): Any organizational process whose outputs are used to evaluate participants will, over time, optimize for evaluation success rather than process purpose. The process becomes dysmemic theater.

This is a generalization of Goodhart's Law to cultural selection. Testable implications include: OKR processes that affect compensation should exhibit less strategic information than those that do not; performance reviews that determine promotion should contain less accurate information than developmental feedback with no career consequences; planning processes should become less predictive over time as participants learn to optimize for planning metrics rather than planning accuracy.

Proposition 4 (Intervention Decay): Interventions that change expressed norms without changing payoff structures will exhibit initial improvement followed by regression to the pre-intervention equilibrium. The rate of regression depends on the strength of the unchanged selection pressure.

This explains the consistent failure of culture change initiatives. Meta-analyses of organizational change efforts consistently report failure rates between 60 and 80 percent (Beer & Nohria, 2000). Testable implications include: values training should produce temporary behavioral changes that decay unless reinforced by incentive changes; leadership messaging should affect behavior only when accompanied by visible changes in reward and punishment; organizational culture should resist copying—transplanting practices without transplanting selection environments should produce decay toward the host environment's equilibrium. (A toy illustration of this decay is sketched after Proposition 5.)

Proposition 5 (External Correction): Organizations under strong dysmemic pressure can only be corrected by external shock—information or consequences from outside the selection environment. Internal reform attempts will be absorbed into the dysmemic equilibrium.

This follows from the self-reinforcing nature of dysmemic equilibria. Testable implications include: organizations that experience market corrections, regulatory interventions, or public scandals should exhibit temporary increases in information accuracy; the magnitude and duration of improvement should correlate with the severity of the shock; internal 'transformation' initiatives without external pressure should fail at higher rates than those accompanied by external forcing functions.
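Proposition 4 can be visualized with a deliberately simple decay curve (a toy of mine, with made-up parameters, not an estimated model): a norms-only intervention lifts observed behavior above the dysmemic baseline, and the unchanged selection pressure pulls it back at a rate proportional to that pressure.

```python
import math

def truth_telling_rate(t: float, baseline: float = 0.2, boost: float = 0.6,
                       selection_pressure: float = 0.5) -> float:
    """Toy illustration of Proposition 4: behavior after a norms-only intervention
    decays from (baseline + boost) back toward the dysmemic baseline at a rate
    set by the unchanged selection pressure. All parameters are illustrative."""
    return baseline + boost * math.exp(-selection_pressure * t)

for quarter in range(0, 9, 2):
    weak = truth_telling_rate(quarter, selection_pressure=0.2)
    strong = truth_telling_rate(quarter, selection_pressure=0.8)
    print(f"quarter {quarter}: weak pressure -> {weak:.2f}, strong pressure -> {strong:.2f}")
```

The testable content is the comparative statics: the stronger the unchanged pressure, the faster the regression to baseline.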
# 6. Countermeasures: A Mechanism Design Perspective

If dysmemic pressure is structural rather than behavioral, effective countermeasures must alter the selection environment itself rather than exhorting different behavior within the existing environment. This section evaluates potential interventions through a mechanism design lens, asking: what structures might shift the fitness landscape such that truth-tracking variants outcompete dysmemes?

# 6.1 The Failure of Exhortation

The standard intervention portfolio—culture change initiatives, leadership development, values training, psychological safety programs—treats dysmemic outcomes as behavioral problems susceptible to education and example. The framework developed here explains why these interventions consistently fail.

Consider the typical culture change initiative. Leadership announces new values. Posters appear. Training sessions explain expected behaviors. For a period, employees perform the new norms. Then, imperceptibly, old patterns reassert. The employees who most visibly adopted the new culture often turn out to be the same ones who were best at performing the old one—they simply shifted their performance to the new script.

The initiative failed not because employees are cynical. It failed because it did not alter the selection environment. The rewards still flowed to those who satisfied superiors rather than challenged them. The penalties still fell on those who surfaced problems rather than buried them. The new values were absorbed into the dysmemic ecosystem, becoming another vocabulary for signaling compliance.

This explains why you cannot copy another organization's culture. The visible artifacts—open floor plans, all-hands meetings, mission statements—can be replicated. Without changing the underlying selection environment, the transplanted forms decay on contact with the host organization's incentive structure. Amazon's 'disagree and commit' becomes 'disagree and get fired.' Google's '20% time' becomes time spent after finishing 'real' work. Netflix's 'freedom and responsibility' becomes freedom to comply with unwritten expectations.

# 6.2 Structural Countermeasures

Effective countermeasures share a common feature: they alter the payoff matrix such that truth-telling becomes a dominant or at least viable strategy. Several structural approaches merit consideration:

Evaluation decoupling: Separating the recipient of information from the evaluator of its source reduces the bias in the Crawford-Sobel sense. Examples include independent audit functions that report to boards rather than management, ombudsman offices with protected status, and anonymous reporting channels with credible confidentiality. The key is structural independence—not merely policy statements that can be overridden, but governance architecture that makes the independence durable.

Prediction markets and scoring rules: Internal prediction markets on project outcomes, market events, or organizational metrics can elicit private beliefs with proper incentives (Hanson, 2003). Proper scoring rules reward accurate probability assessments regardless of the outcome, decoupling the payoff from what the predictor wants to be true. Implementation challenges are substantial (liquidity, manipulation, interpretation), but the mechanism directly addresses the preference divergence at the core of dysmemic pressure.
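The defining property of a proper scoring rule is that a forecaster maximizes their expected score by reporting their true belief, whatever outcome they would prefer. Below is a minimal sketch using the Brier score, one standard proper scoring rule; the scenario and numbers are illustrative assumptions, not a recommendation from Hanson (2003) or this paper.

```python
def brier_score(forecast: float, occurred: bool) -> float:
    """Negative squared error between the forecast probability and the realized
    outcome (1.0 if the event occurred, else 0.0). Closer to zero is better."""
    return -(forecast - (1.0 if occurred else 0.0)) ** 2

def expected_score(reported: float, true_belief: float) -> float:
    # Expectation over outcomes, taken under the forecaster's own private belief.
    return (true_belief * brier_score(reported, True)
            + (1.0 - true_belief) * brier_score(reported, False))

# A forecaster who privately believes the project has a 30% chance of shipping on time
# does best, in expectation, by reporting 0.30 rather than the optimistic 0.90.
for reported in (0.30, 0.60, 0.90):
    print(f"report {reported:.2f}: expected score = {expected_score(reported, 0.30):.3f}")
```

Paying on the score rather than on how pleasing the forecast sounds is the sense in which the payoff is decoupled from what the predictor wants to be true.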
Red teams and adversarial processes: Institutionalized devil's advocacy, where designated teams are rewarded for finding flaws, can create protected niches for truth-telling. The key is ensuring the red team's incentives genuinely align with finding problems rather than performing opposition. Red teams that are captured by the processes they're meant to challenge become dysmemic theater themselves.

External validation requirements: Requiring external review of key assessments (customer advisory boards for product decisions, independent technical review for engineering claims, third-party audit for financial projections) introduces information from outside the internal selection environment. External validators face different fitness landscapes and thus different selection pressures.

# 6.3 The Maintenance Problem

Any structure that counterweights dysmemic pressure faces continuous pressure toward absorption back into the dysmemic equilibrium. The red team that becomes too influential will be defunded or captured. The independent audit function that creates too much friction will see its mandate narrowed. The prediction market that surfaces too much inconvenient truth will be discontinued or gamed.

This is not paranoia; it is the selection dynamic operating on the countermeasures themselves. Ideas and structures that threaten the dysmemic equilibrium face the same fitness disadvantages as individual truth-tellers. The countermeasures must be designed not only to work initially but to resist absorption over time.

Durable countermeasures typically require three forms of independence:

Governance independence: Reporting lines that do not run through the functions being assessed. The audit committee reports to the board, not the CFO. The red team reports to an executive not responsible for the project being evaluated.

Resource independence: Budgets and staffing that cannot be reduced as retaliation for uncomfortable findings. Multi-year commitments, protected funding sources, or external support can provide this.

Evaluation independence: Career consequences for the countermeasure staff that do not depend on the satisfaction of those they assess. Rotating assignments, external career paths, or tenure-like protections can provide this.

Without all three, the countermeasure will likely be absorbed. With all three, maintenance is still an ongoing effort rather than a solved problem. The physics does not disappear; it can only be counterweighted.

# 7. Discussion

# 7.1 Implications

If dysmemic pressure is structural rather than personal, several implications follow for organizational theory and practice.

First, organizational dysfunction is not evidence of bad actors. The same people, in a different selection environment, would behave differently. Blaming individuals for systemic outcomes is not only unfair—it prevents diagnosis. The person who speaks up and gets punished is not more virtuous than the person who stays silent; they merely miscalculated the payoff structure. Attributing organizational failure to individual moral failure is itself a dysmeme—it spreads because it protects the system from examination.

Second, the framework explains the stubborn failure of organizational change efforts. Meta-analyses consistently find that most change initiatives fail to achieve their stated objectives. The dysmemic lens suggests this is not implementation failure but design failure: the initiatives target behavior without targeting the selection environment that produces the behavior. They are, in effect, trying to change the output without changing the function.

Third, some organizations may be beyond internal repair. When dysmemic pressure has saturated the meme pool sufficiently, the truth-tellers have already exited. The remaining population cannot recognize dysfunction because dysfunction is all they know. The culture is not mistaken about reality; it has constructed an alternative reality that is internally consistent and externally fatal. Correction requires external shock—market failure, regulatory intervention, scandal, or replacement of the organization entirely.

Fourth, external perspective is structurally necessary rather than merely helpful. An organization trapped in dysmemic equilibrium cannot validate its own outputs. The same biases that distort the information also distort the assessment of whether information is distorted. Outside observers—consultants, boards, investors, regulators—are not luxuries but requirements for organizations that wish to maintain contact with reality. This provides a functional justification for governance structures that might otherwise appear as mere overhead.

Fifth, understanding dysmemic pressure does not exempt you from it. Awareness is necessary but not sufficient. The forces remain operative. The question is whether sufficient counterweight has been built—structures that protect variance, mechanisms that surface truth, governance that maintains independence from the drift toward comfortable consensus.

# 7.2 Limitations and Boundary Conditions

The framework has important limitations that bound its applicability.

First, the mechanism operates most powerfully at scale. Small organizations with direct observation and tight feedback loops may not develop strong dysmemic pressure because the information pathologies are quickly corrected by reality. The framework applies primarily to organizations large enough that information must flow through multiple nodes and slow enough that consequences are temporally distant from actions.

Second, the framework does not address all forms of organizational failure. Failures due to external shocks, technological disruption, resource constraints, or genuinely unforeseeable events are not explained by dysmemic pressure. The mechanism applies specifically to failures where relevant information existed within the organization but was not transmitted, processed, or acted upon.

Third, the propositions are stated directionally rather than precisely. Quantifying dysmemic pressure, predicting thresholds for babbling equilibria, or specifying the functional form of intervention decay would require empirical work beyond the scope of this paper. The framework generates predictions but does not, at this stage, generate point estimates.

Fourth, the mechanism design countermeasures are evaluated conceptually rather than empirically. While the logic suggests they should be effective, real-world implementation faces challenges not addressed here: political resistance, cost constraints, unintended consequences, and the possibility that novel interventions create novel dysmemic adaptations.

Fifth, the framework takes the existence of preference divergence as given. It does not address why organizations develop structures that create such divergence in the first place, or why some organizations maintain alignment better than others. A complete theory would need to explain the origins of dysmemic selection environments, not merely their consequences.

# 7.3 Future Research

Several directions for future research emerge from the framework.

Empirical measurement of dysmemic pressure is the most pressing need. This might involve surveys measuring perceived preference divergence, content analysis of organizational communications over time, comparison of internal assessments with external outcomes, or experimental manipulation of selection environments in organizational settings. The propositions generate testable predictions; testing them requires operationalization.

Comparative organizational analysis could identify structural features associated with resistance to dysmemic pressure. Are there industries, governance forms, or organizational designs that exhibit systematically better information environments? What do they have in common? Case selection focusing on variation rather than failure might illuminate protective factors.

Intervention studies, ideally randomized or quasi-experimental, could evaluate the countermeasures proposed here. Does evaluation decoupling actually improve information quality? Do prediction markets elicit more accurate assessments than traditional reporting? How long do interventions persist before absorption? The mechanism design literature provides tools for such evaluation, but organizational contexts present implementation challenges that merit study in their own right.

Integration with adjacent literatures could enrich the framework. The psychological safety literature (Edmondson, 1999) addresses similar phenomena at the team level; connection might reveal how micro-level dynamics aggregate to organizational-level equilibria. The institutional theory literature addresses how organizational forms spread and persist; connection might explain how dysmemic selection environments themselves propagate across organizations.

# 8. Conclusion

Organizations fail not because they lack information but because they select against it. The selection is not random. It follows predictable dynamics: strategic incentives that make truth costly, transmission biases that make comfortable falsehoods sticky, conformity pressures that lock in whatever dysfunction reaches critical mass first.

I have called this selection force dysmemic pressure. The name is new. The phenomenon is ancient. Every organization that has ever collapsed while its members privately knew the collapse was coming has experienced it. Every reform that decayed back into the dysfunction it was meant to address has fallen victim to it. Every leader who has asked 'why didn't anyone tell me?' after a preventable disaster has discovered, too late, what it produces.

The contribution here is synthesis and precision. By connecting the game-theoretic literature on strategic communication to agency theory and the cultural evolution literature on transmission dynamics, we can describe the mechanism with sufficient clarity to identify what conditions produce it, why it is stable, why standard interventions fail, and what alternative architectures might resist it.

That precision matters because it shifts the frame. Organizational dysfunction stops being a moral failure and becomes a physics problem. Physics problems do not respond to exhortation. They respond to engineering. You do not convince gravity to behave differently. You build structures that account for its operation.

The question for any organization is not whether dysmemic pressure exists—it does, always, at scale. The question is whether anything counterweights it. Whether the selection environment has been deliberately designed to protect truth. Whether structures exist that reward accuracy over performance, dissent over consensus, reality over comfort.

Where such structures exist and are defended, organizations retain the capacity to adapt. Where they do not, the drift continues—imperceptible, comfortable, and ultimately fatal.

# References

Akerlof, G. A. (1970). The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488-500.
Beer, M., & Nohria, N. (2000). Cracking the code of change. Harvard Business Review, 78(3), 133-141.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992-1026.
Blackmore, S. (1999). The Meme Machine. Oxford University Press.
Boyd, R., & Richerson, P. J. (1985). Culture and the Evolutionary Process. University of Chicago Press.
Boyd, R., & Richerson, P. J. (2005). The Origin and Evolution of Cultures. Oxford University Press.
Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431-1451.
Dennett, D. C. (1995). Darwin's Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Hanson, R. (2003). Combinatorial information market design. Information Systems Frontiers, 5(1), 107-119.
Henrich, J. (2016). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton University Press.
Henrich, J., & Gil-White, F. J. (2001). The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior, 22(3), 165-196.
Henrich, J., & McElreath, R. (2003). The evolution of cultural evolution. Evolutionary Anthropology, 12(3), 123-135.
Independent Directors of the Board of Wells Fargo & Company. (2017). Sales Practices Investigation Report.
Janis, I. L. (1972). Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Houghton Mifflin.
Kamenica, E., & Gentzkow, M. (2011). Bayesian persuasion. American Economic Review, 101(6), 2590-2615.
Levy, N. (2022). Bad Beliefs: Why They Happen to Good People. Oxford University Press.
Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141-161.
Prendergast, C. (1993). A theory of 'yes men.' American Economic Review, 83(4), 757-770.
Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press.
Vuori, T. O., & Huy, Q. N. (2016). Distributed attention and shared emotions in the innovation process: How Nokia lost the smartphone battle. Administrative Science Quarterly, 61(1), 9-51.
# Unveiling the amorphous ice layer during premelting using AFM integrating machine learning

Binze Tang,$^{1*}$ Chon-Hei Lo,$^{2*}$ Tiancheng Liang,$^{1*}$ Jiani Hong,$^{1*†}$ Mian Qin,$^{3}$ Yizhi Song,$^{1}$ Duanyun Cao,$^{4,5}$ Ying Jiang,$^{1,6,7,8‡}$ Limei Xu$^{1,6,7§}$

$^{1}$ International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China
$^{2}$ Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang 325001, China
$^{3}$ School of Physics, Peking University, Beijing, 100871, China
$^{4}$ Beijing Key Laboratory of Environmental Science and Engineering, School of Materials Science and Engineering, Beijing Institute of Technology, Beijing, 100081, China
$^{5}$ Chongqing Innovation Center, Beijing Institute of Technology, Chongqing, 401120, China
$^{6}$ Collaborative Innovation Center of Quantum Matter, Beijing, 100871, China
$^{7}$ Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced Materials, Peking University, Beijing 100871, China
$^{8}$ New Cornerstone Science Laboratory, Peking University, Beijing 100871, P. R. China

* These authors contributed to this work equally.
† Contact author: timeless@pku.edu.cn
‡ Contact author: yjiang@pku.edu.cn
§ Contact author: limei.xu@pku.edu.cn

Premelting plays a key role across the physical, chemical, materials, and biological sciences but remains poorly understood at the atomic level due to surface characterization limitations. We report the discovery of a novel amorphous ice layer (AIL) preceding the quasi-liquid layer (QLL) during ice premelting, enabled by a machine learning framework integrating atomic force microscopy (AFM) with molecular dynamics simulations. This approach overcomes AFM's depth and signal limitations, allowing for three-dimensional surface structure reconstruction from AFM images. It further enables structural exploration of premelting interfaces across a wide temperature range that is experimentally inaccessible. We identify the AIL, present between 121-180 K, displaying a disordered two-dimensional hydrogen-bond network with solid-like dynamics. Our findings refine the ice premelting phase diagram and offer new insights into surface growth dynamics, dissolution, and interfacial chemical reactivity. Methodologically, this work establishes a novel framework for AFM-based 3D structural discovery, marking a significant leap in our ability to probe complex disordered interfaces with unprecedented precision and paving the way for future research across disciplines, including surface reconstruction, crystallization, ion solvation, and biomolecular recognition.

# I. INTRODUCTION

Premelting—the formation of a thin liquid-like layer on crystal surfaces well below the melting point—is a ubiquitous interfacial phenomenon observed across all classes of solids, with profound scientific and practical implications. It influences a wide range of properties and processes, including mechanical behavior, friction, chemical reactivity, cryopreservation, and atmospheric chemistry. First proposed by Faraday for the ice surface over 170 years ago, this phenomenon has been extensively explored through experimental and simulation techniques. However, its underlying mechanism remains unsolved, primarily due to the challenge of probing the atomic structure and dynamics of disordered interfaces.

Unlike bulk materials, which can be readily analyzed using crystallography, surface structures are inherently more complex and demand advanced, surface-sensitive techniques, such as low-energy electron diffraction, helium-atom scattering, X-ray absorption spectroscopy, and sum frequency generation spectroscopy. While these methods yield valuable insights into the outermost layers, they suffer from limited spatial resolution and intrinsic averaging effects, which prevent the resolution of nanoscale heterogeneities. Recent advancements in qPlus-based noncontact AFM (nc-AFM) with a CO-functionalized tip have achieved submolecular resolution of surface structures, capturing ordered structures, transient intermediates, and even disordered configurations. Despite these capabilities, AFM faces fundamental challenges when applied to complex three-dimensional (3D) disordered systems. Conventional AFM analyses rely on trial-and-error workflows: candidate structures inferred from experimental images are relaxed using density functional theory (DFT), then compared to simulated AFM images via the probe particle method (PPM). While effective for simple, atomically flat surfaces, this approach becomes computationally prohibitive for structurally heterogeneous, fluctuating interfaces. Moreover, the intrinsic surface sensitivity of nc-AFM limits depth resolution, obstructing the reconstruction of full 3D structures and the understanding of interfacial dynamics.

Machine learning (ML) offers new possibilities for AFM imaging interpretation, enabling advances in atomic identification, molecular classification, and electrostatic potential mapping. However, current ML methods are predominantly designed for well-ordered or planar surfaces and struggle with disordered, non-periodic interfaces, where signal degradation, often compounded by experimental noise, leads to a pronounced increase in structural ambiguity. Although generative models have achieved success in areas such as protein structure prediction, organic molecule synthesis, and crystal structure generation, robust 3D reconstruction of disordered and asymmetric interfaces from incomplete AFM data remains a formidable challenge.

Here we introduce a general ML-AFM framework that combines object detection for topmost-layer structure identification with structure generation to infer subsurface configurations, enabling accurate atomic-scale reconstruction of disordered interfaces from AFM data. We apply this framework to ice premelting—the earliest and most extensively studied premelting system—and successfully reconstruct its disordered structure. The reconstructed 3D configurations provide physically grounded inputs for molecular dynamics simulations, enabling effective sampling of premelting dynamics in a large temperature regime ($>140\mathrm{K}$) that is experimentally inaccessible due to desorption under vacuum conditions. Notably, our simulations reveal a previously unrecognized amorphous ice layer (AIL) that emerges prior to the formation of the quasi-liquid layer (QLL). This AIL, present within the 121-180 K range, features a disordered hydrogen-bond (HB) network with solid-like dynamics, challenging conventional views of interfacial premelting.

This work establishes a generalizable strategy for resolving complex 3D disordered interfaces, enabling high-resolution imaging and structural discovery that yields new insights into interfacial phenomena across diverse processes, including crystallization, ion solvation, molecular recognition, and heterogeneous catalysis.

# II. RESULTS

# A. Overview of the framework

We designed two networks, an object detection network and a structure generation network, to resolve the 3D structure of interfacial ice from AFM data. The object detection network analyses the topmost-layer experimental signals, while the structure generation network reconstructs the subsurface where no experimental signals are available, as illustrated in Fig. 1. In the object detection task, a 3D U-Net-like neural network (NN) takes AFM images as input and predicts the corresponding 3D structure represented by voxels containing position and species information (see methods and fig. S1). Once the top-layer structure is identified, a conditional variational auto-encoder (cVAE) is employed to generate the underlying ice structure for subsequent structural relaxation (see methods and fig. S3). Due to the lack of labeled experimental data, all training data are generated through simulations. MD simulations are used to explore the phase space of bulk ice interfaces, and the sampled structures are transformed into AFM images using the PPM. To replicate experimental noise, a CycleGAN is trained using unlabeled experimental AFM images (see methods and fig. S2). The augmented simulated AFM images are then used to train the object detection model, while the sampled structures are directly employed to train the structure generation model.

# B. Input data preparation and augmentation

To explore the ice interface phase space, we performed molecular dynamics (MD) simulations of hexagonal ice (Ih) with the basal (0001) plane exposed to the vapor phase (Fig. 1a). Simulations spanned 160 K to 260 K to capture the entire premelting regime, where the first layer forms a quasi-liquid layer (QLL) near 180 K and the second layer melts above 254 K based on the TIP4P/Ice water model. To enrich the dataset and mimic ice growth, simulations were performed using both pristine interfaces and interfaces with excess water deposited on them (see methods). Structures were sampled using a $2.5 \times 2.5 \times 0.3\ \mathrm{nm}^3$ sliding detection window (Fig. 1I) to capture interfacial disorder in the xy-plane, with the $z$-depth determined by the density distribution along the $z$-axis (see methods). Over 60,000 sampled structures were randomly divided into training (60%), validation (20%), and testing (20%) sets. Each structure was then converted into a stack of 10 simulated AFM images at varying tip-sample distances as input for the object detection NN (see methods and fig. S4). To address the challenge of detecting weak signals from deeper oxygen and hydrogen atoms amidst experimental noise, a CycleGAN was trained on unlabeled experimental AFM images. This approach aligns unpaired simulated and experimental AFM images by mapping them to a shared latent space, with training concluding upon convergence of the Fréchet Inception Distance, enabling the NN to be trained with realistic noise (see methods and fig. S5 for examples of images with or without CycleGAN).

# C. Structure identification for the topmost bulk ice interface

The object detection NN was trained on 36,000 samples for over 10 epochs, at which point the loss converged.
The NN's prediction aligns well with reference data, even in the $z$ -direction where water molecules exhibit significant fluctuations (see fig. S6). The prediction accuracy for the interfacial ice structure on the basal plane reaches nearly $100\%$ for oxygen atoms and $99\%$ for hydrogen atoms. To further assess the NN's generalization capability, predictions were extended to interfacial water structures on the prism I plane, which has a larger inter-layer distance and exhibits different pre-melting behavior compared to basal plane. Tested on over 10,000 structures across temperatures from $160\mathrm{K}$ to $260\mathrm{K}$ , the NN achieved $97\%$ accuracy for oxygen atoms and $86\%$ accuracy for hydrogen atoms (see fig. S6). The NN inferred the positions of hydrogen atoms based on patterns learned from the training data, but due to their weaker signal compared to oxygen and the structural differences between the basal and prism I planes, the prediction accuracy for hydrogen atoms in the prism I dataset decreased. Despite this, the NN demonstrates robust generalization capabilities for object detection tasks. We then utilized the NN, trained exclusively on simulation data to analyze two experimental AFM images, each with a size $\sim 4\times 4\mathrm{nm}^2$ . One image features an interfacial superstructure recently observed at $121\mathrm{K}$ on hexagonal ice (Ih). This $(\sqrt{19}\times$ $\sqrt{19})$ periodic superstructure, consisting of mixed Ih- and cubic (Ic)-stacking nanodomains which was uncovered by MD simulations, provides an excellent benchmark for evaluating the NN's performance. The second image depicts a disordered interfacial structure on Ih at $135\mathrm{K}$ . The experimental images were divided into $2.5\times 2.5\mathrm{nm}^2$ sections, and the NN predicted atom positions for each section. By combining these predictions, we reconstructed the final structures that correspond to the experimental AFM images (see Fig. 2 and the fig. S7 for origin predictions). Prediction errors are generally observed in regions with weaker AFM signals, particularly where O and H atoms are less prevalent. These discrepancies can be easily corrected through physical rules adjustments (see details in methods). We then use the point charge PPM to simulate the AFM images based on the modified structure without the substrate. As shown in Fig. 2, the simulated AFM closely match the experimental data. The NN accurately predicts the positions of dangling OH bonds (with O-H bonds pointing obliquely upward toward the surface). Comparing the NN predictions with the superstructure benchmark, the NN achieves an accuracy of $94\%$ for O for over 143 water molecules. It is worth noting that the NN's performance deteriorates when CycleGAN-generated data is not included (see fig. S8 and table S1). These results demonstrate the transferability and robustness of our object detection NN. # D. Structure generation and relaxation Although simulated AFM images closely resemble experimental ones, completing the underlying layer and verifying its stability through relaxation is crucial before making definitive assessments of NN predictions. In bulk ice at low temperatures, the internal structure forms a near-perfect hexagonal crystal. However, aligning the lower-layer structure with the upper-layer disordered structure through simple rotation, translation, and energy calculations is labor-intensive and prone to multiple local energy minima due to proton disorder. 
The delicate HB network can be destabilized by slight lattice misalignments, tilts, or dangling molecules in the interfacial. To address this, we first extended the voxel representation and sliding detection window to $2.5 \times 2.5 \times 0.9 \mathrm{~nm}^3$ , enabling the object detection NN to handle both detection and generation tasks. However, tests using datasets from both the basal and prism I planes revealed a significant drop in prediction accuracy with increasing detection depth, correlating with weaker AFM signals (see fig. S6). To address this issue, we developed a cVAE to reconstruct the lower-layer structure of the bulk ice interface based on the upper-layer structure (see network details in the methods). The process involved encoding the lower-layer structure into a latent space, then decoding it according to the upper-layer structure. By sampling from the latent space distribution, the decoder generates new lower-layer structures. We used a large 3D voxel representation with dimension of $2.5 \times 2.5 \times 1.6 \mathrm{~nm}^3$ to retain the mid- to long-range disorder in the interfacial ice. Approximately 300 water molecules were generated during the process. To simplify the computation only oxygen atoms were retained in generation. Besides, we also included a published trajectory simulating using the coarse-grained mW model to obtain large grain boundaries in the topmost layer (see methods). The model was trained for over 15 epochs, and the parameters with the lowest reconstruction loss were selected. Test with simulation data demonstrated that the NN accurately generated structures that closely matched the real structure, displaying hexagonal crystal morphologies after applying translation and rotation based on the upper-layer structure (see fig. S9). To apply the model to experimental data, the interfacial structure was segmented into $2.5\mathrm{nm}$ sections, using each section as a conditional input to generate the corresponding lower-layer structure. Due to the non-periodic nature of the interfacial structure, we constructed the interfacial ice structure by matching the generated structures with a larger ice Ih template (Fig. 3a). After completing the crystal matching, we first fixed the upper-layer boundaries and performed preliminary relaxation of the hydrogen-bonds, followed by energy relaxation of the entire system (see simulations details in methods). The green lines in Fig. 3a and 3b show the relative atomic displacements before and after relaxation, while the root-mean-squared displacement (RMSD) distribution is depicted in Fig. 3c. The results indicate that most atomic displacements are under $1\AA$ , demonstrating the network's effectiveness in reconstructing the lower-layer structure. An RMSD value around $1.2\AA$ reflects the relative displacement of water molecules due to partial HB mismatches in the adjacent lower layer, without altering the HB topology of the interfacial water. During the energy relaxation of the disordered structure, an RMSD value around $2\AA$ suggests mismatches between lower and upper layers, disrupting the hydrogen-bond network of the interfacial water (Fig. 3d). This local planarization phenomenon, where additional water molecules appear in the center of octagonal rings, has been observed experimentally. Defects in the lower-layer structure may signal the onset of premelting. The failure prediction of NN on small defects may stem from two main reasons. 
Firstly, the cVAE network lacks precision in predicting small defects in large system. More importantly, these structures were absent in the simulated system, highlighting a significant discrepancy between training and experimental data distributions. Despite the limited scope of the training data, the network performs well in predicting most disordered regions, showcasing strong generalization capability. # E. Amorphous ice layer At low temperatures (115 K-135 K), the dynamics of water molecules slow down significantly, and the timescales required for relaxation from random configurations to near-equilibrium states exceed the capabilities of conventional MD simulations. In contrast, AFM experiments capture these slow dynamics over extended periods. Therefore, the 3D structures derived from experimental data by our ML framework serve as ideal initial configurations for MD simulations. This approach facilitates a comprehensive exploration of the structural and dynamic properties of water molecules in this challenging low-temperature regime, and enables the study of the premelting process at higher temperatures, which is otherwise inaccessible to direct AFM observation due to desorption in vacuum. Seven representative 3D structures, obtained from AFM experiments (115 K-135 K), containing 120-250 surface water molecules with lateral dimensions of $3.5 - 7\mathrm{nm}$ (see fig. S10), were used as starting configurations for MD simulations. Simulations were performed from $120\mathrm{K}$ to $240\mathrm{K}$ , with a 20 ns relaxation at each temperature (see methods). After relaxation, the surface structures exhibited a planar, topologically disordered HB network, rather than a defect-free hexagonal ice configuration. Notably, we observed that such 2D structure disorder is confined to the topmost layer of the surface, while artificially introduced defects in the subsurface layers are rapidly repaired during relaxation. We used tetrahedrality and the proportions of six-membered rings to characterize the surface structure (see SI). Both parameters indicate low values and don't change significantly as temperature, suggesting a stable, highly disordered state between $120\mathrm{K}$ and $180\mathrm{K}$ (See Fig. 4a, b and S11). This contrasts significantly with previous simulation studies, which typically begin with a proton-disordered, pristine hexagonal surface and assume that topological defects gradually accumulate with increasing temperature (see Fig. 4b and S11 and S12). For instance, at $140\mathrm{K}$ , six-membered rings accounted for approximately $30\%$ of the surface structure, a result that differs significantly from simulation based on a proton-disordered surface with minimal defects, or those simulating the deposition process, which exhibit 3D disorder with many ad-molecules (see Fig. 4a). This challenges the conventional assumption that surface disorder primarily arises only after the formation of the quasiliquid layer above $180\mathrm{K}$ , driven by thermally activated diffusivity. We identified a distinct amorphous ice layer phase preceding QLL formation characterized by pronounced 2D topological disorder and solid-like dynamical properties. Further analysis of the distribution of surface dangling OH groups, compared to previous SFG experimental data(see SI), provides strong qualitative and quantitative agreement, offering compelling evidence for the existence of this newly identified surface phase (See Fig. 4e). 
This finding necessitates a revision of the ice premelting phase diagram under high vacuum condition (Fig. 4b). We propose that at approximately $121\mathrm{K}$ , surface proton disorder and the boundary between Ic and Ih nanodomain facilitates the formation of vacancies. These vacancies reduce the binding strength between neighboring molecules, triggering a cascade of structural disordering that ultimately drives the transition from the ordered superstructure phase to the AIL phase. While cascade disordering has previously been associated with QLL formation, a similar phenomenon—a cascade of structural disorder propagating across the proton-disordered surface of Ih ice—was proposed by Watkins and observed in kinetic Monte Carlo simulations $(0.11\mu s)$ at $100\mathrm{K}$ . As the temperature approaches $\sim 180~\mathrm{K}$ , thermal activation of surface diffusivity triggers in-plane particle diffusion, transforming the AIL phase into the QLL phase, resulting in a more fluctuating and disordered interface (Fig. 4d). Notably, the AIL phase exhibits significant heterogeneity in the degree of disorder, which can influence the onset temperature for molecular diffusivity (see methods and fig. S12 and S13). Such surface heterogeneity suggests variations in adsorption energies, which could significantly impact crystal growth kinetics, dissolution dynamics, and catalytic activity. For example, the presence of topological defects in AIL could significantly enhance the chemical reactivity of ice surfaces, with direct implications for heterogeneous chemistry in stratospheric clouds, where trace gases (e.g., $\mathrm{H}_2\mathrm{O}_2$ $\mathrm{SO}_2$ , HCl) undergo uptake and dissociative reactions. This experimentally validated 3D structural information serves as a solid foundation for future explorations into the physical and chemical properties of interfacial ice. # III.DISCUSSION We introduce a robust machine learning framework capable of reconstructing the complex 3D atomic structure at disordered surfaces directly from experimental AFM data. Applied to the prototypical system of ice premelting, this approach reveals a previously unrecognized amorphous ice layer that forms prior to the quasi-liquid layer, thereby revising the phase diagram of ice and provides atomic-scale insights into the premelting process essential to cryospheric science, materials mechanics, atmospheric chemistry, and planetary phenomena. Methodologically, the ability to resolve the amorphous ice layer underscores the power of our framework in revealing complex interfacial structures and advancing our understanding of disordered systems. The two-stage strategy, which decouples AFM analysis into object detection and 3D structure generation, mitigates the compounding errors arising from simulation artifacts, experimental noise, and AFM's limited depth resolution. Crucially, this framework bridges the longstanding gap between simulation and experiment: AFM images offer physically grounded initial configurations for molecular dynamics (MD) simulations, enabling the exploration of thermal and structural dynamics that are otherwise inaccessible. Although developed in the context of ice, the proposed framework is broadly applicable to a wide range of nanostructure and disordered interfaces. 
By integrating with established generative approached such as molecular design and crystal structure prediction —it can be extended to diverse systems, including organic adsorbates, heterogeneous catalytic surfaces, and biomolecular assemblies. Furthermore, integration this strategy with complementary modalities, such as spectroscopy, tomography, or coherent X-ray imaging, could enable a more comprehensive understanding of interfacial structure and function. By resolving a long-standing enigma in ice premelting and enabling nanoscale 3D reconstruction of disordered interfaces, our work establishes a generalizable paradigm for AFM-guided analysis, with broad implications for interfacial science and the inverse design of functional materials.
arxiv_physics
2025-12-13T00:00:00Z
https://arxiv.org/pdf/2512.15772
{"title": "Unveiling the amorphous ice layer during premelting using AFM integrating machine learning", "raw_content": "# Unveiling the amorphous ice layer during premelting using AFM integrating machine learning\n\nBinze Tang, $^{1*}$ , Chon-Hei Lo $^{2*}$ , Tiancheng Liang $^{1*}$ , Jiani Hong $^{1*†}$ , Mian Qin $^{3}$ , Yizhi Song $^{1}$ , Duanyun Cao $^{4,5}$ , Ying Jiang $^{1,6,7,8‡}$ , Limei Xu $^{1,6,7§}$\n\n$^{1}$ International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China\n\n2 Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang 325001, China\n\n$^{3}$ School of Physics, Peking University, Beijing, 100871, China\n\n$^{4}$ Beijing Key Laboratory of Environmental Science and Engineering, School of Materials Science and Engineering, Beijing Institute of Technology, Beijing, 100081, China\n\n5 Chongqing Innovation Center, Beijing Institute of Technology, Chongqing, 401120, China\n\n$^{6}$ Collaborative Innovation Center of Quantum Matter, Beijing, 100871, China\n\n$^{7}$ Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced\n\nMaterials, Peking University, Beijing 100871, China\n\n$^{8}$ New Cornerstone Science Laboratory, Peking University, Beijing 100871, P. R. China\n\nPremelting plays a key role across physics, chemistry, materials and biology sciences but remains poorly understood at the atomic level due to surface characterization limitations. We report the discovery of a novel amorphous ice layer (AIL) preceding the quasi-liquid layer (QLL) during ice premelting, enabled by a machine learning framework integrating atomic force microscopy (AFM) with molecular dynamics simulations. This approach overcomes AFM's depth and signal limitations, allowing for three-dimensional surface structure reconstruction from AFM images. It further enables structural exploration of premelting interfaces across a wide temperature range that are experimentally inaccessible. We identify the AIL, present between 121-180K, displaying disordered two-dimensional hydrogen-bond network with solid-like dynamics. Our findings refine the ice premelting phase diagram and offering new insights into the surface growth dynamic, dissolution and interfacial chemical reactivity. Methodologically, this work establishes a novel framework for AFM-based 3D structural discovery, marking a significant leap in our ability to probe complex disordered interfaces with unprecedented precision and paving the way for future disciplinary research, including surface reconstruction, crystallization, ion solvation, and biomolecular recognition.\n\n# I. INTRODUCTION.\n\nPremelting—the formation of a thin liquid-like layer on crystal surfaces well below the melting point[1,2]—is a ubiquitous interfacial phenomenon observed across all classes of solids, with profound scientific and practical implications. It influences a wide range of properties and processes, including mechanical behavior, friction, chemical reactivity, cryopreservation, and atmospheric chemistry[3-8]. First proposed by Faraday on ice surface over 170 years ago, this phenomenon has been extensively explored through experimental and simulation techniques[9-13]. However, its underlying mechanism remains unsolved, primarily due to the challenge in probing the atomic structure and dynamics of disordered interfaces. 
Unlike bulk materials, which can be readily analyzed using crystallography[14,15], surface structures are inherently more complex and demand advanced, surface-sensitive techniques, such as low-energy electron diffraction[16], helium-atom scattering[9], X-ray absorption spectroscopy[17], and\n\n* These authors contributed to this work equally. \n$\\dagger$ Contact author: timeless@pku.edu.cn \n+ Contact author: yjiang@pku.edu.cn \n$\\S_{\\text{Contact author: limei.xu@pku.edu.cn}}$\n\nsum frequency generation spectroscopy[10,18-21]. While these methods yield valuable insights into the outermost layers, they suffer from limited spatial resolution and intrinsic averaging effects, which prevent the resolution of nanoscale heterogeneities.\n\nRecent advancements in qPlus-based noncontact AFM (nc-AFM) with a CO-functionalized tip[22] has achieved submolecular resolution of surface structures[23-27], capturing ordered structures, transient intermediates, and even disordered configurations. Despite these capabilities, AFM faces fundamental challenges when applied to complex three-dimensional (3D) disordered systems. Conventional AFM analyses rely on trial-and-error workflows: candidate structures inferred from experimental images are relaxed using density functional theory (DFT), then compared to simulated AFM images via the probe particle method (PPM)[28,29]. While effective for simple, atomically flat surfaces[30-33], this approach becomes computationally prohibitive for structurally heterogeneous, fluctuating interfaces. Moreover, the\n\nintrinsic surface sensitivity of nc-AFM limits depth resolution, obstructing the reconstruction of full 3D structures and the understanding of interfacial dynamics[34,35].\n\nMachine learning (ML) offers new possibilities for AFM imaging interpretation, enabling advances in atomic identification[36-38], molecular classification[39-41], and electrostatic potential mapping[42]. However, current ML methods are predominantly designed for well-ordered or planar surfaces and struggle with disordered, non-periodic interfaces, where signal degradation, often compounded by experimental noise, leads to a pronounced increase in structural ambiguity. Although generative models have achieved success in areas such as protein structure prediction[43], organic molecules synthesis[44], and crystal[45,46] structure generation, robust 3D reconstruction of disordered and asymmetric interfaces from incomplete AFM data remains a formidable challenge[47].\n\nHere we introduce a general ML-AFM framework that combines object detection for topmost layer structure identification with structure generation to infer subsurface configurations, enabling accurate atomic-scale reconstruction of disordered interfaces from AFM data. We apply this framework to ice premelting—the earliest and most extensively studied premelting system—and successfully reconstruct its disordered structure. The reconstructed 3D configurations provide physically grounded inputs for molecular dynamics simulations, enabling effective sampling of premelting dynamics in large temperature regime ( $>140\\mathrm{K}$ ) that are experimentally inaccessible due to desorption under vacuum conditions. Notably, our simulations reveal a previously unrecognized amorphous ice layer (AIL) that emerges prior to the formation of the quasi-liquid layer (QLL). 
This AIL, present within the $121 - 180\\mathrm{K}$ range, features a disordered hydrogen-bond (HB) network with solid-like dynamics, challenging conventional views of interfacial premelting. This work establishes a generalizable strategy for resolving complex 3D disordered interfaces, enabling high-resolution imaging and structural discovery that yields new insights into interfacial phenomena across diverse process, including crystallization, ion solvation, molecular recognition, and heterogeneous catalysis.\n\n# II. RESULTS\n\n# A. Overview of the framework\n\nWe designed two networks: an object detection network and a structure generation network to resolve the 3D structure of interfacial ice from AFM data. The object detection network analyses the topmost layer experimental signals, while the structure generation\n\nnetwork reconstructs the subsurface where no experimental signals are available as illustrated in Fig. 1. In the object detection task, a 3D U-Net-like[48] neural network (NN) takes AFM images as input and predicts the corresponding 3D structure represented by voxels containing position and species information (see methods and fig. S1). Once the top-layer structure is identified, a conditional variational auto-encoder (cVAE) is employed to generate the underlying ice structure for subsequent structural relaxation (see methods and fig. S3). Due to the lack of labeled experimental data, all training data is generated through simulations. MD simulations are used to explore the phase space of bulk ice interfaces, and the sampled structures are transformed into AFM images using the PPM. To replicate experimental noise, a CycleGAN is trained using unlabeled experimental AFM images (see methods and fig. S2). The augmented simulated AFM images are then used to train the object detection model, while the sampled structures are directly employed to train the structure generation model.\n\n# B. Input data preparation and augmentation\n\nTo explore the ice interface phase space, we performed molecular dynamics (MD) simulations of hexagonal ice (Ih) with the basal $<0001>$ plane exposed to the vapor phase (Fig. 1a). Simulations spanned $160\\mathrm{K}$ to $260\\mathrm{K}$ to capture the entire premelting regime, where the first layer forms a quasi-liquid layer (QLL) near $180\\mathrm{K}$ and the second layer melts above $254\\mathrm{K}$ based on TIP4P/Ice water model[49,50]. To enrich the dataset and mimic ice growth, simulations were performed using both pristine and excess water deposited on interfaces (see methods). Structures were sampled using a $2.5 \\times 2.5 \\times 0.3\\mathrm{nm}^3$ sliding detection window (Fig. 1I) to capture interfacial disorder in the xy-plane, with the $z$ -depth determined by the density distribution along the $z$ -axis (see methods). Over 60,000 sampled structures were randomly divided into training ( $60\\%$ ), validation ( $20\\%$ ), and testing ( $20\\%$ ) sets. Each structure was then converted into a stack of 10 simulated AFM images at varying tip-sample distances as input for the object detection NN (see methods and fig. S4). To address the challenge of detecting weak signals from deeper oxygen and hydrogen atoms amidst experimental noise, a CycleGAN was trained on unlabeled experimental AFM images. 
This approach aligns unpaired simulated and experimental AFM images by mapping them to a shared latent space, with training concluding upon convergence of the Fréchet Inception Distance [51], enabling the NN to be trained with realistic noise (see methods and fig. S5 for examples of images with or without CycleGAN).\n\n# C. Structure identification for the topmost bulk ice interface\n\nAfter training the NN for object detection with 36,000 data for over 10 epochs where the loss converges. The NN's prediction aligns well with reference data, even in the $z$ -direction where water molecules exhibit significant fluctuations (see fig. S6). The prediction accuracy for the interfacial ice structure on the basal plane reaches nearly $100\\%$ for oxygen atoms and $99\\%$ for hydrogen atoms. To further assess the NN's generalization capability, predictions were extended to interfacial water structures on the prism I plane, which has a larger inter-layer distance and exhibits different pre-melting behavior compared to basal plane. Tested on over 10,000 structures across temperatures from $160\\mathrm{K}$ to $260\\mathrm{K}$ , the NN achieved $97\\%$ accuracy for oxygen atoms and $86\\%$ accuracy for hydrogen atoms (see fig. S6). The NN inferred the positions of hydrogen atoms based on patterns learned from the training data, but due to their weaker signal compared to oxygen and the structural differences between the basal and prism I planes, the prediction accuracy for hydrogen atoms in the prism I dataset decreased. Despite this, the NN demonstrates robust generalization capabilities for object detection tasks.\n\nWe then utilized the NN, trained exclusively on simulation data to analyze two experimental AFM images, each with a size $\\sim 4\\times 4\\mathrm{nm}^2$ . One image features an interfacial superstructure recently observed at $121\\mathrm{K}$ on hexagonal ice (Ih)[52]. This $(\\sqrt{19}\\times$ $\\sqrt{19})$ periodic superstructure, consisting of mixed Ih- and cubic (Ic)-stacking nanodomains which was uncovered by MD simulations, provides an excellent benchmark for evaluating the NN's performance. The second image depicts a disordered interfacial structure on Ih at $135\\mathrm{K}$ . The experimental images were divided into $2.5\\times 2.5\\mathrm{nm}^2$ sections, and the NN predicted atom positions for each section. By combining these predictions, we reconstructed the final structures that correspond to the experimental AFM images (see Fig. 2 and the fig. S7 for origin predictions). Prediction errors are generally observed in regions with weaker AFM signals, particularly where O and H atoms are less prevalent. These discrepancies can be easily corrected through physical rules adjustments (see details in methods).\n\nWe then use the point charge PPM to simulate the AFM images based on the modified structure without the substrate. As shown in Fig. 2, the simulated AFM closely match the experimental data. The NN accurately predicts the positions of dangling OH bonds (with O-H bonds pointing obliquely upward toward the surface). Comparing the NN predictions with the superstructure benchmark, the NN achieves an accuracy of $94\\%$ for O for over 143 water molecules.\n\nIt is worth noting that the NN's performance deteriorates when CycleGAN-generated data is not included (see fig. S8 and table S1). These results demonstrate the transferability and robustness of our object detection NN.\n\n# D. 
Structure generation and relaxation\n\nAlthough simulated AFM images closely resemble experimental ones, completing the underlying layer and verifying its stability through relaxation is crucial before making definitive assessments of NN predictions. In bulk ice at low temperatures, the internal structure forms a near-perfect hexagonal crystal. However, aligning the lower-layer structure with the upper-layer disordered structure through simple rotation, translation, and energy calculations is labor-intensive and prone to multiple local energy minima due to proton disorder. The delicate HB network can be destabilized by slight lattice misalignments, tilts, or dangling molecules in the interfacial. To address this, we first extended the voxel representation and sliding detection window to $2.5 \\times 2.5 \\times 0.9 \\mathrm{~nm}^3$ , enabling the object detection NN to handle both detection and generation tasks. However, tests using datasets from both the basal and prism I planes revealed a significant drop in prediction accuracy with increasing detection depth, correlating with weaker AFM signals[35] (see fig. S6).\n\nTo address this issue, we developed a cVAE to reconstruct the lower-layer structure of the bulk ice interface based on the upper-layer structure (see network details in the methods). The process involved encoding the lower-layer structure into a latent space, then decoding it according to the upper-layer structure. By sampling from the latent space distribution, the decoder generates new lower-layer structures. We used a large 3D voxel representation with dimension of $2.5 \\times 2.5 \\times 1.6 \\mathrm{~nm}^3$ to retain the mid- to long-range disorder in the interfacial ice. Approximately 300 water molecules were generated during the process. To simplify the computation only oxygen atoms were retained in generation. Besides, we also included a published trajectory simulating using the coarse-grained mW model to obtain large grain boundaries in the topmost layer[52] (see methods). The model was trained for over 15 epochs, and the parameters with the lowest reconstruction loss were selected. Test with simulation data demonstrated that the NN accurately generated structures that closely matched the real structure, displaying hexagonal crystal morphologies after applying translation and rotation based on the upper-layer structure (see fig. S9).\n\nTo apply the model to experimental data, the interfacial structure was segmented into $2.5\\mathrm{nm}$ sections, using each section as a conditional input to generate the corresponding lower-layer structure. Due\n\nto the non-periodic nature of the interfacial structure, we constructed the interfacial ice structure by matching the generated structures with a larger ice Ih template (Fig. 3a). After completing the crystal matching, we first fixed the upper-layer boundaries and performed preliminary relaxation of the hydrogen-bonds, followed by energy relaxation of the entire system (see simulations details in methods). The green lines in Fig. 3a and 3b show the relative atomic displacements before and after relaxation, while the root-mean-squared displacement (RMSD) distribution is depicted in Fig. 3c. The results indicate that most atomic displacements are under $1\\AA$ , demonstrating the network's effectiveness in reconstructing the lower-layer structure. 
An RMSD value around $1.2\\AA$ reflects the relative displacement of water molecules due to partial HB mismatches in the adjacent lower layer, without altering the HB topology of the interfacial water. During the energy relaxation of the disordered structure, an RMSD value around $2\\AA$ suggests mismatches between lower and upper layers, disrupting the hydrogen-bond network of the interfacial water (Fig. 3d). This local planarization phenomenon, where additional water molecules appear in the center of octagonal rings, has been observed experimentally. Defects in the lower-layer structure may signal the onset of premelting[52]. The failure prediction of NN on small defects may stem from two main reasons. Firstly, the cVAE network lacks precision in predicting small defects in large system. More importantly, these structures were absent in the simulated system, highlighting a significant discrepancy between training and experimental data distributions. Despite the limited scope of the training data, the network performs well in predicting most disordered regions, showcasing strong generalization capability.\n\n# E. Amorphous ice layer\n\nAt low temperatures (115 K-135 K), the dynamics of water molecules slow down significantly, and the timescales required for relaxation from random configurations to near-equilibrium states exceed the capabilities of conventional MD simulations[53]. In contrast, AFM experiments capture these slow dynamics over extended periods. Therefore, the 3D structures derived from experimental data by our ML framework serve as ideal initial configurations for MD simulations. This approach facilitates a comprehensive exploration of the structural and dynamic properties of water molecules in this challenging low-temperature regime, and enables the study of the premelting process at higher temperatures, which is otherwise inaccessible to direct AFM observation due to desorption in vacuum. Seven representative 3D structures, obtained from AFM\n\nexperiments (115 K-135 K), containing 120-250 surface water molecules with lateral dimensions of $3.5 - 7\\mathrm{nm}$ (see fig. S10), were used as starting configurations for MD simulations. Simulations were performed from $120\\mathrm{K}$ to $240\\mathrm{K}$ , with a 20 ns relaxation at each temperature (see methods). After relaxation, the surface structures exhibited a planar, topologically disordered HB network, rather than a defect-free hexagonal ice configuration. Notably, we observed that such 2D structure disorder is confined to the topmost layer of the surface, while artificially introduced defects in the subsurface layers are rapidly repaired during relaxation.\n\nWe used tetrahedrality and the proportions of six-membered rings to characterize the surface structure (see SI). Both parameters indicate low values and don't change significantly as temperature, suggesting a stable, highly disordered state between $120\\mathrm{K}$ and $180\\mathrm{K}$ (See Fig. 4a, b and S11). This contrasts significantly with previous simulation studies, which typically begin with a proton-disordered, pristine hexagonal surface and assume that topological defects gradually accumulate with increasing temperature (see Fig. 4b and S11 and S12). 
For instance, at $140\\mathrm{K}$ , six-membered rings accounted for approximately $30\\%$ of the surface structure, a result that differs significantly from simulation based on a proton-disordered surface with minimal defects, or those simulating the deposition process, which exhibit 3D disorder with many ad-molecules (see Fig. 4a). This challenges the conventional assumption that surface disorder primarily arises only after the formation of the quasiliquid layer above $180\\mathrm{K}$ , driven by thermally activated diffusivity[1]. We identified a distinct amorphous ice layer phase preceding QLL formation characterized by pronounced 2D topological disorder and solid-like dynamical properties. Further analysis of the distribution of surface dangling OH groups, compared to previous SFG experimental data[19](see SI), provides strong qualitative and quantitative agreement, offering compelling evidence for the existence of this newly identified surface phase (See Fig. 4e).\n\nThis finding necessitates a revision of the ice premelting phase diagram under high vacuum condition (Fig. 4b). We propose that at approximately $121\\mathrm{K}$ , surface proton disorder and the boundary between Ic and Ih nanodomain facilitates the formation of vacancies. These vacancies reduce the binding strength between neighboring molecules, triggering a cascade of structural disordering that ultimately drives the transition from the ordered superstructure phase to the AIL phase[52]. While cascade disordering has previously been associated with QLL formation, a similar phenomenon—a cascade of structural disorder propagating across the\n\nproton-disordered surface of Ih ice—was proposed by Watkins[54] and observed in kinetic Monte Carlo simulations $(0.11\\mu s)$ at $100\\mathrm{K}[53]$ . As the temperature approaches $\\sim 180~\\mathrm{K}$ , thermal activation of surface diffusivity triggers in-plane particle diffusion, transforming the AIL phase into the QLL phase, resulting in a more fluctuating and disordered interface (Fig. 4d). Notably, the AIL phase exhibits significant heterogeneity in the degree of disorder, which can influence the onset temperature for molecular diffusivity (see methods and fig. S12 and S13).\n\nSuch surface heterogeneity suggests variations in adsorption energies[54], which could significantly impact crystal growth kinetics, dissolution dynamics, and catalytic activity[55]. For example, the presence of topological defects in AIL could significantly enhance the chemical reactivity of ice surfaces, with direct implications for heterogeneous chemistry in stratospheric clouds[56], where trace gases (e.g., $\\mathrm{H}_2\\mathrm{O}_2$ $\\mathrm{SO}_2$ , HCl) undergo uptake and dissociative reactions[6,57-59]. This experimentally validated 3D structural information serves as a solid foundation for future explorations into the physical and chemical properties of interfacial ice.\n\n# III.DISCUSSION\n\nWe introduce a robust machine learning framework capable of reconstructing the complex 3D atomic structure at disordered surfaces directly from experimental AFM data. 
Applied to the prototypical system of ice premelting, this approach reveals a previously unrecognized amorphous ice layer that forms prior to the quasi-liquid layer, thereby revising the phase diagram of ice and provides atomic-scale insights into the premelting process essential to cryospheric science, materials mechanics, atmospheric chemistry, and planetary phenomena.\n\nMethodologically, the ability to resolve the amorphous ice layer underscores the power of our framework in revealing complex interfacial structures and advancing our understanding of disordered systems. The two-stage strategy, which decouples AFM analysis into object detection and 3D structure generation, mitigates the compounding errors arising from simulation artifacts, experimental noise, and AFM's limited depth resolution. Crucially, this framework bridges the longstanding gap between simulation and experiment: AFM images offer physically grounded initial configurations for molecular dynamics (MD) simulations, enabling the exploration of thermal and structural dynamics that are otherwise inaccessible.\n\nAlthough developed in the context of ice, the proposed framework is broadly applicable to a wide range of nanostructure and disordered interfaces. By integrating with established generative approached\n\nsuch as molecular design and crystal structure prediction[43,44] —it can be extended to diverse systems, including organic adsorbates, heterogeneous catalytic surfaces, and biomolecular assemblies. Furthermore, integration this strategy with complementary modalities, such as spectroscopy, tomography, or coherent X-ray imaging, could enable a more comprehensive understanding of interfacial structure and function. By resolving a long-standing enigma in ice premelting and enabling nanoscale 3D reconstruction of disordered interfaces, our work establishes a generalizable paradigm for AFM-guided analysis, with broad implications for interfacial science and the inverse design of functional materials.\n\n# APPENDIX A: AFM EXPERIMENTS\n\nAll the experiments were performed with a combined noncontact AFM/STM system at $5\\mathrm{K}$ using a home-made qPlus sensor equipped with a tungsten (W) tip (spring constant, $\\mathrm{k_0\\approx 1,800N\\cdot m^{-1}}$ ; resonance frequency, $\\mathrm{f_0 = 30.4kHz}$ ; quality factor, $\\mathrm{Q\\approx 100,000}$ ). All AFM data were measured at $5\\mathrm{K}$ under ultrahigh vacuum $(< 3\\times 10^{-10}$ mbar). The AFM frequency shift $(\\Delta \\mathbf{f})$ images were obtained with the CO-functionalized tips in the constant-height mode, respectively with 200 pm oscillation amplitude. The tip height in AFM imaging refers to the maximum tip height (set as $0\\mathrm{pm}$ ) during the height-dependent imaging process, at which the contrast of H-up water molecules can be clearly distinguished. Only the relative heights between images have a certain reference value. Image processing was performed by Nanotec WSxM.\n\n# APPENDIX B: MD SIMULATIONS\n\nTo explore the phase space of the studied system, we simulated the bulk ice interface by constructing a hexagonal ice (Ih) structure with the basal face (<0001> plane) exposed to the vapor phase. The Tip4p/Ice force field was used in all simulations[50]. An 8-bilayer ice-Ih slab with dimensions of $10.61\\mathrm{nm}\\times 9.19\\mathrm{nm}$ was initially created by GenIce[60] package. 
To account for thermal expansion, we first ran simulations of these bulk ice configurations in a constant pressure canonical ensemble across a temperature range from $160\\mathrm{K}$ to $260\\mathrm{K}$ , at a pressure of 0 bar for 2 ns. The equilibrium configurations were then cleaved by introducing a vacuum layer of approximately $50\\AA$ . Periodic boundary conditions were applied in all three directions of the simulation box. To acquire the proton-disordered ice surface, we use a heating and annealing process based on Ref [61]. Specifically, all oxygen atoms are fixed while hydrogen atoms are heated to $1200\\mathrm{K}$ and then annealed to $4\\mathrm{K}$ over 2 ns. All MD simulations were carried out using the Large-scale Atomic/Molecular\n\n# Massively Parallel Simulator (LAMMPS) package[62].\n\nTo enrich the dataset, we conducted simulations with two initial configurations: i) a pristine hexagonal ice interface, and ii) an interface with excess water molecules deposited to mimic the ice growth process. For both configurations, MD simulations were performed over a temperature range of $160\\mathrm{K}$ to $260\\mathrm{K}$ (in steps of $20\\mathrm{K}$ ) to sufficiently sample phase space. The bottom four layers were fixed for all simulations. For initial configuration i), the simulation timestep was 1 fs and the total simulation time at a given $(N,V,T)$ is 10 ns where the first 2 ns were run for structure relaxation. For initial configuration ii), 2600 water molecules (equivalent to 1.5 bilayers) were deposited on the surface at a rate of 1 molecule every 2 ps. After deposition, simulations were conducted at a given $(N,V,T)$ for 15 ns where the first 5 ns were run for structure relaxation. Additionally, to diversify the test data, simulations were also conducted on an In slab with the Prism I plane ( $<10\\overline{1}0>$ plane) exposed to the vapor phase. During the structure generation neural network training, we also incorporated a previously published trajectory simulating deposition on a larger surface $(352.2\\AA\\times 366.1\\AA)$ with extended relaxation time, using the mW model to obtain grain boundaries in the topmost layer[52].\n\n# APPENDIX C: STRUCTURE SAMPLING\n\nWe extracted the configurations by sliding a probing window along simulation trajectories to prepare the data for model training. We set the detection window (Fig. 1, the first panel) in size of $2.5 \\times 2.5 \\times 0.3 \\mathrm{~nm}^3$ , which is large enough to capture the disordered feature of interfacial water structures in the XY plane. Thus, all structures are restricted within the windows, and their corresponding simulated AFM images have the same size of $2.5 \\times 2.5 \\mathrm{~nm}^2$ . The depth of the detection window in the normal direction is chosen based on the density distribution peak along the z-axis to capture sufficient information on hydrogen bonds and ring structures. Then, the window slides in the direction parallel to the substrate. In the XY plane, the window takes 2 steps with a stride of $0.5 \\mathrm{~nm}$ . The configurations used for data acquisition were sampled from the MD simulations trajectories with a time interval of 1 ns. For the structure generation task, the detection window was extended to $2.5 \\times 2.5 \\times 1.6 \\mathrm{~nm}^3$ . 
We note that the detection window can also rotate in the x-y plane to mimic the random orientation of the substrate.\n\n# APPENDIX D: SIMULATIONS OF AFM IMAGES\n\nThe AFM images were simulated using a molecular mechanics model based on methods described in refs\n\n[28,29]. We performed AFM simulations to model the CO-tip based on the probe-particle model with the following parameters, effective lateral stiffness $\\mathrm{k} = 0.50$ N/m, atomic radius $\\mathrm{Rc} = 1.661$ Å, and $\\mathrm{Q} = -0.05$ e (e is the elementary charge). The parameters $r$ (van der Waals radius) and $\\epsilon$ (potential well depth) of the Lennard-Jones pairwise potentials for the O and H atoms used in AFM simulations are: $r_{H} = 1.487$ Å, $\\epsilon_{H} = 0.680$ meV, $r_{O} = 1.661$ Å, $\\epsilon_{O} = 9.106$ meV. To reduce computation cost, the charge distribution is modeled as a point charge on each atom with $q_{H} = 0.4238$ e and $q_{O} = -0.8476$ e. These parameters can effectively reproduce most of the important features of experimental AFM images (see Fig. S4). We observed small changes in the simulation parameters for training data do not significantly change the predictions on the experimental data. The tip height in the AFM simulations is defined as the vertical distance between the metal tip apex and the topmost layer of the substrate. The oscillation amplitude of the tip in the simulated AFM images is $200$ pm.\n\n# APPENDIX E: 3D VOXEL REPRESENTATION\n\nInspired by the YOLO[63], we developed a refined 3D voxel representation to describe atomic structures. The space is divided into $32 \\times 32 \\times 4$ cubic voxels, each with a space diagonal no longer than $1.5\\AA$ , ensuring that no two identical atoms occupy the same voxel (Fig. 1 c). Each voxel can contain at most one oxygen and one hydrogen atom, represented by confidence scores $c_{O}$ and $c_{H}$ respectively, where $c_{O}, c_{H} \\in [0,1]$ . If an atom is present, its displacement relative to the voxel's lower left vertex is recorded as fractional coordinates $dx$ , $dy$ , and $dz$ . This representation allows continuously prediction of 3D atomic positions while minimizing computational resources. The loss function for object detection NN is the weighted sum of binary cross-entropy for atom presence and mean square error for relative displacements. After loss calculation, non-maximum suppression (NMS) is applied to further refine the predictions. To assess model performance, we constructed a confusion matrix that considers both atom types and positions.\n\n# APPENDIX F: OBJECT DETECTION NEURAL NETWORK\n\nWe refer to 'Code availability' for full technical details and below provides a high-level summary of the object detection model architecture. The architecture of the detection network is based on 3D U-Net[48], and is mostly modified from denoising diffusion probabilistic models (DDPM) [64] (see architecture illustration in Fig. S1). It consists of three up-sampling and down-sampling layers. Each layer contains a double convolution with a residual\n\nconnection. The network has skip connections between the up-sampling and down-sampling layer with the same resolution. In the lower two layers, an attention module is followed by the res-block. 
After passing through the former network, the results will be interpolated as the shape of a 3D voxel representation, and one residual block and a multilayer perceptron (MLP) are followed.\n\nThe optimization objection is a weighted sum of binary cross entropy (BCE) of confidence and mean square errors (MSE) of the fractional coordinates:\n\n$$\nL _ {D} = L _ {\\mathrm {B C E}} + L _ {\\mathrm {M S E}} \\tag {1}\n$$\n\nwhere\n\n$$\n\\begin{array}{l} L _ {\\mathrm {B C E}} = \\frac {1}{2 N} \\sum_ {\\mathrm {O}, \\mathrm {H}} \\sum_ {i = 1} ^ {N} \\left[ - w _ {c} \\left(p \\cdot c _ {i} \\cdot \\log \\hat {c} _ {i} \\right. \\right. \\tag {2} \\\\ + (1 - c _ {i}) \\\\ \\cdot \\log (1 - \\hat {c} _ {i})) ] \\\\ \\end{array}\n$$\n\nand\n\n$$\n\\begin{array}{l} L _ {M S E} = \\frac {1}{2 N} \\sum_ {O, H} \\sum_ {i = 1} ^ {N} \\left[ \\frac {w _ {x y}}{2} \\left(\\left(d x _ {i} - \\widehat {d x} _ {i}\\right) ^ {2} \\right. \\right. (2) \\\\ \\left. + \\left(d y _ {i} - \\widehat {d y} _ {i}\\right) ^ {2}\\right) (3) \\\\ \\left. + w _ {z} \\left(d z _ {i} - \\widehat {d z} _ {i}\\right) ^ {2} \\right] \\\\ \\end{array}\n$$\n\nwhere $w_{c} = 1.0, w_{\\rho} = 0.5, w_{z} = 0.5$ are weighting factors. $c_{i}, dx_{i}, dy_{i}$ and $\\mathrm{d}z_{i}$ are the data labels of confidence, fractional coordinates in x, y and z directions, respectively. And $\\hat{c}_{i}, \\widehat{dx}_{i}, \\widehat{dy}_{i}$ and $\\widehat{dz}_{i}$ are the network's output. To balance the positive and negative samples, pos weight $p = 5$ is applied to the BCE term.\n\nTo eliminate redundant atoms from the neural network (NN) predictions, we applied a nonmaximum suppression (NMS) algorithm based on atom-pair distances. Unlike traditional methods, the computation of Intersection over Union (IoU) is not used. Voxels with $c_{i} > 0.5$ are selected in descending order, and any surrounding voxels within a distance of $r_{\\mathrm{NMS}} < 2.0\\AA$ from the selected voxel are discarded. After removing invalid atoms, the confusion matrix is generated by pairing atoms within a cutoff distance of $r_{\\mathrm{M}} = 1.0\\AA$ . This matrix is used to determine true positives (TP), false positives (FP), and false negatives (FN). The F1-score, quantifying the NN's performance, is then defined as:\n\n$$\nF _ {1} = \\frac {2 T P}{2 T P + F P + F N} \\tag {4}\n$$\n\nA higher F1-score indicates better detection performance.\n\nThe CycleGAN includes two pairs of identical generators and discriminators (see architecture illustration Fig. S2). We use a similar network architecture described in the object detection part but with fewer trainable parameters. The discriminator networks consist of three layers of conv-norm-activation blocks, followed by an MLP. CycleGAN enables style transfer between experimental and simulation data, and vice versa. The training process and the hyperparameters follow those used in the original paper[65]. The complete objective is to minimize the following loss function:\n\n$$\nL = L _ {G A N} + \\lambda_ {1} \\cdot L _ {c y c} + \\lambda_ {2} \\cdot L _ {i d t e n i t y} \\tag {5}\n$$\n\nwhere $\\lambda_{1} = 10, \\lambda_{2} = 0.5$ are two hyperparameters. We use the Fréchet Inception Distance (FID) to estimate the NN performance. FID is computed using a pre-trained neural network to capture the image features[51]. The Inception v3 model with 2048 latent features is used for this purpose[66]. 
# APPENDIX G: STYLE TRANSFER WITH CYCLEGAN

The CycleGAN includes two pairs of identical generators and discriminators (see the architecture illustration in Fig. S2). We use a network architecture similar to that described in the object detection part, but with fewer trainable parameters. The discriminator networks consist of three layers of conv-norm-activation blocks, followed by an MLP. CycleGAN enables style transfer from simulated to experimental data and vice versa. The training process and the hyperparameters follow those used in the original paper [65]. The complete objective is to minimize the following loss function:

$$
L = L_{GAN} + \lambda_{1} \cdot L_{cyc} + \lambda_{2} \cdot L_{identity} \tag{5}
$$

where $\lambda_{1} = 10$ and $\lambda_{2} = 0.5$ are two hyperparameters. We use the Fréchet Inception Distance (FID) to estimate the NN performance. FID is computed using a pre-trained neural network to capture the image features [51]; the Inception v3 model with 2048 latent features is used for this purpose [66]. To adapt FID for 3D images, we treat the 3D image as a stack of 2D images.

# APPENDIX H: STRUCTURE GENERATION NEURAL NETWORK

The structure generation network is a conditional variational autoencoder [67], which includes two encoders and one decoder (see the architecture illustration in Fig. S3). Both encoders share the same architecture, which includes three down-sampling layers. Each layer features a residual block, and the latter two layers contain an extra attention block. The encoder $E_{c}$ encodes the interfacial layer into an 8-dimensional latent vector with Gaussian distribution $N(\pmb{\mu}_{c},\mathbf{I})$. The other encoder, $E_{VAE}$, encodes the lower layer into an 8-dimensional latent vector with Gaussian distribution $N(\pmb{\mu},\pmb{\Sigma})$, where $\pmb{\mu}$ is the mean vector and $\pmb{\Sigma}$ is the log-variance vector. The decoder $D$ generates the lower-layer structure from latent vectors sampled from $N(\pmb{\mu},\pmb{\Sigma})$ during training or from $N(\pmb{\mu}_{c},\mathbf{I})$ during prediction.

The training objective is to minimize the reconstruction loss between the original and reconstructed data together with the Kullback-Leibler (KL) divergence between $N(\pmb{\mu}, \pmb{\Sigma})$ and $N(\pmb{\mu}_c, \mathbf{I})$:

$$
L = L_{BCE} + \gamma \cdot L_{MSE} + \beta \cdot L_{\mathrm{KL}} \tag{6}
$$

where $L_{\mathrm{BCE}}$ and $L_{\mathrm{MSE}}$ are the same as in the detection stage. The KL divergence is

$$
L_{\mathrm{KL}} = -\frac{1}{2} \sum_{i}^{N} \left[ 1 + \Sigma_{i} - \left(\mu_{i} - \mu_{c,i}\right)^{2} - e^{\Sigma_{i}} \right] \tag{7}
$$

$\beta = 1.0$ is a hyperparameter introduced in beta-VAE [68], and $\gamma = 0.25$ is a hyperparameter that allows the model to focus more on structural learning.
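The KL term of Eq. (7) and the reparameterized sampling of the latent vector can be written compactly; the following is a minimal NumPy sketch under the stated conventions (the log-variance interpretation of $\Sigma$ and the function names are assumptions, not the authors' code).

```python
import numpy as np

def kl_to_conditional_prior(mu, log_var, mu_c):
    """Eq. (7): KL divergence of N(mu, diag(exp(log_var))) from the
    conditional prior N(mu_c, I), summed over the latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - (mu - mu_c) ** 2 - np.exp(log_var))

def sample_latent(mu, log_var, rng=np.random.default_rng(0)):
    """Reparameterized sample z = mu + sigma * eps used during training.
    At prediction time one would instead draw z from N(mu_c, I)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(np.shape(mu))
```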
# APPENDIX I: STRUCTURE GENERATION AND RMSD RELAXATION

As shown in Fig. S6, the AFM image simulated from the object detection network's raw prediction shows good alignment with the experimental images. However, to further improve the accuracy of the structural model, regions where the two AFM images (the experimental images and the images simulated from the raw prediction) deviated required manual adjustments, namely adding or removing water molecules based on the ice rules [69]. Such modifications were easy to implement and took minimal time. Following the adjustments, we fixed the dangling OH bonds and minimized the energy using the TIP4P/ice empirical potential. This workflow ensured the reliability of the initial structure used for subsequent structure generation.

After matching the crystal structure for the interfacial disordered ice based on the experimental AFM images, we first performed a relaxation of the hydrogen bonds. During this process, the oxygen atoms of the crystal substrate and the upper layer were fixed, while the upper layer was allowed to drift as a whole; the hydrogen atoms were free. This relaxation was conducted in the NVT ensemble at $120\mathrm{K}$ for 2 ns. Note that the same method was applied in the object detection process to adjust the hydrogen bonds when only the top layer exists. Following the hydrogen-bond adjustment, we fixed the boundaries of the upper layer and further minimized the energy of the entire system to validate the stability of the generated structure. The conjugate gradient (CG) algorithm was applied during energy minimization, which was set to stop when the largest force component on any atom was smaller than $1 \times 10^{-8}$ kcal/(mol·Å). The root-mean-squared displacement (RMSD) of the trajectories was tracked; only the free oxygen atoms in the upper layer were included in the calculation, as presented in Fig. 3c.

# APPENDIX J: MD SIMULATIONS TAKING GENERATED DATA AS INITIAL CONFIGURATIONS

The structure generated by the model does not consider proton ordering. To address this, the proton ordering is adjusted through a heating and annealing process based on Ref. [61]. Specifically, all oxygen atoms are fixed while the hydrogen atoms are heated to $1200\mathrm{K}$ and then annealed to $4\mathrm{K}$ over 2 ns. Subsequently, the entire structure is relaxed at $20\mathrm{K}$, with the top-layer oxygen atoms constrained by virtual springs to preserve the topological structure. The spring constant is gradually reduced from 2 kcal/(mol·Å²) to 0.125 kcal/(mol·Å²) over five simulation steps spanning a total of 250 ps. The structure is then heated to $100\mathrm{K}$ over 2 ns without any constraints.

For further investigation of its topological ordering and premelting process at higher temperatures, the structure is incrementally heated to the desired temperature at a rate of $0.2\mathrm{K/ps}$. The system is then relaxed for 20 ns, with data collection and analysis performed during the final 5 ns.
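The RMSD tracking mentioned in Appendix I reduces to a simple calculation once a trajectory and a mask of the unconstrained atoms are available. The following is a minimal NumPy sketch (an assumption about the data layout, not the analysis code used for Fig. 3c).

```python
import numpy as np

def rmsd_free_oxygens(trajectory, reference, free_oxygen_mask):
    """RMSD of the free upper-layer oxygen atoms for each frame of a trajectory.

    trajectory       : (n_frames, n_atoms, 3) Cartesian coordinates in Angstrom
    reference        : (n_atoms, 3) coordinates of the structure before relaxation
    free_oxygen_mask : (n_atoms,) boolean mask selecting the unconstrained O atoms
    Returns an (n_frames,) array of RMSD values.
    """
    diff = trajectory[:, free_oxygen_mask, :] - reference[free_oxygen_mask, :]
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=-1), axis=-1))
```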
[1] B. Slater and A. Michaelides, Surface premelting of water ice, Nature Reviews Chemistry, 3, 172 (2019).
[2] Y. Qiu and V. Molinero, Why is it so difficult to identify the onset of ice premelting?, The Journal of Physical Chemistry Letters, 9, 5179 (2018).
[3] Z. Jia, C. I. DeLuca, H. Chao, and P. L. Davies, Structural basis for the binding of a globular antifreeze protein to ice, Nature, 384, 285 (1996).
[4] L. Canale, J. Comtet, A. Niguès, C. Cohen, C. Clanet, A. Siria, and L. Bocquet, Nanorheology of interfacial water during ice gliding, Physical Review X, 9, 041025 (2019).
[5] J. Devlin, N. Uras, J. Sadlej, and V. Buch, Discrete stages in the solvation and ionization of hydrogen chloride adsorbed on ice particles, Nature, 417, 269 (2002).
[6] T. Huthwelker, M. Ammann, and T. Peter, The uptake of acidic gases on ice, Chem Rev, 106, 1375 (2006).
[7] M. J. Molina, T.-L. Tso, L. T. Molina, and F. C.-Y. Wang, Antarctic stratospheric chemistry of chlorine nitrate, hydrogen chloride, and ice: Release of active chlorine, Science, 238, 1253 (1987).
[8] T. Hama and N. Watanabe, Surface processes on interstellar amorphous solid water: Adsorption, diffusion, tunneling reactions, and nuclear-spin conversion, Chem Rev, 113, 8783 (2013).
[9] J. Braun, A. Glebov, A. Graham, A. Menzel, and J. Toennies, Structure and phonons of the ice surface, Phys Rev Lett, 80, 2638 (1998).
[10] X. Wei, P. B. Miranda, and Y. Shen, Surface vibrational spectroscopic study of surface melting of ice, Phys Rev Lett, 86, 1554 (2001).
[11] M. Mehlhorn and K. Morgenstern, Faceting during the transformation of amorphous to crystalline ice, Phys Rev Lett, 99, 246101 (2007).
[12] V. Buch, H. Groenzin, I. Li, M. J. Shultz, and E. Tosatti, Proton order in the ice crystal surface, Proceedings of the National Academy of Sciences, 105, 5969 (2008).
[13] N. Kawakami, K. Iwata, A. Shiotari, and Y. Sugimoto, Intrinsic reconstruction of ice-I surfaces, Sci Adv, 6, eabb7986 (2020).
[14] X. Huang et al., Tracking cubic ice at molecular resolution, Nature, 617, 86 (2023).
[15] A. Rosu-Finsen, M. B. Davies, A. Amon, H. Wu, A. Sella, A. Michaelides, and C. G. Salzmann, Medium-density amorphous ice, Science, 379, 474 (2023).
[16] N. Materer, U. Starke, A. Barbieri, M. Van Hove, G. Somorjai, G. J. Kroes, and C. Minot, Molecular surface structure of a low-temperature ice Ih (0001) crystal, The Journal of Physical Chemistry, 99, 6267 (1995).
[17] D. Nordlund, H. Ogasawara, P. Wernet, M. Nyberg, M. Odelius, L. Pettersson, and A. Nilsson, Surface structure of thin ice films, Chemical Physics Letters, 395, 161 (2004).
[18] M. A. Sánchez et al., Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice, Proceedings of the National Academy of Sciences, 114, 227 (2017).
[19] T. Sugimoto, Y. Otsuki, T. Ishiyama, A. Morita, K. Watanabe, and Y. Matsumoto, Topologically disordered mesophase at the topmost surface layer of crystalline ice between 120 and 200 K, Physical Review B, 99, 121402 (2019).
[20] Y. Nojima, Y. Suzuki, M. Takahashi, and S. Yamaguchi, Proton order toward the surface of ice Ih revealed by heterodyne-detected sum frequency generation spectroscopy, The Journal of Physical Chemistry Letters, 8, 5031 (2017).
[21] W. J. Smit, F. Tang, M. A. Sánchez, E. H. Backus, L. Xu, T. Hasegawa, M. Bonn, H. J. Bakker, and Y. Nagata, Excess hydrogen bond at the ice-vapor interface around 200 K, Phys Rev Lett, 119, 133003 (2017).
[22] L. Bartels, G. Meyer, and K. H. Rieder, Controlled vertical manipulation of single CO molecules with the scanning tunneling microscope: A route to chemical contrast, Appl Phys Lett, 71, 213 (1997).
[23] J. B. Peng et al., The effect of hydration number on the interfacial transport of sodium ions, Nature, 557, 701 (2018).
[24] J. B. Peng et al., Weakly perturbative imaging of interfacial water with submolecular resolution by atomic force microscopy, Nat Commun, 9, 1 (2018).
[25] F. J. Giessibl, The qPlus sensor, a powerful core for the atomic force microscope, Review of Scientific Instruments, 90, 011101 (2019).
[26] L. Gross, F. Mohn, N. Moll, P. Liljeroth, and G. Meyer, The chemical structure of a molecule resolved by atomic force microscopy, Science, 325, 1110 (2009).
[27] N. Pavlicek and L. Gross, Generation, manipulation and characterization of molecules by atomic force microscopy, Nature Reviews Chemistry, 1, 1 (2017).
[28] P. Hapala, R. Temirov, F. S. Tautz, and P. Jelinek, Origin of high-resolution IETS-STM images of organic molecules with functionalized tips, Phys Rev Lett, 113, 226101 (2014).
[29] P. Hapala, G. Kichin, C. Wagner, F. S. Tautz, R. Temirov, and P. Jelinek, Mechanism of high-resolution STM/AFM imaging with functionalized tips, Phys Rev B, 90, 085421 (2014).
[30] A. Shiotari and Y. Sugimoto, Ultrahigh-resolution imaging of water networks by atomic force microscopy, Nat Commun, 8, 14313 (2017).
[31] P. Chen et al., Identification of a common ice nucleus on hydrophilic and hydrophobic close-packed metal surfaces, Nat Commun, 14, 5813 (2023).
[32] P. Yang, C. Zhang, W. Sun, J. Dong, D. Cao, J. Guo, and Y. Jiang, Robustness of bilayer hexagonal ice against surface symmetry and corrugation, Phys Rev Lett, 129, 046001 (2022).
[33] Y. Tian et al., Visualizing Eigen/Zundel cations and their interconversion in monolayer water on metal surfaces, Science, 377, 315 (2022).
[34] K. Bian, C. Gerber, A. J. Heinrich, D. J. Müller, S. Scheuring, and Y. Jiang, Scanning probe microscopy, Nature Reviews Methods Primers, 1, 36 (2021).
[35] F. J. Giessibl, The qPlus sensor, a powerful core for the atomic force microscope, Review of Scientific Instruments, 90 (2019).
[36] F. Priante, N. Oinonen, Y. Tian, D. Guan, C. Xu, S. Cai, P. Liljeroth, Y. Jiang, and A. S. Foster, Structure discovery in atomic force microscopy imaging of ice, ACS Nano, 18, 5546 (2024).
[37] N. Oinonen, L. Kurki, A. Ilin, and A. S. Foster, Molecular graph reconstruction from atomic force microscope images with machine learning, MRS Bull, 47, 895 (2022).
[38] B. Tang, Y. Song, M. Qin, Y. Tian, Z. W. Wu, Y. Jiang, D. Cao, and L. Xu, Machine learning-aided atomic structure identification of interfacial ionic hydrates from AFM images, National Science Review, 10, nwac282 (2023).
[39] J. Carracedo-Cosme, C. Romero-Muniz, and R. Pérez, A deep learning approach for molecular classification based on AFM images, Nanomaterials-Basel, 11, 1658 (2021).
[40] J. Carracedo-Cosme, C. Romero-Muniz, P. Pou, and R. Pérez, Molecular identification from AFM images using the IUPAC nomenclature and attribute multimodal recurrent neural networks, ACS Applied Materials & Interfaces, 15, 22692 (2023).
[41] J. Carracedo-Cosme and R. Pérez, Molecular identification with atomic force microscopy and conditional generative adversarial networks, Npj Comput Mater, 10, 19 (2024).
[42] N. Oinonen et al., Electrostatic discovery atomic force microscopy, ACS Nano, 16, 89 (2021).
[43] J. Jumper et al., Highly accurate protein structure prediction with AlphaFold, Nature, 596, 583 (2021).
[44] M. Xu, L. Yu, Y. Song, C. Shi, S. Ermon, and J. Tang, GeoDiff: A geometric diffusion model for molecular conformation generation, arXiv preprint arXiv:2203.02923 (2022).
[45] Y. Zhao, E. M. D. Siriwardane, Z. Wu, N. Fu, M. Al-Fahdi, M. Hu, and J. Hu, Physics guided deep learning for generative design of crystal materials with symmetry constraints, Npj Comput Mater, 9, 38 (2023).
[46] J. Hoffmann, L. Maestrati, Y. Sawada, J. Tang, J. M. Sellier, and Y. Bengio, Data-driven approach to encoding and decoding 3-d crystal structures, arXiv preprint arXiv:1909.00949 (2019).
[47] N. Ronne, A. Aspuru-Guzik, and B. Hammer, Generative diffusion model for surface structure discovery, arXiv preprint arXiv:2402.17404 (2024).
[48] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19 (Springer, 2016), pp. 424.
[49] T. Kling, F. Kling, and D. Donadio, Structure and dynamics of the quasi-liquid layer at the surface of ice from molecular simulations, The Journal of Physical Chemistry C, 122, 24780 (2018).
[50] J. Abascal, E. Sanz, R. García Fernández, and C. Vega, A potential model for the study of ices and amorphous water: TIP4P/Ice, The Journal of Chemical Physics, 122 (2005).
[51] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, GANs trained by a two time-scale update rule converge to a local Nash equilibrium, Advances in Neural Information Processing Systems, 30 (2017).
[52] J. Hong et al., Imaging surface structure and premelting of ice Ih with atomic resolution, Nature, 1 (2024).
[53] A. Pedersen, K. T. Wikfeldt, L. Karssemeijer, H. Cuppen, and H. Jónsson, Molecular reordering processes on ice (0001) surfaces from long timescale simulations, The Journal of Chemical Physics, 141 (2014).
[54] M. Watkins, D. Pan, E. G. Wang, A. Michaelides, J. VandeVondele, and B. Slater, Large variation of vacancy formation energies in the surface of crystalline ice, Nature Materials, 10, 794 (2011).
[55] J. Dash, A. Rempel, and J. Wettlaufer, The physics of premelted ice and its geophysical consequences, Reviews of Modern Physics, 78, 695 (2006).
[56] C. J. Stubenrauch et al., Assessment of global cloud datasets from satellites: Project and database initiated by the GEWEX radiation panel, Bulletin of the American Meteorological Society, 94, 1031 (2013).
[57] H. Kang, Chemistry of ice surfaces. Elementary reaction steps on ice studied by reactive ion scattering, Accounts of Chemical Research, 38, 893 (2005).
[58] B. Ervens, Modeling the processing of aerosol and trace gases in clouds and fogs, Chem Rev, 115, 4157 (2015).
[59] M. Clegg and D. Abbatt, Uptake of gas-phase SO2 and H2O2 by ice surfaces: Dependence on partial pressure, temperature, and surface acidity, The Journal of Physical Chemistry A, 105, 6630 (2001).
[60] M. Matsumoto, T. Yagasaki, and H. Tanaka, (Wiley Online Library, 2018).
[61] V. Fidalgo Candido, R. Gomes de Aguiar Veiga, and M. de Koning, Generating proton-disordered ice configurations using orientational simulated annealing, The Journal of Chemical Physics, 161 (2024).
[62] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, Journal of Computational Physics, 117, 1 (1995).
[63] P. Jiang, D. Ergu, F. Liu, Y. Cai, and B. Ma, A review of YOLO algorithm developments, Procedia Computer Science, 199, 1066 (2022).
[64] J. Ho, A. Jain, and P. Abbeel, Denoising diffusion probabilistic models, Advances in Neural Information Processing Systems, 33, 6840 (2020).
[65] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, in Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 2223.
[66] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2818.
[67] K. Sohn, H. Lee, and X. Yan, Learning structured output representation using deep conditional generative models, Advances in Neural Information Processing Systems, 28 (2015).
[68] I. Higgins, L. Matthey, A. Pal, C. P. Burgess, X. Glorot, M. M. Botvinick, S. Mohamed, and A. Lerchner, beta-VAE: Learning basic visual concepts with a constrained variational framework, ICLR (Poster), 3 (2017).
[69] J. D. Bernal and R. H. Fowler, A theory of water and ionic solution, with particular reference to hydrogen and hydroxyl ions, J. Chem. Phys., 1, 515 (1933).

Figure 1 | Schematic illustration of the overall framework of the training and prediction processes (panels I. Phase Exploration & Data Augmentation, II. Object Detection, III. Structure Generation). (Section I) Bulk ice surface structures are first sampled using MD simulations at temperatures ranging from $160\mathrm{K}$ to $260\mathrm{K}$. These sampled structures are converted into AFM images via the PPM for training input data in Section II. Prior to training, a CycleGAN is trained on unlabeled experimental AFM images to introduce experimental-like noise into the simulated AFM images. (Section II) During training, a 3D U-Net-based neural network processes AFM images to predict the interfacial structure, represented by 3D voxels. For the prediction of experimental data, after adjustment based on ice rules, the predicted topmost-layer structure can be simulated into AFM images, without the substrate and relaxation, for validation against the experimental input. (Section III) To complete the underlying structure, a cVAE is trained using simulated data with the topmost-layer structure as a conditional input. During prediction, the subsurface structure of the interfacial structure from Section II is reconstructed. The stability of the predicted structure is validated through energy minimization.
Figure 2 | Examples of object detection network predictions from experimental data. (a) and (c) Experimental AFM images of the interfacial superstructure at $121\mathrm{K}$ and the disordered structure at $135\mathrm{K}$ on the Ih-phase basal plane. Columns 1-3 show AFM images at different tip heights, -100 pm, -150 pm and -200 pm (from left to right). Column 4 shows the network prediction in top and side views, with red and white spheres representing oxygen and hydrogen atoms, respectively. (b) and (d) Simulated AFM images based on the network's predictions (no underlying structure included) corresponding to (a) and (c), showing increasing tip-sample distances.

Figure 3 | Prediction performance of the structure generation network on experimental data. (a) and (b) 3D structures of the interfacial superstructure and disordered structure from Fig. 2(a) and (c), shown in top view. The underlying structures generated by the generative neural network are depicted in light gray lines. Green lines illustrate the relative atomic displacements of oxygen atoms in the top layers before and after relaxation. (c) RMSD distribution of oxygen atoms in the topmost layer during the energy relaxation process. The orange and blue lines represent the distributions for the superstructure in (a) and the disordered structure in (b), respectively. (d) Enlarged side view of the selected area within the dashed box in (b). Two of the largest displacements of oxygen atoms are indicated by green lines. Red, blue, and white spheres represent oxygen atoms in the top layer, oxygen atoms in the lower layer, and hydrogen atoms, respectively.
Figure 4 | MD simulations using experimental data as initial configurations and phase diagram of the ice surface. (a) Distribution of hydrogen-bond ring sizes in the interfacial layer at $140\mathrm{K}$ for different initial configurations: the disordered structure derived from atomic force microscopy, a proton-disordered pristine Ih basal-plane surface (Pristine Surface), and a deposition process (Deposit). Results for the AFM-derived configurations were averaged over 7 initial configurations, while results for the pristine surface and the deposition process were averaged over three independent samples. (b) Phase diagram of the bulk ice surface under high-vacuum conditions. A superstructure forms at $121\mathrm{K}$; above this temperature, a structurally disordered amorphous ice layer (AIL) forms, transitioning to a quasi-liquid layer (QLL) around $180\mathrm{K}$, where molecular diffusivity is thermally activated. (c) Temperature dependence of the tetrahedral order parameter in the topmost layer. Orange and blue lines represent simulations initialized with configurations from AFM and the proton-disordered hexagonal surface, respectively. (d) Temperature dependence of the diffusion coefficient. (e) Temperature dependence of the sum-frequency generation (SFG) amplitude $A_{zzz} \propto N_s(0.32\langle \cos \theta \rangle + 0.68\langle \cos^3\theta \rangle)$, where $N_s$ and $\theta$ represent the number and polar orientation of dangling O-H bonds, respectively, and $\langle \cdot \rangle$ denotes the ensemble average. The simulated $A_{zzz}$ values are normalized at $170\mathrm{K}$ based on the AFM values (orange dot). The experimentally observed spectral $A_{zzz}$ values are also normalized at $170\mathrm{K}$ [19].
# Hyperuniform patterns nucleated at low temperatures: Insight from vortex matter imaged in unprecedentedly large fields-of-view

# Abstract

Hyperuniform patterns present enhanced physical properties that make them candidates for a new generation of cutting-edge technological devices. Synthesizing devices with tens of thousands of components arranged in a hyperuniform fashion has thus become a key milestone on the way to implementing these technologies. Here we provide evidence that extended two-dimensional hyperuniform patterns spanning tens of thousands of components can be nucleated using as a template the low-temperature vortex structure obtained in pristine $\mathrm{Bi}_{2}\mathrm{Sr}_{2}\mathrm{CaCu}_{2}\mathrm{O}_{8}$ samples after following a field-cooling protocol.

Keywords: vortex matter, superconductors, hyperuniformity

# 1 Introduction

Vortex matter in superconductors is a playground for studying how the different types of disorder present in the host medium, the superconducting sample, shape the nucleation of condensed-matter phases with a broad spectrum of spatial correlations. In general, vortex phases nucleated in real samples present density fluctuations yielding a non-negligible variance of the number of vortices $N$ enclosed within an in-plane area of radius $R$, $\sigma_N^2 = \langle N^2\rangle -\langle N\rangle^2$. At one extreme of the statistical correlations lie ordered phases with quasi-long-range order. At the other extreme lie disordered vortex systems which exhibit unbounded density fluctuations that grow faster than those of a point pattern generated by a uniform random distribution. Between these two extremes, vortex phases exhibit an aperiodic in-plane arrangement of vortices with density fluctuations increasing with distance more slowly than the studied area, namely with $\sigma_N^2\sim R^\beta$ and $0 < \beta < 2$. This results in vortices being more evenly spaced than those in a uniform random distribution, asymptotically suppressing relative number fluctuations, $\sigma_N^2 /\langle N\rangle \rightarrow 0$, in the limit of large window areas. These ubiquitous disordered vortex phases present a hidden order characterized by a slowing down of density fluctuations at large wavelengths and belong to the structural class of hyperuniform systems. Even though hyperuniformity is a long-wavelength asymptotic property, experimental observations in different superconducting materials in moderate fields-of-view suggest that some vortex phases are hyperuniform. Therefore, vortex matter can be used as a template to generate hyperuniform structures at low temperatures if nucleated in host media with particular disorder potentials.

Disordered hyperuniform patterns are universal structures present in different natural systems that possess novel physical functionalities, making them exceptional for technological applications in comparison with conventional ordered materials. For instance, a disordered hyperuniform network of $\mathrm{Al_2O_3}$ walls and cylinders presents isotropic phononic and photonic bandgaps, thus blocking sound and light in all directions, unlike crystals. Hyperuniform structures also possess enhanced thermal and electric transport properties, as well as mechanical resilience, outperforming conventional non-hyperuniform disordered media. Therefore, disordered hyperuniform systems are currently regarded as potential candidates for a new generation of technologies at the forefront of innovation.
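Since the window number variance $\sigma_N^2(R)$ is the quantity that defines this hidden order, it is worth noting how it can be estimated directly from a digitized two-dimensional point pattern. The following is a minimal NumPy sketch (illustrative, not the analysis code used in this work); window centers are drawn at random and kept away from the edges of the field-of-view to limit boundary effects.

```python
import numpy as np

def number_variance(points, radii, n_windows=2000, rng=np.random.default_rng(0)):
    """Estimate sigma_N^2(R) = <N^2> - <N>^2 for circular windows of radius R.

    points : (N, 2) in-plane positions (e.g. digitized vortex coordinates)
    radii  : iterable of window radii, in the same units as the positions
    """
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    variances = []
    for R in radii:
        centers = rng.uniform(lo + R, hi - R, size=(n_windows, 2))
        counts = np.array([np.sum(np.linalg.norm(pts - c, axis=1) < R) for c in centers])
        variances.append(counts.var())
    # A log-log fit of the result against R gives the growth exponent beta.
    return np.array(variances)
```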
Most of these systems are disordered, class-III hyperuniform systems, a class of disordered structures with a moderate amount of density fluctuations that nevertheless retain the hidden hyperuniform order since $1 < \beta < 2$ at large wavelengths. An algebraic increase of $\sigma_N^2$ is manifested in reciprocal space as an algebraic growth of the structure factor for short wave vectors $q$, namely $S(q) \propto q^\alpha$ when $q \to 0$. The relation between the growth exponents of $\sigma_N^2$ and $S(q)$ in a $d$-dimensional system is $\alpha = d - \beta$. Disordered or class-III hyperuniform systems present a structure factor growing with an exponent $0 < \alpha < 1$. In contrast, ordered or class-I hyperuniform systems present $1 < \alpha < 2$. Depending on the nature of the disorder of the host sample and the electromagnetic coupling of vortices with the material, ordered class-I or disordered class-III hyperuniform vortex phases can be nucleated at low temperatures. Thus, vortex matter can be used not only as a template to generate hyperuniform patterns at low temperatures; controlling its coupling with the disordered host medium also makes it possible to obtain hyperuniform structures with tailored properties.

The reported hyperuniformity of vortex phases apparently challenges the fluctuation-compressibility theorem, which states that systems with generic constituents at equilibrium with a thermal bath present a $S(q)$ in the $q \to 0$ limit proportional to the compressibility of the system. The thermodynamic limit and the equivalence of ensembles, basic statistical physics concepts, stem from this simple scaling law. Therefore, theoretically, only thermodynamically incompressible systems could present the hyperuniform hidden order at equilibrium, and, strikingly, vortex matter is a compressible elastic structure in three dimensions. In general, incompressibility at equilibrium is only accomplished when the interaction between constituents is repulsive and long-ranged, in contrast to vortex phases, which typically present short-range interactions. However, constituents with short-ranged interactions can present hyperuniform arrangements on planes within a higher-dimensional system. This is indeed the case of the structure formed by the tips of superconducting vortices impinging on the surface of the three-dimensional vortex structure nucleated in bulk samples with point disorder. Therefore, the reported hyperuniformity of vortex phases might not challenge the fluctuation-compressibility theorem after all.

In previous works we have shown that the hyperuniform correlations in the point pattern of vortices impinging on a plane arise from an effective long-range interaction mediated by the elastic properties of vortices along their length, namely across the sample thickness. Moreover, by means of Langevin dynamics simulations of the quenching of the vortex structure on cooling, we have shown that the hyperuniform order progressively degrades on decreasing the magnitude of this effective long-range interaction, as in the case of dramatically reducing the sample thickness. These findings warn of the potentially negative impact of finite-size effects on large-scale structural properties, which is crucial for designing hyperuniform materials.
Nevertheless, the observation of hyperuniform vortex patterns in sufficiently thick samples is consistent with the fluctuation-compressibility theorem, since the density fluctuations of the vortex tips are associated with the compressibility of a single plane, which has a relatively large bulk tilting energy cost.

All the mentioned experimental evidence was obtained from snapshots of structures frozen during cooling. The interpretation of these results then raises the question of whether the thickness dependence of hyperuniformity is an equilibrium effect, as predicted in Refs., or rather an out-of-equilibrium effect arising from the slow dynamics during cooling. Our recent simulation results indicate that finite-size effects, particularly finite-thickness effects, appear both at equilibrium and out of equilibrium.

For real samples with a bounded thickness $t$, at equilibrium the discussed finite-thickness effect yields an in-plane crossover distance $l_{\mathrm{fs}} = (t / 2\pi)\sqrt{c_{11} / c_{44}}$ above which the system is no longer hyperuniform. Since hyperuniformity is a structural property defined in an asymptotic limit, ascertaining whether a bounded real system presents this hidden order or has reached this crossover in-plane distance requires direct imaging of the constituents of the system in extended fields-of-view. Here we study this issue by imaging vortices in a thick ($t \geq 20\mu \mathrm{m}$) pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8$ sample in a very large field-of-view containing 33 000 vortices. Previous studies in this host medium, with weak random point disorder and for the same vortex density, show that for a field-of-view of roughly 5000 vortices the vortex structure is ordered class-I hyperuniform, with an exponent $\alpha = 1.3 - 1.5$ for different thick samples. Here we reveal that in unprecedentedly large fields-of-view of up to 33 000 vortices the vortex structure nucleated in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8$ samples with weak point disorder remains hyperuniform, and hence the finite-size crossover length $l_{\mathrm{fs}} > 180a$.

# 2 Experimental

We studied pristine, nearly optimally doped $\mathrm{Bi}_{2}\mathrm{Sr}_{2}\mathrm{CaCu}_{2}\mathrm{O}_{8}$ samples with $T_{\mathrm{c}} \sim 90\mathrm{K}$ from two sources. One batch of samples was grown by means of the flux method, and from it we obtained vortex images spanning up to a few thousand vortices. Another batch, also grown by the flux method, contained much larger samples with typical sizes of tens of millimeters and allowed us to obtain vortex images in large fields-of-view spanning up to tens of thousands of vortices. All the studied samples are in the thick regime, presenting thicknesses larger than $20\mu \mathrm{m}$. The samples were specifically selected to ensure that they did not exhibit any visible planar defects, as determined by magnetic decoration imaging. It is therefore reasonable to assume that in the studied samples the dominant disorder is point-like, namely atomic-scale defects that arise randomly when the crystals are grown. We apply the magnetic decoration technique in order to image the positions of individual vortices on the surface of the sample over extended fields-of-view; see for example the zoomed-in image of Fig. 1(a). In a magnetic decoration experiment, ferromagnetic particles evaporated at low temperatures are attracted towards the magnetic halo of the vortices nucleated in superconducting samples in the mixed phase.
At the magnetic core of a vortex the local magnetic field presents a maximum that decays within a typical distance of the order of the superconducting penetration depth, $\lambda(T)$. In this way, the ferromagnetic particles evaporated onto the sample decorate the positions of the vortex tips emerging at the sample surface. Scanning electron microscopy is then used to obtain panoramic views of the vortex structure from images of the sample surface containing the ferromagnetic particles that decorate the vortex positions. Digitizing the positions of the ferromagnetic clusters allows the identification of vortex positions in extended fields-of-view. Magnetic decoration imaging is better suited to studying vortex density fluctuations at large length scales than other imaging techniques, which typically image only hundreds of vortices, such as scanning tunnelling spectroscopy [12, 39], magnetic force microscopy and scanning SQUID microscopy. In addition, this technique can also be used to study the structural properties of vortex matter in extended fields-of-view nucleated on the same sample in different experimental realizations and for different lengths of the vortex system.

The experimental protocol used in this work is field cooling: we obtain snapshots of the vortex structures at low temperatures after cooling the system from the normal state under an applied field. The data presented in this work correspond to vortices nucleated at a field of $30\mathrm{Oe}$ and decorations performed at $4.2\mathrm{K}$. During field cooling, the vortex structure gets frozen, at lengthscales of the order of the lattice spacing $a$, at a temperature $\sim T_{\mathrm{freez}}$ intermediate between the first-order melting transition temperature and the decoration temperature. $T_{\mathrm{freez}}$ is a characteristic temperature at which the dynamics of the vortex structure is dramatically slowed down by the disorder of the host medium given by the pinning potential. Thus this temperature is of the order of the irreversibility temperature at which pinning sets in, namely $T_{\mathrm{freez}} \sim T_{\mathrm{irr}}(B) \sim 0.9T_{\mathrm{c}}$.

Fig. 1 Structural properties of vortex matter nucleated at $30\mathrm{Oe}$ in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8 + \delta}$. (a) Zoom-in of a magnetic decoration of vortices in the sample with the largest surveyed field-of-view spanning 33 000 vortices. The zoom-in shows around 3 000 vortices imaged as white dots corresponding to the Fe clusters decorating vortex positions at the sample surface. (b) Delaunay triangulation of the largest studied field-of-view with 33 000 vortices in the same sample. Blue lines bond first neighbors and vortices highlighted in red are non-sixfold coordinated topological defects in the structure.

# 3 Results

Figure 1(a) shows a zoom-in containing about 3500 vortices from the largest field-of-view studied, which includes approximately 33 000 vortices. Every vortex is imaged as a white dot corresponding to the Fe clusters that decorate the position where the vortex impinges on the sample surface. For the studied vortex density, nucleated at $B = 30 \mathrm{Oe}$, the lattice spacing is $a \sim 0.8 \mu \mathrm{m}$. The vortex structure presents the quasi-long-range positional order compatible with the Bragg glass phase [4, 42] and some grain boundaries between very large grains. This is better appreciated in the Delaunay triangulation of panel (b) showing all vortices in the largest studied field-of-view. The Delaunay triangulation follows an algorithm to identify first neighbors, allowing us to study the coordination of each vortex. In the image, first neighbors are bonded with blue lines and non-sixfold coordinated vortices are highlighted in red. These topological defects form screw dislocations (a five-fold coordinated vortex adjacent to a seven-fold coordinated one) that appear isolated or grouped together in the boundaries separating large vortex grains with different orientations.
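The coordination analysis behind Fig. 1(b) can be reproduced with standard computational-geometry tools. Below is a minimal sketch using scipy.spatial.Delaunay (an illustrative implementation, not the exact pipeline used for the figure; in practice vortices near the image border should be excluded, since their coordination is artificially reduced).

```python
import numpy as np
from scipy.spatial import Delaunay

def coordination_numbers(points):
    """Number of Delaunay first neighbors of each vortex in a 2D point pattern."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:        # each simplex is a triangle (i, j, k)
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return np.array([len(n) for n in neighbors])

# Vortices with coordination different from six are the topological defects
# highlighted in red in Fig. 1(b):
# defects = np.where(coordination_numbers(vortex_xy) != 6)[0]
```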
Vortex positions are digitized from the decoration images in order to obtain the structure factor by Fourier transforming the local density fluctuations of the vortex positions, namely $S(q_{\mathrm{x}},q_{\mathrm{y}}) = |\hat{\rho} (q_x,q_y,z = 0)|^2$ at the sample surface, with $\hat{\rho} (q_x,q_y,z)$ the Fourier transform of the local vortex density. Data for the two-dimensional structure factor obtained from the largest studied field-of-view are presented in Fig. 2(a). The nucleation of large grains in the structure is evident from the detection of several sextets of Bragg peaks at the Bragg wavevector $q_{0}$. From these data we compute the angularly-averaged structure factor $S(q) = S(\sqrt{q_x^2 + q_y^2})$, obtained by averaging the two-dimensional $S(q_{x},q_{y})$ data over the polar angle on an infinitesimal circle of radius $q$; see the schematics in Fig. 2(a).

Angularly-averaged structure factor data for the smallest (4 000 vortices) and largest (33 000 vortices) studied fields-of-view are presented in Fig. 2(b). Irrespective of the size of the field-of-view, the log-log plot of Fig. 2(b) shows that the structure factor decays algebraically in the $q \to 0$ limit. Fits to the data using $S(q) = B(q / q_0)^{\alpha}$ yield $\alpha = 1.46$ for the largest and $\alpha = 1.4$ for the smallest fields-of-view. The prefactor $B$ is larger and the height of the Bragg peak is lower for the structure nucleated in the sample where the largest field-of-view is studied, suggesting that the magnitude of vortex density fluctuations is larger than for the structure nucleated in the other sample. This difference most likely originates from a difference in the magnitude of the point disorder in the two samples. Regardless of these differences, the $\alpha$ exponents obtained in both cases are similar within their error and indicate that the studied vortex structure presents effective class-I hyperuniform properties, as suggested in previous studies in smaller fields-of-view for various thick samples and vortex densities around 30 G.
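The angular averaging and the small-$q$ algebraic fit just described can be sketched in a few lines; the following is an illustrative NumPy/SciPy implementation (the normalization convention, bin counts, and function names are assumptions, and for tens of thousands of points the forward-scattering peak at $q \to 0$ and memory use need more care than shown here).

```python
import numpy as np
from scipy.optimize import curve_fit

def angular_averaged_sq(points, qmax, nq=128, nbins=80):
    """Angularly averaged structure factor S(q) of a 2D point pattern.

    S(qx, qy) = |sum_j exp(-i q.r_j)|^2 / N is evaluated on a square grid of
    wave vectors (row by row to limit memory) and then binned over |q|.
    """
    pts = np.asarray(points, dtype=float)
    qs = np.linspace(-qmax, qmax, nq)
    S2d = np.empty((nq, nq))
    for i, qy in enumerate(qs):
        phase = np.exp(-1j * (qs[:, None] * pts[:, 0] + qy * pts[:, 1]))
        S2d[i] = np.abs(phase.sum(axis=1)) ** 2 / len(pts)
    qnorm = np.hypot(*np.meshgrid(qs, qs)).ravel()
    edges = np.linspace(0.0, qmax, nbins + 1)
    idx = np.digitize(qnorm, edges)
    Sq = np.array([S2d.ravel()[idx == b].mean() for b in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), Sq

def fit_alpha(q, Sq, q0, qcut=0.4):
    """Fit S(q) = B (q/q0)^alpha over 0 < q/q0 < qcut; returns (B, alpha)."""
    mask = (q > 0) & (q / q0 < qcut) & np.isfinite(Sq)
    popt, _ = curve_fit(lambda x, B, a: B * x ** a, q[mask] / q0, Sq[mask], p0=(1.0, 1.0))
    return popt
```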
Strict class-II hyperuniformity with $\alpha = 1$, the behavior expected theoretically at equilibrium for interacting elastic lines in the absence of disorder, or for the vortex liquid at equilibrium, cannot be ruled out. For interacting vortices nucleated in a sample with negligible disorder, including dispersive corrections to the elastic moduli $c_{11}$ and $c_{44}$ leads to the modified small-$q$ expression for the structure factor, $S(q)\propto (q / q_0)(1 + D(q / q_0))$, with $D > 0$. The fits of Fig. 2(c) show that this expression also provides a reasonable description of the data, with errors similar to those of the algebraic fits of panel (b). This indicates that, in the asymptotic $q\to 0$ limit, class-II hyperuniformity with dispersive elastic constants is a viable alternative interpretation of the structural properties of vortex matter nucleated in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8 + \delta}$ samples. Thus, in either of the two scenarios discussed above, the vortex structure nucleated at $30\mathrm{Oe}$ in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8 + \delta}$ is hyperuniform up to lengthscales of $\sim 180a$. Therefore, for this vortex system the finite-size crossover length $l_{\mathrm{fs}} > 180a$.

Fig. 2 Structure factor data of vortex matter nucleated at $30\mathrm{Oe}$ in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8 + \delta}$. (a) Two-dimensional structure factor of the largest studied field-of-view, computed after digitizing the positions of roughly 33 000 vortices. Bragg spots (yellow features) appear at the Bragg wavevector $q_{0}$. The magnitude of $S(q_{\mathrm{x}}, q_{\mathrm{y}})$ is averaged along an infinitesimal circle of radius $q$, see dotted white lines, in order to obtain the angularly-averaged structure factor $S(q)$. (b) $S(q)$ data for the smallest (blue open dots) and largest (pink open dots) studied fields-of-view, including 4 000 and 33 000 vortices, respectively. Red and orange lines are algebraic fits up to $q / q_{0} = 0.4$ yielding the exponents $\alpha$ indicated in the legend. (c) Same data fitted in the same $q / q_{0}$ range with the function $S(q) = C(q / q_{0})(1 + D(q / q_{0}))$ theoretically predicted for $\alpha = 1$ class-II hyperuniformity with dispersive elastic constants (see text). The fitting parameters obtained and their errors are indicated in the legend.

# 4 Conclusion

In conclusion, our work reveals that extended two-dimensional hyperuniform patterns spanning tens of thousands of components can be nucleated at low temperatures when the host medium in which the structure is quenched on cooling exhibits uncorrelated weak point disorder. In the system we study here, vortex matter nucleated in pristine $\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8$ samples, the finite-size crossover length above which density fluctuations grow with an exponent close to the system dimension exceeds 180 lattice spacings. This vortex structure, quenched in such a host medium down to low temperatures, can be used as a template to generate two-dimensional structural systems with strongly suppressed density fluctuations at large lengthscales. These results are important for designing a road-map to synthesize hyperuniform patterns for cutting-edge technological applications in large devices with tens of thousands of components.

Acknowledgements. We acknowledge financial support from the organization of the 30th International Conference on Low Temperature Physics at Bilbao, Spain, in order for Y.F. to attend the conference. Y.F. acknowledges funding from the Alexander von Humboldt Foundation through the Georg Forster Research Award and from the Technische Universität Dresden through the Dresden Senior Fellowship Program. Work partially supported by the National Council of Scientific and Technical Research of Argentina (CONICET) through Grant No. PIP 2021-1848 and by the Universidad Nacional de Cuyo Research Grant No. 06/80020240100305UN.
# Absement: Quantitative Assessment of Metabolic Cost during Quasi-Isometric Muscle Loading

Abstract

Accurate quantitative assessment of metabolic cost during static posture holding is a strategically important problem in biomechanics and physiology. Traditional metrics such as "time under tension" are fundamentally insufficient, because they are scalar quantities that ignore the temporal history of deviations, that is, the microdynamics of posture, which has nontrivial energetic consequences. In this work, we propose a theoretically grounded methodology to address this problem by introducing the concept of the deviation absement $(\Delta\mathcal{A}_{\ell})$, defined as the time integral of the deviation of the muscle-tendon unit length from a reference value. We rigorously prove that, for a broad class of quasi-static models, the absement appears as the leading first-order state variable. For small deviations in a neighbourhood of a reference posture, the total metabolic cost $\mathcal{E}_{\mathrm{met}}(\ell)$ admits a universal asymptotic expansion of the form

$$
\mathcal{E}_{\mathrm{met}}(\ell) = P_0 T + C_1 \Delta\mathcal{A}_{\ell} + C_2 \int_0^T \left(\ell(t) - \ell_0\right)^2 dt + O\big(\|\ell - \ell_0\|_{L^{\infty}}^3\big),
$$

where $T$ is the duration of loading, and $P_0, C_1, C_2$ are constants determined by local properties of the system. Thus, the deviation absement $(\Delta\mathcal{A}_{\ell})$ is the unique first-order sufficient statistic that allows one to quantify and separate the energetic contribution of systematic drift of the mean posture from the contribution of micro-oscillations (tremor), which is described by the quadratic term. This result has direct consequences for parameter identification: the proposed formalism makes it possible to recover physically meaningful coefficients $(P_0, C_1, C_2)$ by means of linear regression of experimental data obtained from standard kinematic measurements and indirect calorimetry.

# 1 Introduction

Modelling the energetics of isometric muscle contractions is one of the fundamental problems of biomechanics. Classical approaches that reduce the description to the scalar predictor "time under tension" are intrinsically insufficient. They treat posture holding as a static act, ignoring its dynamic nature and the temporal history of deviations, that is, the continuous micro-deviations and postural tremor that inevitably accompany any real posture holding and have a substantial impact on the total energetic cost.

The central problem addressed in this work is the absence of a formal theoretical framework that would link the microdynamics of posture holding to the integral metabolic cost. We demonstrate that such a framework can be constructed by introducing the concept of the deviation absement $(\Delta\mathcal{A}_{\ell})$. This quantity is not merely a new empirical index, but a fundamental parameter that arises unavoidably from the asymptotic analysis of the energetic functional as the unique leading first-order variable.

The main contributions of this work can be summarised as follows:

- Theoretical justification: We rigorously prove that, for a broad class of quasi-static models, the deviation absement is the unique first-order sufficient statistic in the asymptotic expansion of the energetic cost functional.
- Structural decomposition: The proposed formalism allows one to clearly decompose the energetic cost into three physically interpretable components: the baseline cost of holding an ideal posture, the cost of systematic drift of the mean posture, and the cost of tremor or variability.
- Practical identification: The model provides a direct methodology for identifying physically meaningful parameters $(P_0, C_1, C_2)$ from standard experimentally measured quantities, thereby establishing a strong link between theoretical coefficients and empirical data.

At the level of existing models, the metabolic cost of posture holding is usually described either through the overall rate of oxygen consumption or by empirical regression relationships in which the predictors are characteristics of centre-of-pressure (COP) fluctuations, sway amplitude and velocity, or total trajectory length. In these studies, metabolic cost is treated as a scalar output quantity associated with an embedded set of kinematic and stabilometric indices, but a minimal sufficient descriptor of postural drift, derived directly from an energetic functional, is not formulated.

From a methodological standpoint, our approach is closer to classical works in theoretical biophysics, where large-scale metabolic networks are described by variational principles and optimisation problems. In such models, energetic functionals are written explicitly, and optimal profiles of enzymatic activity or fluxes are obtained as solutions of cost minimisation problems under given constraints. Our formulation for quasi-isometric loading is a muscle-tendon analogue of this approach: we explicitly define a functional $\mathcal{E}[a,\theta]$ on the space of trajectories $(a(t),\theta(t))$ and derive its asymptotics in the neighbourhood of a reference posture.

A separate body of related work arises from integral kinematics. In mechanics and engineering, the physical quantity *absement* has been introduced as the time integral of displacement, that is, the first time integral of distance. Absement and other integral kinematic variables are used to describe systems with "memory", in which the accumulated displacement history affects the current dynamics. More fundamentally, in the theory of Lagrangian models with mem-elements it has been demonstrated that an appropriate choice of configuration space may require time-integrated variables. In this context, our deviation absement

$$
\Delta\mathcal{A}_{\ell} = \int_0^T (\ell(t) - \ell_0)\, dt
$$

is a biophysically grounded analogue of absement: we show that this integral coordinate arises as the unique linear term in the asymptotic expansion of a biologically meaningful energetic functional, rather than being introduced ad hoc.
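As a concrete reference point for the quantity just defined, the following minimal sketch computes the deviation absement from a uniformly sampled length trajectory. The sampling rate, reference length, and the synthetic trajectory itself are purely illustrative assumptions, not part of the model.

```python
import numpy as np
from scipy.integrate import trapezoid

# Illustrative (assumed) sampling: a 60 s hold recorded at 100 Hz; the trajectory is synthetic.
fs, T = 100.0, 60.0
t = np.arange(0.0, T, 1.0 / fs)

ell_0 = 0.30                                   # reference muscle-tendon length [m], assumption
# Synthetic length trajectory: slow drift plus small tremor (for illustration only).
ell = ell_0 + 1e-3 * (t / T) + 2e-4 * np.sin(2.0 * np.pi * 8.0 * t)

delta_A = trapezoid(ell - ell_0, t)            # deviation absement, units [m s]
quad = trapezoid((ell - ell_0) ** 2, t)        # quadratic descriptor of the second-order term [m^2 s]

print(f"Delta A_ell = {delta_A:.4e} m*s, integral of squared deviation = {quad:.4e} m^2*s")
```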
In the following sections we present the formal problem formulation, derive the main mathematical result, and discuss its interpretation and practical implications.

# 2 Mathematical model of quasi-isometric posture holding

For the subsequent rigorous analysis, we formulate a minimalist yet sufficiently general model of a muscle-tendon system that maintains a prescribed posture in a quasi-isometric regime. In this section we explicitly specify the kinematic variables, the quasi-static equilibrium equation, and the functional of metabolic cost on which the asymptotic analysis will be based.

# 2.1 Physical system and kinematics

We consider a one-dimensional single-link muscle-tendon system that serves a single joint degree of freedom (for example, knee flexion/extension or ankle plantarflexion). The state of the system at time $t \in [0, T]$ is described by three variables:

- the length of the muscle-tendon unit $\ell(t)$;
- the level of muscle activation $a(t) \in [0, 1]$;
- the joint angle $\theta(t)$.

The geometry of the system imposes a kinematic relationship between the joint angle and the muscle-tendon length,

$$
\ell(t) = \ell(\theta(t)),
$$

where $\ell(\cdot)$ is a smooth function that encodes the direction of muscle pull and the moment arm. In a neighbourhood of a reference (target) posture $\theta_0$ we shall consider only small deviations, so that this relationship can be linearised:

$$
\ell(t) = \ell_0 + r_0 (\theta(t) - \theta_0) + O\big((\theta(t) - \theta_0)^2\big),
$$

where $\ell_0 = \ell(\theta_0)$ is the reference length, and

$$
r_0 = \left.\frac{d\ell}{d\theta}\right|_{\theta_0}
$$

is the effective moment arm at this point. In the asymptotic analysis below we shall be interested precisely in small deviations $\delta\theta(t) = \theta(t) - \theta_0$, $\delta\ell(t) = \ell(t) - \ell_0$, for which higher-order terms of the expansion in $\delta\theta$ can be accounted for through terms of the type $O(\|\ell - \ell_0\|_{L^\infty}^3)$.

# 2.2 Quasi-static equilibrium and the activation-angle relationship

Let $F(\ell, a)$ be a smooth function that describes muscle force as a function of the muscle-tendon length and the activation level. We do not fix a specific parametrisation of this function (for example, a decomposition into passive and active components), and we rely only on its regularity and local derivatives in a neighbourhood of the equilibrium point.

We denote the external joint moment by $M_{\mathrm{ext}}(\theta)$, and the muscle moment arm by $r(\theta)$. Then in the quasi-isometric (quasi-static) regime, where inertial and viscous effects are neglected, the following quasi-static equilibrium condition holds:

$$
F(\ell(\theta), a)\, r(\theta) = M_{\mathrm{ext}}(\theta). \tag{1}
$$

It is convenient to introduce the function

$$
Q(\theta, a) := F\big(\ell(\theta), a\big)\, r(\theta) - M_{\mathrm{ext}}(\theta),
$$

so that the equilibrium condition takes the form $Q(\theta, a) = 0$. Let $(\theta_0, a_0)$ be a fixed equilibrium point, that is,

$$
Q(\theta_0, a_0) = 0.
$$

The key local assumption of the model is formulated as

$$
Q_a(\theta_0, a_0) \neq 0, \tag{2}
$$

that is, a change of activation at fixed angle modifies the resulting joint moment. From a physical point of view this means that in a neighbourhood of the operating point the system is neither in a purely passive state, nor in a saturation regime in which variations of activation no longer affect the moment.

Under conditions (1) and (2), the implicit function theorem guarantees the existence of a smooth function

$$
a_*(\theta)
$$

such that $a_*(\theta_0) = a_0$ and

$$
Q\big(\theta, a_*(\theta)\big) \equiv 0
$$

in a neighbourhood of $\theta_0$. In other words, locally the muscle activation can be expressed uniquely as a function of the joint angle if quasi-static equilibrium is enforced.
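The implicit function $a_*(\theta)$ can be made concrete numerically. The sketch below assumes a simple illustrative parametrisation that the paper itself deliberately does not fix: a linear passive element plus an activation-scaled active force with a Gaussian force-length factor, a constant moment arm, and a constant external moment. All parameter values are assumptions for illustration only; the same toy parametrisation is reused in the later sketches.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative (assumed) parametrisation; the paper does not fix F, r, or M_ext.
k, ell_slack = 4.0e4, 0.28              # linear passive element: stiffness [N/m], slack length [m]
F_max, ell_opt, w = 1.5e3, 0.31, 0.03   # active force scale [N], Gaussian force-length factor
r0, M_ext = 0.05, 60.0                  # constant moment arm [m] and external joint moment [N m]
theta0, ell0 = 0.0, 0.30                # reference posture and reference length
ell_of_theta = lambda th: ell0 + r0 * (th - theta0)   # linearised length-angle map

def F(ell, a):
    """Assumed force model: passive spring plus activation-scaled active force."""
    return k * (ell - ell_slack) + a * F_max * np.exp(-((ell - ell_opt) / w) ** 2)

def Q(theta, a):
    """Quasi-static moment balance, Eq. (1), written as Q(theta, a) = 0."""
    return F(ell_of_theta(theta), a) * r0 - M_ext

def a_star(theta):
    """Activation enforcing equilibrium at a given angle: a numerical implicit function."""
    return brentq(lambda a: Q(theta, a), 0.0, 1.0)

for theta in (-0.05, 0.0, 0.05):        # small angular deviations [rad]
    print(f"theta = {theta:+.2f} rad  ->  a*(theta) = {a_star(theta):.4f}")
```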
For the subsequent analysis it will be convenient to use derivatives of the force function $F(\ell, a)$ not only with respect to length, but also with respect to the joint angle. We adopt the convention that for any smooth function $G(\ell, a)$ the derivatives with respect to the angle are defined by

$$
G_\theta(\theta, a) := \frac{\partial}{\partial\theta} G\big(\ell(\theta), a\big) = G_\ell\big(\ell(\theta), a\big)\, \ell_\theta(\theta),
$$

$$
G_{\theta\theta}(\theta, a) := \frac{\partial^2}{\partial\theta^2} G\big(\ell(\theta), a\big), \qquad G_{\theta a}(\theta, a) := \frac{\partial^2}{\partial\theta\, \partial a} G\big(\ell(\theta), a\big),
$$

where $G_\ell$ denotes the partial derivative with respect to $\ell$. In all formulas below, the derivatives $F_\theta, F_{\theta\theta}, F_{\theta a}$ are understood in this sense and are, unless stated otherwise, always evaluated at the reference point $(\theta_0, a_0)$.

# 2.3 Metabolic cost functional

The metabolic power $P_{\mathrm{met}}(t)$ consumed by the muscle in the quasi-isometric regime is modelled as a linear combination of activation and force:

$$
P_{\mathrm{met}}(t) = \alpha a(t) + \beta F(\ell(t), a(t)), \quad \alpha, \beta > 0,
$$

where $\alpha$ and $\beta$ are constant parameters representing the effective cost of activation and force, respectively. The total metabolic cost over the posture-holding interval $T$ is given by the integral functional

$$
\mathcal{E}[a, \theta] = \int_0^T P_{\mathrm{met}}(t)\, dt = \int_0^T \big(\alpha a(t) + \beta F(\ell(t), a(t))\big)\, dt. \tag{3}
$$

In what follows we are interested in trajectories $(\theta(t), a(t))$ that satisfy the quasi-static equilibrium condition $Q(\theta(t), a(t)) \equiv 0$. Under this condition, in a neighbourhood of the equilibrium point $(\theta_0, a_0)$ the activation can be expressed as a function of the joint angle, $a(t) = a_*(\theta(t))$. For brevity we introduce the notation

$$
\mathcal{E}_{\mathrm{met}}(\ell) := \mathcal{E}[a_*(\theta), \theta], \tag{4}
$$

that is, $\mathcal{E}_{\mathrm{met}}(\ell)$ is the same energetic cost functional rewritten in terms of the length coordinate $\ell(t) = \ell(\theta(t))$. Subsequently we shall analyse small deviations of trajectories $\ell(t)$ from the equilibrium length $\ell_0 = \ell(\theta_0)$ and show that $\mathcal{E}_{\mathrm{met}}(\ell)$ admits an asymptotic expansion in which the linear part depends only on the integral

$$
\Delta\mathcal{A}_\ell = \int_0^T (\ell(t) - \ell_0)\, dt, \tag{5}
$$

while the quadratic term has the form $\int_0^T (\ell(t) - \ell_0)^2\, dt$. The integral $\Delta\mathcal{A}_\ell$ serves as the unique first-order integral variable (the length absement), whereas the full shape of the trajectory $\ell(t)$ enters through the quadratic contribution. The following lemma formalises this property.

Lemma 1 (Absement as the unique first-order linear variable). Let the assumptions of Section 2 hold, in particular let there exist a smooth function $a_*(\theta)$ that satisfies $Q(\theta, a_*(\theta)) \equiv 0$ in a neighbourhood of $\theta_0$, and let the functional $\mathcal{E}_{\mathrm{met}}(\ell)$ be defined by (4). Suppose in addition that the derivative $\ell_\theta(\theta_0) = r_0 \neq 0$, so that in a neighbourhood of $\ell_0 = \ell(\theta_0)$ there exists a smooth inverse function $\Theta(\ell)$.
Then there exists a constant $C_1 \in \mathbb{R}$ such that for any trajectory $\ell$ with sufficiently small deviation $\|\ell - \ell_0\|_{L^\infty(0,T)}$ the following asymptotic expansion holds:

$$
\mathcal{E}_{\mathrm{met}}(\ell) = \mathcal{E}_{\mathrm{met}}(\ell_0) + C_1 \Delta\mathcal{A}_\ell + O\big(\|\ell - \ell_0\|_{L^\infty(0,T)}^2\big),
$$

where the length absement $\Delta\mathcal{A}_\ell$ is defined in (5). Moreover, if $L$ is any linear functional on the space of deviations $\delta\ell(t) = \ell(t) - \ell_0$ that coincides with the first variation of $\mathcal{E}_{\mathrm{met}}$ at the point $\ell_0$ for all admissible small perturbations, then $L(\ell) = K \Delta\mathcal{A}_\ell$ for some constant $K$. In particular, no other independent first-order linear integral descriptor arises.

The proof of Lemma 1 is given in Appendix A. From the viewpoint of the model structure, this means that the absement $\Delta\mathcal{A}_\ell$ (that is, the length absement) is not a phenomenologically introduced index, but a fundamental integral variable that inevitably appears as the unique linear trajectory descriptor in the asymptotics of the energetic functional.
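The content of Lemma 1 can be made tangible numerically. The sketch below evaluates $\mathcal{E}_{\mathrm{met}}$ for two length trajectories that share the same absement but have different shapes, under the illustrative toy parametrisation assumed in the previous sketch (restated here so the block is self-contained); none of the parameter values or cost weights comes from the paper. The two costs differ from the baseline at first order, while their mutual difference is of second order in the deviation amplitude.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

# Illustrative (assumed) model, restated so the block is self-contained; not from the paper.
alpha, beta = 30.0, 0.05                      # assumed cost weights of activation and force
k, ell_slack, F_max, ell_opt, w = 4.0e4, 0.28, 1.5e3, 0.31, 0.03
r0, M_ext, ell0 = 0.05, 60.0, 0.30

def F(ell, a):
    return k * (ell - ell_slack) + a * F_max * np.exp(-((ell - ell_opt) / w) ** 2)

# Activation enforcing the quasi-static balance F(ell, a) * r0 = M_ext (implicit function).
a_star = lambda ell: brentq(lambda a: F(ell, a) * r0 - M_ext, 0.0, 1.0)

def E_met(ell_traj, t):
    """Total metabolic cost of a length trajectory held in quasi-static equilibrium."""
    a = np.array([a_star(l) for l in ell_traj])
    return trapezoid(alpha * a + beta * F(ell_traj, a), t)

T = 60.0
t = np.linspace(0.0, T, 3001)
E0 = E_met(np.full_like(t, ell0), t)          # baseline cost P0 * T of the ideal hold

for eps in (2e-3, 1e-3):                      # deviation amplitudes [m]
    drift = ell0 + eps * t / T                                           # slow linear drift
    wiggle = ell0 + eps / 2 + (eps / 2) * np.sin(2 * np.pi * 5 * t / T)  # same absement, different shape
    E1, E2 = E_met(drift, t), E_met(wiggle, t)
    print(f"eps={eps:.0e}: E1-E0={E1 - E0:+.3e}, E2-E0={E2 - E0:+.3e}, |E1-E2|={abs(E1 - E2):.2e}")
# Both costs differ from E0 at first order (equal absement gives equal first-order terms),
# while |E1 - E2| shrinks roughly fourfold when eps is halved: the shape difference is second order.
```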
# 3 Asymptotic analysis and main result

This section constitutes the central mathematical core of the work. Its goal is to derive, in a rigorous manner, the analytical dependence of energetic cost on postural kinematics via an asymptotic expansion of the key model equations in a neighbourhood of a reference equilibrium point.

# 3.1 Linearisation of the equilibrium condition

We consider the reference equilibrium point $(\theta_0, a_0)$, which satisfies the quasi-static equilibrium condition (1), or, in terms of the function

$$
Q(\theta, a) := F\big(\ell(\theta), a\big)\, r(\theta) - M_{\mathrm{ext}}(\theta),
$$

the condition

$$
Q(\theta_0, a_0) = 0.
$$

Fix a small parameter $\delta > 0$ and consider trajectories $(\theta(t), a(t))$ on $[0, T]$ such that

$$
\|\theta - \theta_0\|_{L^\infty(0,T)} \leq \delta, \quad \|a - a_0\|_{L^\infty(0,T)} \leq \delta.
$$

Since $\ell(\theta)$ is a smooth function, this is equivalent to the local condition $\|\ell - \ell_0\|_{L^\infty(0,T)} \leq C\delta$ for some constant $C > 0$ that depends only on the derivative $\ell_\theta$ in a neighbourhood of $\theta_0$. All estimates below are to be understood in the asymptotic sense as $\delta \to 0$.

We introduce the notation

$$
\delta\theta(t) := \theta(t) - \theta_0, \qquad \delta a(t) := a(t) - a_0.
$$

We expand $Q(\theta, a)$ in a Taylor series to first order in a neighbourhood of the point $(\theta_0, a_0)$:

$$
Q(\theta_0 + \delta\theta, a_0 + \delta a) = Q(\theta_0, a_0) + Q_\theta(\theta_0, a_0)\, \delta\theta + Q_a(\theta_0, a_0)\, \delta a + O\big(|\delta\theta|^2 + |\delta a|^2\big),
$$

where the term $O\big(|\delta\theta|^2 + |\delta a|^2\big)$ is uniform in $t$ and is of order $O(\delta^2)$ as $\delta \to 0$. Since $Q(\theta_0, a_0) = 0$ and we consider trajectories that satisfy the equilibrium condition $Q(\theta(t), a(t)) \equiv 0$, we obtain in first order

$$
Q_\theta(\theta_0, a_0)\, \delta\theta(t) + Q_a(\theta_0, a_0)\, \delta a(t) \approx 0.
$$

By assumption (2) we have $Q_a(\theta_0, a_0) \neq 0$, and therefore from the linear relation it follows that

$$
\delta a(t) = C_\theta\, \delta\theta(t), \quad C_\theta := -\frac{Q_\theta(\theta_0, a_0)}{Q_a(\theta_0, a_0)}. \tag{6}
$$

In order to relate the coefficient $C_\theta$ to derivatives of the force function $F(\ell, a)$ and to the kinematic functions $r(\theta), M_{\mathrm{ext}}(\theta)$, we explicitly calculate the partial derivatives $Q_\theta$ and $Q_a$ at the point $(\theta_0, a_0)$. We have

$$
Q(\theta, a) = F\big(\ell(\theta), a\big)\, r(\theta) - M_{\mathrm{ext}}(\theta),
$$

hence

$$
Q_\theta = F_\theta\, r(\theta) + F(\ell(\theta), a)\, r'(\theta) - M_{\mathrm{ext}}'(\theta), \quad Q_a = F_a\, r(\theta),
$$

where

$$
F_\theta(\theta, a) := \frac{\partial}{\partial\theta} F\big(\ell(\theta), a\big), \qquad F_a(\theta, a) := \frac{\partial}{\partial a} F\big(\ell(\theta), a\big).
$$

Evaluating these derivatives at the point $(\theta_0, a_0)$ and introducing the notation

$$
F_0 := F(\ell_0, a_0), \quad F_\theta := F_\theta(\theta_0, a_0), \quad F_a := F_a(\theta_0, a_0),
$$

$$
r_0 := r(\theta_0), \quad r_0' := r'(\theta_0), \quad M_0' := M_{\mathrm{ext}}'(\theta_0),
$$

we obtain

$$
Q_\theta(\theta_0, a_0) = F_\theta r_0 + F_0 r_0' - M_0', \quad Q_a(\theta_0, a_0) = F_a r_0.
$$

Substituting this into the explicit expression for $C_\theta$ from (6), we find

$$
C_\theta = \frac{M_0' - F_0 r_0' - r_0 F_\theta}{r_0 F_a}.
$$

Thus, the dependence of muscle activation on the joint angle in a neighbourhood of the equilibrium point has the form

$$
a(t) = a_0 + C_\theta\left(\theta(t) - \theta_0\right) + O\left(|\theta(t) - \theta_0|^2\right),
$$

and for the subsequent first-order analysis it is sufficient to retain the linear approximation (6).
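As a sanity check on (6), the following sketch compares a central finite-difference estimate of the slope of the implicit function $a_*(\theta)$ with the closed-form $C_\theta$, for the same illustrative toy model used in the earlier sketches; in that model $r(\theta)$ and $M_{\mathrm{ext}}$ are constant, so $r_0' = M_0' = 0$. The parametrisation is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

# Same assumed illustrative model as above (not from the paper); constant r and M_ext.
k, ell_slack, F_max, ell_opt, w = 4.0e4, 0.28, 1.5e3, 0.31, 0.03
r0, M_ext, theta0, ell0 = 0.05, 60.0, 0.0, 0.30
ell_of_theta = lambda th: ell0 + r0 * (th - theta0)

def F(ell, a):
    return k * (ell - ell_slack) + a * F_max * np.exp(-((ell - ell_opt) / w) ** 2)

a_star = lambda th: brentq(lambda a: F(ell_of_theta(th), a) * r0 - M_ext, 0.0, 1.0)
a0 = a_star(theta0)

# Local derivatives entering Eq. (6), estimated by central differences at (theta0, a0).
h = 1e-6
F_theta = (F(ell_of_theta(theta0 + h), a0) - F(ell_of_theta(theta0 - h), a0)) / (2 * h)
F_a = (F(ell0, a0 + h) - F(ell0, a0 - h)) / (2 * h)
C_theta = (0.0 - 0.0 - r0 * F_theta) / (r0 * F_a)   # (M0' - F0*r0' - r0*F_theta)/(r0*F_a), M0'=r0'=0

# Slope of the implicit function a_*(theta), estimated directly by a central difference.
dth = 1e-4
C_theta_fd = (a_star(theta0 + dth) - a_star(theta0 - dth)) / (2 * dth)

print(f"C_theta from Eq. (6):                 {C_theta:.5f}")
print(f"finite-difference slope of a_*(theta): {C_theta_fd:.5f}")
```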
$$

Integrating over time, we arrive at

$$
\mathcal {E} [ a, \theta ] = P _ {0} T + \int_ {0} ^ {T} \left(\left(\alpha + \beta F _ {a}\right) \delta a (t) + \beta F _ {\theta} \delta \theta (t)\right) d t + O (\delta^ {2}),
$$

where $O(\delta^2)$ denotes the contribution of second and higher orders in the small deviations.

We now use the linear relationship (6) between $\delta a(t)$ and $\delta \theta (t)$:

$$
\delta a (t) = C _ {\theta} \delta \theta (t).
$$

After substitution we obtain

$$
\mathcal {E} [ a, \theta ] = P _ {0} T + \int_ {0} ^ {T} \left(\left(\alpha + \beta F _ {a}\right) C _ {\theta} + \beta F _ {\theta}\right) \delta \theta (t) d t + O (\delta^ {2}).
$$

Introducing the notation

$$
C _ {1} ^ {(\theta)} := \left(\alpha + \beta F _ {a}\right) C _ {\theta} + \beta F _ {\theta},
$$

we can write the linear term in the energy expansion in the form

$$
\mathcal {E} [ a, \theta ] = P _ {0} T + C _ {1} ^ {(\theta)} \int_ {0} ^ {T} \delta \theta (t) d t + O (\delta^ {2}).
$$

In the subsequent subsections and in the proof of Theorem 1 we show that this expression can be rewritten as

$$
C _ {1} ^ {(\theta)} \int_ {0} ^ {T} \delta \theta (t) d t = C _ {1} \int_ {0} ^ {T} (\ell (t) - \ell_ {0}) d t,
$$

where the coefficient $C_1$ is expressed in terms of the same local derivatives $F_{\theta}, F_{a}, r_{0}, r_{0}^{\prime}, M_{0}^{\prime}$ and the parameters $\alpha, \beta$, and the integral

$$
\Delta \mathcal {A} _ {\ell} = \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) d t
$$

is the absement of the length deviation. Since $\ell(t) - \ell_{0} = r_{0}\,\delta\theta(t) + O(\delta^{2})$, this amounts, at leading order, to $C_{1} = C_{1}^{(\theta)} / r_{0}$. Thus, already at the level of the linear approximation, the energetic cost functional reduces to the baseline term $P_0T$ and a linear contribution proportional to the absement, whereas dependence on the full shape of the trajectory $\theta(t)$ (or $\ell(t)$) appears only in the quadratic term, which is analysed in the following subsection.

# 3.3 Quadratic term in the energy expansion

To obtain an explicit form of the quadratic term in the asymptotic expansion, we consider the function $F(\theta, a) \coloneqq F(\ell(\theta), a)$ in a neighbourhood of the equilibrium point $(\theta_0, a_0)$ and perform its Taylor expansion up to second order in the small deviations $\delta\theta(t) = \theta(t) - \theta_0$ and $\delta a(t) = a(t) - a_0$. Using the linear relation $\delta a(t) = C_\theta \delta\theta(t)$ obtained in the previous subsection, and carefully collecting all second-order terms, we obtain

$$
F (\theta (t), a (t)) \approx F _ {0} + \left(F _ {\theta} + F _ {a} C _ {\theta}\right) \delta \theta (t) + \frac {1}{2} \Big (F _ {\theta \theta} + 2 F _ {\theta a} C _ {\theta} + F _ {a a} C _ {\theta} ^ {2} \Big) \delta \theta (t) ^ {2},
$$

where all derivatives of $F$ are evaluated at the point $(\theta_0, a_0)$. Substituting this expansion into the metabolic power

$$
P _ {\mathrm {m e t}} (t) = \alpha a (t) + \beta F (\theta (t), a (t)),
$$

integrating over time, and passing from $\theta$ to $\ell$ using $\ell(t) - \ell_0 \approx r_0(\theta(t) - \theta_0)$, we arrive at the representation

$$
\mathcal {E} _ {\mathrm {m e t}} (\ell) = P _ {0} T + C _ {1} \Delta \mathcal {A} _ {\ell} + C _ {2} \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) ^ {2} d t + O (\| \ell - \ell_ {0} \| _ {L ^ {\infty}} ^ {3}),
$$

where

$$
C _ {2} = \frac {\beta}{2 r _ {0} ^ {2}} \Big (F _ {\theta \theta} + 2 F _ {\theta a} C _ {\theta} + F _ {a a} C _ {\theta} ^ {2} \Big),
$$

and $P_0$ and $C_1$ are defined in the previous subsections.
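To make the structure of this expansion concrete, the following is a minimal numerical sketch (not part of the original derivation): it evaluates the truncated expansion on a synthetic, uniformly sampled trajectory. The reference length `ell0`, the trajectory itself, and the coefficient values `P0`, `C1`, `C2` are illustrative assumptions rather than quantities supplied by the model.

```python
import numpy as np

# Minimal sketch: evaluate the truncated expansion
#   E_met ≈ P0*T + C1*ΔA_ℓ + C2*∫(ℓ(t) - ℓ0)^2 dt
# on a uniformly sampled trajectory. All numerical values are illustrative
# placeholders, not parameters taken from the paper.

def trapezoid(y, dt):
    """Trapezoidal approximation of ∫ y dt on a uniform grid with step dt."""
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dt)

T = 30.0                                     # holding duration [s]
t = np.linspace(0.0, T, 3001)
dt = t[1] - t[0]

ell0 = 0.25                                  # assumed reference MTU length [m]
# synthetic trajectory: slow drift of the mean posture plus small tremor
ell = ell0 + 1.0e-3 * (t / T) + 5.0e-4 * np.sin(2.0 * np.pi * 8.0 * t)

dev = ell - ell0
absement = trapezoid(dev, dt)                # ΔA_ℓ            [m·s]
quad = trapezoid(dev**2, dt)                 # ∫ (ℓ - ℓ0)^2 dt  [m²·s]

P0, C1, C2 = 25.0, 40.0, 1.5e4               # illustrative coefficients
E_met = P0 * T + C1 * absement + C2 * quad   # truncated expansion [J]

print(f"ΔA_ℓ = {absement:.3e} m·s, quadratic term = {quad:.3e} m²·s")
print(f"E_met (truncated expansion) ≈ {E_met:.2f} J")
```

In this synthetic example the zero-mean tremor component contributes essentially nothing to $\Delta \mathcal{A}_{\ell}$, while it does contribute to the quadratic term; this is exactly the separation between drift and variability exploited in Section 4.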
The full technical derivation of these coefficients is provided in Appendix B.

# 3.4 Main theorem

The result obtained above can be formulated as a formal theorem, which constitutes the central statement of this work.

Theorem 1. Under the assumptions formulated above, the energetic cost functional $\mathcal{E}_{\mathrm{met}}(\ell)$ defined in (4) admits, for small deviations from the equilibrium point, the asymptotic representation

$$
\mathcal {E} _ {\mathrm {m e t}} (\ell) = P _ {0} T + C _ {1} \Delta \mathcal {A} _ {\ell} + C _ {2} \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) ^ {2} d t + O (\| \ell - \ell_ {0} \| _ {L ^ {\infty}} ^ {3}),
$$

where $\Delta \mathcal{A}_{\ell} = \int_0^T (\ell (t) - \ell_0)dt$ is the absement of the length deviation, and the coefficients $P_0,C_1,C_2$ are determined by the local properties of the force $F(\ell ,a)$ and by the system parameters at the equilibrium point $(\theta_0,a_0)$.

The proof of Theorem 1 is given in Appendix B.

Owing to the linear kinematic relation $\ell(t) - \ell_0 \approx r_0 (\theta(t) - \theta_0)$, the result is directly recast in terms of the angular absement as $\Delta \mathcal{A}_\ell \approx r_0 \Delta \mathcal{A}_\theta$, which unifies the notation.

The physical and practical implications of this theorem are discussed in the next section.

# 4 Interpretation and practical implications

The mathematical result obtained above becomes a tool for in-depth physical analysis and for addressing practical problems in biomechanics. This section unfolds the key consequences of the derived theorem.

# 4.1 Decomposition of energetic cost: drift and tremor

The asymptotic expansion is not an arbitrary choice but a mathematically enforced structure that decomposes the energetic cost into three physically interpretable components:

- Baseline holding cost $(P_0T)$: This is the zeroth-order term, the fundamental "price of time under tension". It corresponds to the energetic expenditure associated with ideal holding of the posture at the reference point, in the absence of any deviations.
- Drift cost $(C_1\Delta \mathcal{A}_\ell)$: This is the first-order term, a linear correction that captures the energetic consequences of a systematic shift of the mean posture. The absement of the deviation $\Delta \mathcal{A}_\ell$ provides a quantitative measure of this drift.
- Cost of variability (quadratic contribution): This is the second-order term, given by the principal quadratic contribution $C_2 \int_0^T (\ell(t) - \ell_0)^2 dt$ together with higher-order terms $O\left(\| \ell - \ell_0 \|_{L^\infty}^3\right)$ in the full asymptotic expansion of the energy. This contribution represents the metabolic "cost of tremor" or variability around the mean posture.

# 4.2 Parameter identification scheme from experimental data

The theoretical result becomes a practical tool for the analysis of experimental data. The proposed model allows direct identification of the parameters $P_0, C_1, C_2$ from measurements.

1. Kinematics $(\ell(t)$ or $\theta(t))$ are recorded using standard tools such as B-mode ultrasound for direct measurement of muscle fascicle length, validated in terms of reproducibility and accuracy [11], optionally with automated deep-learning-based segmentation of muscle contours in ultrasound images [18]. Alternatively or additionally, trajectories of lengths $\ell(t)$ can be reconstructed from musculoskeletal models in OpenSim [3] based on marker kinematics.
In parallel, the total metabolic cost $\mathcal{E}_{\mathrm{met}}$ is obtained using a reference indirect calorimetry method [17] (measurement of $\mathrm{O}_2$ consumption and $\mathrm{CO}_2$ production), and the isometric joint moment is measured with a dynamometer to control the external load.

2. Three integral predictors are computed from the kinematic data: the duration $T$, the absement of the deviation $\Delta \mathcal{A}_{\ell} = \int_{0}^{T} (\ell(t) - \ell_{0}) dt$, and the integral of the squared deviation $\int_{0}^{T} (\ell(t) - \ell_{0})^{2} dt$.

3. Multiple linear regression is applied, with the measured values of $\mathcal{E}_{\mathrm{met}}$ as the dependent variable and the computed predictors as independent variables. The regression coefficients provide estimates of $P_0, C_1, C_2$ (a minimal computational sketch of this regression step is given below).

This procedure enables a transition from the abstract model to quantitative characterization of a specific biomechanical system.

# 4.3 Implications for variational problems and optimal control

The derived expansion has direct implications for optimal control problems, in particular for determining posture-holding strategies that minimize energetic cost. To first order, the optimal strategy reduces to minimizing the absolute value of the angular absement $|\Delta \mathcal{A}_{\theta}|$. This means that the time-averaged angle $\bar{\theta} = \frac{1}{T}\int_{0}^{T}\theta (t)dt$ should be as close as possible to the reference value $\theta_0$. This integral criterion is substantially more informative than the naive strategy of minimizing instantaneous deviations, because it correctly accounts for the duration of each displacement.

At second order, once the mean drift has been minimized, optimality requires minimization of the quadratic term, which corresponds to minimizing the variance of the posture, that is, reducing the amplitude of tremor.

# 5 Discussion

This section is devoted to a critical examination of the obtained results, a discussion of the key assumptions and limitations of the model, and an outline of promising directions for future research.

# 5.1 Positioning of the result in the context of existing studies

Most existing studies that relate metabolic cost to posture holding focus either on empirical correlations between the cost and stabilometric parameters, or on numerical optimal control models. In [6, 5, 16, 15, 19], the metabolic cost of quiet standing or near-static regimes is described in terms of mean and root-mean-square sway characteristics (amplitude, velocity, length of the center-of-pressure trajectory), as well as in terms of "postural complexity" encoded in entropy-based descriptors of the trajectories. These approaches provide an important empirical foundation, but they operate with multidimensional sets of indices and do not supply a single analytically derived scalar descriptor that specifically represents the "accumulated drift" of the posture.

On the other hand, classical works in theoretical biophysics [4, 10] develop an approach in which metabolic networks are described by explicitly specified cost functionals, and actual activity profiles are interpreted as outcomes of optimization (minimization of the total "price" of enzymes or power at a prescribed flux). Subsequent studies in this direction [2] apply optimal control methods to the temporal structure of enzyme activation.
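As a complement to the identification scheme of Section 4.2, the following minimal sketch illustrates step 3 of that scheme. It is not the authors' analysis pipeline: the trajectories, the reference length `ell0`, and the "true" coefficient values are synthetic assumptions, and the regression is plain ordinary least squares on the three integral predictors.

```python
import numpy as np

# Minimal sketch of step 3 of the identification scheme in Section 4.2
# (illustrative only). Per-trial predictors [T, ΔA_ℓ, ∫(ℓ-ℓ0)² dt] are
# regressed against the "measured" metabolic cost; the least-squares
# coefficients estimate P0, C1, C2. All data below are synthetic.

rng = np.random.default_rng(0)

def predictors(t, ell, ell0):
    """Return (T, ΔA_ℓ, ∫(ℓ-ℓ0)^2 dt) for one trial (trapezoidal rule)."""
    dev = ell - ell0
    dt = np.diff(t)
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * dt))
    return t[-1] - t[0], trap(dev), trap(dev ** 2)

P0_true, C1_true, C2_true = 25.0, 40.0, 1.5e4   # assumed values for the demo
ell0 = 0.25                                     # assumed reference length [m]

X_rows, E_meas = [], []
for _ in range(20):                             # 20 synthetic trials
    T = rng.uniform(20.0, 60.0)
    t = np.linspace(0.0, T, 2001)
    drift = rng.normal(0.0, 5.0e-4)             # per-trial shift of mean posture
    amp = rng.uniform(2.0e-4, 1.0e-3)           # per-trial tremor amplitude
    ell = ell0 + drift + amp * np.sin(2.0 * np.pi * rng.uniform(0.2, 1.0) * t)
    Ti, dA, Q = predictors(t, ell, ell0)
    X_rows.append([Ti, dA, Q])
    E_meas.append(P0_true * Ti + C1_true * dA + C2_true * Q)

X = np.asarray(X_rows)                          # design matrix (no intercept)
y = np.asarray(E_meas)
(P0_hat, C1_hat, C2_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(P0_hat, C1_hat, C2_hat)                   # ≈ 25.0, 40.0, 1.5e4 (noise-free data)
```

With real measurements, $\mathcal{E}_{\mathrm{met}}$ would carry calorimetric noise, and confidence intervals on the fitted $C_1$ and $C_2$ would indicate whether the linear-plus-quadratic structure of Theorem 1 is adequate over the recorded range of deviations.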
Our result can be viewed as a muscle-tendon analogue of this optimization paradigm: instead of merely searching for empirical predictors of metabolic cost, we start from a variational formulation and derive an analytical functional $\mathcal{E}_{\mathrm{met}}(\ell)$ on the space of trajectories.

Against this background, the introduced absement

$$
\Delta \mathcal {A} _ {\ell} = \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) d t
$$

does not appear as an additional "index", but as a unique linear coordinate of first order in the asymptotic expansion of the energetic cost. In other words, if one assumes only local smoothness of the functional $\mathcal{E}[a,\theta]$ and a quasi-isometric regime, then any model of this type must contain exactly the integral of length deviation as the leading linear contribution. This fundamentally distinguishes absement from commonly used scalar characteristics such as mean amplitude or time under tension, which do not possess an analogous strict "universal" property.

Its relation to the notion of absement in integral kinematics [14, 13, 12, 9, 8] can be summarized as follows: the quantity absement was originally introduced to describe the time integral of displacement in hydraulic musical instruments, where the acoustic output depends not only on the instantaneous state but also on the duration of the deviation [14]. Further development of integral kinematics and integral kinesiology [13, 12], as well as the application of acoustic absement in phonetics [9, 8], demonstrate that such integral variables naturally arise as descriptors of accumulated influence in systems with "memory". On the other hand, in variational models of electrical circuits with mem-elements [7, 1], the configuration space is deliberately extended to time-integrated variables in order to obtain a correct Lagrangian formulation. In this context, our absement

$$
\Delta \mathcal {A} _ {\ell} = \int_ {0} ^ {T} (\ell (t) - \ell_ {0}) d t
$$

is a biophysically meaningful analogue of such integral coordinates: we show that an analogous integration of length deviation arises not ad hoc, but is forced by the geometry of the energetic functional. Thus, absement appears as a biophysically grounded realization of the same integral geometry, now with a strict connection to metabolic cost.

When comparing the proposed expansion

$$
\mathcal {E} _ {\mathrm {m e t}} (\ell) = P _ {0} T + C _ {1} \Delta \mathcal {A} _ {\ell} + C _ {2} \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) ^ {2} d t + O (\| \ell - \ell_ {0} \| _ {L ^ {\infty}} ^ {3})
$$

with more traditional models, several principal advantages can be identified:

- Structural separation of components. The baseline term $P_0T$ separates the unavoidable cost of maintaining tone from the additional cost due to drift and variability, whereas the integral drift $\Delta \mathcal{A}_{\ell}$ and the quadratic tremor term possess clearly different scaling behaviour.
- Minimal sufficient coordinate. In a neighbourhood of equilibrium, absement is the only linear coordinate that enters the expansion; therefore, at first order all admissible models reduce to it. In this sense, it is a "sufficient statistic" for describing the metabolic effect of slow drift.
- Natural compatibility with optimization principles. The functional $\mathcal{E}_{\mathrm{met}}$ has the standard variational structure familiar from theoretical biophysics [4, 10, 2] and can be used directly in optimal control problems for postural equilibrium (minimization of cost under constraints on the amplitude of deviations, and so on).
- Fundamental interpretation.
From a mathematical point of view, absement characterizes the "mass" of the trajectory in the space of lengths, that is, how much time the system spends in a deviated state, weighted by the magnitude of the deviation. This provides a more transparent understanding of why even a small but systematic drift of posture can produce a substantial contribution to the total metabolic cost.

In this way, the proposed model fits naturally within the optimization-based tradition of theoretical biophysics, while at the same time introducing an integral variable (absement) as a strictly justified, rather than purely phenomenological, descriptor of postural energetic cost.

# 5.2 Model limitations

It is important to clearly delineate the limits of applicability of the proposed model, which follow directly from the assumptions made:

- Quasi-static approximation. The model neglects velocity-dependent and viscous effects, which makes it applicable only to slow, quasi-isometric movements.
- Single-degree-of-freedom formulation. The analysis was carried out for a system with a single degree of freedom, which is an adequate approximation for isolated joint tasks, but not for complex multi-joint postures such as the front plank, where coordination across multiple joints is essential.
- Small deviations. The model is local, as it relies on linearization in a neighbourhood of the reference point. Its accuracy decreases as the amplitude of motion becomes large.

# 5.3 Directions for future research

Based on these limitations, several promising directions for further development of the theory can be identified:

- Extension to the multidimensional case. Generalization of the model to multi-joint systems, which will require the introduction of a notion of vector-valued absement.
- Inclusion of dynamic terms. Development of an extended model that incorporates velocity-dependent terms, enabling the analysis of non-isometric contraction regimes.
- Experimental validation. Targeted experimental studies to test the proposed identification scheme on real biomechanical data.

# 6 Conclusions

In this work we have proposed a new, integral paradigm for describing metabolic costs under quasi-isometric loading, in which posture maintenance is treated not as a static state but as a full trajectory of the muscle-tendon system in time. The central result is a rigorous asymptotic expansion of the metabolic cost functional $\mathcal{E}_{\mathrm{met}}(\ell)$ in a neighbourhood of a reference posture (Theorem 1), which isolates three qualitatively distinct contributions: the baseline cost of maintenance, the cost of drift, and the cost of tremor.

We have shown that the absement of the deviation

$$
\Delta \mathcal {A} _ {\ell} = \int_ {0} ^ {T} (\ell (t) - \ell_ {0}) d t
$$

(i.e. the quantity called absement in the terminology of integral kinematics) arises in this expansion not as a phenomenologically introduced index, but as the unique first-order sufficient statistic for the energetics of postural drift: any linear functional that can appear as the first (linear) variational term in the asymptotic expansion of $\mathcal{E}_{\mathrm{met}}(\ell)$ is proportional to $\Delta \mathcal{A}_{\ell}$. Thus, absement cleanly separates systematic drift from variability (tremor) in such a way that both components admit a clear geometric and energetic interpretation.
The resulting expansion

$$
\mathcal {E} _ {\mathrm {m e t}} (\ell) = P _ {0} T + C _ {1} \Delta \mathcal {A} _ {\ell} + C _ {2} \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) ^ {2} d t + O \left(\| \ell - \ell_ {0} \| _ {L ^ {\infty}} ^ {3}\right)
$$

provides a physically transparent structure for analysing postural control. The term $P_0T$ corresponds to the "pure" cost of maintaining the muscle in the reference state, the linear term $C_1\Delta \mathcal{A}_\ell$ reflects the energetic price of systematic postural drift (the shift of the mean length away from $\ell_0$), while the quadratic term with coefficient $C_2$ is interpreted as the cost of variability associated with tremor and micro-oscillations of length. This decomposition is consistent with experimental observations on the balance between postural stabilization and admissible variability, but is formulated here as a rigorous mathematical statement.

By identifying the absement in our model with the same integral variable used in the theory of integral kinematics and mem-elements, we place the proposed approach within the broader context of systems with memory. In this sense, the same integral variable that describes the accumulated history of action in electrical and mechanical memory elements acquires here a clear biophysical interpretation as a measure of the integrated deviation of muscle-tendon length. This opens the way to a unified description of heterogeneous memory systems within a common variational framework.

Practically, the proposed theory yields a clear scheme for identifying the parameters $P_0, C_1, C_2$ from experimental data (ultrasound measurements of muscle length, dynamometric recordings of joint moment, indirect estimates of metabolic cost). This provides a foundation for more accurate modelling, assessment, and optimization of human motor strategies in sports science, clinical biomechanics, and physiological rehabilitation.

In summary, the main contributions of this work can be formulated as follows.

- Universal structure of the metabolic cost functional. For a broad class of quasi-static models of the muscle-tendon system, we show that in a neighbourhood of a reference posture the cumulative metabolic cost, written as the functional $\mathcal{E}_{\mathrm{met}}(\ell)$, admits the asymptotic expansion

$$
\mathcal {E} _ {\mathrm {m e t}} (\ell) = P _ {0} T + C _ {1} \Delta \mathcal {A} _ {\ell} + C _ {2} \int_ {0} ^ {T} \left(\ell (t) - \ell_ {0}\right) ^ {2} d t + O \big (\| \ell - \ell_ {0} \| _ {L ^ {\infty}} ^ {3} \big),
$$

where $\Delta \mathcal{A}_{\ell} = \int_{0}^{T} (\ell(t) - \ell_{0}) dt$ is the unique linear integral descriptor of the kinematics, and $P_{0}, C_{1}, C_{2}$ are determined by the local properties of the muscle-tendon system in a neighbourhood of the equilibrium point.

- Absement as a first-order sufficient statistic. We prove that the absement of the deviation $\Delta \mathcal{A}_{\ell}$ (i.e. the length absement) arises not as a phenomenologically introduced "convenient index", but as a first-order sufficient statistic for the energetics of postural drift: all other linear functionals of the trajectory $\ell(t)$ at first order reduce to $\Delta \mathcal{A}_{\ell}$, whereas dependence on the full shape of the trajectory appears only in the quadratic term.
- Bridge to integral kinematics and theoretical biophysics.
The results obtained are naturally consistent with the formalism of integral kinematics and the theory of mem-elements, in which the same integral variable (absement) plays a key role in the description of systems with memory. By embedding this object into a variational framework of theoretical biophysics (in the spirit of the approaches of Heinrich-Schuster and Klipp-Heinrich), we provide a conceptual link between the geometry of the metabolic functional, integral kinematic variables, and experimentally observed characteristics of postural control.
arxiv_physics
2025-12-11T00:00:00Z
https://arxiv.org/pdf/2512.13720
{"title": "Absement: Quantitative Assessment of Metabolic Cost during Quasi-Isometric Muscle Loading", "raw_content": "# Absement: Quantitative Assessment of Metabolic Cost during Quasi-Isometric Muscle Loading\n\nSerhii V Marchenko*\n\nDepartment of Physiology, Pathophysiology, Biophysics and Informatics, Odesa National Medical University, Odesa, Ukraine.\n\nDecember 17, 2025\n\n# Abstract\n\nAccurate quantitative assessment of metabolic cost during static posture holding is a strategically important problem in biomechanics and physiology. Traditional metrics such as \"time under tension\" are fundamentally insufficient, because they are scalar quantities that ignore the temporal history of deviations, that is, the microdynamics of posture, which has nontrivial energetic consequences. In this work, we propose a theoretically grounded methodology to address this problem by introducing the concept of the deviation absement $(\\Delta A_{\\ell})$ , defined as the time integral of the deviation of the muscle-tendon unit length from a reference value.\n\nWe rigorously prove that, for a broad class of quasi-static models, absevement appears as the leading first-order state variable. For small deviations in a neighbourhood of a reference posture, the total metabolic cost $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ admits a universal asymptotic expansion of the form\n\n$$\n\\mathcal {E} _ {\\mathrm {m e t}} (\\ell) = P _ {0} T + C _ {1} \\Delta \\mathcal {A} _ {\\ell} + C _ {2} \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) ^ {2} d t + O (\\| \\ell - \\ell_ {0} \\| _ {L ^ {\\infty}} ^ {3}),\n$$\n\nwhere $T$ is the duration of loading, and $P_0, C_1, C_2$ are constants determined by local properties of the system.\n\nThus, the deviation abatement $(\\Delta A_{\\ell})$ is the unique first-order sufficient statistic that allows one to quantify and separate the energetic contribution of systematic drift of the mean posture from the contribution of micro-oscillations (tremor), which is described by the quadratic term. This result has direct consequences for parameter identification: the proposed formalism makes it possible to recover physically meaningful coefficients $(P_0,C_1,C_2)$ by means of linear regression of experimental data obtained from standard kinematic measurements and indirect calorimetry.\n\n# 1 Introduction\n\nModelling the energetics of isometric muscle contractions is one of the fundamental problems of biomechanics. Classical approaches that reduce the description to the scalar predictor \"time under\n\ntension” are intrinsically insufficient. They treat posture holding as a static act, ignoring its dynamic nature and the temporal history of deviations, that is, the continuous micro-deviations and postural tremor that inevitably accompany any real posture holding and have a substantial impact on the total energetic cost.\n\nThe central problem addressed in this work is the absence of a formal theoretical framework that would link the microdynamics of posture holding to the integral metabolic cost. We demonstrate that such a framework can be constructed by introducing the concept of the deviation abatement $(\\Delta A_{\\ell})$ . 
This quantity is not merely a new empirical index, but a fundamental parameter that arises unavoidably from the asymptotic analysis of the energetic functional as the unique leading first-order variable.\n\nThe main contributions of this work can be summarised as follows:\n\n- Theoretical justification: We rigorously prove that, for a broad class of quasi-static models, the deviation abatement is the unique first-order sufficient statistic in the asymptotic expansion of the energetic cost functional. \n- Structural decomposition: The proposed formalism allows one to clearly decompose the energetic cost into three physically interpretable components: the baseline cost of holding an ideal posture, the cost of systematic drift of the mean posture, and the cost of tremor or variability. \n- Practical identification: The model provides a direct methodology for identifying physically meaningful parameters $(P_0, C_1, C_2)$ from standard experimentally measured quantities, thereby establishing a strong link between theoretical coefficients and empirical data.\n\nAt the level of existing models, the metabolic cost of posture holding is usually described either through the overall rate of oxygen consumption or by empirical regression relationships in which the predictors are characteristics of centre-of-pressure (COP) fluctuations, sway amplitude and velocity, or total trajectory length [6, 5, 16, 15, 19]. In these studies, metabolic cost is treated as a scalar output quantity associated with an embedded set of kinematic and stabilometric indices, but a minimal sufficient descriptor of postural drift, derived directly from an energetic functional, is not formulated.\n\nFrom a methodological standpoint, our approach is closer to classical works in theoretical biophysics, where large-scale metabolic networks are described by variational principles and optimisation problems [4, 10, 2]. In such models, energetic functionals are written explicitly, and optimal profiles of enzymatic activity or fluxes are obtained as solutions of cost minimisation problems under given constraints. Our formulation for quasi-isometric loading is a muscle-tendon analogue of this approach: we explicitly define a functional $\\mathcal{E}[a,\\theta]$ on the space of trajectories $(a(t),\\theta (t))$ and derive its asymptotics in the neighbourhood of a reference posture.\n\nA separate body of related work arises from integral kinematics. In mechanics and engineering, the physical quantity $absetment$ has been introduced as the time integral of displacement, that is, the first time integral of distance [14, 13, 12, 9, 8]. Absetment and other integral kinematic variables are used to describe systems with \"memory\", in which the accumulated deviated state affects the current dynamics. More fundamentally, in the theory of Lagrangian models with memelements it has been demonstrated that an appropriate choice of configuration space may require time-integrated variables [7, 1]. 
In this context, our deviation absetment\n\n$$\n\\Delta \\mathcal {A} _ {\\ell} = \\int_ {0} ^ {T} (\\ell (t) - \\ell_ {0}) d t\n$$\n\nis a biophysically grounded analogue of absement: we show that this integral coordinate arises as the unique linear term in the asymptotic expansion of a biologically meaningful energetic functional, rather than being introduced ad hoc.\n\nIn the following sections we present the formal problem formulation, derive the main mathematical result, and discuss its interpretation and practical implications.\n\n# 2 Mathematical model of quasi-isometric posture holding\n\nFor the subsequent rigorous analysis, we formulate a minimalist yet sufficiently general model of a muscle-tendon system that maintains a prescribed posture in a quasi-isometric regime. In this section we explicitly specify the kinematic variables, the quasi-static equilibrium equation, and the functional of metabolic cost on which the asymptotic analysis will be based.\n\n# 2.1 Physical system and kinematics\n\nWe consider a one-dimensional single-link muscle-tendon system that serves a single joint degree of freedom (for example, knee flexion/extension or ankle plantarflexion). The state of the system at time $t \\in [0, T]$ is described by three variables:\n\n- the length of the muscle-tendon unit $\\ell(t)$ ; \nthe level of muscle activation $a(t)\\in [0,1]$ \n- the joint angle $\\theta(t)$ .\n\nThe geometry of the system imposes a kinematic relationship between the joint angle and the muscle-tendon length,\n\n$$\n\\ell (t) = \\ell (\\theta (t)),\n$$\n\nwhere $\\ell(\\cdot)$ is a smooth function that encodes the direction of muscle pull and the moment arm. In a neighbourhood of a reference (target) posture $\\theta_0$ we shall consider only small deviations, so that this relationship can be linearised:\n\n$$\n\\ell (t) = \\ell_ {0} + r _ {0} (\\theta (t) - \\theta_ {0}) + O ((\\theta (t) - \\theta_ {0}) ^ {2}),\n$$\n\nwhere $\\ell_0 = \\ell (\\theta_0)$ is the reference length, and\n\n$$\nr_{0} = \\left.\\frac{d\\ell}{d\\theta}\\right|_{\\theta_{0}}\n$$\n\nis the effective moment arm at this point. In the asymptotic analysis below we shall be interested precisely in small deviations $\\delta \\theta (t) = \\theta (t) - \\theta_0$ , $\\delta \\ell (t) = \\ell (t) - \\ell_0$ , for which higher-order terms of the expansion in $\\delta \\theta$ can be accounted for through terms of the type $O(\\| \\ell -\\ell_0\\|_{L^\\infty}^3)$ .\n\n# 2.2 Quasi-static equilibrium and the activation-angle relationship\n\nLet $F(\\ell, a)$ be a smooth function that describes muscle force as a function of the muscle-tendon length and the activation level. We do not fix a specific parametrisation of this function (for\n\nexample, a decomposition into passive and active components), and we rely only on its regularity and local derivatives in a neighbourhood of the equilibrium point.\n\nWe denote the external joint moment by $M_{\\mathrm{ext}}(\\theta)$ , and the muscle moment arm by $r(\\theta)$ . Then in the quasi-isometric (quasi-static) regime, where inertial and viscous effects are neglected, the following quasi-static equilibrium condition holds:\n\n$$\nF (\\ell (\\theta), a) r (\\theta) = M _ {\\mathrm {e x t}} (\\theta). \\tag {1}\n$$\n\nIt is convenient to introduce the function\n\n$$\nQ (\\theta , a) := F \\big (\\ell (\\theta), a \\big) r (\\theta) - M _ {\\mathrm {e x t}} (\\theta),\n$$\n\nso that the equilibrium condition takes the form $Q(\\theta, a) = 0$ . 
Let $(\\theta_0, a_0)$ be a fixed equilibrium point, that is,\n\n$$\nQ (\\theta_ {0}, a _ {0}) = 0.\n$$\n\nThe key local assumption of the model is formulated as\n\n$$\nQ _ {a} \\left(\\theta_ {0}, a _ {0}\\right) \\neq 0, \\tag {2}\n$$\n\nthat is, a change of activation at fixed angle modifies the resulting joint moment. From a physical point of view this means that in a neighbourhood of the operating point the system is neither in a purely passive state, nor in a saturation regime in which variations of activation no longer affect the moment.\n\nUnder conditions (1) and (2), the implicit function theorem guarantees the existence of a smooth function\n\n$$\na _ {*} (\\theta)\n$$\n\nsuch that $a_{*}(\\theta_{0}) = a_{0}$ and\n\n$$\nQ \\left(\\theta , a _ {*} (\\theta)\\right) \\equiv 0\n$$\n\nin a neighbourhood of $\\theta_0$ . In other words, locally the muscle activation can be expressed uniquely as a function of the joint angle if quasi-static equilibrium is enforced.\n\nFor the subsequent analysis it will be convenient to use derivatives of the force function $F(\\ell, a)$ not only with respect to length, but also with respect to the joint angle. We adopt the convention that for any smooth function $G(\\ell, a)$ the derivatives with respect to the angle are defined by\n\n$$\nG _ {\\theta} (\\theta , a) := \\frac {\\partial}{\\partial \\theta} G \\big (\\ell (\\theta), a \\big) = G _ {\\ell} \\big (\\ell (\\theta), a \\big) \\ell_ {\\theta} (\\theta),\n$$\n\n$$\nG _ {\\theta \\theta} (\\theta , a) := \\frac {\\partial^ {2}}{\\partial \\theta^ {2}} G \\big (\\ell (\\theta), a \\big), \\qquad G _ {\\theta a} (\\theta , a) := \\frac {\\partial^ {2}}{\\partial \\theta \\partial a} G \\big (\\ell (\\theta), a \\big),\n$$\n\nwhere $G_{\\ell}$ denotes the partial derivative with respect to $\\ell$ . In all formulas below, the derivatives $F_{\\theta}, F_{\\theta \\theta}, F_{\\theta a}$ are understood in this sense and are, unless stated otherwise, always evaluated at the reference point $(\\theta_0, a_0)$ .\n\n# 2.3 Metabolic cost functional\n\nThe metabolic power $P_{\\mathrm{met}}(t)$ consumed by the muscle in the quasi-isometric regime is modelled as a linear combination of activation and force:\n\n$$\nP _ {\\mathrm {m e t}} (t) = \\alpha a (t) + \\beta F (\\ell (t), a (t)), \\quad \\alpha , \\beta > 0,\n$$\n\nwhere $\\alpha$ and $\\beta$ are constant parameters representing the effective cost of activation and force, respectively. The total metabolic cost over the posture-holding interval $T$ is given by the integral functional\n\n$$\n\\mathcal {E} [ a, \\theta ] = \\int_ {0} ^ {T} P _ {\\mathrm {m e t}} (t) d t = \\int_ {0} ^ {T} \\left(\\alpha a (t) + \\beta F (\\ell (t), a (t))\\right) d t. \\tag {3}\n$$\n\nIn what follows we are interested in trajectories $(\\theta(t), a(t))$ that satisfy the quasi-static equilibrium condition $Q(\\theta(t), a(t)) \\equiv 0$ . Under this condition, in a neighbourhood of the equilibrium point $(\\theta_0, a_0)$ the activation can be expressed as a function of the joint angle, $a(t) = a_*(\\theta(t))$ . 
For brevity we introduce the notation\n\n$$\n\\mathcal {E} _ {\\text {m e t}} (\\ell) := \\mathcal {E} [ a _ {*} (\\theta), \\theta ], \\tag {4}\n$$\n\nthat is, $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ is the same energetic cost functional rewritten in terms of the length coordinate $\\ell(t) = \\ell(\\theta(t))$ .\n\nSubsequently we shall analyse small deviations of trajectories $\\ell(t)$ from the equilibrium length $\\ell_0 = \\ell(\\theta_0)$ and show that $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ admits an asymptotic expansion in which the linear part depends only on the integral\n\n$$\n\\Delta \\mathcal {A} _ {\\ell} = \\int_ {0} ^ {T} (\\ell (t) - \\ell_ {0}) d t, \\tag {5}\n$$\n\nwhile the quadratic term has the form $\\int_0^T (\\ell(t) - \\ell_0)^2 dt$ . The integral $\\Delta A_\\ell$ serves as the unique first-order integral variable (the length abatement), whereas the full shape of the trajectory $\\ell(t)$ enters through the quadratic contribution. The following lemma formalises this property.\n\nLemma 1 (Absetment as the unique first-order linear variable). Let the assumptions of Section 2 hold, in particular let there exist a smooth function $a_{*}(\\theta)$ that satisfies $Q(\\theta ,a_{*}(\\theta))\\equiv 0$ in a neighbourhood of $\\theta_0$ , and let the functional $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ be defined by (4). Suppose in addition that the derivative $\\ell_{\\theta}(\\theta_0) = r_0\\neq 0$ , so that in a neighbourhood of $\\ell_0 = \\ell (\\theta_0)$ there exists a smooth inverse function $\\Theta (\\ell)$ .\n\nThen there exists a constant $C_1 \\in \\mathbb{R}$ such that for any trajectory $\\ell$ with sufficiently small deviation $\\| \\ell - \\ell_0 \\|_{L^{\\infty}(0,T)}$ the following asymptotic expansion holds:\n\n$$\n\\mathcal {E} _ {\\mathrm {m e t}} (\\ell) = \\mathcal {E} _ {\\mathrm {m e t}} (\\ell_ {0}) + C _ {1} \\Delta \\mathcal {A} _ {\\ell} + O \\big (\\| \\ell - \\ell_ {0} \\| _ {L ^ {\\infty} (0, T)} ^ {2} \\big),\n$$\n\nwhere the length abatement $\\Delta A_{\\ell}$ is defined in (5). Moreover, if $L$ is any linear functional on the space of deviations $\\delta \\ell(t) = \\ell(t) - \\ell_0$ that coincides with the first variation of $\\mathcal{E}_{\\mathrm{met}}$ at the point $\\ell_0$ for all admissible small perturbations, then $L(\\ell) = K \\Delta A_{\\ell}$ for some constant $K$ . In particular, no other independent first-order linear integral descriptor arises.\n\nThe proof of Lemma 1 is given in Appendix A. From the viewpoint of the model structure, this means that the abatement $\\Delta \\mathcal{A}_{\\ell}$ (that is, the abatement of length) is not a phenomenologically introduced index, but a fundamental integral variable that inevitably appears as the unique linear trajectory descriptor in the asymptotics of the energetic functional.\n\n# 3 Asymptotic analysis and main result\n\nThis section constitutes the central mathematical core of the work. 
Its goal is to derive, in a rigorous manner, the analytical dependence of energetic cost on postural kinematics via an asymptotic expansion of the key model equations in a neighbourhood of a reference equilibrium point.\n\n# 3.1 Linearisation of the equilibrium condition\n\nWe consider the reference equilibrium point $(\\theta_0, a_0)$ , which satisfies the quasi-static equilibrium condition (1), or, in terms of the function\n\n$$\nQ (\\theta , a) := F \\big (\\ell (\\theta), a \\big) r (\\theta) - M _ {\\mathrm {e x t}} (\\theta),\n$$\n\nthe condition\n\n$$\nQ (\\theta_ {0}, a _ {0}) = 0.\n$$\n\nFix a small parameter $\\delta > 0$ and consider trajectories $(\\theta(t), a(t))$ on $[0, T]$ such that\n\n$$\n\\| \\theta - \\theta_ {0} \\| _ {L ^ {\\infty} (0, T)} \\leq \\delta , \\quad \\| a - a _ {0} \\| _ {L ^ {\\infty} (0, T)} \\leq \\delta .\n$$\n\nSince $\\ell(\\theta)$ is a smooth function, this is equivalent to the local condition $\\| \\ell - \\ell_0 \\|_{L^{\\infty}(0,T)} \\leq C\\delta$ for some constant $C > 0$ that depends only on the derivative $\\ell_{\\theta}$ in a neighbourhood of $\\theta_0$ . All estimates below are to be understood in the asymptotic sense as $\\delta \\to 0$ .\n\nWe introduce the notation\n\n$$\n\\delta \\theta (t) := \\theta (t) - \\theta_ {0}, \\qquad \\delta a (t) := a (t) - a _ {0}.\n$$\n\nWe expand $Q(\\theta, a)$ in a Taylor series to first order in a neighbourhood of the point $(\\theta_0, a_0)$ :\n\n$$\nQ (\\theta_ {0} + \\delta \\theta , a _ {0} + \\delta a) = Q (\\theta_ {0}, a _ {0}) + Q _ {\\theta} (\\theta_ {0}, a _ {0}) \\delta \\theta + Q _ {a} (\\theta_ {0}, a _ {0}) \\delta a + O \\big (| \\delta \\theta | ^ {2} + | \\delta a | ^ {2} \\big),\n$$\n\nwhere the term $O\\big(|\\delta \\theta |^2 +|\\delta a|^2\\big)$ is uniform in $t$ and is of order $O(\\delta^2)$ as $\\delta \\rightarrow 0$ .\n\nSince $Q(\\theta_0, a_0) = 0$ and we consider trajectories that satisfy the equilibrium condition $Q(\\theta(t), a(t)) \\equiv 0$ , we obtain in first order\n\n$$\nQ _ {\\theta} (\\theta_ {0}, a _ {0}) \\delta \\theta (t) + Q _ {a} (\\theta_ {0}, a _ {0}) \\delta a (t) \\approx 0.\n$$\n\nBy assumption (2) we have $Q_{a}(\\theta_{0}, a_{0}) \\neq 0$ , and therefore from the linear relation it follows that\n\n$$\n\\delta a (t) = C _ {\\theta} \\delta \\theta (t), \\quad C _ {\\theta} := - \\frac {Q _ {\\theta} \\left(\\theta_ {0} , a _ {0}\\right)}{Q _ {a} \\left(\\theta_ {0} , a _ {0}\\right)}. \\tag {6}\n$$\n\nIn order to relate the coefficient $C_{\\theta}$ to derivatives of the force function $F(\\ell, a)$ and to the kinematic functions $r(\\theta), M_{\\mathrm{ext}}(\\theta)$ , we explicitly calculate the partial derivatives $Q_{\\theta}$ and $Q_{a}$ at the point $(\\theta_0, a_0)$ . 
We have\n\n$$\nQ (\\theta , a) = F \\big (\\ell (\\theta), a \\big) r (\\theta) - M _ {\\mathrm {e x t}} (\\theta),\n$$\n\nhence\n\n$$\nQ _ {\\theta} = F _ {\\theta} r (\\theta) + F (\\ell (\\theta), a) r ^ {\\prime} (\\theta) - M _ {\\mathrm {e x t}} ^ {\\prime} (\\theta), \\quad Q _ {a} = F _ {a} r (\\theta),\n$$\n\nwhere\n\n$$\nF _ {\\theta} (\\theta , a) := \\frac {\\partial}{\\partial \\theta} F \\big (\\ell (\\theta), a \\big), \\qquad F _ {a} (\\theta , a) := \\frac {\\partial}{\\partial a} F \\big (\\ell (\\theta), a \\big).\n$$\n\nEvaluating these derivatives at the point $(\\theta_0, a_0)$ and introducing the notation\n\n$$\nF _ {0} := F (\\ell_ {0}, a _ {0}), \\quad F _ {\\theta} := F _ {\\theta} (\\theta_ {0}, a _ {0}), \\quad F _ {a} := F _ {a} (\\theta_ {0}, a _ {0}),\n$$\n\n$$\nr _ {0} := r (\\theta_ {0}), \\quad r _ {0} ^ {\\prime} := r ^ {\\prime} (\\theta_ {0}), \\quad M _ {0} ^ {\\prime} := M _ {\\mathrm {e x t}} ^ {\\prime} (\\theta_ {0}),\n$$\n\nwe obtain\n\n$$\nQ _ {\\theta} \\left(\\theta_ {0}, a _ {0}\\right) = F _ {\\theta} r _ {0} + F _ {0} r _ {0} ^ {\\prime} - M _ {0} ^ {\\prime}, \\quad Q _ {a} \\left(\\theta_ {0}, a _ {0}\\right) = F _ {a} r _ {0}.\n$$\n\nSubstituting this into the explicit expression for $C_{\\theta}$ from (6), we find\n\n$$\nC _ {\\theta} = \\frac {M _ {0} ^ {\\prime} - F _ {0} r _ {0} ^ {\\prime} - r _ {0} F _ {\\theta}}{r _ {0} F _ {a}}.\n$$\n\nThus, the dependence of muscle activation on the joint angle in a neighbourhood of the equilibrium point has the form\n\n$$\na (t) = a _ {0} + C _ {\\theta} \\left(\\theta (t) - \\theta_ {0}\\right) + O \\left(| \\theta (t) - \\theta_ {0} | ^ {2}\\right),\n$$\n\nand for the subsequent first-order analysis it is sufficient to retain the linear approximation (6).\n\n# 3.2 Expansion of the energetic functional\n\nWe return to the metabolic cost functional (3):\n\n$$\n\\mathcal {E} [ a, \\theta ] = \\int_ {0} ^ {T} (\\alpha a (t) + \\beta F (\\ell (t), a (t))) d t.\n$$\n\nWe introduce the notation\n\n$$\nF _ {0} := F (\\ell_ {0}, a _ {0}), \\quad F _ {\\theta} := \\partial_ {\\theta} F (\\ell (\\theta), a) \\big | _ {(\\theta_ {0}, a _ {0})}, \\quad F _ {a} := \\partial_ {a} F (\\ell (\\theta), a) \\big | _ {(\\theta_ {0}, a _ {0})},\n$$\n\nand consider small deviations $\\delta \\theta (t)$ , $\\delta a(t)$ from the equilibrium point. 
Then\n\n$$\na (t) = a _ {0} + \\delta a (t), \\qquad F (\\theta (t), a (t)) = F _ {0} + F _ {\\theta} \\delta \\theta (t) + F _ {a} \\delta a (t) + O \\big (| \\delta \\theta | ^ {2} + | \\delta a | ^ {2} \\big).\n$$\n\nSubstituting these expansions into the instantaneous power\n\n$$\nP _ {\\mathrm {m e t}} (t) = \\alpha a (t) + \\beta F (\\theta (t), a (t)),\n$$\n\nwe obtain\n\n$$\nP_{\\text{met}}(t) = \\underbrace{\\big(\\alpha a_{0} + \\beta F_{0}\\big)}_{P_{0}} + \\big(\\alpha +\\beta F_{a}\\big)\\delta a(t) + \\beta F_{\\theta}\\delta \\theta (t) + O\\big(|\\delta \\theta |^{2} + |\\delta a|^{2}\\big).\n$$\n\nIntegrating over time, we arrive at\n\n$$\n\\mathcal {E} [ a, \\theta ] = P _ {0} T + \\int_ {0} ^ {T} \\left(\\left(\\alpha + \\beta F _ {a}\\right) \\delta a (t) + \\beta F _ {\\theta} \\delta \\theta (t)\\right) d t + O (\\delta^ {2}),\n$$\n\nwhere $O(\\delta^2)$ denotes the contribution of second and higher orders in the small deviations.\n\nWe now use the linear relationship (6) between $\\delta a(t)$ and $\\delta \\theta (t)$ :\n\n$$\n\\delta a (t) = C _ {\\theta} \\delta \\theta (t).\n$$\n\nAfter substitution we obtain\n\n$$\n\\mathcal {E} [ a, \\theta ] = P _ {0} T + \\int_ {0} ^ {T} \\left(\\left(\\alpha + \\beta F _ {a}\\right) C _ {\\theta} + \\beta F _ {\\theta}\\right) \\delta \\theta (t) d t + O (\\delta^ {2}).\n$$\n\nIntroducing the notation\n\n$$\nC _ {1} ^ {(\\theta)} := \\left(\\alpha + \\beta F _ {a}\\right) C _ {\\theta} + \\beta F _ {\\theta},\n$$\n\nwe can write the linear term in the energy expansion in the form\n\n$$\n\\mathcal {E} [ a, \\theta ] = P _ {0} T + C _ {1} ^ {(\\theta)} \\int_ {0} ^ {T} \\delta \\theta (t) d t + O (\\delta^ {2}).\n$$\n\nIn the subsequent subsections and in the proof of Theorem 1 we show that this expression can be rewritten as\n\n$$\nC _ {1} ^ {(\\theta)} \\int_ {0} ^ {T} \\delta \\theta (t) d t = C _ {1} \\int_ {0} ^ {T} (\\ell (t) - \\ell_ {0}) d t,\n$$\n\nwhere the coefficient $C_1$ is expressed in terms of the same local derivatives $F_{\\theta}, F_{a}, r_{0}, r_{0}^{\\prime}, M_{0}^{\\prime}$ and the parameters $\\alpha, \\beta$ , and the integral\n\n$$\n\\Delta \\mathcal {A} _ {\\ell} = \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) d t\n$$\n\nis the abatement of the length deviation. Thus, already at the level of the linear approximation, the energetic cost functional reduces to the baseline term $P_0T$ and a linear contribution proportional to the abatement, whereas dependence on the full shape of the trajectory $\\theta(t)$ (or $\\ell(t)$ ) appears only in the quadratic term, which is analysed in the following.\n\n# 3.3 Quadratic term in the energy expansion\n\nTo obtain an explicit form of the quadratic term in the asymptotic expansion, we consider the function $F(\\theta, a) \\coloneqq F(\\ell(\\theta), a)$ in a neighbourhood of the equilibrium point $(\\theta_0, a_0)$ and perform its Taylor expansion up to second order in the small deviations $\\delta\\theta(t) = \\theta(t) - \\theta_0$ and $\\delta a(t) = a(t) - a_0$ . 
Using the linear relation $\\delta a(t) = C_\\theta \\delta\\theta(t)$ obtained in the previous subsection, and carefully collecting all second-order terms, we obtain\n\n$$\nF (\\theta (t), a (t)) \\approx F _ {0} + \\left(F _ {\\theta} + F _ {a} C _ {\\theta}\\right) \\delta \\theta (t) + \\frac {1}{2} \\Big (F _ {\\theta \\theta} + 2 F _ {\\theta a} C _ {\\theta} + F _ {a a} C _ {\\theta} ^ {2} \\Big) \\delta \\theta (t) ^ {2},\n$$\n\nwhere all derivatives of $F$ are evaluated at the point $(\\theta_0, a_0)$ . Substituting this expansion into the metabolic power\n\n$$\nP _ {\\mathrm {m e t}} (t) = \\alpha a (t) + \\beta F (\\theta (t), a (t)),\n$$\n\nintegrating over time, and passing from $\\theta$ to $\\ell$ using $\\ell(t) - \\ell_0 \\approx r_0(\\theta(t) - \\theta_0)$ , we arrive at the representation\n\n$$\n\\mathcal {E} _ {\\mathrm {m e t}} (\\ell) = P _ {0} T + C _ {1} \\Delta \\mathcal {A} _ {\\ell} + C _ {2} \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) ^ {2} d t + O (\\| \\ell - \\ell_ {0} \\| _ {L ^ {\\infty}} ^ {3}),\n$$\n\nwhere\n\n$$\nC _ {2} = \\frac {\\beta}{2 r _ {0} ^ {2}} \\Big (F _ {\\theta \\theta} + 2 F _ {\\theta a} C _ {\\theta} + F _ {a a} C _ {\\theta} ^ {2} \\Big),\n$$\n\nand $P_0$ and $C_1$ are defined in the previous subsections.\n\nThe full technical derivation of these coefficients is provided in Appendix B.\n\n# 3.4 Main theorem\n\nThe result obtained above can be formulated as a formal theorem, which constitutes the central statement of this work.\n\nTheorem 1. Under the assumptions formulated above, the energetic cost functional $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ defined in (4) admits, for small deviations from the equilibrium point, the asymptotic representation\n\n$$\n\\mathcal {E} _ {\\mathrm {m e t}} (\\ell) = P _ {0} T + C _ {1} \\Delta \\mathcal {A} _ {\\ell} + C _ {2} \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) ^ {2} d t + O (\\| \\ell - \\ell_ {0} \\| _ {L ^ {\\infty}} ^ {3}),\n$$\n\nwhere $\\Delta A_{\\ell} = \\int_0^T (\\ell (t) - \\ell_0)dt$ is the abatement of the length deviation, and the coefficients $P_0,C_1,C_2$ are determined by the local properties of the force $F(\\ell ,a)$ and by the system parameters at the equilibrium point $(\\theta_0,a_0)$ .\n\nThe proof of Theorem 1 is given in Appendix B.\n\nOwing to the linear kinematic relation $\\ell(t) - \\ell_0 \\approx r_0 (\\theta(t) - \\theta_0)$ , the result is directly recast in terms of the angular absement as $\\Delta \\mathcal{A}_\\ell \\approx r_0 \\Delta \\mathcal{A}_\\theta$ , which unifies the notation.\n\nThe physical and practical implications of this theorem are discussed in the next section.\n\n# 4 Interpretation and practical implications\n\nThe mathematical result obtained above becomes a tool for in-depth physical analysis and for addressing practical problems in biomechanics. This section unfolds the key consequences of the derived theorem.\n\n# 4.1 Decomposition of energetic cost: drift and tremor\n\nThe asymptotic expansion is not an arbitrary choice but a mathematically enforced structure that decomposes the energetic cost into three physically interpretable components:\n\n- Baseline holding cost $(P_0T)$ : This is the zeroth-order term, the fundamental \"price of time under tension\". 
It corresponds to the energetic expenditure associated with ideal holding of the posture at the reference point, in the absence of any deviations.\n\n- Drift cost $(C_1\\Delta \\mathcal{A}_\\ell)$ : This is the first-order term, a linear correction that captures the energetic consequences of a systematic shift of the mean posture. The abatement of the deviation $\\Delta \\mathcal{A}_\\ell$ provides a quantitative measure of this drift. \n- Cost of variability (quadratic contribution): This is the second-order term, given by the principal quadratic contribution $C_2 \\int_0^T (\\ell(t) - \\ell_0)^2 dt$ together with higher-order terms $O\\left(\\| \\ell - \\ell_0 \\|_{L^\\infty}^3\\right)$ in the full asymptotic expansion of the energy. This contribution represents the metabolic \"cost of tremor\" or variability around the mean posture.\n\n# 4.2 Parameter identification scheme from experimental data\n\nThe theoretical result becomes a practical tool for the analysis of experimental data. The proposed model allows direct identification of the parameters $P_0, C_1, C_2$ from measurements.\n\n1. Kinematics $(\\ell(t)$ or $\\theta(t))$ are recorded using standard tools such as B-mode ultrasound for direct measurement of muscle fascicle length, validated in terms of reproducibility and accuracy [11], optionally with automated deep-learning-based segmentation of muscle contours in ultrasound images [18]. Alternatively or additionally, trajectories of lengths $\\ell(t)$ can be reconstructed from musculoskeletal models in OpenSim [3] based on marker kinematics. In parallel, the total metabolic cost $\\mathcal{E}_{\\mathrm{met}}$ is obtained using a reference indirect calorimetry method [17] (measurement of $\\mathrm{O}_2$ consumption and $\\mathrm{CO}_2$ production), and the isometric joint moment is measured with a dynamometer to control the external load. \n2. Three integral predictors are computed from the kinematic data: the duration $T$ , the abatement of the deviation $\\Delta \\mathcal{A}_{\\ell} = \\int_{0}^{T} (\\ell(t) - \\ell_{0}) dt$ , and the integral of the squared deviation $\\int_{0}^{T} (\\ell(t) - \\ell_{0})^{2} dt$ . \n3. Multiple linear regression is applied, with the measured values of $\\mathcal{E}_{\\mathrm{met}}$ as the dependent variable and the computed predictors as independent variables. The regression coefficients provide estimates of $P_0, C_1, C_2$ .\n\nThis procedure enables a transition from the abstract model to quantitative characterization of a specific biomechanical system.\n\n# 4.3 Implications for variational problems and optimal control\n\nThe derived expansion has direct implications for optimal control problems, in particular for determining posture-holding strategies that minimize energetic cost. To first order, the optimal strategy reduces to minimizing the absolute value of the angular absement $|\\Delta \\mathcal{A}_{\\theta}|$ . This means that the time-averaged angle $\\bar{\\theta} = \\frac{1}{T}\\int_{0}^{T}\\theta (t)dt$ should be as close as possible to the reference value $\\theta_0$ . 
This integral criterion is substantially more informative than the naive strategy of minimizing instantaneous deviations, because it correctly accounts for the duration of each displacement.\n\nAt second order, once the mean drift has been minimized, optimality requires minimization of the quadratic term, which corresponds to minimizing the variance of the posture, that is, reducing the amplitude of tremor.\n\n# 5 Discussion\n\nThis section is devoted to a critical examination of the obtained results, a discussion of the key assumptions and limitations of the model, and an outline of promising directions for future research.\n\n# 5.1 Positioning of the result in the context of existing studies\n\nMost existing studies that relate metabolic cost to posture holding focus either on empirical correlations between the cost and stabilometric parameters, or on numerical optimal control models. In [6, 5, 16, 15, 19], the metabolic cost of quiet standing or near-static regimes is described in terms of mean and root-mean-square sway characteristics (amplitude, velocity, length of the center-of-pressure trajectory), as well as in terms of \"postural complexity\" encoded in entropy-based descriptors of the trajectories. These approaches provide an important empirical foundation, but they operate with multidimensional sets of indices and do not supply a single analytically derived scalar descriptor that specifically represents the \"accumulated drift\" of the posture.\n\nOn the other hand, classical works in theoretical biophysics [4, 10] develop an approach in which metabolic networks are described by explicitly specified cost functionals, and actual activity profiles are interpreted as outcomes of optimization (minimization of the total \"price\" of enzymes or power at a prescribed flux). Subsequent studies in this direction [2] apply optimal control methods to the temporal structure of enzyme activation. Our result can be viewed as a muscle-tendon analogue of this paradigm: instead of merely searching for empirical predictors of metabolic cost, we start from a variational formulation and derive an analytical functional $\\mathcal{E}_{\\mathrm{met}}(\\ell)$ on the space of trajectories.\n\nAgainst this background, the introduced absevement\n\n$$\n\\Delta \\mathcal {A} _ {\\ell} = \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) d t\n$$\n\ndoes not appear as an additional \"index\", but as a unique linear coordinate of first order in the asymptotic expansion of the energetic cost. In other words, if one assumes only local smoothness of the functional $\\mathcal{E}[a,\\theta]$ and a quasi-isometric regime, then any model of this type must contain exactly the integral of length deviation as the leading linear contribution. This fundamentally distinguishes abatement from commonly used scalar characteristics such as mean amplitude or time under tension, which do not possess an analogous strict \"universal\" property.\n\nIts relation to the notion of abatement in integral kinematics [14, 13, 12, 9, 8] can be summarized as follows: the quantity abatement was originally introduced to describe the time integral of displacement in hydraulic musical instruments, where the acoustic output depends not only on the instantaneous state but also on the duration of the deviation [14]. 
Further development of integral kinematics and integral kinesiology [13, 12], as well as the application of acoustic abatement in phonetics [9, 8], demonstrate that such integral variables naturally arise as descriptors of accumulated influence in systems with \"memory\". On the other hand, in variational models of electrical circuits with mem-elements [7, 1], the configuration space is deliberately extended to time-integrated variables in order to obtain a correct Lagrangian formulation. In this context, our abatement\n\n$$\n\\Delta \\mathcal {A} _ {\\ell} = \\int_ {0} ^ {T} (\\ell (t) - \\ell_ {0}) d t\n$$\n\nis a biophysically meaningful analogue of such integral coordinates: we show that an analogous integration of length deviation arises not ad hoc, but is forced by the geometry of the energetic\n\nfunctional. Thus, absement appears as a biophysically grounded realization of the same integral geometry, now with a strict connection to metabolic cost.\n\nWhen comparing the proposed expansion\n\n$$\n\\mathcal {E} _ {\\mathrm {m e t}} (\\ell) = P _ {0} T + C _ {1} \\Delta \\mathcal {A} _ {\\ell} + C _ {2} \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) ^ {2} d t + O (\\| \\ell - \\ell_ {0} \\| _ {L ^ {\\infty}} ^ {3})\n$$\n\nwith more traditional models, several principal advantages can be identified:\n\n- Structural separation of components. The baseline term $P_0T$ separates the unavoidable cost of maintaining tone from the additional cost due to drift and variability, whereas the integral drift $\\Delta A_{\\ell}$ and the quadratic tremor term possess clearly different scaling behaviour. \n- Minimal sufficient coordinate. In a neighbourhood of equilibrium, absevement is the only linear coordinate that enters the expansion; therefore, at first order all admissible models reduce to it. In this sense, it is a \"sufficient statistic\" for describing the metabolic effect of slow drift. \n- Natural compatibility with optimization principles. The functional $\\mathcal{E}_{\\mathrm{met}}$ has the standard variational structure familiar from theoretical biophysics [4, 10, 2] and can be used directly in optimal control problems for postural equilibrium (minimization of cost under constraints on the amplitude of deviations, and so on). \n- Fundamental interpretation. From a mathematical point of view, absement characterizes the \"mass\" of the trajectory in the space of lengths, that is, how much time the system spends in a deviated state, weighted by the magnitude of the deviation. This provides a more transparent understanding of why even a small but systematic drift of posture can produce a substantial contribution to the total metabolic cost.\n\nIn this way, the proposed model fits naturally within the optimization-based tradition of theoretical biophysics, while at the same time introducing an integral variable (absent) as a strictly justified, rather than purely phenomenological, descriptor of postural energetic cost.\n\n# 5.2 Model limitations\n\nIt is important to clearly delineate the limits of applicability of the proposed model, which follow directly from the assumptions made:\n\n- Quasi-static approximation. The model neglects velocity-dependent and viscous effects, which makes it applicable only to slow, quasi-isometric movements. \n- Single-degree-of-freedom formulation. 
The analysis was carried out for a system with a single degree of freedom, which is an adequate approximation for isolated joint tasks, but not for complex multi-joint postures such as the front plank, where coordination across multiple joints is essential.
- Small deviations. The model is local, as it relies on linearization in a neighbourhood of the reference point. Its accuracy decreases as the amplitude of motion becomes large.

# 5.3 Directions for future research

Based on these limitations, several promising directions for further development of the theory can be identified:

- Extension to the multidimensional case. Generalization of the model to multi-joint systems, which will require the introduction of a notion of vector-valued absement.
- Inclusion of dynamic terms. Development of an extended model that incorporates velocity-dependent terms, enabling the analysis of non-isometric contraction regimes.
- Experimental validation. Targeted experimental studies to test the proposed identification scheme on real biomechanical data.

# 6 Conclusions

In this work we have proposed a new, integral paradigm for describing metabolic costs under quasi-isometric loading, in which posture maintenance is treated not as a static state but as a full trajectory of the muscle-tendon system in time. The central result is a rigorous asymptotic expansion of the metabolic cost functional $\mathcal{E}_{\mathrm{met}}(\ell)$ in a neighbourhood of a reference posture (Theorem 1), which isolates three qualitatively distinct contributions: the baseline cost of maintenance, the cost of drift, and the cost of tremor.

We have shown that the absement of the deviation

$$
\Delta \mathcal{A}_{\ell} = \int_{0}^{T} (\ell(t) - \ell_{0}) dt
$$

(the absement in the terminology of integral kinematics) arises in this expansion not as a phenomenologically introduced index, but as the unique first-order sufficient statistic for the energetics of postural drift: any linear functional that can appear as the first (linear) variational term in the asymptotic expansion of $\mathcal{E}_{\mathrm{met}}(\ell)$ is proportional to $\Delta \mathcal{A}_{\ell}$. Thus, absement cleanly separates systematic drift from variability (tremor) in such a way that both components admit a clear geometric and energetic interpretation.

The resulting expansion

$$
\mathcal{E}_{\mathrm{met}}(\ell) = P_{0} T + C_{1} \Delta \mathcal{A}_{\ell} + C_{2} \int_{0}^{T} \left(\ell(t) - \ell_{0}\right)^{2} dt + O\left(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}\right)
$$

provides a physically transparent structure for analysing postural control. The term $P_0T$ corresponds to the "pure" cost of maintaining the muscle in the reference state, the linear term $C_1\Delta \mathcal{A}_\ell$ reflects the energetic price of systematic postural drift (the shift of the mean length away from $\ell_0$), while the quadratic term with coefficient $C_2$ is interpreted as the cost of variability associated with tremor and micro-oscillations of length. This decomposition is consistent with experimental observations on the balance between postural stabilization and admissible variability, but is formulated here as a rigorous mathematical statement.

By identifying the absement in our model with the same integral variable used in the theory of integral kinematics and mem-elements, we place the proposed approach within the broader context of systems with memory.
In this sense, the same integral variable that describes the accumulated history of action in electrical and mechanical memory elements acquires here a clear biophysical interpretation as a measure of the integrated deviation of muscle-tendon length. This opens the way to a unified description of heterogeneous memory systems within a common variational framework.

Practically, the proposed theory yields a clear scheme for identifying the parameters $P_0$, $C_1$, $C_2$ from experimental data (ultrasound measurements of muscle length, dynamometric recordings of joint moment, indirect estimates of metabolic cost); a minimal least-squares sketch of this identification step is given after the list of contributions below. This provides a foundation for more accurate modelling, assessment, and optimization of human motor strategies in sports science, clinical biomechanics, and physiological rehabilitation.

In summary, the main contributions of this work can be formulated as follows.

- Universal structure of the metabolic cost functional. For a broad class of quasi-static models of the muscle-tendon system, we show that in a neighbourhood of a reference posture the cumulative metabolic cost, written as the functional $\mathcal{E}_{\mathrm{met}}(\ell)$, admits the asymptotic expansion

$$
\mathcal{E}_{\mathrm{met}}(\ell) = P_{0} T + C_{1} \Delta \mathcal{A}_{\ell} + C_{2} \int_{0}^{T} \left(\ell(t) - \ell_{0}\right)^{2} dt + O\big(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}\big),
$$

where $\Delta \mathcal{A}_{\ell} = \int_{0}^{T} (\ell(t) - \ell_{0})\, dt$ is the unique linear integral descriptor of the kinematics, and $P_{0}, C_{1}, C_{2}$ are determined by the local properties of the muscle-tendon system in a neighbourhood of the equilibrium point.

- Absement as a first-order sufficient statistic. We prove that the absement of the deviation $\Delta \mathcal{A}_{\ell}$ (i.e. the length absement) arises not as a phenomenologically introduced "convenient index", but as a first-order sufficient statistic for the energetics of postural drift: all other linear functionals of the trajectory $\ell(t)$ at first order reduce to $\Delta \mathcal{A}_{\ell}$, whereas dependence on the full shape of the trajectory appears only in the quadratic term.
- Bridge to integral kinematics and theoretical biophysics. The results obtained are naturally consistent with the formalism of integral kinematics and the theory of mem-elements, in which the same integral variable (absement) plays a key role in the description of systems with memory. By embedding this object into a variational framework of theoretical biophysics (in the spirit of the approaches of Heinrich-Schuster and Klipp-Heinrich), we provide a conceptual link between the geometry of the metabolic functional, integral kinematic variables, and experimentally observed characteristics of postural control.
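As an illustration of the identification scheme mentioned above, the following sketch (not from the paper; all data and parameter values are synthetic) fits $P_0$, $C_1$, $C_2$ by ordinary least squares, using the design variables $T$, $\Delta\mathcal{A}_\ell$, and $\int_0^T(\ell(t)-\ell_0)^2\,dt$ computed from sampled trajectories.

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal rule for samples y on the grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def features(t, ell, ell0):
    """Design row (T, absement, integrated squared deviation) for one trial."""
    dev = ell - ell0
    return np.array([t[-1] - t[0], trapezoid(dev, t), trapezoid(dev**2, t)])

def identify(trials, ell0):
    """Least-squares estimates of (P0, C1, C2) from (t, ell, E_met) trials."""
    X = np.array([features(t, ell, ell0) for t, ell, _ in trials])
    y = np.array([E for _, _, E in trials])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic trials with known ground truth, recovered up to measurement noise
rng = np.random.default_rng(0)
P0_true, C1_true, C2_true, ell0 = 20.0, 150.0, 4000.0, 0.30
trials = []
for _ in range(12):
    t = np.linspace(0.0, rng.uniform(20.0, 40.0), 3001)
    drift = rng.normal(0.0, 3e-3)            # systematic offset of the mean length from ell0
    amp = rng.uniform(1e-3, 8e-3)            # tremor amplitude
    ell = ell0 + drift + amp * np.sin(2 * np.pi * 8.0 * t)
    E_met = np.dot([P0_true, C1_true, C2_true], features(t, ell, ell0)) + rng.normal(0.0, 0.1)
    trials.append((t, ell, E_met))
print(identify(trials, ell0))                # approximately [20, 150, 4000]
```

With real data, the synthetic generation step would simply be replaced by measured length trajectories and indirect-calorimetry estimates of the cumulative cost.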
# A Appendix: Proof of Lemma 1

Proof. By definition (4) and by the equilibrium condition $Q(\theta, a_{*}(\theta)) \equiv 0$, the instantaneous metabolic power along an equilibrium trajectory can be written as

$$
P_{\mathrm{met}}(t) = \alpha a_{*}(\theta(t)) + \beta F\big(\ell(\theta(t)), a_{*}(\theta(t))\big).
$$

Since in a neighbourhood of $\theta_0$ there exists an inverse function $\Theta(\ell)$, we have $\theta(t) = \Theta(\ell(t))$ and, therefore, we may introduce a smooth scalar function of a single argument

$$
\varphi(\ell) := \alpha a_{*}(\Theta(\ell)) + \beta F(\ell, a_{*}(\Theta(\ell))),
$$

such that for any admissible trajectory in a neighbourhood of $\ell_0$ the power can be written as $P_{\mathrm{met}}(t) = \varphi(\ell(t))$. Consequently,

$$
\mathcal{E}_{\mathrm{met}}(\ell) = \int_{0}^{T} \varphi(\ell(t))\, dt.
$$

We expand $\varphi$ into a Taylor series about the point $\ell_0$ up to second order:

$$
\varphi(\ell_{0} + \delta \ell) = \varphi(\ell_{0}) + \varphi'(\ell_{0})\, \delta \ell + \frac{1}{2} \varphi''(\ell_{0})\, \delta \ell^{2} + R(\delta \ell),
$$

where the remainder $R(\delta \ell)$ satisfies the estimate $|R(\delta \ell)| \leq C|\delta \ell|^3$ for some $C > 0$ and for sufficiently small $|\delta \ell|$. Setting $\delta \ell(t) := \ell(t) - \ell_0$ and using the assumption $\| \ell - \ell_0 \|_{L^\infty(0, T)} \leq \varepsilon$ with small $\varepsilon$, we obtain

$$
\varphi(\ell(t)) = \varphi(\ell_{0}) + \varphi'(\ell_{0}) \left(\ell(t) - \ell_{0}\right) + \frac{1}{2} \varphi''(\ell_{0}) \left(\ell(t) - \ell_{0}\right)^{2} + O\big(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}\big),
$$

where the notation $O\big(\| \ell - \ell_0\|_{L^\infty}^3\big)$ is uniform in $t$.

Integrating over time, we obtain

$$
\mathcal{E}_{\mathrm{met}}(\ell) = \int_{0}^{T} \varphi(\ell(t))\, dt = \varphi(\ell_{0}) T + \varphi'(\ell_{0}) \int_{0}^{T} \left(\ell(t) - \ell_{0}\right) dt + \frac{1}{2} \varphi''(\ell_{0}) \int_{0}^{T} \left(\ell(t) - \ell_{0}\right)^{2} dt + O\big(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}\big).
$$

Denoting $P_0 := \varphi(\ell_0)$ and $C_1 := \varphi'(\ell_0)$, we arrive at

$$
\mathcal{E}_{\mathrm{met}}(\ell) = P_{0} T + C_{1} \int_{0}^{T} \left(\ell(t) - \ell_{0}\right) dt + O\big(\| \ell - \ell_{0} \|_{L^{\infty}}^{2}\big) = P_{0} T + C_{1} \Delta \mathcal{A}_{\ell} + O\big(\| \ell - \ell_{0} \|_{L^{\infty}}^{2}\big),
$$

which yields the first part of the statement.

For the second part, we consider the first variation of $\mathcal{E}_{\mathrm{met}}$ at the point $\ell_0$ in the direction of an arbitrary small perturbation $h(t)$:

$$
D\mathcal{E}_{\mathrm{met}}(\ell_{0})[h] := \left.\frac{d}{d\varepsilon}\right|_{\varepsilon = 0} \mathcal{E}_{\mathrm{met}}(\ell_{0} + \varepsilon h) = \left.\frac{d}{d\varepsilon}\right|_{\varepsilon = 0} \int_{0}^{T} \varphi(\ell_{0} + \varepsilon h(t))\, dt = \varphi'(\ell_{0}) \int_{0}^{T} h(t)\, dt.
$$
\\frac {d}{d \\varepsilon} \\right| _ {\\varepsilon = 0} \\int_ {0} ^ {T} \\varphi (\\ell_ {0} + \\varepsilon h (t)) d t = \\varphi^ {\\prime} (\\ell_ {0}) \\int_ {0} ^ {T} h (t) d t.\n$$\n\nThus, the first variation is a linear functional that maps any perturbation $h$ to a quantity proportional to the integral $\\int_0^T h(t)dt$ .\n\nNow let $L$ be a linear functional on the space of deviations $\\delta \\ell(t)$ which coincides with $D\\mathcal{E}_{\\mathrm{met}}(\\ell_0)[\\cdot]$ for all admissible $h$ . Then for all $h$ we must have\n\n$$\nL (h) = D \\mathcal {E} _ {\\mathrm {m e t}} (\\ell_ {0}) [ h ] = \\varphi^ {\\prime} (\\ell_ {0}) \\int_ {0} ^ {T} h (t) d t.\n$$\n\nAny linear integral functional on $C[0,T]$ can be represented in the form $L(h) = \\int_0^T k(t)h(t)dt$ for some integrable kernel $k(t)$ . The equality\n\n$$\n\\int_ {0} ^ {T} k (t) h (t) d t = \\varphi^ {\\prime} (\\ell_ {0}) \\int_ {0} ^ {T} h (t) d t\n$$\n\nfor all $h$ is possible only if $k(t) \\equiv \\varphi'(\\ell_0)$ almost everywhere on $[0, T]$ . Hence,\n\n$$\nL (h) = \\varphi^ {\\prime} (\\ell_ {0}) \\int_ {0} ^ {T} h (t) d t = K \\int_ {0} ^ {T} h (t) d t\n$$\n\nwith $K = \\varphi'(\\ell_0)$ , that is, $L$ is proportional to the integral of the deviation. Returning from the abstract perturbation $h$ to the actual trajectory $\\delta \\ell(t) = \\ell(t) - \\ell_0$ , we obtain\n\n$$\nL (\\ell) = K \\int_ {0} ^ {T} \\left(\\ell (t) - \\ell_ {0}\\right) d t = K \\Delta \\mathcal {A} _ {\\ell}.\n$$\n\nThis means that the abatement $\\Delta \\mathcal{A}_{\\ell}$ is the only (up to a multiplicative constant) linear integral variable that appears in the first-order expansion of the energy functional. The lemma is proved.\n\n# B Proof of the Main Theorem\n\nProof of Theorem 1. The idea of the proof is as follows: we reduce the energy functional to an integral of a scalar function of a single variable (joint angle or length), and then apply the standard Taylor expansion up to second order. The first- and second-order coefficients are expressed in terms of the derivatives of $F(\\ell, a)$ , $r(\\theta)$ , and $M_{\\mathrm{ext}}(\\theta)$ at the equilibrium point, while all higher-order contributions are absorbed into the remainder term $O(\\| \\ell - \\ell_0 \\|_{L^\\infty}^3)$ .\n\n# Step 1. Reduction to a one-dimensional scalar function.\n\nConsider the quasi-static equilibrium condition\n\n$$\nF (\\ell (\\theta), a) r (\\theta) = M _ {\\mathrm {e x t}} (\\theta),\n$$\n\nwhich holds at each time instant $t$ in a neighbourhood of the reference point $(\\theta_0, a_0)$ . Denote by\n\n$$\nQ (\\theta , a) := F (\\ell (\\theta), a) r (\\theta) - M _ {\\mathrm {e x t}} (\\theta)\n$$\n\nthe left-hand side of the equilibrium equation. Then the point $(\\theta_0, a_0)$ satisfies $Q(\\theta_0, a_0) = 0$ .\n\nAssume that the partial derivative $Q_{a}(\\theta_{0}, a_{0}) \\neq 0$ (physically: at a fixed angle, changes in activation change the muscle moment, i.e., the equilibrium is non-degenerate). 
Then, by the implicit function theorem, there exists a neighbourhood of the reference point in which the condition $Q(\theta, a) = 0$ uniquely defines a smooth function

$$
a = a_{*}(\theta),
$$

such that $a_{*}(\theta_{0}) = a_{0}$ and $Q(\theta, a_{*}(\theta)) \equiv 0$.

Therefore, the metabolic power

$$
P_{\mathrm{met}}(t) = \alpha a(t) + \beta F(\ell(t), a(t))
$$

in the quasi-static regime can be written as a function of the angle alone:

$$
P_{\mathrm{met}}(t) = P(\theta(t)), \quad P(\theta) := \alpha a_{*}(\theta) + \beta F(\ell(\theta), a_{*}(\theta)).
$$

The total metabolic cost then takes the form

$$
\mathcal{E}_{\mathrm{met}} = \int_{0}^{T} P(\theta(t))\, dt.
$$

# Step 2. Second-order Taylor expansion of the function $P(\theta)$.

Since $F$, $r$, $M_{\mathrm{ext}}$, and $\ell(\theta)$ are smooth, and $a_*(\theta)$ is constructed as a smooth implicit function, we have $P(\theta) \in C^2$ in a neighbourhood of $\theta_0$.

Define

$$
\delta \theta(t) := \theta(t) - \theta_{0}.
$$

Then the standard Taylor expansion up to second order gives

$$
P(\theta_{0} + \delta \theta) = P_{0} + P_{\theta}(\theta_{0})\, \delta \theta + \frac{1}{2} P_{\theta \theta}(\theta_{0})\, \delta \theta^{2} + R_{3}(\delta \theta),
$$

where $P_0 := P(\theta_0)$, and the remainder $R_3$ satisfies

$$
\left| R_{3}(\delta \theta) \right| \leq C \left| \delta \theta \right|^{3}
$$

for some fixed $C > 0$ in a sufficiently small neighbourhood of $\theta_0$.

Since $a_{*}(\theta)$ is defined implicitly, its first and second derivatives with respect to $\theta$ are obtained from the identity $Q(\theta, a_{*}(\theta)) \equiv 0$. Differentiating with respect to $\theta$, we obtain

$$
Q_{\theta}(\theta, a_{*}(\theta)) + Q_{a}(\theta, a_{*}(\theta))\, a_{*}'(\theta) = 0,
$$

and therefore, at the equilibrium point $(\theta_0, a_0)$,

$$
a_{\theta}(\theta_{0}) := a_{*}'(\theta_{0}) = - \frac{Q_{\theta}(\theta_{0}, a_{0})}{Q_{a}(\theta_{0}, a_{0})}.
$$

Here

$$
Q_{\theta}(\theta, a) = F_{\ell}(\ell(\theta), a)\, \ell_{\theta}(\theta)\, r(\theta) + F(\ell(\theta), a)\, r_{\theta}(\theta) - M_{\mathrm{ext}}'(\theta),
$$

$$
Q_{a}(\theta, a) = F_{a}(\ell(\theta), a)\, r(\theta),
$$

and all derivatives are to be evaluated at the point $(\theta_0, a_0)$.

The second derivative is obtained by differentiating once more:

$$
0 = \frac{d^{2}}{d\theta^{2}} Q(\theta, a_{*}(\theta)) = Q_{\theta \theta} + 2 Q_{\theta a}\, a_{*}'(\theta) + Q_{a a}\, a_{*}'(\theta)^{2} + Q_{a}\, a_{*}''(\theta).
$$

Thus,

$$
a_{\theta \theta}(\theta_{0}) := a_{*}''(\theta_{0}) = - \frac{Q_{\theta \theta}(\theta_{0}, a_{0}) + 2 Q_{\theta a}(\theta_{0}, a_{0})\, a_{\theta}(\theta_{0}) + Q_{a a}(\theta_{0}, a_{0})\, a_{\theta}(\theta_{0})^{2}}{Q_{a}(\theta_{0}, a_{0})}.
$$

Hence we obtain the local expansion

$$
a_{*}(\theta_{0} + \delta \theta) = a_{0} + A\, \delta \theta + B\, \delta \theta^{2} + O(|\delta \theta|^{3}),
$$

where

$$
A := a_{\theta}(\theta_{0}), \qquad B := \frac{1}{2} a_{\theta \theta}(\theta_{0}),
$$
(\\theta_ {0}), \\qquad B := \\frac {1}{2} a _ {\\theta \\theta} (\\theta_ {0}),\n$$\n\nwhich are explicitly expressed in terms of the derivatives of $Q$ at the equilibrium point.\n\nSimilarly, for the kinematic relation\n\n$$\n\\ell (\\theta) = \\ell_ {0} + \\ell_ {\\theta} (\\theta_ {0}) \\delta \\theta + \\frac {1}{2} \\ell_ {\\theta \\theta} (\\theta_ {0}) \\delta \\theta^ {2} + O (| \\delta \\theta | ^ {3})\n$$\n\nwe introduce the notation\n\n$$\nr _ {0} := \\ell_ {\\theta} (\\theta_ {0}), \\quad \\kappa_ {0} := \\ell_ {\\theta \\theta} (\\theta_ {0}).\n$$\n\nNext, we expand the force $F(\\ell, a)$ about the point $(\\ell_0, a_0)$ up to second order:\n\n$$\n\\begin{array}{l} F (\\ell (\\theta), a _ {*} (\\theta)) = F _ {0} + F _ {\\ell} \\left(\\ell_ {0}, a _ {0}\\right) \\delta \\ell + F _ {a} \\left(\\ell_ {0}, a _ {0}\\right) \\delta a \\\\ + \\frac {1}{2} F _ {\\ell \\ell} \\left(\\ell_ {0}, a _ {0}\\right) \\delta \\ell^ {2} + F _ {\\ell a} \\left(\\ell_ {0}, a _ {0}\\right) \\delta \\ell \\delta a + \\frac {1}{2} F _ {a a} \\left(\\ell_ {0}, a _ {0}\\right) \\delta a ^ {2} + O \\left(| \\delta \\theta | ^ {3}\\right), \\\\ \\end{array}\n$$\n\nwhere\n\n$$\n\\delta \\ell = \\ell (\\theta) - \\ell_ {0} = r _ {0} \\delta \\theta + \\frac {1}{2} \\kappa_ {0} \\delta \\theta^ {2} + O (| \\delta \\theta | ^ {3}), \\qquad \\delta a = A \\delta \\theta + B \\delta \\theta^ {2} + O (| \\delta \\theta | ^ {3}).\n$$\n\nSubstituting these expressions and grouping the terms by powers of $\\delta \\theta$ , we obtain\n\n$$\nF (\\ell (\\theta), a _ {*} (\\theta)) = F _ {0} + f _ {1} \\delta \\theta + \\frac {1}{2} f _ {2} \\delta \\theta^ {2} + O (| \\delta \\theta | ^ {3}),\n$$\n\nwhere the first- and second-order coefficients have the explicit form\n\n$$\nf _ {1} = F _ {\\ell} \\left(\\ell_ {0}, a _ {0}\\right) r _ {0} + F _ {a} \\left(\\ell_ {0}, a _ {0}\\right) A,\n$$\n\n$$\n\\begin{array}{l} f _ {2} = F _ {\\ell} \\left(\\ell_ {0}, a _ {0}\\right) \\kappa_ {0} + F _ {a} \\left(\\ell_ {0}, a _ {0}\\right) 2 B \\\\ + F _ {\\ell \\ell} \\left(\\ell_ {0}, a _ {0}\\right) r _ {0} ^ {2} + 2 F _ {\\ell a} \\left(\\ell_ {0}, a _ {0}\\right) r _ {0} A + F _ {a a} \\left(\\ell_ {0}, a _ {0}\\right) A ^ {2}. \\\\ \\end{array}\n$$\n\nFinally, substituting the expansions of $a_{*}(\\theta)$ and $F(\\ell(\\theta), a_{*}(\\theta))$ into the definition\n\n$$\nP (\\theta) = \\alpha a _ {*} (\\theta) + \\beta F (\\ell (\\theta), a _ {*} (\\theta)),\n$$\n\nand collecting the terms at $\\delta \\theta$ and $\\delta \\theta^2$ , we obtain\n\n$$\nP \\left(\\theta_ {0} + \\delta \\theta\\right) = P _ {0} + C _ {1} ^ {(\\theta)} \\delta \\theta + C _ {2} ^ {(\\theta)} \\delta \\theta^ {2} + O \\left(| \\delta \\theta | ^ {3}\\right),\n$$\n\nwhere\n\n$$\nP _ {0} = \\alpha a _ {0} + \\beta F _ {0},\n$$\n\n$$\nC _ {1} ^ {(\\theta)} = \\alpha A + \\beta f _ {1} = \\alpha A + \\beta \\big (F _ {\\ell} (\\ell_ {0}, a _ {0}) r _ {0} + F _ {a} (\\ell_ {0}, a _ {0}) A \\big),\n$$\n\n$$\nC _ {2} ^ {(\\theta)} = \\alpha B + \\frac {\\beta}{2} f _ {2} = \\alpha B + \\beta \\left[ \\frac {1}{2} F _ {\\ell} (\\ell_ {0}, a _ {0}) \\kappa_ {0} + F _ {a} (\\ell_ {0}, a _ {0}) B + \\frac {1}{2} F _ {\\ell \\ell} (\\ell_ {0}, a _ {0}) r _ {0} ^ {2} + F _ {\\ell a} (\\ell_ {0}, a _ {0}) r _ {0} A + \\frac {1}{2} F _ {a a} (\\ell_ {0}, a _ {0}) A ^ {2} \\right].\n$$\n\n# Step 3. 
# Step 3. Time integration and estimate of the remainder term.

We now return to the metabolic energy functional:

$$
\mathcal{E}_{\mathrm{met}} = \int_{0}^{T} P(\theta(t))\, dt.
$$

Substituting the expansion for $P(\theta(t))$, we obtain

$$
\mathcal{E}_{\mathrm{met}} = P_{0} T + C_{1}^{(\theta)} \int_{0}^{T} \delta \theta(t)\, dt + C_{2}^{(\theta)} \int_{0}^{T} \delta \theta(t)^{2}\, dt + \int_{0}^{T} R_{3}(\delta \theta(t))\, dt.
$$

Using the estimate $|R_{3}(\delta \theta)| \leq C |\delta \theta|^{3}$, we have

$$
\left| \int_{0}^{T} R_{3}(\delta \theta(t))\, dt \right| \leq C T \| \delta \theta \|_{L^{\infty}}^{3} = O(\| \delta \theta \|_{L^{\infty}}^{3}).
$$

Introducing

$$
\Delta \mathcal{A}_{\theta} := \int_{0}^{T} (\theta(t) - \theta_{0})\, dt = \int_{0}^{T} \delta \theta(t)\, dt,
$$

we obtain the asymptotic expansion

$$
\mathcal{E}_{\mathrm{met}} = P_{0} T + C_{1}^{(\theta)} \Delta \mathcal{A}_{\theta} + C_{2}^{(\theta)} \int_{0}^{T} (\theta(t) - \theta_{0})^{2}\, dt + O(\| \theta - \theta_{0} \|_{L^{\infty}}^{3}).
$$

# Step 4. From angular absement to length absement.

For small deviations we have the linearised kinematic relation

$$
\ell(t) - \ell_{0} = r_{0} (\theta(t) - \theta_{0}) + O\big((\theta(t) - \theta_{0})^{2}\big),
$$

which, uniformly in $t$, implies

$$
\| \ell - \ell_{0} \|_{L^{\infty}} = |r_{0}| \| \theta - \theta_{0} \|_{L^{\infty}} + O(\| \theta - \theta_{0} \|_{L^{\infty}}^{2}).
$$

Inverting the linearisation, we obtain

$$
\theta(t) - \theta_{0} = \frac{1}{r_{0}} \big(\ell(t) - \ell_{0}\big) + O(\| \ell - \ell_{0} \|_{L^{\infty}}^{2}),
$$

and therefore

$$
\Delta \mathcal{A}_{\theta} = \frac{1}{r_{0}} \int_{0}^{T} (\ell(t) - \ell_{0})\, dt + O(\| \ell - \ell_{0} \|_{L^{\infty}}^{2}) = \frac{1}{r_{0}} \Delta \mathcal{A}_{\ell} + O(\| \ell - \ell_{0} \|_{L^{\infty}}^{2}),
$$

$$
\int_{0}^{T} (\theta(t) - \theta_{0})^{2}\, dt = \frac{1}{r_{0}^{2}} \int_{0}^{T} (\ell(t) - \ell_{0})^{2}\, dt + O(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}).
$$

Substituting these relations into the obtained expansion for $\mathcal{E}_{\mathrm{met}}$, and redefining

$$
C_{1} := \frac{C_{1}^{(\theta)}}{r_{0}}, \qquad C_{2} := \frac{C_{2}^{(\theta)}}{r_{0}^{2}},
$$

we obtain

$$
\mathcal{E}_{\mathrm{met}}(\ell) = P_{0} T + C_{1} \Delta \mathcal{A}_{\ell} + C_{2} \int_{0}^{T} \left(\ell(t) - \ell_{0}\right)^{2} dt + O\left(\| \ell - \ell_{0} \|_{L^{\infty}}^{3}\right).
$$

This is precisely the asymptotic form stated in the theorem. The coefficients $P_0$, $C_1$, and $C_2$ are explicitly determined by the derivatives of $F(\ell, a)$, $r(\theta)$, and $M_{\mathrm{ext}}(\theta)$, as well as by the implicit derivatives $a_{\theta}(\theta_0)$ and $a_{\theta \theta}(\theta_0)$, which in turn are expressed in terms of the derivatives of $Q$. This completes the proof.

# References

[1] Dalibor Biolek, Zdeněk Biolek, and Viera Biolková. Lagrangian for circuits with higher-order elements. Entropy, 21(12):1230, 2019.
[2] Gundián M. de Hijas-Liste, Edda Klipp, Eva Balsa-Canto, and Julio R. Banga. Global dynamic optimization approach to predict activation in metabolic pathways. BMC Systems Biology, 8(1):1, 2014.
[3] Scott L. Delp, Frank C. Anderson, Allison S. Arnold, J. Peter Loan, Ashraf Habib, Chand T. John, Emma Guendelman, and Darryl G. Thelen. OpenSim: Open-source software to create and analyze dynamic simulations of movement. IEEE Transactions on Biomedical Engineering, 54(11):1940-1950, 2007.
[4] Reinhart Heinrich and Stefan Schuster. The modelling of metabolic systems: Structure, control and optimal design. BioSystems, 47(1-2):61-77, 1998.
[5] Han Houdijk, Starr E. Brown, and Jaap H. van Dieen. Relation between postural sway magnitude and metabolic energy cost during upright standing on a compliant surface. Journal of Applied Physiology, 119(6):696-703, 2015.
[6] Trienke Ijmker, Han Houdijk, Claudine J. C. Lamoth, Peter J. Beek, and Lucas H. V. van der Woude. Energy cost of balance control during walking decreases with external stabilizer stiffness independent of walking speed. Journal of Biomechanics, 46(13):2109-2114, 2013.
[7] Dimitri Jeltsema. Memory elements: A paradigm shift in Lagrangian modeling of electrical circuits. IFAC Proceedings Volumes, 45(2):25-30, 2012.
[8] Matthew C. Kelley. Acoustic absement in detail: Quantifying acoustic differences across time-series representations of speech data. arXiv preprint arXiv:2304.06183, 2023.
[9] Matthew C. Kelley and Benjamin V. Tucker. Using acoustic distance and acoustic absement to quantify lexical competition. The Journal of the Acoustical Society of America, 151(2):1367-1379, 2022.
[10] Edda Klipp and Reinhart Heinrich. Competition for enzymes in metabolic pathways: Implications for optimal distributions of enzyme concentrations and for the distribution of flux control. BioSystems, 54(1-2):1-14, 1999.
[11] Li Khim Kwah, Rafael Z. Pinto, Joanna Diong, and Robert D. Herbert. Reliability and validity of ultrasound measurements of muscle fascicle length and pennation in humans: A systematic review. Journal of Applied Physiology, 114(6):761-769, 2013.
[12] Steve Mann, M. L. Hao, M. Tsai, M. Hafezi, A. Azad, and F. Keramatimoezabad. Effectiveness of integral kinesiology feedback for fitness-based games. In 2018 IEEE Games, Entertainment, Media Conference (GEM), pages 1-9. IEEE, 2018.
[13] Steve Mann and Ryan Janzen. Integral kinematics (time-integrals of distance, energy, etc.) and integral kinesiology. In 2014 IEEE Games, Entertainment, Media Conference (GEM), pages 1-8. IEEE, 2014.
[14] Steve Mann, Ryan Janzen, and Mark Post. Hydraulophone design considerations: Absement, displacement, and velocity-sensitive music keyboard in which each key is a water jet. In Proceedings of the 14th ACM International Conference on Multimedia, pages 519-528. ACM, 2006.
[15] Jennifer L. Miles-Chan and Abdul G. Dulloo. Posture allocation revisited: Breaking the sedentary threshold of energy expenditure for obesity management. Frontiers in Physiology, 8, 2017.
[16] Cathriona R. Monnard and Jennifer L. Miles-Chan. Energy cost of standing in a multi-ethnic cohort: Are energy-savers a minority or the majority? PLOS ONE, 12(1):1-12, 2017.
[17] Hala Mtaweh, Sanna Tuira, Armin A. Floh, and Christopher S. Parshuram. Indirect calorimetry: History, technology, and application. Frontiers in Pediatrics, 6:257, 2018.
[18] Luis G. Rosa, Jonathan S. Zia, Omer T. Inan, and Gregory S. Sawicki. Machine learning to extract muscle fascicle length changes from dynamic ultrasound images in real-time.
PLOS ONE, 16(5):1-17, 2021.
[19] Yih-Min Wu, Himanshu Mittal, Yueh-Ho Lin, and Yu-Hsuan Chang. Magnitude determination using cumulative absolute absement for earthquake early warning. Geoscience Letters, 10(1):1, 2023.
# Electron-positron pair creation induced by multi-pulse train of electric fields: effect of randomness in time-delay

Abstract

We investigate the creation of electron-positron pairs (EPPs) in a sequence of alternating-sign, time-dependent electric field pulse trains by solving the quantum Vlasov equations. Specifically, we focus on Sauter-like pulse trains with random time delays between successive pulses, drawn from a Gaussian distribution wherein the extent of fluctuations is controlled by the standard deviation $\sigma_T$ of the distribution. We find that increasing $\sigma_T$ leads to a dramatic transformation in the longitudinal momentum spectrum. The well-known fringe pattern, akin to that in the multi-slit interference, gets significantly modified. The averaged spectra exhibit a robust Gaussian-like envelope with residual oscillations, which are much more prominent in the central momentum region. Notably, we find that in certain cases, stochastic time delays lead to a pronounced enhancement in the central peak of the distribution function for a pulse train containing $N$ pulses. For example, for $N = 20$ pulses, $\sigma_T \approx 31[m^{-1}]$ (about $17\%$ of the mean time delay) yields nearly a tenfold increase in the central peak, which, for $\sigma_T \approx 50[m^{-1}]$ (about $27\%$ of the mean time delay), scales up to $10^3$. This may open up new possibilities for optimizing multi-pulse field configurations and guide future experimental designs aimed at maximizing EPPs creation.

Keywords: Schwinger mechanism, Interference effect, multi-pulse trains, pair creation, randomness

# 1. Introduction

The spontaneous creation of electron-positron pairs (EPPs) from vacuum in the presence of intense external fields is a fundamental prediction of quantum electrodynamics (QED). However, observing this effect experimentally remains challenging due to the significant exponential suppression, given by $\exp(-\pi E_c / E)$, where $E_{\mathrm{c}} = m^{2} / |e| \approx 1.3\times 10^{16}\,\mathrm{V/cm}$ represents the Schwinger critical field strength, $m$ is the electron mass, $e$ is the electron charge (the units $\hbar = c = 1$ are used) and $E$ is the applied field strength. The laser intensity needed to reach this threshold is approximately $I_{\mathrm{c}}\approx 10^{29}\,\mathrm{W/cm}^2$, which greatly surpasses the capabilities of the current conventional laboratory systems. Nonetheless, significant progress in high-intensity laser technology and the construction of cutting-edge laser facilities is steadily closing the gap, making the experimental observation of this phenomenon increasingly feasible. This progress continues to inspire extensive theoretical and experimental efforts. Currently, laser systems have reached peak intensities of approximately $10^{23}\,\mathrm{W/cm}^2$.

EPPs creation can occur through various mechanisms under strong electromagnetic fields. One example is the Bethe-Heitler mechanism, in which a super-intense laser interacts with the Coulomb field of a nucleus, resulting in pair creation. Another widely studied mechanism is the Breit-Wheeler mechanism, where a high-energy gamma photon collides with an ultra-strong laser field to produce pairs. Notably, the only direct experimental observation of positron production via such mechanisms was carried out at the Stanford Linear Accelerator Center (SLAC), where a 46.6 GeV electron beam was made to interact with a terawatt laser pulse of intensity around $10^{18}\,\mathrm{W/cm}^2$.
In that setup, positrons were generated following nonlinear Compton scattering, which produced photons that then triggered the Breit-Wheeler process. Apart from the nonlinear Breit-Wheeler process, other strong-field QED effects, such as nonlinear Compton scattering and strong-field-induced vacuum pair production, have also attracted substantial attention.

To investigate pair creation in different external field configurations, researchers have developed various theoretical approaches. These studies primarily focus on reducing the required field strength and enhancing the yield of produced pairs. Semiclassical techniques such as the generalized Wentzel-Kramers-Brillouin (WKB) approximation and the worldline instanton method have been widely used to describe pair production probabilities. Quantum kinetic approaches, including the quantum Vlasov equation (QVE), the low-density approximation, and the Dirac-Heisenberg-Wigner (DHW) formalism, offer more detailed quantum descriptions. Brezin and Itzykson analyzed pair production in a time-dependent, spatially homogeneous electric field using the WKB approximation, deriving probabilities based on the Keldysh adiabaticity parameter, $\gamma = m\omega / |e|E$, which determines the interaction regime, with $\omega$ being the frequency of the electric field. Over the past decade, investigations have shown that the momentum spectrum of created particles is highly sensitive to the profile of the applied electric field and its parameters, particularly in the tunneling regime. This sensitivity extends to the multiphoton and the intermediate regimes too. Furthermore, recent studies indicate that these dependencies significantly influence the time evolution of the momentum spectrum as well; see for details.

A major advancement in the study of vacuum pair creation has been the realization that structured multi-pulse electric field configurations can dramatically enhance pair production due to quantum interference effects. Analogous to the optical double-slit experiment, time-domain multiple-slit interference has emerged as a key mechanism that both increases the total yield and shapes the features of the momentum spectrum of the created pairs. Akkermans and Dunne were the first to demonstrate Ramsey-type multiple-time-slit interference, showing that in an alternating-sign $N$-pulse electric field, the central peak of the momentum distribution scales as $N^2$, indicating constructive interference. Building on this concept, Kohlfürst explored various multi-pulse configurations, illustrating how precise pulse shaping can be used to optimize the pair production rate. In addition, combinations of different pulse types have been shown to enhance the pair creation. For instance, the dynamically assisted Schwinger mechanism, which involves the interplay of a strong, slowly varying pulse with a weak, rapidly oscillating one, has been demonstrated to significantly boost pair production.

Among the other field configurations, multi-pulse electric fields comprising sequences of time-dependent pulses with alternating signs have garnered particular interest. In such setups, not only do parameters of individual pulses, such as amplitude, duration, and shape, influence EPPs production, but the temporal spacing between pulses also plays a decisive role. Prior studies have shown that the inter-pulse time delay can significantly affect the momentum distribution of the produced pairs. Notably, Ref.
reports that the total pair production probability exhibits damped oscillations as a function of the time interval between pulses.

The aforesaid theoretical models assume pulse trains with a uniform inter-pulse delay. However, real-world experimental conditions may deviate from this idealization. Fluctuations in pulse timing can arise due to limitations in synchronization and control, particularly in sequences involving a large number of pulses subject to shot-to-shot variations. One may also consider experiments wherein a pulse train is derived from multiple sources which may not be well synchronized. This raises a key question: how do random fluctuations in pulse timing influence EPPs production? Specifically, does the breakdown of perfect coherence diminish the enhancements typically observed in the usual case of uniform inter-pulse delay, or can certain stochastic realizations of random temporal spacing in the multi-pulse train unexpectedly enhance pair creation? What happens to the momentum spectrum? How different is it, when averaged over many realizations of the random inter-pulse delays, from that of a single realization?

To address these questions, we investigate vacuum pair production under field configurations with random time delays between successive pulses, which are drawn from a Gaussian distribution wherein the degree of randomness is quantified by the standard deviation $\sigma_T$ of the distribution. We set $\mu_T = 180.32[m^{-1}]$ to facilitate direct comparison with earlier studies of regular pulse trains, but we also examine the role of $\mu_T$ in the stochastic regime. While $\mu_T$ influences individual realizations, we show in the Supplementary Material that the ensemble-averaged momentum spectrum is robust to small variations in $\mu_T$. The key parameter governing the transition from coherent to incoherent spectra is $\sigma_T$. It is found that increasing the value of $\sigma_T$ has a strong influence on the longitudinal momentum spectrum. The well-known fringe-like structure in the spectrum of the usual non-stochastic counterpart, arising due to the interference effect intrinsic to strong-field QED (for example, in Ref.), gets modified. The momentum spectrum, when averaged over the realizations of the time delays with a given $\sigma_T$, exhibits a robust Gaussian-like envelope with residual oscillations which are much more pronounced in the central momentum region. Intriguingly, sufficiently large values of $\sigma_T$ lead to a pronounced enhancement of the central peak in the momentum spectrum. For instance, in the case of $N = 20$, a value of $\sigma_T \approx 31[m^{-1}]$ yields nearly a tenfold increase, while around $\sigma_T \approx 50[m^{-1}]$ the enhancement reaches almost three orders of magnitude.

This paper is organized as follows. In Sec. 2, we introduce the theoretical formalism based on the quantum Vlasov equation. In Sec. 3, we present and discuss our numerical results. In Sec. 4, we provide a brief conclusion and outlook.

# 2. Theoretical Framework

In the subcritical field regime, $E < E_{\mathrm{c}}$, pair production and the corresponding back-reaction current are minimal. This allows us to neglect both collision effects and the internal electric field. Moreover, the spatial focusing scale of typical laser pulses is usually much larger than the electron Compton wavelength, which characterizes the spatial extent relevant for vacuum pair creation, so spatial inhomogeneities of the background field are strongly suppressed.
Nevertheless, we emphasize that spatial variations and accompanying magnetic fields can, in general, have a significant impact on electron-positron pair production. This has been demonstrated in several works, including Refs.. In the present work we restrict ourselves to a simplified but widely used scenario: a spatially uniform, purely time-dependent electric field. When two counterpropagating laser pulses form a standing wave with sufficiently large beam waist, the associated magnetic field near the antinodes can be neglected, effectively giving $B(t) = 0$. Consequently, the background field can be modeled as $\mathbf{E}(t) = (0,0,E(t))$. Adopting the temporal gauge $A^0(t) = 0$, the corresponding four-potential is given by $A^{\mu}(t) = (0,0,0,A(t))$, where $A(t)$ is related to the electric field as $E(t) = -\dot{A}(t)$.

Pair creation from the vacuum in the presence of such an external electric field has been studied using the quantum Vlasov equation (QVE) by many researchers; see, for example, and references therein. The QVE is a standard tool within the framework of quantum kinetic theory, and its detailed derivation is readily available in the literature, e.g., in Ref.. Here, for the sake of completeness, we provide only the essential equations and the notations. Starting from the Dirac equation in a homogeneous electric field and employing a time-dependent Bogoliubov transformation, the QVE can be formulated as an integro-differential equation that governs the evolution of the single-particle momentum distribution function $f(\pmb{p}, t)$:

$$
\frac{df(\pmb{p}, t)}{dt} = \frac{\lambda(\pmb{p}, t)}{2} \int_{t_{0}}^{t} dt'\, \lambda(\pmb{p}, t') \left[1 - 2 f(\pmb{p}, t')\right] \cos\left[\Theta(\pmb{p}, t, t')\right], \tag{1}
$$

where $\lambda(\pmb{p},t) = \frac{eE(t)\varepsilon_{\perp}}{\omega^{2}(\pmb{p},t)}$ is the amplitude of the vacuum transition, while $\Theta(\pmb{p},t,t') = 2\int_{t'}^{t} d\tau\, \omega(\pmb{p},\tau)$ stands for the dynamical phase, describing the vacuum oscillations modulated by the external field. The quasiparticle energy $\omega(\pmb{p},t)$, the transverse energy $\varepsilon_{\perp}$ and the longitudinal quasiparticle momentum $P_{3}$ are defined as:

$$
\omega(\pmb{p}, t) = \sqrt{\varepsilon_{\perp}^{2} + P_{3}^{2}(p_{3}, t)}, \tag{2}
$$

$$
\varepsilon_{\perp} = \sqrt{m^{2} + p_{\perp}^{2}}, \tag{3}
$$

$$
P_{3}(t) = p_{3} - eA(t), \tag{4}
$$

where $\pmb{p} = (p_{\perp}, p_3)$ represents the canonical momentum. Here, $p_{\perp} = |\pmb{p}_{\perp}| = \sqrt{p_1^2 + p_2^2}$ is the modulus of the momentum component perpendicular to the electric field, and $p_3$ stands for the momentum component parallel to the electric field $E(t)$.

It is important to note that the distribution function $f(\pmb{p}, t)$ represents the number of real particles created with momentum $\pmb{p}$ in the asymptotic limit $t \to +\infty$. This limit corresponds to a physical scenario in which the external laser field vanishes. Our analysis focuses on the distribution function in the asymptotic regime.

Eq. (1) is difficult to solve numerically due to the presence of a rapidly oscillating phase term in the integrand.
It is, therefore, convenient to recast this equation as an equivalent system of three coupled ordinary differential equations:

$$
\frac{df(\pmb{p}, t)}{dt} = \frac{1}{2} \lambda(\pmb{p}, t)\, u(\pmb{p}, t), \tag{5}
$$

$$
\frac{du(\pmb{p}, t)}{dt} = \lambda(\pmb{p}, t) \left[1 - 2 f(\pmb{p}, t)\right] - 2 \omega(\pmb{p}, t)\, v(\pmb{p}, t), \tag{6}
$$

$$
\frac{dv(\pmb{p}, t)}{dt} = 2 \omega(\pmb{p}, t)\, u(\pmb{p}, t). \tag{7}
$$

Together with the initial conditions $f(\pmb{p}, -\infty) = u(\pmb{p}, -\infty) = v(\pmb{p}, -\infty) = 0$, this set of equations becomes a well-defined and numerically solvable initial value problem.

Figure 1: The longitudinal momentum spectrum of created EPPs in an alternating-sign four-pulse electric field $E(t)$ for the non-stochastic case $(\sigma_T = 0)$. The transverse momentum is set to zero, and all quantities are expressed in units of the electron mass. The field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, and $\mu_T = 180.32[m^{-1}]$.

# 3. Results

We consider a field configuration composed of a sequence of alternating-sign, time-dependent Sauter electric pulses referred to as a multi-pulse train:

$$
E(t) = \sum_{k = 1}^{N} (-1)^{k - 1} E_{0} \operatorname{sech}^{2}\left(\frac{t + \left(k - \frac{N + 1}{2}\right) T_{k}}{\tau}\right), \tag{8}
$$

where $E_0$ denotes the amplitude of each electric field pulse, and $N$ is the total number of pulses in the pulse train. The random variable $T_k$ specifies the temporal position of the $k$th pulse, whose center is located at $\left(\frac{N + 1 - 2k}{2}\right) T_k$. Thus, the timing of pulses in the pulse train becomes stochastic. This electric field should be compared with those in Refs. where all the pulses in the pulse train are regularly spaced with a fixed time delay. Henceforth, we shall refer to such a pulse train as regular or non-stochastic. The electric field in Eq. (8) reduces to the one considered in Refs. upon replacing the random variables $\{T_k\}$ by a constant which is the fixed time delay between successive pulses. Note that the time delay between the $i$th and the $j$th pulse for the electric field considered here (Eq. (8)) depends on the random variables $T_i$ and $T_j$. The corresponding vector potential is given by

$$
A(t) = - E_{0} \tau \left[1 + \sum_{k = 1}^{N} (-1)^{k - 1} \tanh\left(\frac{t + \left(k - \frac{N + 1}{2}\right) T_{k}}{\tau}\right)\right]. \tag{9}
$$

We take the random variables $\{T_k\}$ to be independent and identically distributed (IID) according to a Gaussian distribution having mean $\mu_T$ and variance $\sigma_T^2$. Specifically, we generate $\{T_k\}$ using MATLAB's built-in randn function, which returns an array of pseudo-random numbers drawn from the standard normal distribution with zero mean and unit variance (i.e., unbounded and symmetric about zero). Accordingly,

$$
\{T_{k}\} = \mu_{T} + \sigma_{T} \times \operatorname{randn}(1, N). \tag{10}
$$

It is evident from Eq. (10) that the stochastic pulse train with random variables $\{T_k\}$ would turn into a non-stochastic one, with $\mu_T$ being the delay between the successive pulses, when $\sigma_T$ is set to zero. In other words, for the stochastic pulse train, the fluctuation in the time delay about its mean $\mu_T$ is governed by the standard deviation $\sigma_T$.
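A minimal numerical sketch of this setup is given below (it is not the authors' code): it draws one realization of $\{T_k\}$ as in Eq. (10), builds $E(t)$ and $A(t)$ from Eqs. (8)-(9), and integrates Eqs. (5)-(7) with SciPy's solve_ivp. The seed, momentum grid, time window, and tolerances are arbitrary illustrative choices, and the loop over $p_3$ can be slow at tight tolerances.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Electron-mass units (m = 1, e = 1); parameters as quoted in the text
m, e = 1.0, 1.0
E0, tau, N = 0.1, 20.0, 4                 # E0 = 0.1 E_c, tau = 20 [1/m], four pulses
mu_T, sigma_T = 180.32, 15.0              # mean and spread of the random delays [1/m]

rng = np.random.default_rng(1)            # hypothetical seed (the paper used MATLAB's randn)
T_k = mu_T + sigma_T * rng.standard_normal(N)           # Eq. (10)
shift = (np.arange(1, N + 1) - (N + 1) / 2) * T_k       # (k - (N+1)/2) T_k
sign = (-1.0) ** np.arange(N)                           # (-1)^(k-1)

def E_field(t):
    """Multi-pulse Sauter train, Eq. (8)."""
    return np.sum(sign * E0 / np.cosh((t + shift) / tau) ** 2)

def A_pot(t):
    """Vector potential, Eq. (9), satisfying E = -dA/dt."""
    return -E0 * tau * (1.0 + np.sum(sign * np.tanh((t + shift) / tau)))

def qve_rhs(t, y, p3, p_perp):
    """Right-hand side of the coupled system, Eqs. (5)-(7)."""
    f, u, v = y
    eps_perp = np.hypot(m, p_perp)
    P3 = p3 - e * A_pot(t)
    omega = np.hypot(eps_perp, P3)
    lam = e * E_field(t) * eps_perp / omega ** 2
    return [0.5 * lam * u, lam * (1.0 - 2.0 * f) - 2.0 * omega * v, 2.0 * omega * u]

def spectrum(p3_grid, p_perp=0.0, t_span=(-600.0, 600.0)):
    """Asymptotic distribution f(p3) for one realization of the pulse train."""
    f_out = []
    for p3 in p3_grid:
        sol = solve_ivp(qve_rhs, t_span, [0.0, 0.0, 0.0], args=(p3, p_perp),
                        method="DOP853", rtol=1e-10, atol=1e-12)
        f_out.append(sol.y[0, -1])
    return np.array(f_out)

p3_grid = np.linspace(-0.5, 0.5, 101)
f_p3 = spectrum(p3_grid)                  # longitudinal momentum spectrum of this realization
```

Averaging `f_p3` over many independent draws of `T_k` (fresh random seeds) gives ensemble-averaged spectra of the kind discussed below.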
We fix the mean as $\mu_T = 180.32[m^{-1}]$, which is exactly the same as that considered for the non-stochastic pulse train in Ref.. To demonstrate the effect of randomness in the stochastic pulse train on the longitudinal momentum spectrum of the created pairs, we numerically solve the system of first-order ordinary differential equations given in Eqs. (5)-(7). We plot the resulting momentum spectra for both the stochastic and non-stochastic cases. The latter, which is widely studied in the literature, serves as a reference to identify the modifications introduced by the randomness.

In Figure 1, the longitudinal momentum spectrum of created EPPs is shown for the non-stochastic case ($\sigma_T = 0$) of an alternating-sign $N$-pulse electric field $E(t)$. The spectrum exhibits a characteristic $N$-slit interference fringe-like structure consisting of several bands of maxima and minima. Most pairs are contained in the central band located around $p_3 \approx 0$, which has many peaks of nearly equal heights exhibiting the well-known $N^2$ scaling. Furthermore, the central band has a gradually varying and broad momentum profile. On the other hand, the successive side bands, located symmetrically on either side of the central band, have far fewer peaks. The farther the side band from the central band, the lower the peak height of the maxima therein. Therefore, with increasing value of $|p_3|$, the fringes gradually vanish asymptotically. Overall, the spectrum exhibits a highly regular, symmetric interference pattern, with the central broad band $(-0.2[m] < p_3 < 0.2[m])$ dominating the distribution.

Figure 2: The longitudinal momentum spectrum of created EPPs in an alternating-sign $N$-pulse electric field $E(t)$ with $N = 4$. The blue line shows a stochastic realization with $\sigma_T = 15[m^{-1}]$, while the grey line represents the non-stochastic case. The three panels correspond to independent realizations (Runs I-III). The value of transverse momentum is taken to be zero, and all the units are taken in electron mass units. The electric field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, and $\mu_T = 180.32[m^{-1}]$.

In contrast to the non-stochastic case, Figure 2 shows the spectra for the stochastic case ($\sigma_{T} = 15[m^{-1}]$) with $N = 4$ for three independent realizations (Runs I-III). Introducing randomness modifies the regular $N$-slit interference pattern observed in the non-stochastic case. The spectra lose their symmetry about $p_3 = 0$, and the central band becomes distorted in a realization-dependent manner. Although a broad Gaussian-like envelope persists, the peaks fragment into irregular structures with uneven heights and spacings. The differences between Runs I-III highlight run-to-run fluctuations, reflecting the sensitivity of the momentum distribution to stochastic variations in the time delays. The corresponding values of the random variables $\{T_k\}$ are listed in Table 1.

In Run I, the interference structure around the central region $(-0.2[m] < p_3 < 0.2[m])$ is strongly modified compared to the non-stochastic case. The sharp, well-ordered fringe pattern with evenly spaced maxima and minima is replaced by irregular sub-bands. The central peak at $p_3 = 0$ is noticeably suppressed, and the minima between fringes no longer drop to zero, reducing overall fringe visibility. On the left side $(-0.2[m] < p_3 < 0[m])$, relatively strong oscillations remain, with amplitudes comparable to those in the deterministic case.
In contrast, on the right side $(0[m] < p_3 < 0.2[m])$ , the fringes become weaker and more irregular. Side-band peaks at $p_3 \approx -0.4[m]$ and $-0.3[m]$ remain visible, whereas their positive- $p_3$ counterparts are distorted and less pronounced. In Run II, the central interference band fragments into fewer sub-bands than in Run I, but the peak heights remain comparable to the non-stochastic case (grey curve). The spectrum is still dominated by the central maximum at $p_3 \approx 0$ , much like in the deterministic case. Some side-band structures survive, but their peaks are noticeably suppressed in height and lose their regular spacing. As a result, the distribution becomes asymmetric about $p_3 = 0$ , with the side-bands appearing less sharp and more irregular compared to the non-stochastic spectrum. In Run III, the asymmetry is most pronounced. One side of the spectrum (negative $p_3$ ) exhibits stronger and denser peaks, while the other side (positive $p_3$ ) is highly irregular and suppressed. The central band is significantly distorted, with one side dominating the distribution. This extreme imbalance demonstrates the strong sensitivity of the momentum spectrum to random variations in inter-pulse delays. Overall, randomness in the inter-pulse delays $(\sigma_T = 15[m^{-1}])$ modifies the highly regular fringe pattern of the non-stochastic case. The randomness not only destroys the ordered fringe and side-band hierarchy but also induces a clear left-right asymmetry across all runs. Although band-like interference features persist, their internal structure becomes irregular and asymmetric, with the degree of distortion varying from run to run. These run-to-run fluctuations highlight the stochastic nature of the driving field and its impact on the symmetry of the momentum distribution. In Figure 3, the spread of randomly distributed inter-pulse delays ( $\sigma_T = 45[m^{-1}]$ ) further amplifies the stochasticity in the $N = 4$ pulse trains, with the corresponding random variables $T_{k}$ tabulated in Table 1. Compared to the non-stochastic case and the weaker randomness ( $\sigma_T = 15[m^{-1}]$ ), the spectra exhibit fragmented peaks and reduced fringe visibility. In Run I, the central band $(-0.2[m] < p_3 < 0.2[m])$ remains the most prominent feature; however, instead of exhibiting evenly spaced oscillations, it fragments into clusters of irregular peaks, while the side bands become distorted and lose symmetry about $p_3 = 0$ , highlighting how stronger randomness enhances fragmentation and disrupts the interference hierarchy. Run II shows a pronounced left-right asymmetry: peaks for $p_3 < 0$ are enhanced, whereas those for $p_3 > 0$ are suppressed, and interference features are unevenly spaced, reflecting a breakdown of regular phase correlations. In Run III, the overall envelope resembles the deterministic case, yet the fine structure is highly disordered; individual peaks survive but no longer form a recognizable fringe pattern, and the asymmetry is weaker than in Run II. In general, increasing the inter-pulse delay spread to $\sigma_T = 45[m^{-1}]$ makes the momentum distribution highly sensitive to randomness: the spectra become fragmented, asymmetric, and run-dependent, demonstrating the strong stochastic influence of the driving field. The spectra shown in Figure 4 correspond to the strongest degree of randomness considered, namely $\sigma_T = 75[m^{-1}]$ . 
At this level of stochasticity, the longitudinal momentum spectra lose the well-separated bands of maxima and minima observed at smaller $\sigma_T$. Instead of regular clusters, the distributions exhibit isolated peaks scattered across a broad momentum range. Nevertheless, it is notable that the magnitude of the central peak remains comparable, or in some cases even slightly enhanced, compared to spectra at lower $\sigma_T$. In Run I, the spectrum exhibits a dense arrangement of sharp peaks centered around the central region ($p_3 \approx 0$), while the overall fringe-like band structure has disappeared. In Run II, the central peak is the most pronounced among the three realizations, with a strong maximum at $p_3 \approx 0$ accompanied by asymmetries more visible on one side of the spectrum. In Run III, the central region becomes more fragmented. Instead of a single dominant peak, multiple maxima of comparable height appear near $p_3 = 0$. This creates a visibly more irregular and distorted spectrum compared to Runs I and II. The side regions are highly suppressed, showing little evidence of structured oscillations. Taken together, the three realizations show that at $\sigma_T = 75[m^{-1}]$, the spectral structure becomes entirely run-dependent. Although all cases retain a concentration of spectral weight near the origin, the detailed arrangement of peaks varies significantly from run to run.

Figure 3: Same as in Fig. 2, except for an alternating-sign four-pulse electric field with $\sigma_T = 45[m^{-1}]$.

Table 1: Tabulated values of the random variables $\{T_k\}$ [see Eq. (10)] with $\mu_T = 180.32[m^{-1}]$ for three independent realizations (Runs I-III). The pulse-centre columns list the corresponding pulse centers, located at $\frac{3}{2} T_1$, $\frac{1}{2} T_2$, $-\frac{1}{2} T_3$, $-\frac{3}{2} T_4$ for $N = 4$.

| $\sigma_T$ [m$^{-1}$] | $k$ | $T_k$ (Run I) | Pulse centre (Run I) | $T_k$ (Run II) | Pulse centre (Run II) | $T_k$ (Run III) | Pulse centre (Run III) |
|---|---|---|---|---|---|---|---|
| 15 | 1 | 172.25 | 258.37 | 193.33 | 289.99 | 194.96 | 292.44 |
| 15 | 2 | 185.38 | 92.69 | 165.38 | 82.69 | 172.47 | 86.24 |
| 15 | 3 | 160.86 | -80.43 | 194.08 | -97.04 | 182.97 | -91.48 |
| 15 | 4 | 191.65 | -287.47 | 171.45 | -257.17 | 207.99 | -311.98 |
| 45 | 1 | 156.10 | 234.15 | 219.35 | 329.02 | 224.24 | 336.36 |
| 45 | 2 | 195.50 | 97.75 | 135.50 | 67.75 | 156.78 | 78.39 |
| 45 | 3 | 121.93 | -60.97 | 221.60 | -110.80 | 188.27 | -94.13 |
| 45 | 4 | 214.30 | -321.45 | 153.70 | -230.55 | 263.32 | -394.98 |
| 75 | 1 | 139.95 | 209.93 | 245.36 | 368.04 | 253.51 | 380.28 |
| 75 | 2 | 205.62 | 102.81 | 105.61 | 52.81 | 141.07 | 70.54 |
| 75 | 3 | 83.01 | -41.51 | 249.12 | -124.56 | 193.56 | -96.78 |
| 75 | 4 | 236.95 | -355.44 | 135.96 | -203.94 | 318.65 | -477.97 |
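As a quick consistency check on Table 1, the pulse centers follow from the tabulated $T_k$ through the prefactors $(N + 1 - 2k)/2$; the short sketch below (with the $T_k$ values copied from Run I at $\sigma_T = 15[m^{-1}]$) reproduces the corresponding pulse-centre column.

```python
# Consistency check on Table 1: pulse centers ((N + 1 - 2k)/2) * T_k for N = 4,
# using the Run I values at sigma_T = 15 [m^-1]; the printout should reproduce
# the corresponding "Pulse centre" column (258.37, 92.69, -80.43, -287.47).
T_run1 = [172.25, 185.38, 160.86, 191.65]   # T_1 ... T_4 [m^-1], copied from Table 1
N = 4
for k, Tk in enumerate(T_run1, start=1):
    centre = (N + 1 - 2 * k) / 2 * Tk       # prefactors 3/2, 1/2, -1/2, -3/2
    print(f"k = {k}:  T_k = {Tk:7.2f}  ->  centre = {centre:8.2f}")
```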
For $N = 4$ pulses, a comparative analysis of all figures reveals a distinct trend: as $\sigma_T$ increases, the spectral profile evolves from a well-defined interference pattern into a progressively incoherent and asymmetric distribution. The case $\sigma_T = 0$ represents the fully deterministic limit, serving as a reference for coherent particle production with a symmetric spectrum around the central momentum. Intermediate values ( $\sigma_T = 15$ and $45[m^{-1}]$ ) show a gradual disappearance of the fringe-like bands of maxima and minima, accompanied by asymmetries in the peak heights and positions, indicative of increasing temporal randomness. At $\sigma_T = 75[m^{-1}]$ , stochastic effects dominate, and the spectrum becomes strongly asymmetric, with the fringe-like features completely washed out. As discussed above, the longitudinal momentum spectra are highly sensitive to variations in inter-pulse delays. Increasing randomness progressively smears the well-defined fringe-like patterns of maxima and minima, suppressing several peaks and highlighting the effect of temporal disorder on coherent pair production. To quantify the statistical impact of this stochasticity, we now analyze the averaged momentum spectrum, obtained by ensemble-averaging over many numerical runs with randomized time delays. In particular, we consider ensemble averages for two representative values of randomness, $\sigma_T = 15[m^{-1}]$ and $\sigma_T = 45[m^{-1}]$ . Although these values do not span the entire parameter space, they are sufficient to capture the essential trends. Unlike earlier results, which reflected spectra from individual random realizations, ensemble averaging is crucial for extracting statistically robust and physically meaningful features. This approach also mirrors realistic experimental scenarios, where multiple experimental shots must be accumulated and averaged to obtain reliable signatures from stochastic sources. Figure 5 shows the momentum spectrum averaged over different numbers of numerical runs for $\sigma_T = 15[m^{-1}]$ . For 10 realizations (Fig. 5(a)), the central region $(-0.4[m] \lesssim p_3 \lesssim 0.4[m])$ displays sharp oscillations accompanying the largest peak, which reaches approximately $8.1 \times 10^{-13}$ . These oscillations, arising from quantum interference effects also seen in individual realizations (see Fig. 2), are not fully suppressed by averaging, leaving pronounced fluctuations superimposed on an overall Gaussian-like profile. As the number of runs increases, the spectrum becomes progressively smoother and more Gaussian-like. For 50 runs [Fig. 5(b)], the central peak at $p_3 \approx 0$ decreases slightly to approximately $5.2 \times 10^{-13}$ , while the irregular oscillations are significantly reduced. At 100 runs (Fig. 5(c)), the spectrum becomes smooth. The irregular fluctuations observed at smaller sample sizes are nearly suppressed. The central peak stabilizes around $5.4 \times 10^{-13}$ , consistent with the 50-run case, indicating statistical convergence. Although faint ripples remain, they are nearly regular and of much smaller amplitude. At this stage, the spectrum clearly exhibits a dominant Gaussian envelope. From the above discussion of the averaged momentum spectrum, it is evident that residual oscillations persist—particularly in the central region—even after averaging over a reasonably large number of numerical runs with randomized inter-pulse delay configurations. 
In this sense, the spectrum is characterized by a broad Gaussian-like envelope, on top of which oscillations remain.

Figure 4: Same as in Fig. 2, except for an alternating-sign four-pulse electric field with $\sigma_T = 75[m^{-1}]$.

Figure 5: Averaged momentum spectra $\bar{f}(p_3)$ computed over different numbers of random samples with randomized time delays for an alternating-sign four-pulse electric field $E(t)$. Each panel corresponds to a different number of averaging runs, with the black dashed curve indicating a Gaussian fit. The field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, $\mu_T = 180.32[m^{-1}]$, and $\sigma_T = 15[m^{-1}]$.

To assess the convergence behavior of the averaged momentum spectrum with increasing numbers of random realizations, we quantify the statistical convergence of $\bar{f}(p_3)$ using nonlinear least-squares fits to a Gaussian model, $$ \bar {f} (p _ {3}) = \frac {\mathcal {N} _ {0}}{\sqrt {2 \pi \mathcal {S} ^ {2}}} \exp \left[ - \frac {(p _ {3} - \bar {p} _ {3}) ^ {2}}{2 \mathcal {S} ^ {2}} \right], \tag {11} $$ and compute the reduced chi-squared $\left(\chi_{\mathrm{red}}^{2}\right)$ for each fit. The fitted parameters and goodness-of-fit metrics are summarized in Table 2.

Table 2: Fitted parameters and reduced chi-squared values for the Gaussian model applied to the averaged longitudinal momentum spectra, computed over different numbers of sample runs with randomized inter-pulse delays. The parameters $\mathcal{N}_0$, $\bar{p}_3$, and $\mathcal{S}$ represent the fitted peak amplitude, peak position, and spectral width, respectively. Quoted uncertainties correspond to $1\sigma$ standard errors.

| Number of runs | $\mathcal{N}_0$ ($\times 10^{-13}$) | $\bar{p}_3$ ($\times 10^{-4}$) | $\mathcal{S}$ | $\chi^2_{\mathrm{red}}$ ($\times 10^{-2}$) |
|---|---|---|---|---|
| 10 runs | 3.8516 ± 0.0069 | 4.2761 ± 1.41 | 0.28128 ± 0.00081 | 8.78 |
| 30 runs | 3.8995 ± 0.0039 | -0.7363 ± 0.79 | 0.28135 ± 0.00046 | 2.78 |
| 50 runs | 3.9168 ± 0.0034 | 1.0714 ± 0.68 | 0.28111 ± 0.00039 | 2.06 |
| 70 runs | 3.8959 ± 0.0028 | -1.1407 ± 0.56 | 0.28157 ± 0.00033 | 1.40 |
| 100 runs | 3.8934 ± 0.0025 | -1.7805 ± 0.50 | 0.28185 ± 0.00029 | 1.12 |

The results in Figure 5 show the averaged momentum spectra with Gaussian fits (black dashed curves), illustrating the emergence of statistical convergence as the number of runs used for averaging increases. For 10 runs, strong fluctuations remain on top of the Gaussian envelope, resulting in a large peak-position uncertainty $(\bar{p}_3 = 4.2761 \pm 1.41 \times 10^{-4})$ and a relatively high reduced chi-squared $(\chi_{\mathrm{red}}^2 = 8.78 \times 10^{-2})$, as summarized in Table 2. With 30-50 runs, the magnitude of fluctuations decreases significantly, leading to smaller parameter uncertainties and improved fit quality $(\chi_{\mathrm{red}}^2 \sim 2 \times 10^{-2})$. At 50 runs, the spectrum is smoother, the peak position approaches zero $(\bar{p}_3 = 1.0714 \pm 0.68 \times 10^{-4})$, and the width is essentially converged $(S = 0.28111 \pm 0.00039)$, as reported in Table 2. For 70-100 runs, the fitted parameters show only minor changes, and the reduced chi-squared decreases further to $\sim 1 \times 10^{-2}$, demonstrating excellent agreement with the Gaussian model.
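As an illustration of the fitting procedure just described, the sketch below fits the Gaussian model of Eq. (11) with scipy.optimize.curve_fit and evaluates a reduced chi-squared. The arrays p3_grid and f_avg are assumed to come from the QVE integration sketched earlier; the initial guess and the unweighted definition of $\chi^2_{\mathrm{red}}$ are assumptions, since the weighting used for Tables 2-5 is not spelled out here.

```python
# Minimal sketch (not the authors' fitting script): the Gaussian model of Eq. (11)
# fitted to an ensemble-averaged spectrum f_avg sampled on the grid p3_grid.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_model(p3, N0, p3_bar, S):
    """Eq. (11): normalization N0, peak position p3_bar, spectral width S."""
    return N0 / np.sqrt(2.0 * np.pi * S ** 2) * np.exp(-(p3 - p3_bar) ** 2 / (2.0 * S ** 2))

def fit_gaussian(p3_grid, f_avg, p0=(4e-13, 0.0, 0.28)):
    """Nonlinear least-squares fit; p0 is an illustrative initial guess."""
    popt, pcov = curve_fit(gaussian_model, p3_grid, f_avg, p0=p0)
    perr = np.sqrt(np.diag(pcov))                               # 1-sigma standard errors
    resid = f_avg - gaussian_model(p3_grid, *popt)
    chi2_red = np.sum(resid ** 2) / (len(p3_grid) - len(popt))  # unweighted (assumption)
    return popt, perr, chi2_red

# Example, assuming f_avg is the mean of `spectrum` over many stochastic runs:
# (N0, p3_bar, S), perr, chi2_red = fit_gaussian(p3_grid, f_avg)
```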
At 100 runs, the spectrum becomes nearly smooth, confirming quantitative convergence and statistical robustness $(\mathcal{N}_0 = 3.893 \times 10^{-13}$, $\bar{p}_3 = -1.7805 \pm 0.50 \times 10^{-4}$, $S = 0.28185 \pm 0.00029$; see Table 2). The central Region of Interest (ROI, $-0.3 < p_{3} < 0.3$) is used to quantify residual oscillations. The maximum amplitude and $\chi_{\mathrm{red}}^2$ of the Gaussian fit decrease systematically with an increasing number of runs, as summarized in Table 3: from $4.02 \times 10^{-13}$ and $4.99 \times 10^{-24}$ at 10 runs, to $\sim 1.2 - 2.0 \times 10^{-13}$ and $10^{-25} - 10^{-24}$ at 70-100 runs, with the reduced chi-squared in the ROI approaching $10^{-27}$.

Table 3: Maximum amplitude of oscillation and reduced chi-squared values in the central ROI $(-0.3 < p_3 < 0.3)$ for different numbers of runs.

| Number of runs | Max amplitude ($\times 10^{-13}$) | $\chi^2_{\mathrm{red}}$ ($\times 10^{-26}$) |
|---|---|---|
| 10 | 4.0248 | 1.6809 |
| 30 | 1.7816 | 0.4525 |
| 50 | 1.5648 | 0.3568 |
| 70 | 2.0040 | 0.3105 |
| 100 | 1.1815 | 0.2053 |

Therefore, the results clearly demonstrate that increasing the number of runs improves statistical convergence. The spectral fluctuations are progressively suppressed, parameter uncertainties shrink, and the Gaussian model provides an increasingly reliable description of the underlying momentum distribution. By 100 runs, the momentum distribution is statistically robust, and the Gaussian model accurately represents the underlying spectrum.

Next, we consider the case with a stronger degree of randomness, $\sigma_T = 45[m^{-1}]$. Figure 6 displays the averaged longitudinal momentum spectra $\bar{f}(p_3)$ for this value of $\sigma_T$, corresponding to stronger temporal disorder. Similar to the $\sigma_T = 15[m^{-1}]$ case (Fig. 5), strong oscillatory behavior persists when averaging over only a few realizations (10 runs). These oscillations are gradually suppressed as the ensemble size increases, and the distribution converges toward a Gaussian-like profile with only small residual oscillations when 100 runs are included. Representative results for 10, 50, and 100 runs are shown in the three panels of Fig. 6.

Figure 6: Same as in Fig. 5, except for an alternating-sign four-pulse electric field with $\sigma_T = 45[m^{-1}]$.

In each case, the spectra are fitted with the Gaussian model in Eq. (11), which captures the overall envelope of the distributions. The extracted fit parameters, along with the reduced chi-squared values, are listed in Table 4, providing a quantitative measure of convergence. For 10 runs [Fig. 6(a)], the spectrum remains highly irregular, with strong fluctuations about the Gaussian envelope. From Table 4, the fitted parameters are $\mathcal{N}_0 = 3.8137 \times 10^{-13}$, $\bar{p}_3 = 2.59 \times 10^{-4}$, and $S = 0.27947$, while the reduced chi-squared, $\chi_{\mathrm{red}}^2 = 8.99\times 10^{-2}$, indicates that the Gaussian model captures only the broad trend but not the fine details. Thus, averaging over 10 runs is insufficient for statistical reliability. At 50 runs [Fig. 6(b)], the oscillation amplitudes are much reduced, and the spectrum closely resembles a Gaussian.
The parameters stabilize to $\mathcal{N}_0 = 3.733\times 10^{-13}$, $\bar{p}_{3} = 2.126\times 10^{-3}$, and $S = 0.27924$, with a reduced chi-squared of $2.25\times 10^{-2}$, signaling a marked improvement in the fit quality. Finally, for 100 runs [Fig. 6(c)], the averaged spectrum becomes very smooth, with residual oscillations strongly suppressed. The Gaussian model provides an excellent fit, with nearly saturated parameters $\mathcal{N}_0 = 3.7629\times 10^{-13}$, $\bar{p}_{3} = 5.69\times 10^{-4}$, and $S = 0.27989$, while the reduced chi-squared drops to $1.05\times 10^{-2}$. This confirms that statistical convergence is achieved at large ensemble sizes.

Table 4: Fitted parameters and reduced chi-squared values for the Gaussian model applied to the averaged longitudinal momentum spectra, computed over different numbers of ensemble runs with randomized time delays. Quoted uncertainties correspond to $1\sigma$ standard errors.

| Number of runs | $\mathcal{N}_0$ ($\times 10^{-13}$) | $\bar{p}_3$ ($\times 10^{-4}$) | $\mathcal{S}$ | $\chi^2_{\mathrm{red}}$ ($\times 10^{-2}$) |
|---|---|---|---|---|
| 10 runs | 3.8137 ± 0.0065 | 2.5916 ± 1.32 | 0.27947 ± 0.00073 | 8.99 |
| 30 runs | 3.8644 ± 0.0037 | 9.9292 ± 0.74 | 0.27881 ± 0.00041 | 2.88 |
| 50 runs | 3.7332 ± 0.0032 | 21.257 ± 0.66 | 0.27924 ± 0.00036 | 2.25 |
| 70 runs | 3.8164 ± 0.0027 | 7.9331 ± 0.55 | 0.28018 ± 0.00030 | 1.55 |
| 100 runs | 3.7629 ± 0.0022 | 5.6917 ± 0.45 | 0.27989 ± 0.00025 | 1.05 |

Residual oscillations in the central ROI are quantified in Table 5. At 10 runs, the largest fluctuation reaches $2.95 \times 10^{-13}$ with $\chi_{ROI}^2 = 4.02 \times 10^{-24}$. Increasing the ensemble size steadily suppresses these deviations: the fluctuation amplitude drops below $2.0 \times 10^{-13}$ for 30-50 runs and reaches $1.15 \times 10^{-13}$ at 100 runs, with $\chi_{ROI}^2$ correspondingly reduced to $5.36 \times 10^{-25}$ ($\chi_{\mathrm{red}}^2 = 1.81 \times 10^{-27}$). This monotonic improvement confirms that ensemble averaging systematically damps residual oscillations and yields statistically converged Gaussian spectra, though weak fluctuations remain visible even at 100 runs.

Table 5: Residual oscillations in the central ROI $(-0.3 < p_3 < 0.3)$ for $\sigma_T = 45[m^{-1}]$. Listed are the maximum amplitude of oscillation and the reduced chi-squared $\chi_{\mathrm{red}}^2$ for different ensemble sizes.

| Runs | Max amplitude ($\times 10^{-13}$) | $\chi^2_{\mathrm{red}}$ ($\times 10^{-26}$) |
|---|---|---|
| 10 | 2.9483 | 1.3519 |
| 30 | 1.8882 | 0.4341 |
| 50 | 1.8226 | 0.4677 |
| 70 | 1.4549 | 0.3129 |
| 100 | 1.1525 | 0.1805 |

The averaged momentum spectra can then be directly compared with the non-stochastic case $(\sigma_T = 0)$. In this deterministic limit, the momentum distribution displays sharply resolved interference fringes, a clear manifestation of quantum coherence in the time domain (see Fig. 1). By contrast, introducing temporal randomness in the pulse sequence $(\sigma_T > 0)$ progressively degrades this coherence. For low disorder $(\sigma_T = 15[m^{-1}])$, ensemble averaging suppresses fine oscillatory structures, producing a smoother spectral profile.
At high disorder $(\sigma_T = 45[m^{-1}])$, coherence is largely destroyed, and the spectrum evolves into a broad Gaussian-like envelope with a central peak at $p_3 \sim 0$ and exponential-like decay in the tails. Small residual oscillations remain in the central region, indicating that partial coherence persists even under significant randomness. These comparisons clearly show that the strength of timing disorder effectively governs the crossover from coherent, interference-dominated spectra to incoherent, Gaussian-like distributions. Ensemble averaging becomes especially important in realistic experimental settings where laser pulses exhibit intrinsic jitter. Averaging over approximately $50 - 100$ realizations is therefore essential for extracting physically meaningful and reproducible features in the presence of shot-to-shot variations.

Figure 7: Upper panel: Longitudinal momentum spectrum for the non-stochastic case ($\sigma_T = 0$). The dashed blue curve is $N^2 = 16$ times the single-pulse spectrum, illustrating the coherent $N^2$ enhancement. Middle panel: Ensemble-averaged spectrum over 100 realizations for $\sigma_T = 15[m^{-1}]$. The dashed blue curve is $(2N + 1)/2 \approx 4.5$ times the single-pulse result, indicating partial loss of coherence. Lower panel: Ensemble-averaged spectrum over 100 realizations for $\sigma_T = 45[m^{-1}]$. The dashed blue curve again follows $(2N + 1)/2 \approx 4.5$ times the single-pulse spectrum, confirming the transition to an incoherent sum of pulse contributions.

Figure 8: Averaged distribution function $\bar{f}$ at zero momentum, computed over multiple numerical runs with randomized inter-pulse delays, as a function of the randomness parameter $\sigma_T$ for an alternating-sign $N$-pulse electric field $E(t)$ with $N = 4$. The values are normalized to the corresponding result for the non-stochastic case ($\sigma_T = 0$). The field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, and $\mu_T = 180.32[m^{-1}]$.

Figure 7 serves as a predictive benchmark that illustrates the transition from coherent to incoherent pair production as timing randomness increases. In the absence of randomness, the spectrum exhibits a sharp $N$-slit interference pattern, with the central peak enhanced by a factor of $N^2$ relative to a single Sauter pulse—a hallmark of fully constructive quantum interference. This scaling is evident in the upper panel, where the dashed blue curve matches $N^2 = 16$ times the single-pulse momentum distribution. Introducing random delays randomizes the relative quantum phases between successive pulses. For individual stochastic realizations, the interference pattern becomes distorted and asymmetric (Figs. 2-4). When averaged over many such realizations, the interference fringes—which occur at different momenta in each run—average out, leaving an incoherent sum of contributions from the $N$ individual pulses. This process is analogous to the central limit theorem, leading to a Gaussian-like envelope in the averaged spectrum. The fitted Gaussian parameters (Tables 2 and 4) support this interpretation: the spectral width $\mathcal{S}$ is consistent with that of a single Sauter pulse of duration $\tau = 20[m^{-1}]$ and amplitude $E_0 = 0.1E_c$. Thus, in the high-disorder limit, the averaged spectrum converges to approximately $N$ times the single-pulse momentum distribution. In the middle and lower panels of Fig.
7, the dashed blue curve scales as $(2N + 1) / 2\approx 4.5$ times the single-pulse result (for $N = 4$), rather than $N^2$. This reduced scaling factor—close to $N$ rather than $N^2$—quantitatively demonstrates the destruction of phase coherence by random time delays. The transition from $N^2$ to $\sim N$ scaling reflects the shift from constructive interference of amplitudes to incoherent addition of probabilities. The emergence of a Gaussian-like envelope in the ensemble-averaged spectra is a direct signature of decoherence induced by timing disorder. It provides a clear link between the stochastic multi-pulse field and the underlying single-pulse momentum distribution, confirming that in the highly stochastic regime, pair production reduces to an incoherent sum of independent pulse contributions. Akkermans and Dunne demonstrated that in a regular alternating-sign pulse train, the central peak scales as $N^2$, making such configurations promising for enhancing Schwinger pair production. Our results extend this picture by showing that when randomness is introduced, the scaling transitions from $N^2$ to approximately $N$, corresponding to the loss of quantum coherence. This insight is crucial for designing future experiments where timing jitter is unavoidable.

We now extend our analysis to examine the role of randomness in shaping the momentum spectrum of created EPPs. Increasing stochasticity in the inter-pulse delays generally degrades the interference fringes; however, the central peak of the spectrum remains especially sensitive to such variations. To study this effect, we focus on the distribution function at $\pmb{p} = 0$, which corresponds to the central peak in the non-stochastic limit and has been highlighted in earlier studies as the point where the $N^2$ enhancement in pair production is concentrated. Motivated by this, we analyze how the distribution at $p = 0$ evolves with the degree of randomness, parameterized by the standard deviation $\sigma_T$. For this purpose, we consider the averaged distribution function at zero momentum, $\bar{f}(\pmb{p} = 0)$, as a function of $\sigma_T$, computed over ensembles of 50 and 100 realizations. Such averaging is crucial in experimental situations involving stochastic pulse sources, where multiple laser shots are accumulated, and it is the averaged behavior that corresponds to the measured signal.

Figure 9: The same as in Fig. 8, except for an alternating-sign $N$-pulse electric field with $N = 12$.

Figure 8 shows $\bar{f} (\pmb {p} = 0)$, normalized to its non-stochastic value $(\sigma_T = 0)$, for a four-pulse sequence. Panels (a) and (b) present the results for averages over 50 and 100 configurations, respectively. At small $\sigma_{T}$, the averaged distribution function remains suppressed and nearly constant. As $\sigma_T$ increases, irregular variations develop, and around $\sigma_T\approx 70[m^{-1}]$ sharp peaks appear in both panels. These peaks indicate that certain levels of timing randomness can lead to an enhanced distribution function at zero momentum, $\bar{f}$. Their reproducibility with larger ensembles confirms that they are robust physical features rather than statistical fluctuations. Overall, increasing randomness tends to suppress the central peak, yet for specific values of $\sigma_T$, the averaged distribution at $p = 0$ can still be enhanced; a sketch of this ensemble scan is given below.
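A minimal sketch of this ensemble scan, reusing f_asymptotic and the field parameters from the earlier snippet, looks as follows; the $\sigma_T$ grid, run count, and seed are illustrative assumptions rather than the settings used for Figs. 8-10.

```python
# Minimal sketch of the scan behind Figs. 8-10: the averaged distribution function at
# p = 0 versus sigma_T, normalized to the non-stochastic (sigma_T = 0) value.
# Reuses f_asymptotic, mu_T, and N from the earlier snippet.
import numpy as np

def averaged_f_zero(sigma, n_runs=50, seed=1):
    """Ensemble average of f(p = 0, t -> +infinity) over n_runs random delay draws."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_runs):
        T = mu_T + sigma * rng.standard_normal(N)   # Eq. (10) with standard deviation sigma
        samples.append(f_asymptotic(0.0, 0.0, T))
    return np.mean(samples)

f_regular = f_asymptotic(0.0, 0.0, np.full(N, mu_T))   # sigma_T = 0: regular pulse train
sigma_grid = np.linspace(0.0, 100.0, 21)               # coarse sigma_T grid [m^-1] (assumption)
enhancement = [averaged_f_zero(s) / f_regular for s in sigma_grid]
```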
This trend is consistent with the residual oscillatory behavior observed in the momentum spectrum near $p_3 = 0$, which survives averaging over many random realizations. To explore how the number of pulses affects the sensitivity to timing randomness, we now consider larger pulse trains with $N = 12$ and $N = 20$. Figure 9 shows the averaged distribution function $\bar{f}(\pmb{p} = 0)$, normalized to its value at $\sigma_T = 0$, for the case of $N = 12$ pulses. The results are averaged over (a) 50 and (b) 100 random configurations of the inter-pulse delays. For small $\sigma_T$, the distribution is strongly suppressed compared to the non-stochastic case, reflecting more pronounced destructive interference than for the $N = 4$ scenario (Fig. 8). As $\sigma_T$ increases beyond $20[m^{-1}]$, fluctuations in $\bar{f}$ begin to appear, and sharp peaks are visible across a broad range of $\sigma_T$. In contrast to the $N = 4$ case, where only a few peaks emerged at large $\sigma_T$, the $N = 12$ case exhibits a much richer structure, with many peaks distributed throughout $20 \lesssim \sigma_T \lesssim 70[m^{-1}]$. In this interval, the averaged values fluctuate around an order of magnitude above the non-stochastic baseline. At larger randomness, particularly near $\sigma_T \approx 80[m^{-1}]$, a sharp enhancement appears. With 50 runs, this feature is already visible, but with 100 runs (panel b), it becomes clear and reproducible, showing an increase of more than four orders of magnitude. This peak originates from rare but favorable time-delay configurations and only emerges distinctly when averaging over sufficiently large ensembles. Overall, Fig. 9 shows that for $N = 12$ the system is significantly more sensitive to stochastic delays than for smaller pulse numbers.

We now consider a multi-pulse train with $N = 20$ pulses, further increasing the number used in the previous case ($N = 12$).

Figure 10: The same as in Fig. 8, except for an alternating-sign $N$-pulse electric field with $N = 20$.

Figure 10 shows the averaged distribution function $\bar{f}(p = 0)$, normalized to its value at $\sigma_T = 0$, as a function of the time-delay randomness parameter $\sigma_T$. For small $\sigma_T$, the nearly regular delays produce structured oscillations in $\bar{f}$. As $\sigma_T$ increases, the timing becomes irregular, and the distribution develops noisy fluctuations that strongly depend on each random realization. With 50 runs (panel a), the distribution remains suppressed at low $\sigma_T$, then rises with growing randomness, showing distinct peaks beyond $\sigma_T \sim 40[m^{-1}]$. With 100 runs (panel b), these peaks become sharp and reproducible, confirming that they are genuine features. Across all panels, $\bar{f}$ shows a clear progression as the randomness parameter increases. For $\sigma_T \lesssim 10$, the average already exceeds the $\sigma_T = 0$ value. In the intermediate range, $10 \lesssim \sigma_T \lesssim 40$, alternating maxima and minima appear, with enhancements reaching nearly tenfold. At larger values of $\sigma_T$, the amplification becomes much stronger, with the averaged distribution rising by almost three orders of magnitude around $\sigma_T \approx 50[m^{-1}]$. Very sharp peaks also occur at certain values, arising from rare delay configurations that yield exceptionally strong particle production. Compared to $N = 12$ (Fig. 9), the $N = 20$ case shows stronger amplification and clearer trends, while for $N = 4$ only a few peaks are visible.
This progression highlights how larger pulse numbers enhance the sensitivity to timing randomness. The emergence of sharp peaks in $\bar{f}(\pmb{p} = 0)$ at large $\sigma_T$ can be understood through statistical sampling of delay configurations. At low $\sigma_T$ (coherent regime), the pulse train is nearly regular, and quantum phases are locked, especially for larger $N$, where the normalized $\bar{f}(0)$ starts well below unity. As $\sigma_T$ increases into the intermediate regime, phase coherence is broken and contributions from pulses add incoherently, yielding a gradual rise in $\bar{f}(0)$ without sharp features. At high $\sigma_T$, however, the broad distribution of delays creates a vast "search space" of possible pulse sequences. Within this space, random sampling occasionally generates rare, optimized configurations where the relative delays coincidentally align to produce strong constructive interference—more efficient than the simple incoherent sum. For $N = 4$, a relatively large $\sigma_T$ ($\approx 60 - 70[m^{-1}]$) is needed to provide a sufficiently wide parameter space for these optimal configurations to appear with statistical significance in the ensemble average. In contrast, for larger $N$ (12, 20), the system starts in a deeper interference minimum, making it more sensitive to randomness. Moreover, with more pulses, there are more combinatorial possibilities for creating highly constructive sequences. Consequently, the threshold $\sigma_T$ for observing sharp, order-of-magnitude enhancements is lower $(\approx 30 - 50[m^{-1}])$, as seen in Figs. 9 and 10. Thus, the peaks reflect a stochastic optimization mechanism where randomness, at specific strengths, maximizes the probability of generating pulse trains that strongly enhance pair production. This insight suggests that tailored randomness could be strategically exploited to optimize pair production in experimental settings where perfect timing control is challenging.

The multi-pulse train with random delays explores an effectively infinite space of possible pulse timing arrangements. Our results demonstrate that increasing randomness can occasionally "find" rare configurations that strongly enhance pair production. This naturally raises the question: what is the optimal temporal arrangement of pulses that maximizes pair creation? While our study shows that such optimized configurations exist and can be accessed stochastically, a systematic search for the absolute optimum and a deeper analytical understanding of the underlying interference conditions go beyond the scope of the present work. This constitutes a compelling direction for future research.

# 4. Conclusions

We investigated the creation of EPPs in a sequence of alternating-sign Sauter-like pulses with randomized inter-pulse delays, modeled by a Gaussian distribution with standard deviation $\sigma_T$ controlling the degree of temporal disorder. For $N = 4$, the longitudinal momentum spectra exhibit a clear progression with increasing $\sigma_T$. In the deterministic limit ($\sigma_T = 0$), a regular $N$-slit interference pattern emerges, characterized by a dominant central band and symmetric side fringes with high visibility. For low randomness ($\sigma_T = 15[m^{-1}]$), the central broad band fragments into sub-bands, distorting the structure into irregular, asymmetric peaks and inducing a left-right asymmetry across runs.
At moderate disorder ($\sigma_T = 45[m^{-1}]$), the continuous fringe pattern dissolves into clusters of irregular peaks, accompanied by suppressed side bands and enhanced run-to-run fluctuations. For strong randomness ($\sigma_T = 75[m^{-1}]$), the fringe-like interference pattern becomes almost completely disordered: the central region is densely populated with erratic peaks, and the notion of a band-like structure disappears entirely. Stochastic fluctuations, with pronounced run-to-run variability, dominate the resulting distribution. Taken together, these results demonstrate that increasing temporal randomness progressively degrades fringe-like patterns arising from quantum interference. While residual constructive interference persists at intermediate values of $\sigma_T$, the spectrum ultimately becomes dominated by irregular fluctuations as randomness grows, signaling a transition from a coherent interference-dominated regime to one governed by stochastic behavior.

To obtain statistically reliable results, we performed ensemble averaging over multiple realizations, particularly for $N = 4$. The averaged momentum spectra exhibit a broad Gaussian-like envelope with residual oscillatory features, while central-region fluctuations persist even after averaging. These findings are especially relevant for realistic experimental conditions, where averaging over multiple laser shots is necessary.

Interestingly, beyond the general trend of fringe-like interference pattern modification and suppression, we also observe that randomness can induce a pronounced enhancement of the central peak in the momentum spectrum. Specifically, the distribution function at zero momentum shows a fluctuating dependence on $\sigma_T$, with significant amplification at higher disorder. For $N = 12$, a noticeable enhancement appears around $\sigma_T \sim 40[m^{-1}]$, whereas for smaller pulse numbers ($N = 4$), a nearly tenfold increase is observed only at larger disorder, $\sigma_T \sim 70[m^{-1}]$. This progression with increasing $N$ highlights how larger pulse trains enhance the sensitivity to randomness. For example, at $N = 20$, even a modest value of $\sigma_T \approx 31[m^{-1}]$ produces a nearly tenfold increase in the central peak, while at $\sigma_T \approx 50[m^{-1}]$ certain configurations yield enhancements of up to three orders of magnitude. These amplification peaks are not statistical artifacts but arise from a stochastic optimization mechanism: at low $\sigma_T$, destructive interference dominates; as coherence breaks at intermediate $\sigma_T$, contributions add incoherently, lifting the suppression; at high $\sigma_T$, the broad delay distribution occasionally samples rare, favorable configurations where the relative delays accidentally align to produce strong constructive interference, more efficiently than in the regular or weakly random cases. Larger pulse numbers increase both the combinatorial space for such optimal sequences and the sensitivity to delay variations, lowering the $\sigma_T$ threshold for observable enhancements. Our results demonstrate that temporal randomness is not merely a source of spectral degradation but can, under specific conditions, be strategically exploited to enhance pair production. In particular, certain stochastic configurations strongly amplify the central peak of the distribution function, suggesting that tailored randomness in pulse sequences could serve as a resource for optimizing pair yields in strong-field QED.
These findings open up new pathways for designing multi-pulse schemes in environments where perfect timing control is experimentally challenging. A systematic exploration of the optimal temporal arrangements that maximize pair creation, and a deeper analytical understanding of these rare enhanced configurations, remain compelling directions for future research.
arxiv_physics
2025-12-12T00:00:00Z
https://arxiv.org/pdf/2512.13722
{"title": "Electron-positron pair creation induced by multi-pulse train of electric fields: effect of randomness in time-delay", "raw_content": "# Electron-positron pair creation induced by multi-pulse train of electric fields: effect of randomness in time-delay\n\nDeepak Sah $^{a,b}$ , Manoranjan P. Singh $^{a,b}$\n\n$^{a}$ Theory and Simulations Lab, Theoretical and Computational Physics Section, Raja Ramanna Centre for Advanced Technology, Indore-452013, India \n $^{b}$ Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India\n\n# Abstract\n\nWe investigate the creation of electron-positron pairs (EPPs) in a sequence of alternating-sign, time-dependent electric field pulse trains by solving the quantum Vlasov equations. Specifically, we focus on Sauter-like pulse trains with random time delays between successive pulses, drawn from a Gaussian distribution wherein the extent of fluctuations is controlled by the standard deviation $\\sigma_T$ of the distribution. We find that increasing $\\sigma_T$ leads to a dramatic transformation in the longitudinal momentum spectrum. The well-known fringe pattern, akin to that in the multi-slit interference, gets significantly modified. The averaged spectra exhibit a robust Gaussian-like envelope with residual oscillations, which are much more prominent in the central momentum region. Notably, we find that in certain cases, stochastic time delays lead to a pronounced enhancement in the central peak of the distribution function for pulse train containing $N$ pulses. For example, for $N = 20$ pulses, $\\sigma_T \\approx 31[m^{-1}]$ (about $17\\%$ of the mean time delay) yields nearly a tenfold increase in the central peak, which for $\\sigma_T \\approx 50[m^{-1}]$ (about $27\\%$ of the mean time delay), scales up to $10^3$ . This may open up new possibilities for optimizing multi-pulse field configurations and guide future experimental designs aimed at maximizing EPPs creation.\n\nKeywords: Schwinger mechanism, Interference effect, multi-pulse trains, pair creation, randomness\n\n# 1. Introduction\n\nThe spontaneous creation of electron-positron pairs(EPPs) from vacuum in the presence of intense external fields is a fundamental prediction of quantum electrodynamics (QED) [1, 2]. However, observing this effect experimentally remains challenging due to the significant exponential suppression, given by $\\exp (-\\pi E_c / E)$ , where $E_{\\mathrm{c}} = m^{2} / |e|\\approx 1.3\\times 10^{16}\\mathrm{V / cm}$ represents the Schwinger critical field strength, $m$ is the electron mass, $e$ is the electron charge (the units $\\hbar = c = 1$ are used) and $E$ is the applied field strength [3]. The laser intensity needed to reach this threshold is approximately $I_{\\mathrm{c}}\\approx 10^{29}\\mathrm{W / cm}^2$ , which greatly surpasses the capabilities of the current conventional laboratory systems. Nonetheless, significant progress in high-intensity laser technology and the construction of cutting-edge laser facilities [4, 5, 6, 7] is steadily closing the gap, making the experimental observation of this phenomenon increasingly feasible. This progress continues to inspire extensive theoretical and experimental efforts. Currently, laser systems have reached peak intensities of approximately $10^{23}\\mathrm{W / cm}^2$ [8].\n\nEPPs creation can occur through various mechanisms under strong electromagnetic fields. 
One example is the Bethe-Heitler mechanism, in which a super-intense laser interacts with the Coulomb field of a nucleus, resulting in pair creation [9, 10]. Another widely studied mechanism is the Breit-Wheeler mechanism, where a high-energy gamma photon collides with an ultra-strong laser field to produce pairs [11, 12]. Notably, the only direct experimental observation of positron production via such mechanisms was carried out at the Stanford Linear Accelerator Center (SLAC) [13], where a 46.6 GeV electron beam was made to interact with a terawatt laser pulse of intensity around $10^{18}\\mathrm{W/cm}^2$ . In that setup, positrons were generated following nonlinear Compton scattering, which produced photons that then triggered the Breit-Wheeler process. Apart from the nonlinear Breit-Wheeler process, other strong-field QED effects—such as nonlinear Compton scattering [14] and strong-field-induced vacuum pair production—have also attracted substantial attention.\n\nTo investigate pair creation in different external field configurations, researchers have developed various theoretical approaches. These studies primarily focus on reducing the required field strength and enhancing the yield of produced pairs [15, 16, 17, 18]. Semiclassical techniques such as the generalized Wentzel-Kramers-Brillouin (WKB) approximation [19, 20] and the worldline instanton method [21, 22, 23] have been widely used to describe\n\npair production probabilities. Quantum kinetic approaches, including the quantum Vlasov equation (QVE)[24, 25, 26], the low-density approximation [27, 28], and the Dirac-Heisenberg-Wigner (DHW) formalism [29, 30, 31], offer more detailed quantum descriptions. Brezin and Itzykson [19] analyzed pair production in a time-dependent, spatially homogeneous electric field using the WKB approximation, deriving probabilities based on the Keldysh adiabaticity parameter, $\\gamma = m\\omega / |e|E$ , which determines the interaction regime, with $\\omega$ being the frequency of the electric field.\n\nOver the past decade, investigations have shown that the momentum spectrum of created particles is highly sensitive to the profile of the applied electric field and its parameters, particularly in the tunneling regime [32]. This sensitivity extends to the multiphoton and the intermediate regimes too [26, 33]. Furthermore, recent studies indicate that these dependencies significantly influence the time evolution of the momentum spectrum as well; see [34, 35] for details.\n\nA major advancement in the study of vacuum pair creation has been the realization that structured multi-pulse electric field configurations can dramatically enhance pair production due to quantum interference effects. Analogous to the optical double-slit experiment, time-domain multiple-slit interference has emerged as a key mechanism that both increases the total yield and shapes the features of the momentum spectrum of the created pairs. Akkermans and Dunne [36] were the first to demonstrate Ramsey-type multiple-time-slit interference, showing that in an alternating-sign $N$ -pulse electric field, the central peak of the momentum distribution scales as $N^2$ , indicating constructive interference. Building on this concept, Kohlfurst [37] explored various multi-pulse configurations, illustrating how precise pulse shaping can be used to optimize the pair production rate. In addition, combinations of different pulse types have been shown to enhance the pair creation. 
For instance, the dynamically assisted Schwinger mechanism, which involves the interplay of a strong, slowly varying pulse with a weak, rapidly oscillating one, has been demonstrated to significantly boost pair production [38, 39]. Among the other field configurations, multi-pulse electric field comprising of sequences of time-dependent pulses with alternating signs have garnered particular interest. In such setups, not only do parameters of individual pulses, such as amplitude, duration, and shape, influence EPPs production, but the temporal spacing between pulses also plays a decisive role. Prior studies [25, 40, 41, 42, 43] have shown that the inter-pulse time delay can significantly affect the momentum distribution of the produced pairs. No-\n\ntably, Ref. [41] reports that the total pair production probability exhibits damped oscillations as a function of the time interval between pulses.\n\nThe aforesaid theoretical models assume pulse trains with a uniform inter-pulse delay. However, real-world experimental conditions may deviate from this idealization. Fluctuations in pulse timing can arise due to limitations in synchronization and control, particularly in sequences involving a large number of pulses subject to the shot-to-shot variations [44, 45]. One may also consider experiments wherein a pulse train is derived from multiple sources which may not be well synchronized. This raises a key question as to how do random fluctuations in pulse timing influence EPPs production. Specifically, does the breakdown of perfect coherence diminish the enhancements typically observed in the usual case of uniform inter-pulse delay, or can certain stochastic realizations of random temporal spacing in the multi-pulse train unexpectedly enhance pair creation? What happens to the momentum spectrum? How different it is when averaged over the many realizations of the random inter-pulse delays from that for the single realization. To address these questions, we investigate vacuum pair production under field configurations with random time delays between successive pulses, which are drawn from a Gaussian distribution wherein the degree of randomness is quantified by the standard deviation $\\sigma_T$ of the distribution. We set $\\mu_T = 180.32[m^{-1}]$ to facilitate direct comparison with earlier studies of regular pulse trains [46], but we also examine the role of $\\mu_T$ in the stochastic regime. While $\\mu_T$ influences individual realizations, we show in the Supplementary Material that the ensemble-averaged momentum spectrum is robust to small variations in $\\mu_T$ . The key parameter governing the transition from coherent to incoherent spectra is $\\sigma_T$ .\n\nIt is found that increasing the value of $\\sigma_T$ has a strong influence on the longitudinal momentum spectrum. The well-known fringe-like structure in the spectrum of the usual non-stochastic counterpart arising due to the interference effect intrinsic to strong-field QED (for example, in Ref. [36]) gets modified. The momentum spectrum, when averaged over the realizations of the time delays with a given $\\sigma_T$ , exhibits a robust Gaussian-like envelope with residual oscillations which are much more pronounced in the central momentum region. Intriguingly, sufficiently large values of $\\sigma_T$ lead to a pronounced enhancement of the central peak in the momentum spectrum. 
For instance, in the case of $N = 20$ , a value of $\\sigma_T \\approx 31[m^{-1}]$ yields nearly a tenfold increase, while around $\\sigma_T \\approx 50[m^{-1}]$ the enhancement reaches almost three orders of magnitude.\n\nThis paper is organized as follows. In Sec. 2, we introduce the theoretical formalism based on the quantum Vlasov equation. In Sec. 3, we present and discuss our numerical results. In Sec. 4, we provide a brief conclusion and outlook.\n\n# 2. Theoretical Framework\n\nIn the subcritical field regime, $E < E_{\\mathrm{c}}$ , pair production and the corresponding back-reaction current are minimal. This allows us to neglect both collision effects and the internal electric field. Moreover, the spatial focusing scale of typical laser pulses is usually much larger than the electron Compton wavelength—which characterizes the spatial extent relevant for vacuum pair creation—so spatial inhomogeneities of the background field are strongly suppressed. Nevertheless, we emphasize that spatial variations and accompanying magnetic fields can, in general, have a significant impact on electron-positron pair production. This has been demonstrated in several works, including Refs. [47, 48, 49, 30, 15, 50].\n\nIn the present work we restrict ourselves to a simplified but widely used scenario: a spatially uniform, purely time-dependent electric field. When two counterpropagating laser pulses form a standing wave with sufficiently large beam waist, the associated magnetic field near the antinodes can be neglected, effectively giving $B(t) = 0$ . Consequently, the background field can be modeled as $\\mathbf{E}(t) = (0,0,E(t))$ .\n\nAdopting the temporal gauge $A^0 (t) = 0$ , the corresponding four potential is given by $A^{\\mu}(t) = (0,0,0,A(t))$ , where $A(t)$ is related to the electric field as $E(t) = -\\dot{A} (t)$ . Pair creation from the vacuum in the presence of such an external electric field has been studied using the quantum Vlasov equation (QVE) by many researchers; see, for example, [26, 51, 52] and references therein. QVE is a standard tool within the framework of quantum kinetic theory, and its detailed derivation is readily available in the literature, e.g., in Ref. [53]. Here, for the sake of completeness, we provide only the essential equations and the notations.\n\nStarting from the Dirac equation in a homogeneous electric field and employing a time-dependent Bogoliubov transformation, the QVE can be formulated as an integro-differential equation that governs the evolution of the single-particle momentum distribution function $f(\\pmb{p}, t)$ :\n\n$$\n\\frac {d f (\\pmb {p} , t)}{d t} = \\frac {\\lambda (\\pmb {p} , t)}{2} \\int_ {t _ {0}} ^ {t} d t ^ {\\prime} \\lambda (\\pmb {p}, t ^ {\\prime}) [ 1 - 2 f (\\pmb {p}, t ^ {\\prime}) ] \\cos [ \\Theta (\\pmb {p}, t, t ^ {\\prime}) ], \\qquad (1)\n$$\n\nwhere $\\lambda (\\pmb {p},t) = \\frac{eE(t)\\varepsilon_{\\perp}}{\\omega^{2}(\\pmb{p},t)}$ , is the amplitude of the vacuum transition, while $\\Theta (\\pmb {p},t,t^{\\prime}) = 2\\int_{t^{\\prime}}^{t}d\\tau \\omega (\\pmb {p},\\tau)$ stands for the dynamical phase, describing the vacuum oscillations modulated by the external field. 
The quasiparticle energy $\\omega (\\pmb {p},t)$ , the transverse energy $\\varepsilon_{\\perp}$ and longitudinal quasiparticle momentum $P_{3}$ are defined as:\n\n$$\n\\omega (\\boldsymbol {p}, t) = \\sqrt {\\varepsilon_ {\\perp} ^ {2} + P _ {3} ^ {2} (p _ {3} , t)}, \\tag {2}\n$$\n\n$$\n\\varepsilon_ {\\perp} = \\sqrt {m ^ {2} + p _ {\\perp} ^ {2}}, \\tag {3}\n$$\n\n$$\nP _ {3} (t) = p _ {3} - e A (t), \\tag {4}\n$$\n\nwhere $\\pmb{p} = (p_{\\perp}, p_3)$ represents the canonical momentum. Here, $p_{\\perp} = |p_{\\perp}| = \\sqrt{p_1^2 + p_2^2}$ is the modulus of the momentum component perpendicular to the electric field, and $p_3$ stands for the momentum component parallel to the electric field $E(t)$ .\n\nIt is important to note that the distribution function $f(\\pmb{p}, t)$ represents the number of real particles created with momentum $\\pmb{p}$ in the asymptotic limit $t \\to +\\infty$ . This limit corresponds to a physical scenario in which the external laser field vanishes. Our analysis focuses on the distribution function in the asymptotic regime.\n\nEq. (1) is difficult to solve numerically due to the presence of rapidly oscillating phase term in the integrand. It is, therefore, convenient to recast this equation as an equivalent system of three coupled ordinary differential equations:\n\n$$\n\\frac {d f (\\pmb {p} , t)}{d t} = \\frac {1}{2} \\lambda (\\pmb {p}, t) u (\\pmb {p}, t), \\tag {5}\n$$\n\n$$\n\\frac {d u (\\boldsymbol {p} , t)}{d t} = \\lambda (\\boldsymbol {p}, t) [ 1 - 2 f (\\boldsymbol {p}, t) ] - 2 \\omega (\\boldsymbol {p}, t) v (\\boldsymbol {p}, t), \\tag {6}\n$$\n\n$$\n\\frac {d v (\\boldsymbol {p} , t)}{d t} = 2 \\omega (\\boldsymbol {p}, t) u (\\boldsymbol {p}, t). \\tag {7}\n$$\n\nTogether with the initial conditions $f(\\pmb{p}, -\\infty) = u(\\pmb{p}, -\\infty) = v(\\pmb{p}, -\\infty) = 0$ this set of equations becomes a well-defined and numerically solvable initial value problem.\n\n![](images/071594c2984de87365aadfdab910c89719c38417e9255aa693af631c754e9eee.jpg) \nFigure 1: The longitudinal momentum spectrum of created EPPs in an alternating-sign four-pulse electric field $E(t)$ for the non-stochastic case $(\\sigma_T = 0)$ . The transverse momentum is set to zero, and all quantities are expressed in units of the electron mass. The field parameters are $E_0 = 0.1E_c$ , $\\tau = 20$ [m $^{-1}$ ], and $\\mu_T = 180.32$ [m $^{-1}$ ]\n\n# 3. Results\n\nWe consider a field configuration composed of a sequence of alternating-sign, time-dependent Sauter electric pulses referred to as a multi-pulse train:\n\n$$\nE (t) = \\sum_ {k = 1} ^ {N} (- 1) ^ {k - 1} E _ {0} \\operatorname {s e c h} ^ {2} \\left(\\frac {t + \\left(k - \\frac {N + 1}{2}\\right) T _ {k}}{\\tau}\\right), \\tag {8}\n$$\n\nwhere $E_0$ denotes the amplitude of each electric field pulse, and $N$ is the total number of pulses in the pulse train. The random variable $T_k$ specifies the temporal position of the $k$ th pulse, whose center is located at $\\left(\\frac{(N + 1 - 2k)}{2}\\right)T_k$ . Thus, the timing of pulses in the pulse train becomes stochastic. This electric field should be compared with those in Refs. [46, 36] where all the pulses in the pulse train are regularly spaced with a fixed time delay. Henceforth, we shall refer to such a pulse train as regular or non-stochastic. The electric field in Eq. (8) reduces to the one considered in Refs. [36, 54, 37] upon replacing the random variables $\\{T_k\\}$ by a constant which is the fixed time delay between successive pulses. 
Note that the time delay between the $i$th and the $j$th pulse for the electric field considered here (Eq. (8)) depends on the random variables $T_i$ and $T_j$. The corresponding vector potential is given by

$$
A(t) = -E_{0}\tau \left[ 1 + \sum_{k=1}^{N} (-1)^{k-1} \tanh\!\left(\frac{t + \left(k - \frac{N+1}{2}\right) T_{k}}{\tau}\right) \right]. \tag{9}
$$

We take the random variables $\{T_k\}$ to be independent and identically distributed (IID) according to a Gaussian distribution with mean $\mu_T$ and variance $\sigma_T^2$. Specifically, we generate $\{T_k\}$ using MATLAB's built-in randn function [55], which returns an array of pseudo-random numbers drawn from the standard normal distribution with zero mean and unit variance (i.e., unbounded and symmetric about zero). Accordingly,

$$
\{T_{k}\} = \mu_{T} + \sigma_{T} \times \mathrm{randn}(1, N). \tag{10}
$$

It is evident from Eq. (10) that the stochastic pulse train with random variables $\{T_k\}$ would turn into a non-stochastic one, with $\mu_T$ being the delay between successive pulses, when $\sigma_T$ is set to zero. In other words, for the stochastic pulse train, the fluctuation in the time delay about its mean $\mu_T$ is governed by the standard deviation $\sigma_T$. We fix the mean as $\mu_T = 180.32[m^{-1}]$, which is exactly the same as that considered for the non-stochastic pulse train in Ref. [46].

To demonstrate the effect of randomness in the stochastic pulse train on the longitudinal momentum spectrum of the created pairs, we numerically solve the system of first-order ordinary differential equations given in Eqs. (5)-(7). We plot the resulting momentum spectra for both the stochastic and non-stochastic cases. The latter, which is widely studied in the literature, serves as a reference to identify the modifications introduced by the randomness. In Figure 1, the longitudinal momentum spectrum of created EPPs is shown for the non-stochastic case ($\sigma_T = 0$) of an alternating-sign $N$-pulse electric field $E(t)$. The spectrum exhibits a characteristic $N$-slit interference fringe-like structure consisting of several bands of maxima and minima. Most pairs are contained in the central band located around $p_3 \approx 0$, which has many peaks of nearly equal heights exhibiting the well-known $N^2$ scaling [36, 46]. Furthermore, the central band has a gradually varying and broad momentum profile. On the other hand, the successive side bands, located symmetrically on either side of the central band, have far fewer peaks. The farther a side band lies from the central band, the lower the peak height of the maxima therein. Therefore, with increasing $|p_3|$, the fringes gradually vanish asymptotically. Overall, the spectrum exhibits a highly regular, symmetric interference pattern, with the central broad band $(-0.2[m] < p_3 < 0.2[m])$ dominating the distribution.

![](images/a1f2fe38478b2e08a528d5e4a469a42c2ee1a5ef88d0e89a360529e9b2d19330.jpg)

![](images/e4d9b10760d302090e74c4ee5af9c377954233a5000bdc707cfa3ee460858c84.jpg)

![](images/ed8b587a796c226168513b3e131075da3317cc4fa60f7d7c97075f9673372789.jpg)
Figure 2: The longitudinal momentum spectrum of created EPPs in an alternating-sign $N$-pulse electric field $E(t)$ with $N = 4$. The blue line shows a stochastic realization with $\sigma_T = 15[m^{-1}]$, while the grey line represents the non-stochastic case. The three panels correspond to independent realizations (Runs I-III). The transverse momentum is set to zero, and all quantities are expressed in electron mass units. The electric field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, and $\mu_T = 180.32[m^{-1}]$.

In contrast to the non-stochastic case, Figure 2 shows the spectra for the stochastic case $(\sigma_{T} = 15[m^{-1}])$ with $N = 4$ for three independent realizations (Runs I-III). Introducing randomness modifies the regular $N$-slit interference pattern observed in the non-stochastic case. The spectra lose their symmetry about $p_3 = 0$, and the central band becomes distorted in a realization-dependent manner. Although a broad Gaussian-like envelope persists, the peaks fragment into irregular structures with uneven heights and spacings. The differences between Runs I-III highlight run-to-run fluctuations, reflecting the sensitivity of the momentum distribution to stochastic variations in the time delays. The corresponding values of the random variable $\{T_k\}$ are listed in Table 1. In Run I, the interference structure around the central region $(-0.2[m] < p_3 < 0.2[m])$ is strongly modified compared to the non-stochastic case. The sharp, well-ordered fringe pattern with evenly spaced maxima and minima is replaced by irregular sub-bands. The central peak at $p_3 = 0$ is noticeably suppressed, and the minima between fringes no longer drop to zero, reducing overall fringe visibility. On the left side $(-0.2[m] < p_3 < 0[m])$, relatively strong oscillations remain, with amplitudes comparable to those in the deterministic case. In contrast, on the right side $(0[m] < p_3 < 0.2[m])$, the fringes become weaker and more irregular. Side-band peaks at $p_3 \approx -0.4[m]$ and $-0.3[m]$ remain visible, whereas their positive-$p_3$ counterparts are distorted and less pronounced. In Run II, the central interference band fragments into fewer sub-bands than in Run I, but the peak heights remain comparable to the non-stochastic case (grey curve). The spectrum is still dominated by the central maximum at $p_3 \approx 0$, much like in the deterministic case. Some side-band structures survive, but their peaks are noticeably suppressed in height and lose their regular spacing. As a result, the distribution becomes asymmetric about $p_3 = 0$, with the side-bands appearing less sharp and more irregular compared to the non-stochastic spectrum. In Run III, the asymmetry is most pronounced. One side of the spectrum (negative $p_3$) exhibits stronger and denser peaks, while the other side (positive $p_3$) is highly irregular and suppressed. The central band is significantly distorted, with one side dominating the distribution. This extreme imbalance demonstrates the strong sensitivity of the momentum spectrum to random variations in inter-pulse delays. Overall, randomness in the inter-pulse delays $(\sigma_T = 15[m^{-1}])$ modifies the highly regular fringe pattern of the non-stochastic case. The randomness not only destroys the ordered fringe and side-band hierarchy but also induces a clear left-right asymmetry across all runs. Although band-like interference features persist, their internal structure becomes irregular and asymmetric, with the degree of distortion varying from run to run.
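The stochastic field entering these runs can be sketched as follows. This is again an illustrative Python version (the paper itself uses MATLAB's randn, for which NumPy's standard_normal is the direct analogue); it implements Eqs. (8)-(10) and reuses residual_f from the earlier sketch, with grid ranges and the seed chosen purely for illustration.

```python
# Minimal sketch (not the authors' code): stochastic alternating-sign Sauter
# pulse train of Eqs. (8)-(10) and a longitudinal momentum scan at p_perp = 0.
import numpy as np

def draw_delays(N, mu_T=180.32, sigma_T=15.0, rng=None):
    """Eq. (10): T_k = mu_T + sigma_T * (standard-normal draw), k = 1..N."""
    rng = np.random.default_rng() if rng is None else rng
    return mu_T + sigma_T * rng.standard_normal(N)    # NumPy analogue of randn(1, N)

def make_fields(T, E0=0.1, tau=20.0):
    """Return E(t), Eq. (8), and A(t), Eq. (9), for a given set of delays T_k."""
    N = len(T)
    k = np.arange(1, N + 1)
    sign = (-1.0) ** (k - 1)
    shift = (k - (N + 1) / 2) * T       # pulse k is centred at ((N + 1 - 2k)/2) T_k

    def E(t):
        return np.sum(sign * E0 / np.cosh((t + shift) / tau) ** 2)

    def A(t):
        return -E0 * tau * (1.0 + np.sum(sign * np.tanh((t + shift) / tau)))

    return E, A

# One stochastic realization of the longitudinal spectrum (N = 4, sigma_T = 15).
rng = np.random.default_rng(seed=1)     # fixed seed only for reproducibility
E, A = make_fields(draw_delays(4, 180.32, 15.0, rng))
p3_grid = np.linspace(-0.6, 0.6, 241)
spectrum = np.array([residual_f(0.0, p3, E, A) for p3 in p3_grid])
```

Setting sigma_T to zero in draw_delays reproduces the regular pulse train, so the same loop generates both the grey (non-stochastic) and blue (stochastic) curves of Figure 2; each grid point costs one ODE solve, so the scan is slow in pure Python and is meant only to show the structure of the calculation.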
These run-to-run fluctuations highlight the stochastic nature of the driving field and its impact on the symmetry of the momentum distribution.\n\nIn Figure 3, the spread of randomly distributed inter-pulse delays ( $\\sigma_T = 45[m^{-1}]$ ) further amplifies the stochasticity in the $N = 4$ pulse trains, with the corresponding random variables $T_{k}$ tabulated in Table 1. Compared to the non-stochastic case and the weaker randomness ( $\\sigma_T = 15[m^{-1}]$ ), the spectra exhibit fragmented peaks and reduced fringe visibility. In Run I, the central band $(-0.2[m] < p_3 < 0.2[m])$ remains the most prominent feature; however, instead of exhibiting evenly spaced oscillations, it fragments into clusters of irregular peaks, while the side bands become distorted and lose symmetry about $p_3 = 0$ , highlighting how stronger randomness enhances fragmentation and disrupts the interference hierarchy. Run II shows a pronounced left-right asymmetry: peaks for $p_3 < 0$ are enhanced, whereas those for $p_3 > 0$ are suppressed, and interference features are unevenly spaced, reflecting a breakdown of regular phase correlations. In Run III, the overall envelope resembles the deterministic case, yet the fine structure is highly disordered; individual peaks survive but no longer form a recognizable fringe pattern, and the asymmetry is weaker than in Run II. In general, increasing the inter-pulse delay spread to $\\sigma_T = 45[m^{-1}]$ makes the momentum distribution highly sensitive to randomness: the spectra become fragmented, asymmetric, and run-dependent, demonstrating the strong stochastic influence of the driving field.\n\nThe spectra shown in Figure 4 correspond to the strongest degree of randomness considered, namely $\\sigma_T = 75[m^{-1}]$ . At this level of stochasticity, the longitudinal momentum spectra lose the well-separated bands of maxima and minima observed at smaller $\\sigma_T$ . Instead of regular clusters, the distributions exhibit isolated peaks scattered across a broad momentum range. Nevertheless, it is notable that the magnitude of the central peak remains comparable, or in some cases even slightly enhanced, compared to spectra at lower $\\sigma_T$ . In Run I, the spectrum exhibits a dense arrangement of sharp peaks centered around the central region ( $p_3 \\approx 0$ ), while the overall fringe-like band structure has disappeared. In Run II, the central peak is the most pronounced among the three realizations, with a strong maximum at $p_3 \\approx 0$\n\n![](images/ec243b6bc58c06e80fe44b3bc7ed2fdd4ba6f39cf8b6301a20eddbf4f3a0bf0d.jpg)\n\n![](images/7f6938bb4e81aaced2e75a204f163ee4836ba1cf3526ea240abe1d2175f1ea2e.jpg)\n\n![](images/8664362dec9f57bf085ea84c3867bb868c7cd363ed4763d2a2d4d7644939db0e.jpg) \nFigure 3: Same as in Fig. 
2, except for an alternating-sign four-pulse electric field with $\\sigma_T = 45[m^{-1}]$ .\n\n<table><tr><td>σT[m-1]</td><td>k</td><td colspan=\"2\">Run I</td><td colspan=\"2\">Run II</td><td colspan=\"2\">Run III</td></tr><tr><td></td><td></td><td>Tk</td><td>Pulse Centre</td><td>Tk</td><td>Pulse Centre</td><td>Tk</td><td>Pulse Centre</td></tr><tr><td rowspan=\"4\">15</td><td>1</td><td>172.25</td><td>258.37</td><td>193.33</td><td>289.99</td><td>194.96</td><td>292.44</td></tr><tr><td>2</td><td>185.38</td><td>92.69</td><td>165.38</td><td>82.69</td><td>172.47</td><td>86.24</td></tr><tr><td>3</td><td>160.86</td><td>-80.43</td><td>194.08</td><td>-97.04</td><td>182.97</td><td>-91.48</td></tr><tr><td>4</td><td>191.65</td><td>-287.47</td><td>171.45</td><td>-257.17</td><td>207.99</td><td>-311.98</td></tr><tr><td rowspan=\"4\">45</td><td>1</td><td>156.10</td><td>234.15</td><td>219.35</td><td>329.02</td><td>224.24</td><td>336.36</td></tr><tr><td>2</td><td>195.50</td><td>97.75</td><td>135.50</td><td>67.75</td><td>156.78</td><td>78.39</td></tr><tr><td>3</td><td>121.93</td><td>-60.97</td><td>221.60</td><td>-110.80</td><td>188.27</td><td>-94.13</td></tr><tr><td>4</td><td>214.30</td><td>-321.45</td><td>153.70</td><td>-230.55</td><td>263.32</td><td>-394.98</td></tr><tr><td rowspan=\"4\">75</td><td>1</td><td>139.95</td><td>209.93</td><td>245.36</td><td>368.04</td><td>253.51</td><td>380.28</td></tr><tr><td>2</td><td>205.62</td><td>102.81</td><td>105.61</td><td>52.81</td><td>141.07</td><td>70.54</td></tr><tr><td>3</td><td>83.01</td><td>-41.51</td><td>249.12</td><td>-124.56</td><td>193.56</td><td>-96.78</td></tr><tr><td>4</td><td>236.95</td><td>-355.44</td><td>135.96</td><td>-203.94</td><td>318.65</td><td>-477.97</td></tr></table>\n\nTable 1: Tabulated values of the random variables $\\{T_k\\}$ [see Eq. 10] with $\\mu_T = 180.32[m^{-1}]$ . Each block corresponds to independent realizations (Runs I-III). The last column in each run lists the corresponding pulse centers, located at $\\frac{3}{2} T_1$ , $\\frac{1}{2} T_2$ , $-\\frac{1}{2} T_3$ , $-\\frac{3}{2} T_4$ for $N = 4$ .\n\naccompanied by asymmetries more visible on one side of the spectrum. In Run III, the central region becomes more fragmented. Instead of a single dominant peak, multiple maxima of comparable height appear near $p_3 = 0$ . This creates a visibly more irregular and distorted spectrum compared to Runs I and II. The side regions are highly suppressed, showing little evidence of structured oscillations. Taken together, the three realizations show that at $\\sigma_T = 75[m^{-1}]$ , the spectral structure becomes entirely run-dependent. Although all cases retain a concentration of spectral weight near the origin, the detailed arrangement of peaks varies significantly from run to run. For $N = 4$ pulses, a comparative analysis of all figures reveals a distinct trend: as $\\sigma_T$ increases, the spectral profile evolves from a well-defined interference pattern into a progressively incoherent and asymmetric distribution. The case $\\sigma_T = 0$ represents the fully deterministic limit, serving as a reference for coherent particle production with a symmetric spectrum around the central momentum. Intermediate values ( $\\sigma_T = 15$ and $45[m^{-1}]$ ) show a gradual disappearance of the fringe-like bands of maxima and minima, accompanied by asymmetries in the peak heights and positions, indicative of increasing temporal randomness. 
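As a quick consistency check on Table 1, the pulse centres quoted there follow directly from the position $\frac{(N+1-2k)}{2}T_k$ given below Eq. (8); a short sketch with the Run I values for $\sigma_T = 15[m^{-1}]$ transcribed from the table:

```python
# Consistency check on Table 1 (values transcribed from Run I, sigma_T = 15):
# pulse k of an N-pulse train is centred at ((N + 1 - 2k)/2) * T_k.
import numpy as np

N, T = 4, np.array([172.25, 185.38, 160.86, 191.65])   # T_1..T_4 from Table 1
k = np.arange(1, N + 1)
centres = (N + 1 - 2 * k) / 2 * T
print(centres)   # 258.375, 92.69, -80.43, -287.475 -> Table 1 lists 258.37, 92.69, -80.43, -287.47
```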
At $\\sigma_T = 75[m^{-1}]$ , stochastic effects dominate, and the\n\nspectrum becomes strongly asymmetric, with the fringe-like features completely washed out.\n\nAs discussed above, the longitudinal momentum spectra are highly sensitive to variations in inter-pulse delays. Increasing randomness progressively smears the well-defined fringe-like patterns of maxima and minima, suppressing several peaks and highlighting the effect of temporal disorder on coherent pair production. To quantify the statistical impact of this stochasticity, we now analyze the averaged momentum spectrum, obtained by ensemble-averaging over many numerical runs with randomized time delays. In particular, we consider ensemble averages for two representative values of randomness, $\\sigma_T = 15[m^{-1}]$ and $\\sigma_T = 45[m^{-1}]$ . Although these values do not span the entire parameter space, they are sufficient to capture the essential trends. Unlike earlier results, which reflected spectra from individual random realizations, ensemble averaging is crucial for extracting statistically robust and physically meaningful features. This approach also mirrors realistic experimental scenarios, where multiple experimental shots must be accumulated and averaged to obtain reliable signatures from stochastic sources.\n\nFigure 5 shows the momentum spectrum averaged over different numbers of numerical runs for $\\sigma_T = 15[m^{-1}]$ . For 10 realizations (Fig. 5(a)), the central region $(-0.4[m] \\lesssim p_3 \\lesssim 0.4[m])$ displays sharp oscillations accompanying the largest peak, which reaches approximately $8.1 \\times 10^{-13}$ . These oscillations, arising from quantum interference effects also seen in individual realizations (see Fig. 2), are not fully suppressed by averaging, leaving pronounced fluctuations superimposed on an overall Gaussian-like profile. As the number of runs increases, the spectrum becomes progressively smoother and more Gaussian-like. For 50 runs [Fig. 5(b)], the central peak at $p_3 \\approx 0$ decreases slightly to approximately $5.2 \\times 10^{-13}$ , while the irregular oscillations are significantly reduced. At 100 runs (Fig. 5(c)), the spectrum becomes smooth. The irregular fluctuations observed at smaller sample sizes are nearly suppressed. The central peak stabilizes around $5.4 \\times 10^{-13}$ , consistent with the 50-run case, indicating statistical convergence. Although faint ripples remain, they are nearly regular and of much smaller amplitude. At this stage, the spectrum clearly exhibits a dominant Gaussian envelope.\n\nFrom the above discussion of the averaged momentum spectrum, it is evident that residual oscillations persist—particularly in the central region—even after averaging over a reasonably large number of numerical runs with randomized inter-pulse delay configurations. In this sense, the spectrum is characterized by a broad Gaussian-like envelope, on top of which oscillations\n\n![](images/37a90d771f5fb61dc7b1037498699479b5a6d7f5972a067ec312db2bce3a2536.jpg)\n\n![](images/74da78790539f4a4f9b33295cdbc458a11c3e885490fade222077b9646d05bb9.jpg)\n\n![](images/1fd9e547e1a7120b53cdfa4047ef24262c80f3f43a1cd39f8556b26aa55d91d7.jpg) \nFigure 4: Same as in Fig. 
2, except for an alternating-sign four-pulse electric field with $\sigma_T = 75[m^{-1}]$.

![](images/119b250c63411846c418b61a42bb21a397db030e8640e009d6b324d21fc1a1d1.jpg)

![](images/2272d5a9912b6f96d9c1a7d88fd093cee08f2639142130ac1e428999b7ef58ea.jpg)

![](images/c8edb257d6c29744952a450da27fce6ccfea348a443f57dd66cd5365f9c356b7.jpg)
Figure 5: Averaged momentum spectra $\bar{f}(p_3)$ computed over different numbers of random samples with randomized time delays for an alternating-sign four-pulse electric field $E(t)$. Each panel corresponds to a different number of averaging runs, with the black dashed curve indicating a Gaussian fit. The field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, $\mu_T = 180.32[m^{-1}]$, and $\sigma_T = 15[m^{-1}]$.

remain. To assess the convergence behavior of the averaged momentum spectrum with an increasing number of random realizations, we quantify the statistical convergence of $\bar{f}(p_3)$ using nonlinear least-squares fits to a Gaussian model,

$$
\bar{f}(p_{3}) = \frac{\mathcal{N}_{0}}{\sqrt{2\pi \mathcal{S}^{2}}} \exp\!\left[-\frac{(p_{3} - \bar{p}_{3})^{2}}{2\mathcal{S}^{2}}\right], \tag{11}
$$

and compute the reduced chi-squared $\left(\chi_{\mathrm{red}}^{2}\right)$ for each fit. The fitted parameters and goodness-of-fit metrics are summarized in Table 2.

Table 2: Fitted parameters and reduced chi-squared values for the Gaussian model applied to the averaged longitudinal momentum spectra, computed over different numbers of sample runs with randomized inter-pulse delays. The parameters $\mathcal{N}_0$, $\bar{p}_3$, and $\mathcal{S}$ represent the fitted peak amplitude, peak position, and spectral width, respectively. Quoted uncertainties correspond to one standard error.

<table><tr><td>Number of runs</td><td>N0(x10-13)</td><td>p3(x10-4)</td><td>S</td><td>χ2red(x10-2)</td></tr><tr><td>10 runs</td><td>3.8516 ± 0.0069</td><td>4.2761 ± 1.41</td><td>0.28128 ± 0.00081</td><td>8.78</td></tr><tr><td>30 runs</td><td>3.8995 ± 0.0039</td><td>-0.7363 ± 0.79</td><td>0.28135 ± 0.00046</td><td>2.78</td></tr><tr><td>50 runs</td><td>3.9168 ± 0.0034</td><td>1.0714 ± 0.68</td><td>0.28111 ± 0.00039</td><td>2.06</td></tr><tr><td>70 runs</td><td>3.8959 ± 0.0028</td><td>-1.1407 ± 0.56</td><td>0.28157 ± 0.00033</td><td>1.40</td></tr><tr><td>100 runs</td><td>3.8934 ± 0.0025</td><td>-1.7805 ± 0.50</td><td>0.28185 ± 0.00029</td><td>1.12</td></tr></table>

The results in Figure 5 show the averaged momentum spectra with Gaussian fits (black dashed curves), illustrating the emergence of statistical convergence as the number of runs used for averaging increases. For 10 runs, strong fluctuations remain on top of the Gaussian envelope, resulting in a large peak-position uncertainty $(\bar{p}_3 = (4.2761 \pm 1.41) \times 10^{-4})$ and a relatively high reduced chi-squared $(\chi_{\mathrm{red}}^2 = 8.78 \times 10^{-2})$, as summarized in Table 2. With 30-50 runs, the magnitude of the fluctuations decreases significantly, leading to smaller parameter uncertainties and improved fit quality $(\chi_{\mathrm{red}}^2 \sim 2 \times 10^{-2})$. At 50 runs, the spectrum is smoother, the peak position approaches zero $(\bar{p}_3 = (1.0714 \pm 0.68) \times 10^{-4})$, and the width is essentially converged $(\mathcal{S} = 0.28111 \pm 0.00039)$, as reported in Table 2.
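The fits reported in Table 2 can be reproduced along the following lines. This is a minimal sketch using scipy.optimize.curve_fit; the residual weighting and the exact reduced-chi-squared convention are illustrative assumptions, since the text does not spell them out, so its output should only be compared qualitatively with Table 2.

```python
# Minimal sketch (conventions assumed, see text above): fit the Gaussian model
# of Eq. (11) to an ensemble-averaged spectrum and report a reduced chi-squared.
import numpy as np
from scipy.optimize import curve_fit

def gauss_model(p3, N0, p3_bar, S):
    """Eq. (11): Gaussian envelope with weight N0, centre p3_bar and width S."""
    return N0 / np.sqrt(2.0 * np.pi * S**2) * np.exp(-(p3 - p3_bar)**2 / (2.0 * S**2))

def fit_gaussian(p3_grid, f_avg):
    """Nonlinear least-squares fit: best-fit parameters, 1-sigma errors,
    and a (convention-dependent) reduced chi-squared."""
    dp = p3_grid[1] - p3_grid[0]
    p0 = [f_avg.sum() * dp, 0.0, 0.3]                 # crude initial guess
    popt, pcov = curve_fit(gauss_model, p3_grid, f_avg, p0=p0)
    perr = np.sqrt(np.diag(pcov))
    resid = f_avg - gauss_model(p3_grid, *popt)
    dof = len(p3_grid) - len(popt)
    chi2_red = np.sum((resid / f_avg.max())**2) / dof  # peak-normalised residuals
    return popt, perr, chi2_red
```

Applied to the ensemble averages produced by the earlier sketches, fit_gaussian returns the triplet $(\mathcal{N}_0, \bar{p}_3, \mathcal{S})$ with one-standard-error uncertainties in the same format as Table 2.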
For 70-100 runs, the fitted parameters show only minor changes, and the reduced chi-squared decreases further to $\\sim 1 \\times 10^{-2}$ , demonstrating excellent agreement with the Gaussian model. At 100 runs, the spectrum becomes nearly smooth, confirming quantitative convergence and statistical robustness $(\\mathcal{N}_0 = 3.893 \\times 10^{-13}, \\bar{p}_3 = -1.7805 \\pm 0.50 \\times 10^{-4}, S = 0.28185 \\pm 0.00029$ ; see Table 2).\n\nThe central Region of Interest (ROI, $-0.3 < p_{3} < 0.3$ ) is used to quantify residual oscillations. The maximum amplitude and $\\chi_{\\mathrm{red}}^2$ of the Gaussian fit decrease systematically with an increasing number of runs, as summarized in Table 3: from $4.02 \\times 10^{-13}$ and $4.99 \\times 10^{-24}$ at 10 runs, to $\\sim 1.2 - 2.0 \\times 10^{-13}$ and $10^{-25} - 10^{-24}$ at 70-100 runs, with the reduced chi-squared in the ROI approaching $10^{-27}$ .\n\nTable 3: Maximum amplitude of oscillation and reduced chi-squared values in the central ROI $(-0.3 < p_3 < 0.3)$ for different numbers of runs. \n\n<table><tr><td>Number of runs</td><td>Max amplitude (×10-13)</td><td>χ2red (×10-26)</td></tr><tr><td>10</td><td>4.0248</td><td>1.6809</td></tr><tr><td>30</td><td>1.7816</td><td>0.4525</td></tr><tr><td>50</td><td>1.5648</td><td>0.3568</td></tr><tr><td>70</td><td>2.0040</td><td>0.3105</td></tr><tr><td>100</td><td>1.1815</td><td>0.2053</td></tr></table>\n\nTherefore, the results clearly demonstrate that increasing the number of runs improves statistical convergence. The spectral fluctuations are progressively suppressed, parameter uncertainties shrink, and the Gaussian model provides an increasingly reliable description of the underlying momentum distribution. By 100 runs, the momentum distribution is statistically robust, and the Gaussian model accurately represents the underlying spectrum.\n\nNext, we consider the case with a stronger degree of randomness, $\\sigma_T = 45[m^{-1}]$ . Figure 6 displays the averaged longitudinal momentum spectra $\\bar{f}(p_3)$ for this value of $\\sigma_T$ , corresponding to stronger temporal disorder. Similar to the $\\sigma_T = 15[m^{-1}]$ case (Fig. 5), strong oscillatory behaviour persists when averaging over only a few realizations (10 runs). These oscillations are gradually suppressed as the ensemble size increases, and the distribution converges toward a Gaussian-like profile with only small residual oscillations when 100 runs are included. Representative results for 10, 50, and 100 runs are shown in the three panels of Fig. 6. In each case, the spectra are fitted with the Gaussian model in Eq. (11), which captures the overall envelope of the distributions. The extracted fit parameters, along with the reduced chi-squared values, are listed in Table 4, providing a quantitative measure of convergence. For 10 runs [Fig. 6(a)], the spectrum remains highly irregular, with strong fluctuations about the Gaussian envelope. From Table 4, the fitted parameters are $\\mathcal{N}_0 = 3.8137 \\times 10^{-13}$ , $\\bar{p}_3 = 2.59 \\times 10^{-4}$ , and $S = 0.27947$ ,\n\n![](images/729a4b721efb150c3683c2d07ba6a5e3902ed54dc52303bf9f8c4316a5dad9d2.jpg)\n\n![](images/cf32cac6a9f6c49b263bfd7fa3bcbb6dd674d3e70ae9405c1a91847ca843767c.jpg)\n\n![](images/fa22eb8127fda5e2c07a7f257f9295e6e379f3d3ecefc6c70f11ff21b7f44e4c.jpg) \nFigure 6: Same as in Fig. 
5, except for an alternating-sign four-pulse electric field with $\\sigma_T = 45[m^{-1}]$ .\n\nwhile the reduced chi-squared, $\\chi_{\\mathrm{red}}^2 = 8.99\\times 10^{-2}$ , indicates that the Gaussian model captures only the broad trend but not the fine details. Thus, averaging over 10 runs is insufficient for statistical reliability. At 50 runs [Fig. 6(b)], the oscillation amplitudes are much reduced, and the spectrum closely resembles a Gaussian. The parameters stabilize to $\\mathcal{N}_0 = 3.733\\times 10^{-13}$ , $\\bar{p}_{3} = 2.126\\times 10^{-3}$ , and $S = 0.27924$ , with a reduced chi-squared of $2.25\\times 10^{-2}$ , signaling a marked improvement in the fit quality. Finally, for 100 runs [Fig. 6(c)], the averaged spectrum becomes very smooth, with residual oscillations strongly suppressed. The Gaussian model provides an excellent fit, with nearly saturated parameters $\\mathcal{N}_0 = 3.7629\\times 10^{-13}$ , $\\bar{p}_{3} = 5.69\\times 10^{-4}$ , and $S = 0.27989$ , while the reduced chi-squared drops to $1.05\\times 10^{-2}$ . This confirms that statistical convergence is achieved at large ensemble sizes.\n\nTable 4: Fitted parameters and reduced chi-squared values for the Gaussian model applied to the averaged longitudinal momentum spectra, computed over different numbers of ensemble runs with randomized time delays. Quoted uncertainties correspond to $1S$ standard errors. \n\n<table><tr><td>Number of runs</td><td>N0(x10-13)</td><td>p3(x10-4)</td><td>S</td><td>χ2red(x10-2)</td></tr><tr><td>10 runs</td><td>3.8137 ± 0.0065</td><td>2.5916 ± 1.32</td><td>0.27947 ± 0.00073</td><td>8.99</td></tr><tr><td>30 runs</td><td>3.8644 ± 0.0037</td><td>9.9292 ± 0.74</td><td>0.27881 ± 0.00041</td><td>2.88</td></tr><tr><td>50 runs</td><td>3.7332 ± 0.0032</td><td>21.257 ± 0.66</td><td>0.27924 ± 0.00036</td><td>2.25</td></tr><tr><td>70 runs</td><td>3.8164 ± 0.0027</td><td>7.9331 ± 0.55</td><td>0.28018 ± 0.00030</td><td>1.55</td></tr><tr><td>100 runs</td><td>3.7629 ± 0.0022</td><td>5.6917 ± 0.45</td><td>0.27989 ± 0.00025</td><td>1.05</td></tr></table>\n\nResidual oscillations in the central ROI are quantified in Table 5. At 10 runs, the largest fluctuation reaches $2.95 \\times 10^{-13}$ with $\\chi_{ROI}^2 = 4.02 \\times 10^{-24}$ . Increasing the ensemble size steadily suppresses these deviations: the fluctuation amplitude drops below $2.0 \\times 10^{-13}$ for 30-50 runs and reaches $1.15 \\times 10^{-13}$ at 100 runs, with $\\chi_{ROI}^2$ correspondingly reduced to $5.36 \\times 10^{-25}$ ( $\\chi_{\\mathrm{red}}^2 = 1.81 \\times 10^{-27}$ ). This monotonic improvement confirms that ensemble averaging systematically damps residual oscillations and yields statistically converged Gaussian spectra, though weak fluctuations remain visible even at 100 runs.\n\nThe averaged momentum spectra can then be directly compared with the non-stochastic case $(\\sigma_T = 0)$ . In this deterministic limit, the momentum distribution displays sharply resolved interference fringes, a clear manifestation of quantum coherence in the time domain (see Fig. 1. By contrast,\n\nTable 5: Residual oscillations in the central ROI $(-0.3 < p_3 < 0.3)$ for $\\sigma_T = 45$ $[m^{-1}]$ . Listed are the maximum amplitude of oscillation and the reduced chi-squared $\\chi_{\\mathrm{red}}^2$ for different ensemble sizes. 
\n\n<table><tr><td>Runs</td><td>Max amplitude (×10-13)</td><td>χ2red(×10-26)</td></tr><tr><td>10</td><td>2.9483</td><td>1.3519</td></tr><tr><td>30</td><td>1.8882</td><td>0.4341</td></tr><tr><td>50</td><td>1.8226</td><td>0.4677</td></tr><tr><td>70</td><td>1.4549</td><td>0.3129</td></tr><tr><td>100</td><td>1.1525</td><td>0.1805</td></tr></table>\n\nintroducing temporal randomness in the pulse sequence $(\\sigma_T > 0)$ progressively degrades this coherence. For low disorder $(\\sigma_T = 15[m^{-1}])$ , ensemble averaging suppresses fine oscillatory structures, producing a smoother spectral profile. At high disorder $(\\sigma_T = 45[m^{-1}])$ , coherence is largely destroyed, and the spectrum evolves into a broad Gaussian-like envelope with a central peak at $p_3 \\sim 0$ and exponential-like decay in the tails. Small residual oscillations remain in the central region, indicating that partial coherence persists even under significant randomness. These comparisons clearly show that the strength of timing disorder effectively governs the crossover from coherent, interference-dominated spectra to incoherent, Gaussian-like distributions. Ensemble averaging becomes especially important in realistic experimental settings where laser pulses exhibit intrinsic jitter. Averaging over approximately $50 - 100$ realizations is therefore essential for extracting physically meaningful and reproducible features in the presence of shot-to-shot variations [56].\n\nFigure 7 serves as a predictive benchmark that illustrates the transition from coherent to incoherent pair production as timing randomness increases. In the absence of randomness, the spectrum exhibits a sharp $N$ -slit interference pattern, with the central peak enhanced by a factor of $N^2$ relative to a single Sauter pulse—a hallmark of fully constructive quantum interference [36]. This scaling is evident in the upper panel, where the dashed blue curve matches $N^2 = 16$ times the single-pulse momentum distribution.\n\nIntroducing random delays randomizes the relative quantum phases between successive pulses. For individual stochastic realizations, the interference pattern becomes distorted and asymmetric (Figs. 2-4). When averaged over many such realizations, the interference fringes—which occur at different\n\n![](images/31f6341c46620a882a39a0424246bae0862070ae0891c9dcccb6c328e5b0869d.jpg)\n\n![](images/43f0949a2dec71a7ca7723b582bbc775cedbd31f26f6d295a6f5ac29dd2e02c5.jpg)\n\n![](images/31151348e1e0ef15030b75035805cec408716db55f42125fa6c7325753361016.jpg) \nFigure 7: Upper panel: Longitudinal momentum spectrum for the non-stochastic case ( $\\sigma_T = 0$ ). The dashed blue curve is $N^2 = 16$ times the single-pulse spectrum, illustrating the coherent $N^2$ enhancement. Middle panel: Ensemble-averaged spectrum over 100 realizations for $\\sigma_T = 15[m^{-1}]$ . The dashed blue curve is $(2N + 1)/2 \\approx 4.5$ times the single-pulse result, indicating partial loss of coherence. Lower panel: Ensemble-averaged spectrum over 100 realizations for $\\sigma_T = 45[m^{-1}]$ . 
The dashed blue curve again follows $(2N + 1)/2 \\approx 4.5$ times the single-pulse spectrum, confirming the transition to an incoherent sum of pulse contributions.\n\n![](images/c627987d49ab59cd120db705eaa2597f6e226f615250c773b413f4d057e49ddd.jpg)\n\n![](images/b4898732b660d8eb3ee2db75ce3879fec12b26569fbf5f6d8b533fa2a4f3fed8.jpg) \nFigure 8: Averaged distribution function $\\bar{f}$ at zero momentum, computed over multiple numerical runs with randomized inter-pulse delays, as a function of the randomness parameter $\\sigma_T$ for an alternating-sign $N$ -pulse electric field $E(t)$ with $N = 4$ . The values are normalized to the corresponding result for the non-stochastic case ( $\\sigma_T = 0$ ). The field parameters are $E_0 = 0.1E_c$ , $\\tau = 20$ , $[m^{-1}]$ , and $\\mu_T = 180.32$ , $[m^{-1}]$ .\n\nmomenta in each run—average out, leaving an incoherent sum of contributions from the $N$ individual pulses. This process is analogous to the central limit theorem, leading to a Gaussian-like envelope in the averaged spectrum.\n\nThe fitted Gaussian parameters (Tables 2 and 4) support this interpretation: the spectral width $\\mathcal{S}$ is consistent with that of a single Sauter pulse of duration $\\tau = 20[m^{-1}]$ and amplitude $E_0 = 0.1E_c$ . Thus, in the high-disorder limit, the averaged spectrum converges to approximately $N$ times the single-pulse momentum distribution.\n\nIn the middle and lower panels of Fig. 7, the dashed blue curve scales as $(2N + 1) / 2\\approx 4.5$ times the single-pulse result (for $N = 4$ ), rather than $N^2$ . This reduced scaling factor—close to $N$ rather than $N^2$ —quantitatively demonstrates the destruction of phase coherence by random time delays. The transition from $N^2$ to $\\sim N$ scaling reflects the shift from constructive interference of amplitudes to incoherent addition of probabilities. The emergence of a Gaussian-like envelope in the ensemble-averaged spectra is a direct signature of decoherence induced by timing disorder. It provides a clear link between the stochastic multi-pulse field and the underlying single-pulse momentum distribution, confirming that in the highly stochastic regime, pair production reduces to an incoherent sum of independent pulse contributions. Akkermans and Dunne [36] demonstrated that in a regular alternating-sign pulse train, the central peak scales as $N^2$ , making such configurations promising for enhancing Schwinger pair production. Our results extend this picture by showing that when randomness is introduced, the scaling transitions from $N^2$ to approximately $N$ , corresponding to the loss of quantum coherence. This insight is crucial for designing future experiments where timing jitter is unavoidable.\n\nWe now extend our analysis to examine the role of randomness in shaping the momentum spectrum of created EPPs. Increasing stochasticity in the inter-pulse delays generally degrades the interference fringes; however, the central peak of the spectrum remains especially sensitive to such variations. To study this effect, we focus on the distribution function at $\\pmb{p} = 0$ , which corresponds to the central peak in the non-stochastic limit and has been highlighted in earlier studies [36] as the point where the $N^2$ enhancement in pair production is concentrated. Motivated by this, we analyze how the distribution at $p = 0$ evolves with the degree of randomness, parameterized by the standard deviation $\\sigma_T$ . 
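The scan of $\bar{f}(\boldsymbol{p} = 0)$ versus $\sigma_T$ shown in Figures 8-10 has the following structure; a minimal sketch reusing draw_delays, make_fields, and residual_f from the earlier sketches, with an illustrative $\sigma_T$ grid and ensemble size (the full calculation is expensive, since every realization requires one ODE solve).

```python
# Minimal sketch (not the authors' code): ensemble-averaged f(p = 0) versus the
# randomness parameter sigma_T, normalised to the non-stochastic value.
import numpy as np

def f0_average(N_pulses, sigma_T, n_runs=50, mu_T=180.32, seed=0):
    """Average the zero-momentum distribution over n_runs random delay sets."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_runs):
        E, A = make_fields(draw_delays(N_pulses, mu_T, sigma_T, rng))
        vals.append(residual_f(0.0, 0.0, E, A))
    return np.mean(vals)

sigma_grid = np.linspace(0.0, 90.0, 31)    # illustrative sigma_T grid [1/m]
f0_ref = f0_average(4, 0.0, n_runs=1)      # sigma_T = 0: deterministic baseline
curve = np.array([f0_average(4, s) for s in sigma_grid]) / f0_ref
```

Repeating the scan with N_pulses = 12 or 20 and n_runs = 50 or 100 gives the analogues of Figures 9 and 10.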
For this purpose, we consider the averaged distribution function at zero momentum, $\\bar{f}(\\pmb{p} = 0)$ , as a function of $\\sigma_T$ , computed over ensembles of 50 and 100 realizations. Such averaging is crucial\n\n![](images/d84eba5312782282d3ee857bc08283d396f495d52c454f6b6770ceff434a2475.jpg)\n\n![](images/84a723ee24eccbb2fff522366033ef1c35eb6ac125b25d6afb2d90e0f13b88f1.jpg) \nFigure 9: The same as in Fig. 8, except for an alternating-sign $N$ -pulse electric field with $N = 12$ .\n\nin experimental situations involving stochastic pulse sources, where multiple laser shots are accumulated, and it is the averaged behavior that corresponds to the measured signal.\n\nFigure 8 shows $\\bar{f} (\\pmb {p} = 0)$ , normalized to its non-stochastic value $(\\sigma_T = 0)$ for a four-pulse sequence. Panels (a) and (b) present the results for averages over 50 and 100 configurations, respectively. At small $\\sigma_{T}$ , the averaged distribution function remains suppressed and nearly constant. As $\\sigma_T$ increases, irregular variations develop, and around $\\sigma_T\\approx 70[m^{-1}]$ sharp peaks appear in both panels. These peaks indicate that certain levels of timing randomness can lead to enhanced distribution function at zero momentum, $\\bar{f}$ . Their reproducibility with larger ensembles confirms that they are robust physical features rather than statistical fluctuations. Overall, increasing randomness tends to suppress the central peak, yet for specific values of $\\sigma_T$ , the averaged distribution at $p = 0$ can still be enhanced. This trend is consistent with the residual oscillatory behavior observed in the momentum spectrum near $p_3 = 0$ , which survives averaging over many random realizations.\n\nTo explore how the number of pulses affects the sensitivity to timing randomness, we now consider larger pulse trains with $N = 12$ and $N = 20$ . Figure 9 shows the averaged distribution function $\\bar{f}(\\pmb{p} = 0)$ , normalized to its value at $\\sigma_T = 0$ , for the case of $N = 12$ pulses. The results are averaged over (a) 50 and (b) 100 random configurations of the inter-pulse delays. For small $\\sigma_T$ , the distribution is strongly suppressed compared to the non-stochastic case, reflecting more pronounced destructive interference than for $N = 4$ scenario (Fig. 8). As $\\sigma_T$ increases beyond $20[m^{-1}]$ , fluctuations in $\\bar{f}$ begin to appear, and sharp peaks are visible across a broad range of $\\sigma_T$ . In contrast to the $N = 4$ case, where only a few peaks emerged at large $\\sigma_T$ , the $N = 12$ case exhibits a much richer structure, with many peaks distributed throughout $20 \\lesssim \\sigma_T \\lesssim 70[m^{-1}]$ . In this interval, the averaged values fluctuate around an order of magnitude above the non-stochastic baseline. At larger randomness, particularly near $\\sigma_T \\approx 80[m^{-1}]$ , a sharp enhancement appears. With 50 runs, this feature is already visible, but with 100 runs (panel b), it becomes clear and reproducible, showing an increase of more than four orders of magnitude. This peak originates from rare but favorable time-delay configurations and only emerges distinctly when averaging over sufficiently large ensembles. Overall, Fig. 
9 shows that for $N = 12$ the system is significantly more sensitive to stochastic delays than for smaller pulse numbers.

We now consider a multi-pulse train with $N = 20$ pulses, further increasing the number of pulses relative to the previous case ($N = 12$). Figure 10 shows the averaged distribution function $\bar{f}(p = 0)$, normalized to its value at $\sigma_T = 0$, as a function of the time-delay randomness parameter $\sigma_T$.

![](images/f19d0faeaf45c52907736a01e24df85aecc1c8ba2da99e6eaead9a92a4d59b15.jpg)

![](images/474597405234c8360de43f63fcda2bc0a49fd4ea223e86846374e3f4d7ccc1fe.jpg)
Figure 10: The same as in Fig. 8, except for an alternating-sign $N$-pulse electric field with $N = 20$.

For small $\sigma_T$, the nearly regular delays produce structured oscillations in $\bar{f}$. As $\sigma_T$ increases, the timing becomes irregular, and the distribution develops noisy fluctuations that strongly depend on each random realization. With 50 runs (panel a), the distribution remains suppressed at low $\sigma_T$, then rises with growing randomness, showing distinct peaks beyond $\sigma_T \sim 40[m^{-1}]$. With 100 runs (panel b), these peaks become sharp and reproducible, confirming that they are genuine features. Across both panels, $\bar{f}$ shows a clear progression as the randomness parameter increases. For $\sigma_T \lesssim 10$, the average already exceeds the $\sigma_T = 0$ value. In the intermediate range, $10 \lesssim \sigma_T \lesssim 40$, alternating maxima and minima appear, with enhancements reaching nearly tenfold. At larger values of $\sigma_T$, the amplification becomes much stronger, with the averaged distribution rising by almost three orders of magnitude around $\sigma_T \approx 50[m^{-1}]$. Very sharp peaks also occur at certain values, arising from rare delay configurations that yield exceptionally strong particle production. Compared to $N = 12$ (Fig. 9), the $N = 20$ case shows stronger amplification and clearer trends, while for $N = 4$ only a few peaks are visible. This progression highlights how larger pulse numbers enhance the sensitivity to timing randomness.

The emergence of sharp peaks in $\bar{f}(\boldsymbol{p} = 0)$ at large $\sigma_T$ can be understood through statistical sampling of delay configurations. At low $\sigma_T$ (coherent regime), the pulse train is nearly regular, and quantum phases are locked, especially for larger $N$, where the normalized $\bar{f}(0)$ starts well below unity. As $\sigma_T$ increases into the intermediate regime, phase coherence is broken and contributions from pulses add incoherently, yielding a gradual rise in $\bar{f}(0)$ without sharp features. At high $\sigma_T$, however, the broad distribution of delays creates a vast "search space" of possible pulse sequences. Within this space, random sampling occasionally generates rare, optimized configurations where the relative delays coincidentally align to produce strong constructive interference, more efficient than the simple incoherent sum.

For $N = 4$, a relatively large $\sigma_T$ ($\approx 60 - 70[m^{-1}]$) is needed to provide a sufficiently wide parameter space for these optimal configurations to appear with statistical significance in the ensemble average. In contrast, for larger $N$ (12, 20), the system starts in a deeper interference minimum, making it more sensitive to randomness. Moreover, with more pulses, there are more combinatorial possibilities for creating highly constructive sequences.
Consequently, the threshold $\sigma_T$ for observing sharp, order-of-magnitude enhancements is lower $(\approx 30 - 50[m^{-1}])$, as seen in Figs. 9 and 10.

Thus, the peaks reflect a stochastic optimization mechanism where randomness, at specific strengths, maximizes the probability of generating pulse trains that strongly enhance pair production. This insight suggests that tailored randomness could be strategically exploited to optimize pair production in experimental settings where perfect timing control is challenging.

The multi-pulse train with random delays explores an effectively infinite space of possible pulse timing arrangements. Our results demonstrate that increasing randomness can occasionally "find" rare configurations that strongly enhance pair production. This naturally raises the question: what is the optimal temporal arrangement of pulses that maximizes pair creation? While our study shows that such optimized configurations exist and can be accessed stochastically, a systematic search for the absolute optimum and a deeper analytical understanding of the underlying interference conditions go beyond the scope of the present work. This constitutes a compelling direction for future research.

# 4. Conclusions

We investigated the creation of EPPs in a sequence of alternating-sign Sauter-like pulses with randomized inter-pulse delays, modeled by a Gaussian distribution with standard deviation $\sigma_T$ controlling the degree of temporal disorder. For $N = 4$, the longitudinal momentum spectra exhibit a clear progression with increasing $\sigma_T$. In the deterministic limit ($\sigma_T = 0$), a regular $N$-slit interference pattern emerges, characterized by a dominant central band and symmetric side fringes with high visibility. For low randomness ($\sigma_T = 15[m^{-1}]$), the central broad band fragments into sub-bands, distorting the structure into irregular, asymmetric peaks and inducing a left-right asymmetry across runs. At moderate disorder ($\sigma_T = 45[m^{-1}]$), the continuous fringe pattern dissolves into clusters of irregular peaks, accompanied by suppressed side bands and enhanced run-to-run fluctuations. For strong randomness ($\sigma_T = 75[m^{-1}]$), the fringe-like interference pattern becomes almost completely disordered: the central region is densely populated with erratic peaks, and the notion of a band-like structure disappears entirely. Stochastic fluctuations, with pronounced run-to-run variability, dominate the resulting distribution. Taken together, these results demonstrate that increasing temporal randomness progressively degrades fringe-like patterns arising from quantum interference. While residual constructive interference persists at intermediate values of $\sigma_T$, the spectrum ultimately becomes dominated by irregular fluctuations as randomness grows, signaling a transition from a coherent interference-dominated regime to one governed by stochastic behavior. To obtain statistically reliable results, we performed ensemble averaging over multiple realizations, particularly for $N = 4$. The averaged momentum spectra exhibit a broad Gaussian-like envelope with residual oscillatory features, while central-region fluctuations persist even after averaging.
These findings are especially relevant for realistic experimental conditions, where averaging over multiple laser shots is necessary.

Interestingly, beyond the general trend of fringe-like interference pattern modification and suppression, we also observe that randomness can induce a pronounced enhancement of the central peak in the momentum spectrum. Specifically, the distribution function at zero momentum shows a fluctuating dependence on $\sigma_T$, with significant amplification at higher disorder. For $N = 12$, a noticeable enhancement appears around $\sigma_T \sim 40[m^{-1}]$, whereas for smaller pulse numbers ($N = 4$), a nearly tenfold increase is observed only at larger disorder, $\sigma_T \sim 70[m^{-1}]$. This progression with increasing $N$ highlights how larger pulse trains enhance the sensitivity to randomness. For example, at $N = 20$, even a modest value of $\sigma_T \approx 31[m^{-1}]$ produces a nearly tenfold increase in the central peak, while at $\sigma_T \approx 50[m^{-1}]$ certain configurations yield enhancements of up to three orders of magnitude. These amplification peaks are not statistical artifacts but arise from a stochastic optimization mechanism: at low $\sigma_T$, destructive interference dominates; as coherence breaks at intermediate $\sigma_T$, contributions add incoherently, lifting the suppression; at high $\sigma_T$, the broad delay distribution occasionally samples rare, favorable configurations where the relative delays accidentally align to produce strong constructive interference, more efficiently than in the regular or weakly random cases. Larger pulse numbers increase both the combinatorial space for such optimal sequences and the sensitivity to delay variations, lowering the $\sigma_T$ threshold for observable enhancements.

Our results demonstrate that temporal randomness is not merely a source of spectral degradation but can, under specific conditions, be strategically exploited to enhance pair production. In particular, certain stochastic configurations strongly amplify the central peak of the distribution function, suggesting that tailored randomness in pulse sequences could serve as a resource for optimizing pair yields in strong-field QED. These findings open up new pathways for designing multi-pulse schemes in environments where perfect timing control is experimentally challenging. A systematic exploration of the optimal temporal arrangements that maximize pair creation, and a deeper analytical understanding of these rare enhanced configurations, remain compelling directions for future research.

# Acknowledgments

We are grateful to the anonymous referee for constructive comments that helped improve the manuscript. Deepak Sah acknowledges financial support from the Raja Ramanna Centre for Advanced Technology (RRCAT) and the Homi Bhabha National Institute (HBNI).

# 5. Supplementary

# 5.1. Effect of the mean inter-pulse delay $\mu_T$ on momentum spectrum

The mean inter-pulse delay $\mu_T$ plays a decisive role in the coherent interference pattern of the momentum spectrum. In this supplementary section, we systematically analyze the effect of varying $\mu_T$ on both individual stochastic realizations and ensemble-averaged spectra, using $\sigma_T = 15[m^{-1}]$ and $N = 4$ as a representative case.

In a regular ($\sigma_T = 0$) alternating-sign pulse train, $\mu_T$ acts as the temporal equivalent of the slit separation in optical multi-slit interference.
The quantum phase difference accumulated between successive pulses scales proportionally to the product of the mean inter-pulse delay $\mu_T$ and the longitudinal momentum $p_3$. Even small changes in $\mu_T$ therefore shift the conditions for constructive and destructive interference, thereby reshaping the entire fringe pattern of the momentum spectrum.

Figure 11 illustrates this sensitivity by comparing longitudinal momentum spectra for $\mu_T = 181[m^{-1}]$ (magenta) with the baseline value $\mu_T = 180.32[m^{-1}]$ (blue) across three independent stochastic realizations (Runs I-III). The mere $0.38\%$ increase in $\mu_T$ leads to discernible modifications in each realization. In Run I, the central interference band shifts leftward, and the relative amplitudes of peaks are redistributed. In Run II, the fringe spacing visibly changes, and side-band structures emerge at different momentum values. In Run III, the symmetry about $p_3 = 0$ is broken, with one side of the spectrum noticeably enhanced relative to the other.

![](images/c91ddcef1b917b5fc245a278b4386e81d4d66965e19e0aee54302e44039d2096.jpg)

![](images/21aa703713011a3bac18fcb1ce571d34fd68376a6d27f6ff79ed9d6a8df05a36.jpg)

![](images/26d793ed644adee5c45259b4ded3d4a74da895342be8c1bd25833f69213dbe64.jpg)
Figure 11: Longitudinal momentum spectra for $N = 4$ pulses with $\sigma_T = 15[m^{-1}]$. Parameters: $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, $p_\perp = 0$. (a) Run I: $\{T_k\} = \{172.93, 186.06, 161.54, 192.33\}[m^{-1}]$; (b) Run II: $\{T_k\} = \{194.01, 166.06, 194.76, 172.13\}[m^{-1}]$; (c) Run III: $\{T_k\} = \{195.64, 173.15, 183.65, 208.68\}[m^{-1}]$. All quantities are in electron mass units.

![](images/8aeeed5fb4868db376275cc8fb2280d13fcfebd79ef25c5771dec6c87053817b.jpg)
Figure 12: Averaged momentum spectra $\bar{f}(p_3)$ computed over 30 random samples (runs) with randomized time delays for an alternating-sign four-pulse electric field $E(t)$. The field parameters are $E_0 = 0.1E_c$, $\tau = 20[m^{-1}]$, $\mu_T = 181[m^{-1}]$, and $\sigma_T = 15[m^{-1}]$.

These observations confirm that, even in the presence of weak randomness ($\sigma_T > 0$), the interference pattern remains sensitive to $\mu_T$ on a realization-by-realization basis. This sensitivity stems from the fact that each stochastic configuration of inter-pulse delays interacts distinctively with the underlying mean temporal structure, thereby imprinting $\mu_T$-dependent phase shifts on the quantum interference.

Figure 12 shows the corresponding ensemble-averaged momentum spectrum computed over 30 stochastic realizations for $\mu_T = 181[m^{-1}]$. Despite the sensitivity observed in individual runs, the averaged spectrum converges to a smooth, Gaussian-like envelope that is nearly identical to that obtained with the baseline value $\mu_T = 180.32[m^{-1}]$. This demonstrates that, while individual stochastic realizations are $\mu_T$-sensitive, the ensemble-averaged result is robust. The underlying reason is statistical: averaging over many random delay configurations washes out the phase-sensitive interference details, leaving only the incoherent sum of contributions from individual pulses, which depends primarily on the pulse shape and amplitude, not on $\mu_T$.

The insensitivity of the averaged spectrum to small $\mu_T$ variations is encouraging for experimental applications.
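As a rough order-of-magnitude check on this sensitivity (an illustrative estimate, assuming that the relevant dynamical phase accumulated between neighbouring pulses in the nearly field-free gaps is approximately $2\omega(\boldsymbol{p})\mu_T$, as in standard time-domain interference analyses):

```python
# Rough estimate (assumption stated above): change of the inter-pulse phase
# caused by shifting mu_T from 180.32 to 181 [1/m], in electron-mass units.
import numpy as np

m, p_perp = 1.0, 0.0
omega = lambda p3: np.sqrt(m**2 + p_perp**2 + p3**2)

d_mu = 181.0 - 180.32                    # 0.68 [1/m], a 0.38% shift of mu_T
for p3 in (0.0, 0.2):
    d_phase = 2.0 * omega(p3) * d_mu     # phase shift in radians
    print(p3, round(d_phase, 2))         # ~1.36 and ~1.39 rad
```

Even this sub-percent shift of $\mu_T$ moves the inter-pulse phase by an appreciable fraction of $\pi$, consistent with the realization-level fringe shifts seen in Figure 11; by the same estimate, the delay jitter $\sigma_T = 15[m^{-1}]$ scrambles the phase by tens of radians, which is why ensemble averaging removes the fringes altogether.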
In realistic laser setups, where $\\mu_T$ cannot be controlled with infinite precision, our results indicate several practical insights. First, single-shot measurements may show strong $\\mu_T$ -dependent variations due to the sensitivity of individual realizations. Second, however, multi-shot averaging will yield reproducible Gaussian-like spectra whose shape is largely independent of small systematic offsets in $\\mu_T$ . Third, the key parameter governing the transition from coherent to incoherent pair\n\nproduction is $\\sigma_T$ , not the exact value of $\\mu_T$ .\n\nThe value $\\mu_T = 180.32[m^{-1}]$ was chosen in the main text to facilitate direct comparison with Ref. [46]. Our supplementary analysis shows that individual stochastic spectra are sensitive to $\\mu_T$ due to phase-dependent interference, while ensemble-averaged spectra are robust against small $\\mu_T$ variations. Consequently, the main conclusions of our work—regarding the role of $\\sigma_T$ in driving the transition from interference-dominated to Gaussian-like spectra—remain valid across a range of $\\mu_T$ values. Thus, while $\\mu_T$ sets the coherent baseline, $\\sigma_T$ controls the degree of decoherence, making our findings broadly applicable to experimental scenarios where timing cannot be perfectly regularized.\n\n# References\n\n[1] F. Sauter, Über das verhalten eines elektrons im homogenen elektrischen feld nach der relativistischen theorie diracs, Zeitschrift für Physik 69 (1931) 742-764. doi:10.1007/BF01339461. \n[2] W. Heisenberg, H. Euler, Folgerungen aus der diracsschen theorie des positrons, Zeitschrift für Physik 98 (11-12) (1936) 714-732. arXiv:physics/0605038, doi:10.1007/BF01343663. \n[3] J. S. Schwinger, On gauge invariance and vacuum polarization, Physical Review 82 (1951) 664-679. doi:10.1103/PhysRev.82.664. \n[4] A. D. Piazza, C. Müller, K. Z. Hatsagortsyan, C. H. Keitel, Extremely high-intensity laser interactions with fundamental quantum systems, Reviews of Modern Physics 84 (2012) 1177-1228. arXiv:1111.3886, doi:10.1103/RevModPhys.84.1177. \n[5] G. Mourou, T. Tajima, Summary of the izest science and aspiration, European Physical Journal Special Topics 223 (6) (2014) 979-984. doi:10.1140/epjst/e2014-02148-4. \n[6] A. Ringwald, Pair production from vacuum at the focus of an x-ray free electron laser, Physics Letters B 510 (2001) 107-116. arXiv:hep-ph/0103185, doi:10.1016/S0370-2693(01)00496-8. \n[7] A. R. Bell, J. G. Kirk, Possibility of prolific pair production with high-power lasers, Physical Review Letters 101 (2008) 200403. doi:10.1103/PhysRevLett.101.200403.\n\n[8] J. W. Yoon, Y. G. Kim, I. W. Choi, J. H. Sung, H. W. Lee, S. K. Lee, et al., Realization of laser intensity over $10^{23}\\mathrm{w/cm^2}$ , Optica 8 (5) (2021) 630-635. doi:10.1364/OPTICA.420520. \n[9] S. Augustin, C. Müller, Nonperturbative bethe-heitler pair creation in combined high- and low-frequency laser fields, Physics Letters B 737 (2014) 114-119. arXiv:1406.6263, doi:10.1016/j.physletb.2014.08.042. \n[10] C. Müller, A. B. Voitkiv, N. Grün, Nonlinear bound-free pair creation in the strong electromagnetic fields of a heavy nucleus and an intense x-ray laser, Physical Review Letters 91 (2003) 223601. doi:10.1103/PhysRevLett.91.223601. \n[11] A. D. Piazza, Nonlinear breit-wheeler pair production in a tightly focused laser beam, Physical Review Letters 117 (21) (2016) 213201. arXiv:1608.08120, doi:10.1103/PhysRevLett.117.213201. \n[12] K. Krajewska, J. Z. 
# Layer-2 Adoption and Ethereum Mainnet Congestion: Regime-Aware Causal Evidence Across London, the Merge, and Dencun (2021–2024)

Abstract

Do Ethereum's Layer-2 (L2) rollups actually decongest the Layer-1 (L1) mainnet once protocol upgrades and demand are held constant? Using a 1,245-day daily panel from August 5, 2021 to December 31, 2024 that spans the London, Merge, and Dencun upgrades, we link Ethereum fee and congestion metrics to L2 user activity, macro-demand proxies, and targeted event indicators. We estimate a regime-aware error-correction model that treats posting-clean L2 user share as a continuous treatment. Over the pre-Dencun (London+Merge) window, a 10 percentage point increase in L2 adoption lowers median base fees by about $13\%$ (roughly 5 Gwei at pre-Dencun levels), and deviations from the long-run relation decay with an 11-day half-life. Block utilization and a scarcity index show similar congestion relief. After Dencun, L2 adoption is already high and treatment support narrows, so blob-era estimates are statistically imprecise and we treat them as exploratory. The pre-Dencun window therefore delivers the first cross-regime causal estimate of how aggregate L2 adoption decongests Ethereum, together with a reusable template for monitoring rollup-centric scaling strategies.

Keywords: Ethereum; Layer-2 rollups; transaction fees; congestion; causal time series

JEL Classification: C22, C54, L86, O33

# 1 Introduction

Ethereum's fee market has traversed three structural regimes in rapid succession—London's EIP-1559 base-fee burn, the Merge's proof-of-stake transition, and Dencun's EIP-4844 blob space. Each upgrade reshaped how congestion costs are priced and burned but did not expand Layer-1 (L1) execution capacity. Bursts of NFT minting, stablecoin arbitrage, or L2 posting therefore still push median fees into the tens of Gwei and crowd out smaller users. Over the same period, optimistic and zero-knowledge Layer-2 (L2) rollups matured from pilots into production systems that regularly settle more than half of Ethereum's transactions. These rollups offload execution but also consume L1 blockspace when publishing compressed batches. This creates an open question: does aggregate L2 adoption relieve mainnet congestion or merely reshuffle it across users, time, and layers? We ask: when overall demand and protocol regime are held constant, does higher L2 user adoption reduce Ethereum mainnet congestion?

Our main findings are straightforward. Over the London $\rightarrow$ Merge window, a 10 percentage point increase in posting-clean L2 adoption is associated with about a $13\%$ reduction in median base fees. That corresponds to roughly 5 Gwei at pre-Dencun fee levels. An error-correction term implies an 11-day half-life back to the long-run relation between adoption, congestion, and demand. The fee relief is therefore meaningful but partial and short-run. Supporting metrics based on block utilization and a scarcity index show similar congestion relief. Blob-era slopes after Dencun are statistically imprecise because adoption is already near saturation, so we treat those estimates as exploratory.

Existing work on Ethereum's fee market and rollups shows how individual upgrades and rollup designs affect incentives, price discovery, and posting costs. However, most studies focus on single events or descriptive dashboards rather than regime-spanning causal estimates.
Empirical analyses of fee-market upgrades and rollup pricing quantify local changes in fees, waiting times, or cross-rollup spreads. They do not estimate the total effect of aggregate L2 adoption on mainnet congestion across the London $\rightarrow$ Merge $\rightarrow$ Dencun sequence or cleanly separate that effect from shared demand shocks. We address this gap by assembling a regime-aware daily panel of $N = 1,245$ observations from August 5, 2021 through December 31, 2024 that spans the London, Merge, and post-Dencun eras. The panel links median base fees, block utilization, and a congestion scarcity index to a posting-clean measure of L2 user adoption and to a single demand factor summarizing ETH-market activity and stablecoin flows. Calendar and regime dummies plus targeted event indicators capture protocol shifts and discrete shocks. We estimate a regime-aware error-correction model and complementary time-series designs to map adoption shocks into short-run and medium-run congestion outcomes. The adoption measure counts end-user transactions on rollups and mainnet while excluding L2-to-L1 posting flows, so the adoption $\rightarrow$ posting $\rightarrow$ congestion channel remains part of the estimand. Together with the demand factor, this keeps the estimand focused on the total effect of user migration onto L2s without conditioning on mediator pathways. Section 4 provides the full construction details and adjustment logic. # 1.1 Contributions Our contributions are fourfold: 1. Cross-regime causal estimate. We provide a regime-aware causal estimate of the total effect of L2 adoption on L1 fees spanning the London $\rightarrow$ Merge $\rightarrow$ Dencun sequence, rather than focusing on a single upgrade or contemporaneous correlations. 2. Measurement design. We introduce a posting-clean adoption measure and a demand factor that deliberately exclude mediator pathways, offering a reusable template for avoiding post-treatment conditioning in blockchain congestion studies. 3. Policy translation. We map semi-elasticities into Gwei and dollar savings for representative transactions and adoption scenarios, connecting econometric quantities to fee levels and cost savings that protocol designers and users directly observe. 4. Template for monitoring. We combine a regime-aware error-correction framework with a compact set of diagnostics into a monitoring toolkit that can be updated as new data arrive and ported to other rollup-centric ecosystems. # 1.2 Roadmap Section 2 situates this contribution relative to empirical studies of Ethereum's fee market, rollup design, and causal time-series methods, highlighting why existing work cannot recover the total effect of aggregate L2 adoption on mainnet congestion. Section 3 describes the panel construction and variable definitions, and Section 4 outlines the causal design and estimators. Section 5 reports the empirical results, and Sections 6-7 discuss implications and conclude. Appendix A documents the data and code assets, and the replication repository carries the full reproducibility record. # 2 Related Work # 2.1 Fee-Market Design and Ethereum Upgrades Scholarship on Ethereum's fee market shows how protocol upgrades reshape incentives without immediately expanding Layer-1 (L1) throughput. EIP-1559's base-fee burn and elastic block size improved price discovery and reduced fee volatility while leaving the hard cap on computation unchanged (Buterin et al., 2021). 
The Merge stabilized slot times and validator incentives without materially increasing execution capacity. Dencun's EIP-4844 then introduced dedicated blob space that dramatically reduced Layer-2 (L2) posting costs (Buterin et al., 2024). Empirical analyses of EIP-1559 document how the new fee mechanism affects transaction fees, waiting times, and consensus margins (Liu et al., 2022), while recent work on L2 arbitrage and rollup pricing studies cross-rollup spreads and the interaction between posting costs and liquidity provision (Gogol et al., 2024; Wang et al., 2025). Existing empirical work on Ethereum's fee market and rollups therefore either focuses on a single upgrade such as EIP-1559 or on protocol-level behavior inside specific rollup or application ecosystems, carefully quantifying local changes in fees, spreads, or posting costs but not the total effect of aggregate L2 user adoption on mainnet congestion across multiple protocol regimes. Industry observatories track the resulting growth of optimistic and zero-knowledge rollups, transitions from calldata to blob usage, and the emergence of posting-fee arbitrage, $^{1}$ but they typically treat L2 posting as part of user demand or abstract from macro shocks that jointly affect L1 congestion and L2 adoption. Our design fills this gap by treating L2 adoption as a continuous treatment and explicitly modeling the sequence of London, Merge, and Dencun regimes. # 2.2 Empirical Congestion and Causal Time-Series Methods Causal and time-series methods developed in adjacent technology and financial settings provide templates for credible evaluation of congestion policies. Interrupted time series (ITS) and segmented regression remain staples for policy impact analysis (Bernal et al., 2017; Penfold and Zhang, 2013). Continuous-treatment event studies extend difference-in-differences logic to dosage-style shocks with explicit pre-trend tests (de Chaisemartin and D'Haultfoeuille, 2020). Bayesian Structural Time Series (BSTS) constructs probabilistic counterfactual paths with state-space components for trends, seasonality, and contemporaneous covariates (Brodersen et al., 2015), and Regression Discontinuity in Time (RDiT) exploits sharp policy boundaries when smoothness assumptions hold (Hausman and Rapson, 2018). These designs have been deployed in fintech launches, payment reforms, and energy-market interventions, and they underlie several recent empirical studies of blockchain fee dynamics and rollup pricing. Yet existing congestion studies rarely combine DAG-guided adjustment sets, mediator exclusion, and semi-elasticity reporting that maps coefficients into user-level cost changes. # 2.3 Broader Congestion and Market-Design Literatures Regulatory and market-microstructure literatures highlight the perils of conditioning on post-treatment variables when evaluating market design. Work on tax holidays, exchange-fee rebates, and telecom interconnection policies stresses the need for clean treatment definitions and transparent adjustment sets to maintain credibility when interventions unfold over multiple regimes. In the rollup-centric roadmap, L2 adoption both responds to and influences L1 congestion, so empirical strategies must avoid conditioning on posting flows and clearly distinguish exploratory diagnostics from confirmatory estimands. 
Viewed through this lens, Ethereum's L1/L2 stack resembles other congestion-pricing problems in transportation networks, electricity grids, and payment systems: multiple service layers share a common bottleneck, and welfare depends on how incentives, fee schedules, and governance are coupled across layers. Existing studies either focus on single upgrades, rely on contemporaneous correlations pulled from dashboards, or embed L2 posting in both treatment and controls, diluting the estimand. To our knowledge, there is no regime-aware, DAG-grounded causal study that estimates the total effect of L2 adoption on L1 congestion across London, the Merge, and Dencun, nor one that pairs a posting-clean treatment with a demand factor that excludes mediator pathways. This study fills that gap by providing cross-regime semi-elasticities and adjustment dynamics that speak directly to Ethereum's rollup-centric scaling roadmap.

# 3 Data and Variables

We construct a daily UTC panel that tracks Ethereum Layer-1 congestion, Layer-2 user activity, and macro-demand proxies across the London, Merge, and Dencun upgrades. Each observation aggregates raw L1 and L2 transaction traces, blob-fee data, off-chain market indicators, and a curated event list into the variables summarized in Table 1. The unit of analysis is a calendar day, and unless stated otherwise all quantities are computed on this daily grid.

# 3.1 Sample Window, Regimes, and Panel Snapshot

Our daily sample runs from 5 August 2021 (London / EIP-1559 activation) through 31 December 2024, yielding $N = 1,245$ UTC days. It spans three protocol regimes: London (406 days), Merge (545 days), and the post-Dencun blob era (294 days). Figure 1 plots the posting-clean L2 transaction share $A_{t}^{clean}$, log base fee, block utilization, and the scarcity index across the four labeled regimes (pre-London, London→Merge, Merge→Dencun, post-Dencun); shaded bands mark the upgrade dates that define the regime indicators $\mathbf{R}_{t}$. Unless noted otherwise, the pre-Dencun (London+Merge; $N = 951$) window is the confirmatory window because $A_{t}^{clean}$ still traverses a wide portion of the unit interval. The blob-era post-Dencun window is retained for descriptive context, as $A_{t}^{clean}$ is already near saturation (Section 5.3). Descriptive figures and summary statistics continue to use the full $N = 1,245$-day panel.
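To make the regime and calendar structure concrete, the indicators in $\mathbf{R}_t$ and $\mathbf{Cal}_t$ can be rebuilt from the upgrade dates alone. The following is a minimal pandas sketch under the assumption of a daily UTC grid; the column names and the quarter-turn definition are illustrative rather than taken from the replication package.

```python
import pandas as pd

# Hypothetical daily UTC grid spanning the sample window described above.
days = pd.date_range("2021-08-05", "2024-12-31", freq="D", tz="UTC")
panel = pd.DataFrame(index=days)

# Regime indicators R_t keyed to the upgrade dates used in the text.
MERGE = pd.Timestamp("2022-09-15", tz="UTC")
DENCUN = pd.Timestamp("2024-03-13", tz="UTC")
panel["regime_london"] = (panel.index < MERGE).astype(int)
panel["regime_merge"] = ((panel.index >= MERGE) & (panel.index < DENCUN)).astype(int)
panel["regime_dencun"] = (panel.index >= DENCUN).astype(int)

# Calendar dummies Cal_t: UTC weekend, month-end, quarter turn (illustrative definitions).
panel["weekend"] = (panel.index.dayofweek >= 5).astype(int)
panel["month_end"] = panel.index.is_month_end.astype(int)
panel["quarter_turn"] = (panel.index.is_quarter_start | panel.index.is_quarter_end).astype(int)

print(panel[["regime_london", "regime_merge", "regime_dencun"]].sum())
```

On this grid the three regime dummies sum to 406, 545, and 294 days, matching the regime counts reported above.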
Table 1 summarizes the key variables and data sources; extended descriptive and treatment-support diagnostics appear in Appendix B.

Table 1: Key Variables and Data Sources

<table><tr><td>Role</td><td>Symbol</td><td>Description</td><td>Construction (brief)</td><td>Source(s)</td></tr><tr><td>Treatment</td><td>$A_t^{clean}$</td><td>Posting-clean L2 adoption share</td><td>Daily share of L2 end-user tx in total L1+L2 user tx; L2→L1 postings removed from both sides</td><td>L1/L2 traces; rollup inbox registry</td></tr><tr><td>Outcome</td><td>$\log C_t^{fee}$</td><td>Log median base fee</td><td>Log of median EIP-1559 base fee (Gwei) across blocks in day t</td><td>Ethereum mainnet block traces; public feed dashboards</td></tr><tr><td>Outcome</td><td>$u_t$</td><td>Block utilization</td><td>Median gas used divided by gas limit across blocks in day t</td><td>Ethereum mainnet block traces</td></tr><tr><td>Outcome</td><td>$S_t$</td><td>Scarcity index</td><td>Composite (base + tip + blob) fee index relative to smoothed demand benchmark (Appendix G)</td><td>Ethereum execution and blob-fee data</td></tr><tr><td>Control</td><td>$D_t^*$</td><td>Demand factor</td><td>First PC of ETH returns, CEX volumes, realized volatility, search intensity, and net stablecoin issuance; standardized</td><td>Off-chain market data; Google Trends</td></tr><tr><td>Control</td><td>$\mathbf{R}_t$</td><td>Regime indicators</td><td>Dummies for London, Merge, post-Dencun regimes</td><td>Protocol upgrade calendar</td></tr><tr><td>Control</td><td>$\mathbf{Cal}_t$</td><td>Calendar dummies</td><td>UTC weekend, month-end, and quarter-turn indicators</td><td>Calendar</td></tr><tr><td>Control</td><td>$\mathbf{Shock}_t$</td><td>Targeted shock dummies</td><td>Event flags for mega NFT mints, sequencer outages, airdrop claim days, market-stress episodes (Table 14)</td><td>Curated event catalog</td></tr></table>

Figure 1: Regime-Aware Time Series Overview (panel title: Evolution of L1-L2 dynamics, 2019-2024). Note: Daily UTC aggregates for treatment ($A_{t}^{clean}$) and congestion outcomes ($\log C^{fee}$, utilization $u_{t}$, scarcity $S_{t}$). Shaded bands mark London (2021-08-05), Merge (2022-09-15), and Dencun (2024-03-13); lines show 7-day rolling medians with a log scale for congestion metrics.

# 3.2 Treatment: Posting-Clean Adoption Share

We define the treatment as the posting-clean adoption share,

$$
A_{t}^{clean} = \frac{\text{L2 user transactions}_{t}}{\text{L2 user transactions}_{t} + \text{L1 user transactions}_{t}}.
$$

We identify posting transactions via a point-in-time join against the rollup inbox registry. These postings are removed from both numerator and denominator before computing the share, so $A_{t}^{clean}$ captures end-user execution rather than sequencer posting burden. The construction is applied consistently across the set of canonical Ethereum rollups tracked in our registry, and all quantities are aggregated to the daily UTC grid. The rollup set includes Arbitrum, Optimism, Base, zkSync, Starknet, Linea, and Scroll; Appendix G.3 states the rollup set, and the replication bundle provides the full L2 inbox registries table with contract mappings. By stripping posting transactions from the share, we avoid conditioning on the L2 posting load that sits on the $A_{t}^{clean} \to P_{t} \to C_{t}$ path; Section 4.1 discusses this mediator logic in detail.
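As a concrete illustration of the construction above, the share can be computed from daily counts once posting transactions have been tagged through the inbox-registry join. This is a minimal sketch with hypothetical column names (l1_tx_total, l2_tx_total, l1_posting_tx, l2_posting_tx); the replication bundle documents the actual registry join.

```python
import pandas as pd

def posting_clean_share(df: pd.DataFrame) -> pd.Series:
    """Posting-clean L2 adoption share A_t^clean (Section 3.2).

    Assumes a daily frame with hypothetical columns:
      l1_tx_total, l2_tx_total      -- raw transaction counts per layer
      l1_posting_tx, l2_posting_tx  -- batch/posting transactions flagged on each
                                       layer via the rollup inbox registry
    Posting transactions are removed from both sides before forming the share.
    """
    l1_user = df["l1_tx_total"] - df["l1_posting_tx"]
    l2_user = df["l2_tx_total"] - df["l2_posting_tx"]
    return (l2_user / (l2_user + l1_user)).rename("a_clean")
```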
# 3.3 Outcomes and Congestion Metrics

The primary outcome is the log median EIP-1559 base fee, $\log C_t^{fee} = \log (\text{median base fee}_t)$, computed from canonical Ethereum JSON-RPC traces and cross-checked against public explorers, mirroring the construction in Liu et al. (2022). For each day $t$ we take the median base fee across blocks and then apply the natural logarithm. We track two secondary congestion outcomes. Block utilization $u_{t}$ is the median ratio of gas used to the regime-specific gas limit across blocks in day $t$, $u_{t} = \mathrm{median}_{b\in t}\left(\frac{\mathrm{gas~used}_b}{\mathrm{gas~limit}_b}\right)$. The harmonized scarcity index $S_{t}$ combines base fees, priority tips, and blob fees into a single congestion proxy by scaling total per-unit fees relative to a smoothed execution-demand benchmark; the full construction (smoothing window, regime-aware components, and units) is documented in Appendix G. Figure 1 shows that median fees fall sharply after Dencun while utilization and scarcity compress, consistent with blob space easing congestion pressure. All three outcomes are winsorized at the $0.5\%$ tails and share the same $N = 1,245$ daily coverage as the treatment.

# 3.4 Controls and Auxiliary Inputs

We construct three groups of auxiliary variables—all defined on the same daily UTC grid as the treatment and outcomes—that will later enter the adjustment set $X_{t}$:

- Demand factor $(D_t^*)$. We condense ETH log returns, centralized-exchange (CEX) log volumes, realized volatility, Google search intensity, and net stablecoin issuance into the first principal component, standardized to mean zero and unit variance. These inputs are purely off-chain and are detailed in the measurement appendix.
- Regime and calendar indicators $(\mathbf{R}_t, \mathbf{Cal}_t)$. Regime dummies flag the London, Merge, and post-Dencun eras. Calendar dummies mark weekends, month-ends, and quarter turns to capture deterministic seasonality documented in exploratory diagnostics.
- Targeted event dummies $(\mathbf{Shock}_t)$. A curated event catalog covers mega NFT mints, sequencer outages, notable airdrop claim days, and major market-stress episodes; the full list appears in Table 14.

All days and calendar indicators are defined in UTC to match the aggregation grid. Together these variables form the adjustment set $\{D_t^*,\mathbf{R}_t,\mathbf{Cal}_t,\mathbf{Shock}_t\}$ used in the ITS-ECM specifications summarized in Section 4 and listed in Table 1.

- Summary. Daily UTC panel (5 August 2021-31 December 2024; $N = 1,245$) combining: (i) L1 and L2 on-chain traces for the posting-clean adoption share $A_{t}^{clean}$; (ii) EIP-1559 fee and gas-usage data for congestion metrics $(\log C_{t}^{fee}, u_{t}, S_{t})$; and (iii) off-chain market and search data, protocol calendars, and curated events for the controls $\{D_{t}^{*}, \mathbf{R}_{t}, \mathbf{Cal}_{t}, \mathbf{Shock}_{t}\}$. The pre-Dencun (London+Merge; $N = 951$) window is the primary window with wide treatment support; post-Dencun days are retained descriptively.
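For concreteness, the demand factor $D_t^*$ described in Section 3.4 can be sketched as the first principal component of standardized off-chain proxies. The column names are hypothetical placeholders for the inputs listed above, and the sign normalization is one reasonable convention rather than the replication package's exact choice.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def demand_factor(df: pd.DataFrame) -> pd.Series:
    """First principal component of the off-chain demand proxies (Section 3.4).

    Hypothetical input columns: eth_log_return, cex_log_volume, realized_vol,
    search_intensity, net_stablecoin_issuance. Returns a standardized daily factor.
    """
    cols = ["eth_log_return", "cex_log_volume", "realized_vol",
            "search_intensity", "net_stablecoin_issuance"]
    X = StandardScaler().fit_transform(df[cols])
    pc1 = PCA(n_components=1).fit_transform(X)[:, 0]
    pc1 = (pc1 - pc1.mean()) / pc1.std()   # mean zero, unit variance
    # Note: the PCA sign is arbitrary; flip it if higher values should mean higher demand.
    return pd.Series(pc1, index=df.index, name="d_star")
```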
# 4 Methodology

Method overview. We study how the daily posting-clean Layer-2 adoption share $A_{t}^{clean}$ affects Ethereum Layer-1 congestion using an interrupted time-series (ITS) design. The main estimand is a semi-elasticity: the percentage change in the typical user's base fee for a 1 percentage point rise in $A_{t}^{clean}$, which we report per 10 percentage points to match observed adoption swings. Our confirmatory analysis uses a levels specification and a corresponding error-correction model (ECM) for short-run dynamics with a fixed outcome family and multiple-testing adjustments; exploratory extensions reuse the same adjustment set but relax some of these constraints.

# 4.1 Causal Estimand and DAG

# 4.1.1 Estimand in plain language

Formally, our main estimand is a semi-elasticity: the percentage change in the median base fee associated with a 1 percentage point increase in $A_t^{clean}$, conditional on macro-demand, protocol regime, and calendar effects. Reporting effects for a 10 percentage point change aligns the scale with typical observed shifts in L2 market share. Economically, this measures how much a "typical" user's base fee responds to a shift in aggregate L2 adoption, holding the broader environment fixed. Treatment is $A_{t}^{clean}$; the confirmatory outcome family is $C_{t} = (\log C_{t}^{fee}, \log S_{t})$ with utilization $u_{t}$ reported as exploratory. The adjustment vector $X_{t} = \{D_{t}^{*}, \mathbf{R}_{t}, \mathbf{Cal}_{t}, \mathbf{Shock}_{t}\}$ matches the covariates introduced in Section 3. For brevity in figures we occasionally write $A_{t}$; throughout this section $A_{t} \equiv A_{t}^{clean}$, the posting-clean adoption share defined in Section 3.2. Construction details, PCA loadings, and validation diagnostics remain in the methodology appendix and the public replication package (Appendix A).

# 4.1.2 DAG and identification logic

Figure 2 summarizes the causal structure we assume.

Figure 2: Directed Acyclic Graph for Total-Effect Identification. Paths: Solid = primary causal; dashed = confounding; dash-dotted = mediation; dotted = dynamic feedback. Nodes: Light grey = confounders; medium grey = treatment; darker grey = mediator; darkest grey = outcome. Note: The DAG encodes treatment $A_{t}^{clean}$ (posting-clean L2 adoption share; labeled $A_{t}$ in the graphic for brevity), outcomes $C_{t}$ (congestion metrics), confounders $D_{t}^{*}$ (latent demand) and $U_{t}$ (protocol regimes), mediator $P_{t}$ (posting load), and dynamic feedback $C_{t-1}$. Conditioning on $\{D_{t}^{*}, U_{t}, \mathbf{Cal}_{t}, \mathbf{Shock}_{t}\}$ blocks the main back-door paths while the mediator-exclusion principle keeps posting activity out of the control set. Dynamic feedback is addressed via deterministic trends and robustness checks.

Concretely, $A_{t}^{\text{clean}}$ is the daily posting-clean adoption share from Section 3.2, $C_{t}$ stacks the congestion metrics introduced in Section 3.3, $D_{t}^{*}$ is the off-chain latent demand factor in Section 3.4, $U_{t}$ corresponds to the regime indicators $\mathbf{R}_{t}$ in Section 3.1, and $P_{t}$ denotes the posting load on the $A_{t}^{\text{clean}} \to P_{t} \to C_{t}$ path. Intuitively, both adoption and congestion respond to underlying demand shocks—ETH price moves, DeFi/NFT cycles, and macro news—summarized by $D_{t}^{*}$ together with regime, calendar, and targeted-shock indicators. Higher adoption raises posting load $P_{t}$ through data-availability transactions, which in turn pushes up congestion $C_{t}$. Because our target is the total effect of adoption on congestion, we adjust for these common shocks while deliberately leaving the $A_{t}^{\mathrm{clean}} \rightarrow P_{t} \rightarrow C_{t}$ path open. The posting-clean construction subtracts L2 posting transactions from both numerator and denominator when forming $A_{t}^{\mathrm{clean}}$, so the treatment reflects end-user execution rather than sequencer posting burden and we avoid "bad-control" contamination of the total-effect estimand (Wang et al., 2025).
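The back-door logic encoded in Figure 2 can be checked mechanically on a stylized version of the graph. The sketch below assumes networkx is available (the d-separation helper is exposed as nx.is_d_separator in recent releases and nx.d_separated in older ones) and folds the calendar and shock indicators into the regime node for brevity.

```python
import networkx as nx

# Stylized version of the Figure 2 DAG; Cal_t and Shock_t behave like U_t here.
G = nx.DiGraph([
    ("D", "A"), ("D", "C"),   # latent demand confounds adoption and congestion
    ("U", "A"), ("U", "C"),   # protocol regime confounds both
    ("A", "P"), ("P", "C"),   # mediated path: adoption -> posting load -> congestion
    ("A", "C"),               # direct path
])

# Back-door check: delete edges leaving the treatment, then test whether the
# adjustment set separates treatment and outcome in the remaining graph.
backdoor = G.copy()
backdoor.remove_edges_from(list(G.out_edges("A")))

Z = {"D", "U"}  # adjustment set stands in for {D_t^*, R_t, Cal_t, Shock_t}
d_sep = getattr(nx, "is_d_separator", None) or nx.d_separated  # version-dependent name
print(d_sep(backdoor, {"A"}, {"C"}, Z))  # True: all back-door paths are blocked
```

Deleting the edges that leave $A_t$ and testing d-separation given $\{D_t^*, U_t\}$ confirms that, in this stylized graph, the only association left between adoption and congestion flows through the causal paths (direct and posting-mediated) that the total-effect estimand is meant to keep open.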
Operationally, the adjustment set $X_{t} = \{D_{t}^{*},\mathbf{R}_{t},\mathbf{Cal}_{t},\mathbf{Shock}_{t}\}$ is built to support the identification assumptions listed below using three design choices, backed by diagnostics in the methodology appendix. First, the latent-demand factor uses only off-chain proxies so that mediator pathways (such as L2 posting) are excluded by construction. Second, deterministic regime and calendar structure capture discontinuities from protocol upgrades and recurring seasonality, preventing them from contaminating $A_{t}^{\mathrm{clean}}$. Third, targeted shock dummies isolate large day-specific shocks (NFT mega-mints, macro turmoil, sequencer outages) that would otherwise spill into both treatment and outcomes. With these controls active, the remaining identifying variation is slow-moving adoption intensity that is plausibly less contaminated by concurrent demand shocks, conditional on $X_{t}$.

Identification assumptions. These design choices are intended to make the following assumptions plausible:

1. Conditional exchangeability: Sequential ignorability holds once we condition on $X_{t}$; the covariate definitions and targeted-event coverage tables in the measurement appendix document how each covariate maps to the back-door paths in Figure 2.
2. Positivity within regimes: Treatment-support diagnostics (Appendix B) show wide support across the domain during London and Merge, but post-Dencun days concentrate in a 0.86-0.91 band. Minimum-detectable-effect calculations therefore label post-Dencun slope estimates as exploratory, consistent with Section 5.3.
3. SUTVA / stable interventions: The posting-clean construction keeps $A_{t}^{\mathrm{clean}}$ within the simplex even when L2 posting volumes swell and defines a single aggregate adoption measure per day. Together with daily aggregation, this maintains a stable notion of the treatment (no hidden versions of $A_{t}^{\mathrm{clean}}$) and limits cross-day interference, in line with the Stable Unit Treatment Value Assumption (SUTVA).

Diagnostics summary. Exchangeability is probed via placebo regressions of $A_{t}^{\mathrm{clean}}$ on lagged outcomes and on leads of $D_{t}^{*}$; coefficients cluster near zero in the diagnostics archive. Positivity is reinforced by trimming pre-London outliers where $A_{t}^{\mathrm{clean}} < 0.05$ and by flagging post-Dencun estimates as exploratory whenever coverage collapses. Stability is evaluated through split-sample tests that compare pre- and post-Merge coefficients; the absence of sign flips in the local-projection responses (Figure 3) suggests that the estimand retains meaning across protocol upgrades, though we continue to report regime-specific precision.

# 4.1.3 Relation to existing empirical work

Conceptually, our design complements upgrade-focused empirical analyses of the fee market such as Liu et al. (2022), who compare pre- and post-London behavior, and transaction-level rollup studies such as Gogol et al. (2024), who analyze arbitrage and fee dynamics within specific L2s. Upgrade-focused studies treat London or Dencun as discrete interventions and rely on event-study or regression-discontinuity-in-time designs anchored on those dates. In contrast, our question concerns how continuous variation in aggregate L2 adoption affects L1 congestion across and within regimes, motivating an interrupted time-series design with a continuous treatment rather than a pure event-study/RDiT framework.
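The placebo regressions mentioned in the diagnostics summary are straightforward to reproduce. Below is a minimal statsmodels sketch under assumed column names (a_clean, log_basefee, d_star); coefficients near zero on lagged congestion and on leads of the demand factor are the pattern consistent with conditional exchangeability.

```python
import pandas as pd
import statsmodels.api as sm

def placebo_exchangeability(df: pd.DataFrame, leads: int = 3) -> pd.DataFrame:
    """Regress A_t^clean on lagged congestion and on leads of D_t^* (placebo check).

    Assumes hypothetical columns: a_clean, log_basefee, d_star.
    Near-zero, insignificant coefficients are consistent with exchangeability.
    """
    X = pd.DataFrame(index=df.index)
    X["lag_log_basefee"] = df["log_basefee"].shift(1)
    for h in range(1, leads + 1):
        X[f"d_star_lead{h}"] = df["d_star"].shift(-h)
    X = sm.add_constant(X)
    data = pd.concat([df["a_clean"], X], axis=1).dropna()
    res = sm.OLS(data["a_clean"], data.drop(columns="a_clean")).fit(
        cov_type="HAC", cov_kwds={"maxlags": 7})
    return pd.DataFrame({"coef": res.params, "p_value": res.pvalues})
```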
# 4.2 Main Estimators: ITS Levels and ECM

We summarize the confirmatory estimators once here; derivations and additional estimator variants appear in the methodology appendix.

# 4.2.1 Long-run levels specification

The long-run benchmark is a levels ITS specification,

$$
\log C_{t}^{fee} = \beta_{0} + \beta_{1} A_{t}^{clean} + \gamma D_{t}^{*} + \boldsymbol{\delta}'\mathbf{R}_{t} + \boldsymbol{\theta}'\mathbf{Cal}_{t} + \boldsymbol{\eta}'\mathbf{Shock}_{t} + \varepsilon_{t}, \tag{1}
$$

where $\boldsymbol{\eta}$ stacks the targeted event controls and $\varepsilon_{t}$ may exhibit serial dependence. Here, $\beta_{1}$ captures the semi-elasticity of congestion with respect to adoption. Because $A_{t}^{clean}$ is scaled on the unit interval, a 1 percentage point increase corresponds to a 0.01 change in $A_{t}^{clean}$. We report effects for a 10 percentage point increase in adoption, computed as

$$
\%\ \text{Change in Fees for 10pp} = 100 \times \left[\exp\left(0.10 \times \beta_{1}\right) - 1\right]. \tag{2}
$$

Reporting effects for a 10 percentage point change makes the magnitude directly comparable to typical movements in L2 market share. Boldface terms denote stacked indicator vectors (regimes $\mathbf{R}_t$, calendar $\mathbf{Cal}_t$, shocks $\mathbf{Shock}_t$); primes on the corresponding coefficient blocks ($\delta', \theta', \eta'$) indicate row-vector transposes so that, for example, $\delta' \mathbf{R}_t = \sum_j \delta_j R_{j,t}$.

# 4.2.2 Short-run dynamics via error-correction model

We test for cointegration between $\log C_t^{fee}$ and $A_t^{clean}$ using Engle-Granger residual unit-root tests and Johansen rank tests (Appendix B). In both cases we reject the null of no cointegration over the pre-Dencun window (Section 5.1), supporting the presence of a stable long-run relation. This motivates an Error-Correction Model (ECM) for short-run inference:

$$
\Delta \log C_{t}^{fee} = \phi\, ECT_{t-1} + \psi\, \Delta A_{t}^{clean} + \kappa\, \Delta D_{t}^{*} + \boldsymbol{\lambda}'\Delta\mathbf{Cal}_{t} + \boldsymbol{\omega}'\Delta\mathbf{Shock}_{t} + \nu_{t}, \tag{3}
$$

where $ECT_{t-1}$ is the lagged residual from the long-run relation implied by Equation 1. Here, $\psi$ is the instantaneous effect of $\Delta A_t^{clean}$ on the daily change in the log base fee, and $\phi < 0$ is the speed at which fees adjust back to equilibrium. Estimation proceeds in three steps: (i) fit Equation 1 with HAC covariance to obtain the long-run residual, (ii) form $ECT_{t-1}$ by lagging that residual, and (iii) estimate Equation 3 with HAC or feasible GLS while tracking residual diagnostics. The implied half-life $t_{1/2} = \ln(0.5) / \ln(1 + \phi)$ summarizes how quickly fees revert after an adoption shock, and the same three-step procedure yields comparable 10pp semi-elasticities from $\psi$ across confirmatory outcomes. Confirmatory ECM inference uses the full 2021-2024 sample, with post-Dencun days flagged as a separate regime; after differencing and lagging this leaves $N = 1,242$ daily observations, and the primary causal interpretation remains anchored to the pre-Dencun support. Throughout, the ECM reuses the same adjustment set ($D_t^*, \mathbf{R}_t, \mathbf{Cal}_t, \mathbf{Shock}_t$) as the levels specification in Equation 1, so that differences between long-run and short-run estimates reflect dynamics rather than changes in control variables.
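The three-step procedure maps directly onto an Engle-Granger style implementation. The following simplified statsmodels sketch assumes columns named log_basefee and a_clean plus a generic controls list, and, for brevity, reuses the same control list in levels and in differences rather than reproducing the exact confirmatory specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ecm_two_step(df: pd.DataFrame, controls: list) -> dict:
    """Engle-Granger style two-step sketch for Equations (1) and (3).

    Hypothetical columns: log_basefee (log C_t^fee), a_clean (A_t^clean),
    plus the adjustment-set dummies named in `controls`.
    """
    # Step 1: long-run levels regression (Eq. 1) with HAC covariance.
    X_lr = sm.add_constant(df[["a_clean"] + controls])
    long_run = sm.OLS(df["log_basefee"], X_lr).fit(
        cov_type="HAC", cov_kwds={"maxlags": 7})

    # Step 2: lag the long-run residual to form ECT_{t-1}.
    ect = long_run.resid.shift(1).rename("ect_lag1")

    # Step 3: short-run ECM in first differences (Eq. 3).
    d = df.diff()
    X_sr = sm.add_constant(pd.concat([ect, d[["a_clean"] + controls]], axis=1))
    data = pd.concat([d["log_basefee"].rename("d_log_basefee"), X_sr], axis=1).dropna()
    ecm = sm.OLS(data["d_log_basefee"], data.drop(columns="d_log_basefee")).fit(
        cov_type="HAC", cov_kwds={"maxlags": 7})

    phi, psi = ecm.params["ect_lag1"], ecm.params["a_clean"]
    return {
        "psi_10pp_pct": 100 * (np.exp(0.10 * psi) - 1),    # Eq. (2) mapping
        "half_life_days": np.log(0.5) / np.log(1 + phi),   # Section 4.2.2
        "phi": phi,
    }
```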
The confirmatory levels estimator is Prais-Winsten AR(1) FGLS (selected by the residual-dependence diagnostics); ARMA(1,2) is retained solely as a diagnostic alternative.

# 4.2.3 Alternative dynamic specifications (robustness)

For robustness, we also estimate distributed-lag, Koyck (geometric-lag), first-difference, and local-projection variants, detailed in the methodology appendix. These models share the same adjustment set and are used to check that the sign and magnitude of the adoption effect are not artifacts of the ECM specification. To provide additional evidence on persistence, we include a geometric-lag (Koyck) specification:

$$
\log C_{t}^{fee} = \alpha + \rho \log C_{t-1}^{fee} + \beta_{0} A_{t}^{clean} + \gamma D_{t}^{*} + \boldsymbol{\delta}'\mathbf{R}_{t} + \boldsymbol{\theta}'\mathbf{Cal}_{t} + \boldsymbol{\eta}'\mathbf{Shock}_{t} + u_{t}, \tag{4}
$$

where the long-run multiplier equals $\beta_0 / (1 - \rho)$ whenever $|\rho| < 1$. Estimates from this specification are treated as supportive evidence on persistence rather than as primary causal effects; full derivations and diagnostic checks are reported in the methodology appendix.

Regime-aware variants. When sample support permits, we interact $A_{t}^{clean}$ with Merge and Dencun indicators to estimate differential slopes. Because post-Dencun adoption saturates the treatment domain, these interaction coefficients are reported in Section 5.3 and labeled exploratory.

# 4.3 Controls, Regimes, and Inference

The implementation details that support Equations 1-3 are summarized in three blocks; extended diagnostics remain in the methodology appendix.

Adjustment set and targeted shocks (controls). Our adjustment set combines the PCA-based latent demand factor $(D_t^*)$, regime dummies $(\mathbf{R}_t)$, calendar indicators $(\mathbf{Cal}_t)$, and a curated set of targeted shock dummies $\mathbf{Shock}_t$ covering mega NFT mints, sequencer or mainnet outages, large airdrop claim days, and major market-stress episodes (Section 3.4). This set is chosen to block the main back-door paths in Figure 2 while preserving the mediator path from adoption to posting to congestion. We retain an indicator for any sequencer or mainnet outage in both the long-run and short-run equations so that platform outages do not get misattributed as treatment shocks; detailed coverage diagnostics are reported in Appendix B.

Seasonality, regimes, and serial dependence. Deterministic seasonality (weekends, month-ends, quarter turns) and Merge/Dencun regime indicators enter every specification to absorb systematic changes in fee levels and utilization unrelated to L2 adoption. We allow for serially correlated errors and compute heteroskedasticity- and autocorrelation-consistent (HAC) standard errors. In practice, the confirmatory levels run uses Prais-Winsten AR(1) FGLS; compact ARMA corrections are explored as diagnostics and reported alongside Ljung-Box and Breusch-Godfrey checks in the diagnostics appendix. Dynamic feedback is handled by including lagged outcomes when needed (e.g., Koyck, ECM) and by auditing residual autocorrelation in the diagnostics appendix. Kernel choices, bandwidth selection, and spline-based calendar robustness checks live in the diagnostics appendix.
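For the levels benchmark, an AR(1)-error FGLS fit with a HAC-robust OLS cross-check can be sketched as follows. statsmodels' GLSAR performs an iterative Cochrane-Orcutt-style AR(1) FGLS, which is closely related to, but not identical with, a strict Prais-Winsten transform that also retains a rescaled first observation; column names are again assumptions.

```python
import statsmodels.api as sm

def levels_fgls(df, controls):
    """Levels benchmark (Eq. 1) with AR(1)-error FGLS plus a HAC cross-check.

    Hypothetical columns: log_basefee, a_clean, plus the dummies in `controls`.
    """
    X = sm.add_constant(df[["a_clean"] + controls])
    # Iterative AR(1) FGLS (Cochrane-Orcutt style; see the caveat in the text above).
    fgls = sm.GLSAR(df["log_basefee"], X, rho=1).iterative_fit(maxiter=10)
    # OLS with Newey-West HAC errors as the robustness cross-check.
    ols_hac = sm.OLS(df["log_basefee"], X).fit(cov_type="HAC",
                                               cov_kwds={"maxlags": 7})
    print("AR(1) FGLS coefficients:\n", fgls.params)
    print("OLS-HAC adoption slope:", float(ols_hac.params["a_clean"]))
    return fgls, ols_hac
```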
The confirmatory window spans the pre-Dencun London $\rightarrow$ Merge period (Section 3.1); post-Dencun estimates are labeled exploratory because treatment support collapses after the 2024 blob upgrade, as shown in the treatment-support diagnostics in Appendix B.

Timing, instruments, and outcome family. To guard against mechanical same-day co-movement between $A_{t}^{clean}$ and congestion, we also estimate Equation 1 with $A_{t-1}^{clean}$ on the right-hand side. When exogenous variation is available (sequencer outages or blob-cost changes), we deploy it in a shift-share IV using pre-Dencun chain weights and report weak-instrument-robust confidence intervals in the instrumentation appendix. The confirmatory outcomes are $\log C_t^{fee}$ and $\log S_t$; we apply Benjamini-Hochberg corrections at the $5\%$ level and report the corresponding $q$-values. Utilization and IV extensions are treated as exploratory and presented without multiple-testing adjustment.

# 4.4 Confirmatory vs. Exploratory Scope

We fix the main estimand (the 10pp semi-elasticity of $\log C_t^{fee}$ and $\log S_t$ with respect to $A_t^{clean}$), the adjustment set $(D_t^*,\mathbf{R}_t,\mathbf{Cal}_t,\mathbf{Shock}_t)$, the levels and ECM specifications in Equations 1-3, and the confirmatory outcome family together with the Benjamini-Hochberg multiple-testing plan. Sections 5.1-5.3 report these confirmatory estimates, including adjustment dynamics and regime heterogeneity, with Benjamini-Hochberg corrections applied across the outcome family. Section 5.5 and the appendices present exploratory diagnostics and post-Dencun extensions that reuse the same adjustment set but fall outside the confirmatory outcome family (e.g., utilization, IV variations, and BSTS counterfactuals).

# 5 Results

We now present results organized around five questions. These cover how much L2 adoption reduces congestion (Section 5.1), how quickly fees adjust after adoption shocks (Section 5.2), and how effects differ across regimes and precision (Section 5.3). We then ask how robust the findings are across congestion metrics (Section 5.4) and what the exploratory diagnostics and welfare bridges suggest (Section 5.5). Sections 5.1-5.5 report these estimates; the appendices provide additional diagnostics and estimator details.

# 5.1 How much does L2 adoption reduce congestion?

Key results at a glance. Over the pre-Dencun (London+Merge) window, a 10 percentage point increase in posting-clean L2 adoption lowers median L1 base fees by about $13\%$ (roughly 5 Gwei at pre-Dencun levels), with deviations from the long-run relation decaying with an 11-day half-life. Block utilization and a scarcity index show similar relief. After Dencun, adoption is so high and compressed that the same design cannot reliably detect further fee reductions, even if they exist, so blob-era slopes are reported as exploratory only.

Key empirical results (confirmatory window).

- Short-run ECM (Eq. 3): $\psi = -1.382$ (SE 0.368) with $N = 1,242$ days from the full 2021–2024 panel (post-Dencun flagged as a separate regime) implies a $-12.9\%$ change in daily base fees for a 10pp adoption shock. HAC (Bartlett, 7 lags) standard errors yield $p < 0.001$.
- Speed of adjustment: $\phi = -0.061$ (SE 0.011) maps to an 11.1-day half-life back to the long-run equilibrium, confirming meaningful reversion to the Engle-Granger cointegrating relation ($p = 0.005$).
- Dynamics: Local projections (Figure 3) show an immediate $-16.2\%$ response to a 10pp adoption step with a $95\%$ CI $[-22.7\%, -9.2\%]$, and cumulative point estimates remain negative through 28 days even though the $95\%$ bands cross zero after the first week.
- Multiple outcomes: Benjamini-Hochberg corrections over $\{\log C^{fee}, \log S_t\}$ yield $q_{\log C^{fee}} = 3.0 \times 10^{-8}$ and $q_{\log S_t} = 1.1 \times 10^{-3}$; exploratory outcomes remain unadjusted, with detailed FDR diagnostics reported in Appendix B.

In sum, a 10pp increase in L2 adoption lowers mainnet fees by roughly $13\%$ within a few days, and this effect remains statistically precise after false-discovery-rate adjustment over the confirmatory outcome family.

In the ECM, $\psi$ is the short-run semi-elasticity: the immediate percentage change in daily base fees from a one-point change in adoption. $\phi$ is the speed of adjustment: it tells us how quickly fees revert to the long-run relation after an adoption shock. We report both on a 10pp scale to match realistic shifts in L2 market share and to reuse the same units in the welfare translation below.

Unit-root and cointegration tests (ADF, KPSS, Phillips-Perron, Engle-Granger, Johansen) support treating $A_{t}^{\mathrm{clean}}$, $\log C_{t}^{fee}$, and $D_{t}^{*}$ as $I(1)$ with a stable long-run relation. Section 4 outlines the workflow, and Appendix B lists full $p$-values. This motivates the ECM as our confirmatory short-run design, with the levels specification retained as a descriptive benchmark for the welfare translation. Estimation uses the full 5 August 2021–31 December 2024 panel with post-Dencun days encoded as regime dummies so the causal interpretation remains anchored to the pre-Dencun support. Residual-dependence checks select a Prais-Winsten AR(1) (FGLS) error for the confirmatory levels specification; an ARMA(1,2) fit is retained as a diagnostic alternative in Table 5 of Appendix B. The ECM uses HAC on first differences, consistent with the confirmatory pipeline.

A 10pp increase in adoption in the levels ITS corresponds to about an $11.3\%$ reduction in median base fees. At the pre-Dencun mean of 38 Gwei (about $1.02 for a 21k-gas transfer when ETH trades at $1,285), that is roughly 4-5 Gwei or about $0.12 for a typical ETH transfer. These Gwei and dollar translations are direct applications of the semi-elasticity estimand: they translate the log-fee semi-elasticity into the change in fees paid by a representative 21k-gas transfer when L2 adoption rises by 10 percentage points. During high-demand episodes, this back-of-the-envelope mapping implies aggregate short-run savings of tens of millions of dollars across a few months.
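That back-of-the-envelope mapping is easy to reproduce. The sketch below applies the Equation 2 mapping to the levels benchmark and prices the relief for a plain 21k-gas transfer; the 38 Gwei and $1,285 inputs are the pre-Dencun reference values quoted above, not universal constants.

```python
import numpy as np

def fee_translation(beta1, base_fee_gwei=38.0, eth_usd=1285.0,
                    gas=21_000, dose_pp=10.0):
    """Translate a levels semi-elasticity into Gwei and USD relief for a simple
    transfer, mirroring the Section 5.1 back-of-the-envelope mapping (Eq. 2)."""
    pct = np.exp(dose_pp / 100.0 * beta1) - 1.0        # proportional fee change
    gwei_saved = -pct * base_fee_gwei                  # Gwei relief per unit of gas
    usd_saved = gwei_saved * 1e-9 * gas * eth_usd      # for a 21k-gas transfer
    return {"pct_change": 100 * pct, "gwei_saved": gwei_saved, "usd_saved": usd_saved}

# Levels benchmark beta_1 = -1.194: about -11.3%, ~4.3 Gwei, ~$0.12 per transfer.
print(fee_translation(-1.194))
```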
Table 2: Merged Confirmatory Total-Effect Estimates

<table><tr><td>Parameter</td><td>Estimate (SE)</td><td>10pp mapping</td><td>Notes</td></tr><tr><td>ECM short-run ψ</td><td>-1.382*** (0.368)</td><td>-12.9%</td><td>$\Delta \log C_t^{fee}$ on $\Delta A_t^{clean}$, N = 1,242</td></tr><tr><td>Speed of adjustment φ</td><td>-0.061*** (0.011)</td><td>Half-life 11.1 days</td><td>Engle-Granger residual p = 0.005</td></tr><tr><td>Levels benchmark β</td><td>-1.194*** (0.211)</td><td>-11.3%</td><td>Prais-Winsten AR(1) FGLS, N = 1,244</td></tr><tr><td>Scarcity outcome βS</td><td>-0.062** (0.019)</td><td>-0.60%</td><td>Same spec, confirmatory outcome 2</td></tr></table>

Notes: Semi-elasticities use $100 \times [\exp(0.10 \cdot \hat{\beta}) - 1]$. Standard errors rely on Newey-West HAC (Bartlett, maxlag 7). Significance markers: $***p < 0.001$, $**p < 0.01$. All models include the confirmatory adjustment set ($D_{t}^{*}$, regime/calendar dummies, targeted shocks, any_outage_t). Benjamini-Hochberg control across the confirmatory outcome family $\{\log C^{fee}, \log S_{t}\}$ yields $q_{\log C^{fee}} = 3.0 \times 10^{-8}$ and $q_{\log S_{t}} = 1.1 \times 10^{-3}$. These q-values keep both confirmatory outcomes below the $5\%$ FDR threshold within this table. The levels row corresponds to the Prais-Winsten AR(1) FGLS specification used in the confirmatory pipeline; ARMA(1,2) appears only in the diagnostic grid in Appendix B.

The BSTS welfare bridge (Figure 4) illustrates the counterfactual calculations behind that claim. Demand-factor stability checks using leave-one-out PCA variants and a lagged $D_{t}^{*}$ deliver the same sign, reinforcing that the result does not hinge on a particular macro proxy combination.

Taken together, the ECM and levels views tell a consistent story. The ECM captures the "flow" interpretation (immediate reaction of fee growth to adoption growth), while the Prais-Winsten levels specification provides the "stock" interpretation required for this welfare translation. The gap between the two coefficients—roughly two percentage points—primarily reflects the autoregressive error structure rather than a contradiction in economic content. This confirms that the identification strategy developed in Section 4 yields consistent estimates across specifications.

We also benchmark the magnitudes against the fee-market literature. Short-run elasticities in centralized exchange congestion studies typically span $-5\%$ to $-15\%$ for a ten-percentage-point load shift; our $-13\%$ effect sits at the upper end of that range, which is intuitive given the lumpy nature of L2 user adoption. The 11-day half-life matches the cadence observed in on-chain mempool reversion after large NFT mints. That alignment suggests the ECM dynamics are economically plausible rather than an artifact of spline controls. Additional robustness diagnostics—instrumental-variable timing tests, placebo shocks, and shuffled-treatment experiments—are cataloged in the IV and diagnostics appendices and retain the same sign pattern even when statistical power dips.

Measurement alignment. The confirmatory estimand hinges on keeping treatment and outcome definitions synchronized with the DAG in Section 4. We therefore reiterate two checks that underpin the table above. First, $A_{t}^{clean}$ is computed from the exact same daily panel used in the ECM (no reindexing or smoothing), and its exclusion of blob-posting activity prevents mediator contamination. Second, the log base-fee outcome is benchmarked against the public eth_feeHistory RPC as well as the internal BigQuery mirror so replication scripts and policy dashboards quote identical magnitudes. Detailed SQL and schema notes are provided alongside the replication materials to document both constructs consistently.

Macroeconomic context. The confirmatory window spans multiple crypto market regimes—DeFi summer, the Terra/Luna unwind, the Merge, and the run-up to Dencun—so we stress-tested whether any single macro period drives the headline coefficient. Splitting the sample along these historical boundaries yields semi-elasticities between $-0.9$ and $-1.5$ and the coefficient remains negative even when we drop the 60 most volatile days around Terra/Luna and FTX.
These exercises underscore that the causal signal arises from broad-based adoption shifts rather than one-off crises. They also explain why we still include targeted event dummies to soak up short-lived disruptions. Targeted event controls leave both $\psi$ and $\phi$ unchanged, indicating that the latent demand factor is not masking omitted NFT mints, Terra/Luna, FTX, USDC depeg episodes, or sequencer outages. Timing and simultaneity diagnostics likewise return negative coefficients for lagged adoption and control-function IV corrections. Detailed IV tables in the instrumentation appendix document weak first stages (e.g., partial $F \approx 7.6$ for the pooled outage IV) and Anderson-Rubin intervals that span zero. We therefore classify IV evidence as exploratory support for the ITS design rather than a standalone confirmatory estimator.

Diagnostic cross-checks. Beyond the core diagnostics, we revisit three common concerns raised in protocol-governance reviews. (i) Serial correlation: Ljung-Box tests up to lag 30 reject for the raw levels regression but not for the ECM residuals once the error-correction term is included. This matches the behavior recorded in the residual-dependence diagnostics in the diagnostics appendix. (ii) Multicollinearity: variance-inflation factors for $A_{t}^{clean}$, $D_{t}^{*}$, and the regime/calendar block stay below 2.0. Ridge-regression stress tests retain the negative sign, consistent with the demand-factor variants documented in the estimators appendix. (iii) Omitted mediator risk: the "posting-clean" construction plus the outage dummy ensures that blob-posting costs do not contaminate $A_{t}^{clean}$. Placebo regressions of $A_{t}^{clean}$ on future congestion deliver coefficients near zero with $p > 0.6$. Each of these checks has a concise counterpart in Appendices B and G, keeping the core causal claims defensible.

Policy bridge. Translating coefficients into operational terminology helps protocol stewards reason about scaling targets. A 10pp increase in L2 adoption roughly corresponds to onboarding 2.3 million additional daily L2 user transactions at current volumes. Mapping our semi-elasticity through Equation 2 implies that achieving the EIP-4844 goal of “$90\%$ of user activity off L1” would cut base fees by approximately $20\%$ relative to today's mix. Additional blockspace unlocked by future danksharding upgrades would further amplify that relief. This bridge motivates the welfare analysis later in the section and links Section 5.1's confirmatory focus directly to the policy narratives developed in Section 6.

Link back to Methods. The confirmatory design summarized here inherits the adjustment set and instrument logic laid out in Section 4. Every robustness variant invoked above reuses that adjustment set rather than introducing ad-hoc controls, so the DAG-backed back-door criterion remains satisfied. Exploratory IVs and timing tests are documented in the instrumentation appendix, keeping Table 2 focused on the primary pathway from L2 adoption to fees.

Overall, cointegration-supported ECM estimates and levels benchmarks show that higher L2 adoption delivers double-digit percentage fee relief in the pre-Dencun window, and this conclusion is robust to event controls and alternative demand factors. The magnitude of our semi-elasticity is in line with, but distinct from, prior fee-market studies. Liu et al.
Liu et al. (2022) document limited changes in average fee levels around London but emphasize shifts in bidding behavior; our $11 - 13\%$ effect instead captures how aggregate L2 adoption shifts equilibrium fees under fixed protocol rules. Similarly, Gogol et al. (2024) report rollup arbitrage values of roughly $0.03 - 0.25\%$ of trading volume; at the aggregate level, a 10pp L2 penetration moves median L1 fees by an order of magnitude more in percentage terms. We next ask how rapidly these fee reductions materialize and how long they persist.

# 5.2 How quickly do fees adjust after an adoption shock?

A Koyck geometric-lag model (Eq. 4) yields high persistence in congestion ($\rho = 0.888$) and a modest long-run multiplier ($\beta_{\infty} \approx 0.13$). Because the geometric-lag restriction forces a single decay rate onto the dynamics, we rely instead on Jordà-style local projections to characterize short-run responses. Figure 3 plots horizon-specific responses of $\Delta \log C_{t + h}^{fee}$ to a one-time 10pp adoption shock with HAC bands. The $h = 0$ effect is $-16.2\%$ (95% CI $[-22.7\%, -9.2\%]$). Point estimates remain negative through four weeks, but the 95% intervals include zero after the first week. Cumulative semi-elasticities stay below zero through 56 days, yet those longer-horizon intervals also cover zero. Appendix B reports the full grid. Excluding $\pm 7$-day windows around London, Merge, and Dencun, or adding targeted event controls to the LPs, leaves the $h = 0$ coefficient virtually unchanged. That pattern suggests apparent "rebound" blips are tied to known shocks rather than structural sign flips.

Figure 3: Local-Projection Responses to a 10pp Adoption Shock

Note: Panel A plots $\beta_{h}$ from regressions of $\Delta \log C_{t + h}^{fee}$ on $\Delta A_t^{clean}$, $\Delta D_t^*$, and the confirmatory adjustment set. Panel B maps cumulative responses back to the level scale via $100 \times [\exp(0.10\sum_{\tau \leq h}\hat{\beta}_{\tau}) - 1]$. Shaded areas denote HAC $95\%$ bands; moving-block bootstrap bands (not shown) are similar for $h \leq 14$. A 10pp adoption shock corresponds, for example, to raising the posting-clean adoption share $A_t^{clean}$ from $40\%$ to $50\%$ of end-user transactions.

Two additional facts emerge from the LPs. First, the cumulative curve begins to flatten after week three but never crosses zero within the 56-day window. The longer-run "sign flip" implied by the geometric-lag algebra would therefore have to materialize beyond two months—a horizon where the data become too noisy for confirmatory claims. Second, the variance of the LP coefficients grows roughly linearly with the horizon, mirroring the variance inflation that we observe when estimating high-order autoregressions. This reinforces the decision to emphasize the short-run ECM rather than chase long-horizon effects with weak precision.

We also experiment with counterfactual shock profiles. Replacing the one-time 10pp step with a distributed ramp (five daily 2pp increases) yields nearly identical cumulative responses because adoption growth in practice arrives via multi-day rollouts. Likewise, filtering out the top 10 congestion days (NFT mega-mints plus sequencer outages) barely moves the $h = 0$ point estimate. This underscores that the dynamic profile is not an artifact of a handful of extreme outliers. These sensitivity exercises are logged in the LP diagnostics.
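For readers re-estimating Figure 3, the sketch below shows the horizon-by-horizon regressions behind Panel A and the level-scale mapping behind Panel B. It is a minimal illustration, not the replication code: the column names and bare-bones control set are hypothetical stand-ins for the daily panel, while the HAC settings mirror the Bartlett kernel with 7 lags used elsewhere in the paper.

```python
# Sketch of Jorda-style local projections for the fee response to adoption.
# Column names (d_log_fee, d_adoption, d_demand, plus controls) are
# hypothetical stand-ins for the replication panel, not the paper's schema.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def local_projections(df, horizons=range(0, 57), controls=()):
    """Regress Delta log C_{t+h}^{fee} on Delta A_t^{clean} at each horizon h."""
    betas = {}
    for h in horizons:
        y = df["d_log_fee"].shift(-h)                 # lead the outcome by h days
        X = sm.add_constant(df[["d_adoption", "d_demand", *controls]])
        res = sm.OLS(y, X, missing="drop").fit(
            cov_type="HAC", cov_kwds={"maxlags": 7}   # Bartlett kernel, 7 lags
        )
        betas[h] = res.params["d_adoption"]
    return pd.Series(betas, name="beta_h")

def cumulative_level_response(beta_h):
    """Panel B mapping: cumulative level-scale response to a 10pp shock."""
    return 100.0 * (np.exp(0.10 * beta_h.cumsum()) - 1.0)
```

Each horizon drops $h$ trailing observations before estimation, which is one mechanical reason the confidence bands widen at long horizons.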
Taken together, these estimates indicate that adoption shocks generate immediate fee relief that persists for roughly one month, while any longer-run reversion lies beyond the horizons that the data can estimate precisely. These dynamics interact strongly with regime heterogeneity, which we quantify in Section 5.3.

# 5.3 How do effects differ across pre-Dencun vs blob era, and where is power?

These dynamic results also explain the regime-split findings: most of the fee relief arrives in the first few weeks, exactly where pre-Dencun data provide rich variation. Once adoption saturates post-Dencun, incremental gains would have to play out beyond 56 days. That is precisely where LP bands are widest and our MDEs explode (Table 3). The post-Dencun period compresses adoption into a narrow 0.86-0.91 band (SD $\approx 0.02$), slashing the effective sample size despite 294 calendar days. Power diagnostics summarized in the diagnostics appendix show that the pre-Dencun window can detect semi-elasticities as small as $14\%$ for a 10pp change (effective $N = 147$). Post-Dencun inference has $N_{\mathrm{eff}} \approx 47$ and minimum detectable effects exceeding $240\%$. Local post-Dencun slopes estimated strictly within the observed support are unstable and accompanied by wide partial-identification bounds. Put differently, even though point estimates remain negative after Dencun, the confidence sets are so wide that we cannot claim confirmatory evidence without additional variation (e.g., future windows with lower L1 share).

Table 3: Regime-Split Estimates and Detectable Effects

<table><tr><td>Metric</td><td>pre-Dencun</td><td>post-Dencun</td></tr><tr><td>Coefficient β (log pts)</td><td>-0.706***</td><td>-5.906</td></tr><tr><td>HAC SE</td><td>0.203</td><td>5.060</td></tr><tr><td>10pp semi-elasticity</td><td>-6.8%</td><td>-44.6%</td></tr><tr><td>Effective Neff</td><td>147.4</td><td>47.5</td></tr><tr><td>MDE (10pp change)</td><td>14%</td><td>240-325%</td></tr></table>

Notes: Coefficients arise from regime-split ITS regressions with the confirmatory adjustment set. Effective sample sizes and MDEs correspond to the power analysis summarized in the diagnostics appendix. Post-Dencun estimates are therefore labeled exploratory in the main text.

We supplement the table with support-aware diagnostics summarized in Appendix B. Within the London+Merge window, semi-elasticities around $-7\%$ per 10pp change are precisely estimated. Post-Dencun slopes are under-powered (MDEs above $240\%$ for a 10pp change). We therefore label blob-era estimates as exploratory and refer readers to the partial-identification and local-support grids in the diagnostics appendix for full details. In other words, even a $45\%$ semi-elasticity in the blob era would be statistically indistinguishable from zero in our design; we can only say that pre-Dencun slopes of roughly $-7\%$ per 10pp are precisely identified, while post-Dencun slopes are essentially unidentifiable given the compressed adoption range.

These regime-split results imply that pre-Dencun slopes are precisely estimated and economically modest (about a $7\%$ semi-elasticity). Post-Dencun contrasts are underpowered—minimum detectable effects exceed $240 - 325\%$ for a 10pp change—so they should not be over-interpreted until treatment support widens.

# 5.4 How robust are these results and what happens to other congestion metrics?

The tornado, placebo, and outcome-swap diagnostics collapse into three takeaways:
- **Other congestion metrics.** The scarcity outcome yields $\beta_{S} = -0.062$ (SE 0.019), mapping to roughly a $-0.6\%$ change in congestion for a 10pp adoption increase. Utilization $u_{t}$ moves in the same direction, about $-0.15$ percentage points for a 10pp change in the pre-Dencun window, with $q_{\log S_t} < 0.01$ and exploratory $q_{u_t} = 0.31$.
- **Error processes.** Prais-Winsten/HAC/ARMA sweeps (with ARMA(1,2) as the diagnostic alternative) shift the base-fee coefficient by under 0.15 log points across 15 specifications, matching the stability shown in the robustness grid.
- **Placebos.** Shuffled-treatment and ridgeline-support indicators center on zero with $95\%$ confidence bands roughly $[-0.2, 0.2]$, indicating that the estimated relief is not an artifact of support or calendar alignment.

Appendix B and the public replication repository contain the full Benjamini-Hochberg tables, stationarity and error-process diagnostics, and robustness grids that underpin these claims.

# 5.5 What do exploratory diagnostics and welfare translation suggest?

Event-study and RDiT diagnostics are used solely as checks. Pre-trend F-tests reject parallel trends ($F = 104$, $p < 0.001$). Post-event coefficients briefly spike (about $+6\%$) before decaying. RDiT level shifts at Merge and Dencun of roughly $-0.78$ and $-0.62$ log points shrink when the boundaries are moved to placebo cutoffs. These patterns align with the confirmatory ITS/ECM story but remain exploratory. The BSTS welfare bridge (Figure 4) translates the 10pp semi-elasticity into Merge-era fee savings in the \$75-\$95M range. Appendix F and Table 12 detail the price/adoption sensitivities that underpin this range. We keep this welfare translation exploratory, offering policy context without extending the confirmatory claims.

Figure 4: BSTS Counterfactual: Observed vs. Low-L2 Scenario (Exploratory)

Note: Posterior median and $95\%$ credible interval for $\log C^{fee}$ when fixing $A_{t}^{clean}$ at the window's 10th percentile $(73.0\%)$ during 2023-10-28 to 2024-03-12, illustrating the fee-volume gap implied by the 10pp semi-elasticity estimates in Table 2. Post-Dencun days are excluded because extrapolated counterfactual paths become implausible. Detailed sensitivity tables are reported in the supplementary appendix.

# 6 Discussion

Key takeaways (confirmatory window: London $\rightarrow$ Merge). (i) An increase of 10 percentage points (pp) in posting-clean L2 adoption is associated with $\approx 13\%$ lower median L1 base fees (about 5 Gwei for a 21k transfer at the window mean). (ii) The response is front-loaded: most adjustment occurs within roughly 2-3 weeks. Beyond about one month, uncertainty dominates. (iii) Post-Dencun inference is descriptive because support collapses and regime mechanics change. We do not make causal claims for the blob era.

# 6.1 Policy Interpretation

We organize the implications into three questions: what the estimate means (and does not), how to use it as a planning curve, and why the mapping weakens in the blob era.

# Policy mapping.

- Effect size: $10\mathrm{pp}\rightarrow \approx 13\%$ lower median L1 base fee.
- Timing: half-life $\approx 11$ days; usable horizon $\approx 1$ month.
- Scope: London $\rightarrow$ Merge confirmatory window.
- Post-Dencun status: descriptive/underpowered until new exogenous variation appears.
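The policy-mapping bullets can be reproduced with a few lines of arithmetic. The sketch below is a back-of-the-envelope calculator rather than replication code: the ECM coefficient and adjustment speed come from Table 2, while the window-mean fee of roughly 38 Gwei, the 21k-gas transfer, and the ETH price used for the dollar translation follow the illustrative pre-Dencun figures quoted earlier in the paper and should be treated as assumptions of this sketch.

```python
# Back-of-the-envelope calculator for the policy-mapping bullets above.
# PSI_ECM and PHI are the confirmatory estimates from Table 2; the mean fee,
# gas amount, and ETH price are illustrative assumptions of this sketch.
import math

PSI_ECM = -1.382        # short-run semi-elasticity (log points per unit share)
PHI = -0.061            # speed of adjustment (per day)
MEAN_FEE_GWEI = 38.0    # illustrative pre-Dencun window mean
GAS_PER_TRANSFER = 21_000
ETH_USD = 1_285.0       # illustrative window-mean ETH price

def fee_change_pct(delta_pp):
    """Percent change in the median base fee for a delta_pp adoption shift."""
    return 100.0 * (math.exp(PSI_ECM * delta_pp / 100.0) - 1.0)

def half_life_days(phi=PHI):
    """Days for half of a deviation from the long-run relation to decay."""
    return math.log(0.5) / math.log(1.0 + phi)

def transfer_savings_usd(delta_pp):
    """Approximate savings on a 21k-gas transfer at the window mean."""
    gwei_saved = MEAN_FEE_GWEI * abs(fee_change_pct(delta_pp)) / 100.0
    return gwei_saved * GAS_PER_TRANSFER * 1e-9 * ETH_USD

print(f"10pp shift: {fee_change_pct(10):.1f}% fee change")        # about -12.9%
print(f"half-life: {half_life_days():.1f} days")                   # about 11 days
print(f"savings per 21k transfer: ${transfer_savings_usd(10):.2f}")
```

With these inputs the script returns a fee change of about $-12.9\%$, a half-life of about 11 days, and per-transfer savings on the order of \$0.13, consistent with the \$0.12-\$0.14 figures quoted in the text.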
# 6.1.1 What the estimate means (and does not)

In the London $\rightarrow$ Merge confirmatory window, a 10pp increase in posting-clean L2 adoption lowers median L1 base fees by about $13\%$ (roughly 5.2 Gwei or \$0.14 for a 21k-gas transfer at the window mean). The adjustment closes half the gap to equilibrium in approximately 11 days. Posting-clean adoption counts end-user execution routed to rollups while netting out sequencer posting traffic (Wang et al., 2025). The estimand therefore captures users leaving L1 execution rather than shifting posting burden. The statement covers median EIP-1559 base fees in that regime. It does not, by itself, pin down tips, total user cost, or blob-era dynamics.

Mechanistically, pre-Dencun fee relief comes from fewer users competing for L1 execution gas. When end-user transactions migrate to rollups and sequencer posting is netted out, EIP-1559 demand falls and the base fee declines. After EIP-4844, L2 data availability migrates to blobs that are priced separately from execution gas (Buterin et al., 2024). Additional L2 growth can lower calldata pressure yet leave execution-layer congestion—and therefore the base fee—largely unchanged.

To avoid over-reading the estimate, it is not a claim about:

- total user cost (base fee $\neq$ base+tip $\neq$ L2 fees);
- welfare net of subsidies (the welfare bridge remains exploratory);
- blob-era causal effects (support and the mechanism change);
- distributional incidence (median base fee $\neq$ tail events);
- long-run equilibrium beyond roughly one month given widening uncertainty bands.

# 6.1.2 How to use it as a planning curve

Sequencer teams and ecosystem treasuries can treat the ECM semi-elasticity as a planning curve. Let $\psi \approx 0.13$ denote the estimated log-point reduction in the median base fee for a 10pp increase in posting-clean adoption (the 10pp semi-elasticity from Table 2 expressed in log points). If an intervention raises adoption by $\Delta A$ pp for $T$ days, the expected change in the median base fee is $100\times [\exp (-\psi \cdot (\Delta A / 10)) - 1]$ percent over that horizon, with roughly half the adjustment arriving in 11 days and most within one month (Figure 3). A break-even rule replaces assertion with calculation: subsidy spend $\leq$ (predicted per-transaction base-fee savings $\times$ affected L1 transaction count). At the window mean, the per-transaction base-fee reduction is about \$0.14, scaled by $\Delta A / 10$. Pushing L2 share from $60\%$ to $80\%$ (a 20pp move) would therefore be expected to trim median fees by about $24\%$ using the exponential mapping above. Campaigns launched when adoption already sits above $85\%$ may still be operationally valuable, but the variance of the effect and the confidence bands widen, making causal evaluation harder. This reframes congestion management as a portfolio decision over L2 market share rather than a binary "turn on/off" switch.

# 6.1.3 Regime caveat: Dencun changes the mapping

EIP-4844 routes L2 data availability to blobs and prices it separately from execution gas. Additional L2 adoption can ease calldata pressure. It may not meaningfully reduce L1 execution congestion because the EIP-1559 base fee remains tied to execution demand (Liu et al., 2022). Post-Dencun days also cluster in a narrow 0.86–0.91 adoption band. The effective sample size collapses. Table 3 and the diagnostics appendix therefore label blob-era slopes as underpowered. The post-Dencun estimates in this paper are descriptive signals for monitoring, not confirmatory causal updates. They remain descriptive until quasi-experimental variation appears (e.g., blob-parameter changes or exogenous sequencer outages).
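As a concrete companion to the break-even rule in Section 6.1.2, the sketch below evaluates a hypothetical adoption campaign. The planning-curve coefficient and the per-transaction savings figure come from the text above; the subsidy budget, affected transaction count, and horizon are invented inputs chosen only to show the calculation, not estimates from the paper.

```python
# Illustrative check of the Section 6.1.2 break-even rule:
#   subsidy spend <= predicted per-tx base-fee savings x affected L1 tx count.
# PER_TX_SAVINGS_10PP (about $0.14 at the window mean for a 10pp move) comes
# from the text; the campaign inputs below are hypothetical.
import math

PSI_10PP = 0.13              # log-point fee reduction per 10pp adoption gain
PER_TX_SAVINGS_10PP = 0.14   # USD saved per 21k-gas transfer for a 10pp move

def planning_curve_pct(delta_pp):
    """Expected % change in the median base fee for a delta_pp adoption increase."""
    return 100.0 * (math.exp(-PSI_10PP * delta_pp / 10.0) - 1.0)

def break_even(subsidy_usd, delta_pp, affected_tx_per_day, horizon_days):
    """Return (clears_break_even, predicted_savings_usd) for a campaign."""
    per_tx = PER_TX_SAVINGS_10PP * (delta_pp / 10.0)   # linear scaling from the text
    predicted_savings = per_tx * affected_tx_per_day * horizon_days
    return predicted_savings >= subsidy_usd, predicted_savings

# Hypothetical campaign: +20pp adoption, 1.1M affected L1 tx/day, 30-day horizon.
ok, saved = break_even(subsidy_usd=500_000, delta_pp=20,
                       affected_tx_per_day=1_100_000, horizon_days=30)
print(f"planning curve: {planning_curve_pct(20):.1f}% fee change")   # about -23%
print(f"predicted savings ${saved:,.0f}; clears break-even: {ok}")
```

Swapping in a team's own campaign assumptions turns the inequality into a simple go/no-go check before launch.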
# 6.2 Limitations and Boundary Conditions

Threats to validity fall into five buckets:

- Internal validity (simultaneity / weak instrument). Timing diagnostics summarized in the instrumentation appendix show that lagged adoption has the expected sign but low precision. The control-function first stage ($F = 7.58$) falls short of conventional strength, so we emphasize local identification around the pre-Dencun adoption support rather than claiming full exogeneity.
- Dynamics and horizon. The Koyck parameter ($\rho \approx 0.89$) and the widening LP bands documented in the diagnostics appendix indicate that any rebound beyond 56 days is statistically indistinguishable from zero. Welfare projections longer than about a month remain exploratory.
- Regime validity (post-Dencun). Regime-split estimates in Table 3 combined with the MDE calculations show that even a $45\%$ semi-elasticity would be indistinguishable from noise in the blob era. Because blobs price data separately from execution gas, the structural channel linking adoption to the base fee also weakens. We therefore restrict confirmatory claims to the pre-Dencun window.
- Measurement validity. Posting-clean adoption is constructed by netting sequencer posting from end-user execution. Misclassification, coverage gaps for newer rollups, or relabeling by data providers could introduce level shifts that affect both the instrument and outcome series until detected.
- External validity. The semi-elasticity may differ across application mixes (DeFi vs NFT vs stablecoin flows) and could be muted if lower fees induce rebound demand. Extrapolating to other EIP-1559 chains requires similar L2 penetration, fee-market mechanics, and monitoring of distributional incidence.

In practice, these threats encourage a division of labor between engineering experimentation and econometric evaluation. Short-run fee relief and within-regime comparisons can be evaluated with the present ECM and ITS toolkit, provided posting-clean labels are periodically audited for measurement drift. New instruments should avoid introducing additional simultaneity. Longer-run welfare or cross-regime counterfactuals will likely require new sources of quasi-experimental variation. Promising candidates include exogenous outages, parametric changes to blob markets, or natural experiments in sequencer fee rebates. External validity concerns also motivate segmenting outcomes by application mix before extrapolating to other chains. A replication log records these boundary conditions. Future updates—whether from Ethereum or other EIP-1559 chains—can extend the window for causal inference without revising the core identification strategy.

Taken together, residual simultaneity, short-horizon precision limits, regime shifts, and measurement/external scope boundaries delimit where our core causal claims apply. They highlight the need for fresh instruments, monitoring of classification, and longer panels.

# 6.3 Open Questions and Monitoring Playbook

Replication artifacts are in Appendix A; the replication repository carries the full audit log and change history. The remaining agenda for L2-L1 congestion research is best framed as concrete, monitorable questions rather than meta-guidance:

1. Post-Dencun identification. What quasi-experimental shocks create exogenous adoption variation now that blobs absorb most L2 data? Candidates include blob fee parameter changes (e.g., target gas adjustments in Buterin et al., 2024), sequencer outages, and forced migrations during prover or bridge upgrades. A running changelog of these events—timestamped and paired with posting-clean adoption—keeps the ECM/ITS designs re-estimable the moment variation appears.
2. Mechanism split (blobs vs execution gas). Does higher L2 adoption still relieve execution congestion, or only calldata/DA pressure? Monitoring should separate blob pricing from execution-layer base fees. It should also track how sequencer pricing rules respond, leveraging the posting-pricing interaction modeled by Wang et al. (2025).
3. Heterogeneity and incidence. Which user segments capture the fee relief—DeFi vs NFT vs stablecoin flows? How does it differ for latency-sensitive traders versus routine transfers? Segmenting L2 inflows, bridge mix, and cross-rollup price gaps (cf. Gogol et al., 2024) would reveal whether congestion relief accrues to whales, retail users, or MEV searchers.
4. Early-warning monitoring. At what thresholds does the confirmatory design lose power (e.g., adoption sustained above $80 - 90\%$) and require fresh instruments?

A lightweight playbook has three steps. (i) Maintain daily dashboards for posting-clean adoption, blob utilization, and sequencer incidents. (ii) Rerun the ECM each time a shock hits or the adoption distribution shifts. (iii) Archive the resulting IRFs and diagnostics alongside the replication bundle so the evidence base compounds across upgrades.

These questions turn Section 5 into a live monitoring blueprint. Instead of restating transparency logistics, they specify what new variation to watch for, how to split mechanisms, and which distributional outcomes determine who benefits from the congestion relief.

# 7 Conclusion

Short answer: yes—higher L2 adoption decongests Ethereum's fee market in the short run, but the relief is partial and local in time. A 10 percentage point increase in posting-clean adoption lowers L1 base fees by roughly $13\%$ (about 5 Gwei or \$0.14 for a 21k-gas transfer at the pre-Dencun mean), and deviations from the long-run relation decay with an 11-day half-life. Together with the dynamic profile in Figure 3 and the ECM benchmark in Table 2, these numbers provide regime-aware causal evidence that the rollup-centric roadmap already buys near-term congestion relief.

Conceptually, the paper introduces a posting-clean adoption measure that captures user migration rather than posting load, a demand factor that avoids mediator contamination, and a regime-aware ITS-ECM template for monitoring rollup-centric scaling. Substantively, it delivers the first cross-regime causal estimate of how aggregate L2 adoption decongests Ethereum's mainnet and translates the semi-elasticity into Gwei and dollar savings that are directly interpretable for protocol designers and users.

These claims are bounded. Inference is local to the pre-Dencun regime where adoption still moves, and precision fades beyond horizons of roughly one month. Instrument strength is modest, so simultaneity concerns are handled with cautious timing diagnostics rather than strong exclusion. As summarized in Section 6.2, these boundaries keep confirmatory claims narrow while flagging where additional variation is needed. For protocol designers and governance bodies, the practical implication is that fee-market reforms and L2 ecosystem support should be evaluated jointly.
Moving L2 user share from $60\%$ to $80\%$ would lower median base fees by roughly a quarter at pre-Dencun demand levels, putting adoption subsidies on the same order as the fee changes analyzed around the London upgrade (Liu et al., 2022). In the blob era, incentives that shift activity onto rollups or smooth posting schedules operate alongside the blob-fee parameters in Buterin et al. (2024), making adoption-driven interventions a complementary lever rather than a substitute for base-fee tuning. Future work should extend the confirmatory window as post-Dencun variance widens, seek quasi-experimental shocks in blob pricing or sequencer operations, and map distributional incidence using address-tagged data so that welfare gains from rollup-driven congestion relief can be allocated across user types.
arxiv_physics
2025-12-09T00:00:00Z
https://arxiv.org/pdf/2512.14724
{"title": "Layer-2 Adoption and Ethereum Mainnet Congestion: Regime-Aware Causal Evidence Across London, the Merge, and Dencun (2021-2024)", "raw_content": "# Layer-2 Adoption and Ethereum Mainnet Congestion: Regime-Aware Causal Evidence Across London, the Merge, and Dencun (2021–2024)\n\nAysajan Eziz*\n\nIvey Business School, Western University, Canada\n\n# Abstract\n\nDo Ethereum's Layer-2 (L2) rollups actually decongest the Layer-1 (L1) mainnet once protocol upgrades and demand are held constant? Using a 1,245-day daily panel from August 5, 2021 to December 31, 2024 that spans the London, Merge, and Dencun upgrades, we link Ethereum fee and congestion metrics to L2 user activity, macro-demand proxies, and targeted event indicators. We estimate a regime-aware error-correction model that treats posting-clean L2 user share as a continuous treatment. Over the pre-Dencun (London+Merge) window, a 10 percentage point increase in L2 adoption lowers median base fees by about $13\\%$ roughly 5 Gwei at pre-Dencun levels—and deviations from the long-run relation decay with an 11-day half-life. Block utilization and a scarcity index show similar congestion relief. After Dencun, L2 adoption is already high and treatment support narrows, so blob-era estimates are statistically imprecise and we treat them as exploratory. The pre-Dencun window therefore delivers the first cross-regime causal estimate of how aggregate L2 adoption decongests Ethereum, together with a reusable template for monitoring rollup-centric scaling strategies.\n\nKeywords: Ethereum; Layer-2 rollups; transaction fees; congestion; causal time series\n\nJEL Classification: C22, C54, L86, O33\n\n# 1 Introduction\n\nEthereum's fee market has traversed three structural regimes in rapid succession—London's EIP-1559 base-fee burn, the Merge's proof-of-stake transition, and Dencun's EIP-4844 blob space. Each upgrade reshaped how congestion costs are priced and burned but did not expand Layer-1 (L1) execution capacity. Bursts of NFT minting, stablecoin arbitrage, or L2 posting therefore still push median fees into the tens of Gwei and crowd out smaller users.\n\nOver the same period, optimistic and zero-knowledge Layer-2 (L2) rollups matured from pilots into production systems that regularly settle more than half of Ethereum's transactions. These rollups offload execution but also consume L1 blockspace when publishing compressed batches. This creates an open question: does aggregate L2 adoption relieve mainnet congestion or merely reshuffle it across users, time, and layers? We ask: when overall demand and protocol regime are held constant, does higher L2 user adoption reduce Ethereum mainnet congestion?\n\nOur main findings are straightforward. Over the London $\\rightarrow$ Merge window, a 10 percentage point increase in posting-clean L2 adoption is associated with about a $13\\%$ reduction in median base fees. That corresponds to roughly 5 Gwei at pre-Dencun fee levels. An error-correction term implies an 11-day half-life back to the long-run relation between adoption, congestion, and demand. The fee relief is therefore meaningful but partial and short-run. Supporting metrics based on block utilization and a scarcity index show similar congestion relief. 
Blob-era slopes after Dencun are statistically imprecise because adoption is already near saturation, so we treat those estimates as exploratory.\n\nExisting work on Ethereum's fee market and rollups shows how individual upgrades and rollup designs affect incentives, price discovery, and posting costs. However, most studies focus on single events or descriptive dashboards rather than regime-spanning causal estimates. Empirical analyses of fee-market upgrades and rollup pricing quantify local changes in fees, waiting times, or cross-rollup spreads. They do not estimate the total effect of aggregate L2 adoption on mainnet congestion across the London $\\rightarrow$ Merge $\\rightarrow$ Dencun sequence or cleanly separate that effect from shared demand shocks.\n\nWe address this gap by assembling a regime-aware daily panel of $N = 1,245$ observations from August 5, 2021 through December 31, 2024 that spans the London, Merge, and post-Dencun eras. The panel links median base fees, block utilization, and a congestion scarcity index to a posting-clean measure of L2 user adoption and to a single demand factor summarizing ETH-market activity and stablecoin flows. Calendar and regime dummies plus targeted event indicators capture protocol shifts and discrete shocks. We estimate a regime-aware error-correction model and complementary time-series designs to map adoption shocks into short-run and medium-run congestion outcomes.\n\nThe adoption measure counts end-user transactions on rollups and mainnet while excluding L2-to-L1 posting flows, so the adoption $\\rightarrow$ posting $\\rightarrow$ congestion channel remains part of the estimand. Together with the demand factor, this keeps the estimand focused on the total effect of user migration onto L2s without conditioning on mediator pathways. Section 4 provides the full construction details and adjustment logic.\n\n# 1.1 Contributions\n\nOur contributions are fourfold:\n\n1. Cross-regime causal estimate. We provide a regime-aware causal estimate of the total effect of L2 adoption on L1 fees spanning the London $\\rightarrow$ Merge $\\rightarrow$ Dencun sequence, rather than focusing on a single upgrade or contemporaneous correlations. \n2. Measurement design. We introduce a posting-clean adoption measure and a demand factor that deliberately exclude mediator pathways, offering a reusable template for avoiding post-treatment conditioning in blockchain congestion studies. \n3. Policy translation. We map semi-elasticities into Gwei and dollar savings for representative transactions and adoption scenarios, connecting econometric quantities to fee levels and cost savings that protocol designers and users directly observe. \n4. Template for monitoring. We combine a regime-aware error-correction framework with a compact set of diagnostics into a monitoring toolkit that can be updated as new data arrive and ported to other rollup-centric ecosystems.\n\n# 1.2 Roadmap\n\nSection 2 situates this contribution relative to empirical studies of Ethereum's fee market, rollup design, and causal time-series methods, highlighting why existing work cannot recover the total effect of aggregate L2 adoption on mainnet congestion. Section 3 describes the panel construction and variable definitions, and Section 4 outlines the causal design and estimators. Section 5 reports the empirical results, and Sections 6-7 discuss implications and conclude. 
Appendix A documents the data and code assets, and the replication repository carries the full reproducibility record.\n\n# 2 Related Work\n\n# 2.1 Fee-Market Design and Ethereum Upgrades\n\nScholarship on Ethereum's fee market shows how protocol upgrades reshape incentives without immediately expanding Layer-1 (L1) throughput. EIP-1559's base-fee burn and\n\nelastic block size improved price discovery and reduced fee volatility while leaving the hard cap on computation unchanged (Buterin et al., 2021). The Merge stabilized slot times and validator incentives without materially increasing execution capacity. Dencun's EIP-4844 then introduced dedicated blob space that dramatically reduced Layer-2 (L2) posting costs (Buterin et al., 2024).\n\nEmpirical analyses of EIP-1559 document how the new fee mechanism affects transaction fees, waiting times, and consensus margins (Liu et al., 2022), while recent work on L2 arbitrage and rollup pricing studies cross-rollup spreads and the interaction between posting costs and liquidity provision (Gogol et al., 2024; Wang et al., 2025). Existing empirical work on Ethereum's fee market and rollups therefore either focuses on a single upgrade such as EIP-1559 or on protocol-level behavior inside specific rollup or application ecosystems, carefully quantifying local changes in fees, spreads, or posting costs but not the total effect of aggregate L2 user adoption on mainnet congestion across multiple protocol regimes. Industry observatories track the resulting growth of optimistic and zero-knowledge rollups, transitions from calldata to blob usage, and the emergence of posting-fee arbitrage, $^{1}$ but they typically treat L2 posting as part of user demand or abstract from macro shocks that jointly affect L1 congestion and L2 adoption. Our design fills this gap by treating L2 adoption as a continuous treatment and explicitly modeling the sequence of London, Merge, and Dencun regimes.\n\n# 2.2 Empirical Congestion and Causal Time-Series Methods\n\nCausal and time-series methods developed in adjacent technology and financial settings provide templates for credible evaluation of congestion policies. Interrupted time series (ITS) and segmented regression remain staples for policy impact analysis (Bernal et al., 2017; Penfold and Zhang, 2013). Continuous-treatment event studies extend difference-in-differences logic to dosage-style shocks with explicit pre-trend tests (de Chaisemartin and D'Haultfoeuille, 2020). Bayesian Structural Time Series (BSTS) constructs probabilistic counterfactual paths with state-space components for trends, seasonality, and contemporaneous covariates (Brodersen et al., 2015), and Regression Discontinuity in Time (RDiT) exploits sharp policy boundaries when smoothness assumptions hold (Hausman and Rapson, 2018). These designs have been deployed in fintech launches, payment reforms, and energy-market interventions, and they underlie several recent empirical studies of blockchain fee dynamics and rollup pricing. Yet existing congestion studies rarely combine DAG-guided adjustment sets, mediator exclusion, and semi-elasticity reporting that maps coefficients into user-level cost changes.\n\n# 2.3 Broader Congestion and Market-Design Literatures\n\nRegulatory and market-microstructure literatures highlight the perils of conditioning on post-treatment variables when evaluating market design. 
Work on tax holidays, exchange-fee rebates, and telecom interconnection policies stresses the need for clean treatment definitions and transparent adjustment sets to maintain credibility when interventions unfold over multiple regimes. In the rollup-centric roadmap, L2 adoption both responds to and influences L1 congestion, so empirical strategies must avoid conditioning on posting flows and clearly distinguish exploratory diagnostics from confirmatory estimands.\n\nViewed through this lens, Ethereum's L1/L2 stack resembles other congestion-pricing problems in transportation networks, electricity grids, and payment systems: multiple service layers share a common bottleneck, and welfare depends on how incentives, fee schedules, and governance are coupled across layers. Existing studies either focus on single upgrades, rely on contemporaneous correlations pulled from dashboards, or embed L2 posting in both treatment and controls, diluting the estimand. To our knowledge, there is no regime-aware, DAG-grounded causal study that estimates the total effect of L2 adoption on L1 congestion across London, the Merge, and Dencun, nor one that pairs a posting-clean treatment with a demand factor that excludes mediator pathways. This study fills that gap by providing cross-regime semi-elasticities and adjustment dynamics that speak directly to Ethereum's rollup-centric scaling roadmap.\n\n# 3 Data and Variables\n\nWe construct a daily UTC panel that tracks Ethereum Layer-1 congestion, Layer-2 user activity, and macro-demand proxies across the London, Merge, and Dencun upgrades. Each observation aggregates raw L1 and L2 transaction traces, blob-fee data, off-chain market indicators, and a curated event list into the variables summarized in Table 1. The unit of analysis is a calendar day, and unless stated otherwise all quantities are computed on this daily grid.\n\n# 3.1 Sample Window, Regimes, and Panel Snapshot\n\nOur daily sample runs from 5 August 2021 (London / EIP-1559 activation) through 31 December 2024, yielding $N = 1,245$ UTC days. It spans three protocol regimes: London (406 days), Merge (545 days), and the post-Dencun blob era (294 days). Figure 1 plots the posting-clean L2 transaction share $A_{t}^{clean}$ , log base fee, block utilization, and the scarcity index across the four labeled regimes (pre-London, London→Merge, Merge→Dencun, post-Dencun); shaded bands mark the upgrade dates that define the regime indicators $\\mathbf{R}_{t}$ .\n\nUnless noted otherwise, the pre-Dencun (London+Merge; $N = 951$ ) window is the confirmatory window because $A_{t}^{clean}$ still traverses a wide portion of [0,1]. The blobera post-Dencun window is retained for descriptive context, as $A_{t}^{clean}$ is already near saturation (Section 5.3). Descriptive figures and summary statistics continue to use the full $N = 1,245$ -day panel. 
Table 1 summarizes the key variables and data sources; extended descriptive and treatment-support diagnostics appear in Appendix B.\n\nTable 1: Key Variables and Data Sources \n\n<table><tr><td>Role</td><td>Symbol</td><td>Description</td><td>Construction (brief)</td><td>Source(s)</td></tr><tr><td>Treatment</td><td>Atclean</td><td>Posting-clean L2 adoption share</td><td>Daily share of L2 end-user tx in total L1+L2 user tx; L2→L1 postings removed from both sides</td><td>L1/L2 traces; rollup inbox registry</td></tr><tr><td>Outcome</td><td>logCtfee</td><td>Log median base fee</td><td>Log of median EIP-1559 base fee (Gwei) across blocks in day t</td><td>Ethereum mainnet block traces; public feed dashboards</td></tr><tr><td>Outcome</td><td>ut</td><td>Block utilization</td><td>Median gas used divided by gas limit across blocks in day t</td><td>Ethereum mainnet block traces</td></tr><tr><td>Outcome</td><td>St</td><td>Scarcity index</td><td>Composite (base + tip + blob) fee index relative to smoothed demand benchmark (Appendix G)</td><td>Ethereum execution and blob-fee data</td></tr><tr><td>Control</td><td>Dt*</td><td>Demand factor</td><td>First PC of ETH re-turns, CEX volumes, realized volatility, search intensity, and net stablecoin issuance; standardized</td><td>Off-chain market data; Google Trends</td></tr><tr><td>Control</td><td>Rt</td><td>Regime indicators</td><td>Dummies for London, Merge, post-Dencun regimes</td><td>Protocol up-grade calendar</td></tr><tr><td>Control</td><td>Calt</td><td>Calendar dummies</td><td>UTC weekend, month-end, and quarter-turn indicators</td><td>Calendar</td></tr><tr><td>Control</td><td>Shockt</td><td>Targeted shock dummies</td><td>Event flags for mega NFT mints, sequencer out-ages, airdrop claim days, market-stress episodes (Table 14)</td><td>Curated event catalog</td></tr></table>\n\n# 3.2 Treatment: Posting-Clean Adoption Share\n\nWe define the treatment as the posting-clean adoption share,\n\n$$\nA _ {t} ^ {c l e a n} = \\frac {\\mathrm {L 2 u s e r t r a n s a c t i o n s} _ {t}}{\\mathrm {L 2 u s e r t r a n s a c t i o n s} _ {t} + \\mathrm {L 1 u s e r t r a n s a c t i o n s} _ {t}},\n$$\n\n![](images/15cf84a1d1c633b644a37e516b80d64fb6fdf81a0267dff1e282d3656f3592b2.jpg) \nEvolution of L1-L2 dynamics (2019-2024)\n\n![](images/f6f2d050a07564d81b26efbe29d849c0bfadf58f0ba26595465bdefb9c1f7bd5.jpg)\n\n![](images/ef88bc9d0f31be843c3ab784732e030cbe1fbe756b1621bccdc06ce90496e03d.jpg)\n\n![](images/a1d27b4517432f2abe10b3aa5f4ceb120a12fb45597144254d730e497cc54b8e.jpg)\n\n![](images/78edc5162da2cae270662e940422bd251b5860a0121185a27b8dff2da0333a6b.jpg) \nFigure 1: Regime-Aware Time Series Overview\n\nNote: Daily UTC aggregates for treatment ( $A_{t}^{clean}$ ) and congestion outcomes ( $\\log C^{fee}$ , utilization $u_{t}$ , scarcity $S_{t}$ ). Shaded bands mark London (2021-08-05), Merge (2022-09-15), and Dencun (2024-03-13); lines show 7-day rolling medians with a log scale for congestion metrics.\n\nWe identify posting transactions via a point-in-time join against the rollup inbox registry. These postings are removed from both numerator and denominator before computing the share, so $A_{t}^{clean}$ captures end-user execution rather than sequencer posting burden. The construction is applied consistently across the set of canonical Ethereum rollups tracked in our registry, and all quantities are aggregated to the daily UTC grid. 
The rollup set includes Arbitrum, Optimism, Base, zkSync, Starknet, Linea, and Scroll; Appendix G.3 states the rollup set, and the replication bundle provides the full 12_inbox registries table with contract mappings. By stripping posting transactions from the share, we avoid conditioning on the L2 posting load that sits on the $A_{t}^{clean} \\to P_{t} \\to C_{t}$ path; Section 4.1 discusses this mediator logic in detail.\n\n# 3.3 Outcomes and Congestion Metrics\n\nThe primary outcome is the log median EIP-1559 base fee, $\\log C_t^{fee} = \\log (\\text{median base fee}_t)$ , computed from canonical Ethereum JSON-RPC traces and cross-checked against public explorers, mirroring the construction in Liu et al. (2022). For each day $t$ we take the median base fee across blocks and then apply the natural logarithm.\n\nWe track two congestion secondary outcomes. Block utilization $u_{t}$ is the median ratio of gas used to the regime-specific gas limit across blocks in day $t$ , $u_{t} = \\mathrm{median}_{b\\in t}\\left(\\frac{\\mathrm{gas~used}_b}{\\mathrm{gas~limit}_b}\\right)$ . The harmonized scarcity index $S_{t}$ combines base fees, priority tips, and blob fees into a single congestion proxy by scaling total per-unit fees relative to a smoothed execution-demand benchmark; the full construction (smoothing window, regime-aware components, and units) is documented in Appendix G.\n\nFigure 1 shows that median fees fall sharply after Dencun while utilization and scarcity compress, consistent with blob space easing congestion pressure. All three outcomes are winsorized at the $0.5\\%$ tails and share the same $N = 1,245$ daily coverage as the treatment.\n\n# 3.4 Controls and Auxiliary Inputs\n\nWe construct three groups of auxiliary variables—all defined on the same daily UTC grid as the treatment and outcomes—that will later enter the adjustment set $X_{t}$ :\n\n- Demand factor $(D_t^*)$ . We condense ETH log returns, centralized-exchange (CEX) log volumes, realized volatility, Google search intensity, and net stablecoin issuance into the first principal component, standardized to mean zero and unit variance. These inputs are purely off-chain and are detailed in the measurement appendix. \n- Regime and calendar indicators $(\\mathbf{R}_t, \\mathbf{C}\\mathbf{a}\\mathbf{l}_t)$ . Regime dummies flag the London, Merge, and post-Dencun eras. Calendar dummies mark weekends, month-ends, and\n\nquarter turns to capture deterministic seasonality documented in exploratory diagnostics.\n\n- Targeted event dummies $(\\mathbf{Shock}_t)$ . A curated event catalog covers mega NFT mints, sequencer outages, notable airdrop claim days, and major market-stress episodes; the full list appears in Table 14.\n\nAll days and calendar indicators are defined in UTC to match the aggregation grid. Together these variables form the adjustment set $\\{D_t^*,\\mathbf{R}_t,\\mathbf{Cal}_t,\\mathbf{Shock}_t\\}$ used in the ITS-ECM specifications summarized in Section 4 and listed in Table 1.\n\n- Summary. Daily UTC panel (5 August 2021-31 December 2024; $N = 1,245$ ) combining: (i) L1 and L2 on-chain traces for the posting-clean adoption share $A_{t}^{clean}$ ; (ii) EIP-1559 fee and gas-usage data for congestion metrics $(\\log C_{t}^{fee}, u_{t}, S_{t})$ ; and (iii) off-chain market and search data, protocol calendars, and curated events for the controls $\\{D_{t}^{*}, \\mathbf{R}_{t}, \\mathbf{Cal}_{t}, \\mathbf{Shock}_{t}\\}$ . 
The pre-Dencun (London+Merge; $N = 951$ ) window is the primary window with wide treatment support; post-Dencun days are retained descriptively.\n\n# 4 Methodology\n\nMethod overview. We study how the daily posting-clean Layer-2 adoption share $A_{t}^{clean}$ affects Ethereum Layer-1 congestion using an interrupted time-series (ITS) design. The main estimand is a semi-elasticity: the percentage change in the typical user's base fee for a 1 percentage point rise in $A_{t}^{clean}$ , which we report per 10 percentage points to match observed adoption swings. Our confirmatory analysis uses a levels specification and a corresponding error-correction model (ECM) for short-run dynamics with a fixed outcome family and multiple-testing adjustments; exploratory extensions reuse the same adjustment set but relax some of these constraints.\n\n# 4.1 Causal Estimand and DAG\n\n# 4.1.1 Estimand in plain language\n\nFormally, our main estimand is a semi-elasticity: the percentage change in the log base fee associated with a 1 percentage point increase in $A_t^{clean}$ , conditional on macro-demand, protocol regime, and calendar effects. Reporting effects for a 10 percentage point change aligns the scale with typical observed shifts in L2 market share. Economically, this measures how much a \"typical\" user's base fee responds to a shift in aggregate L2 adoption, holding the broader environment fixed.\n\nTreatment is $A_{t}^{clean}$ ; the confirmatory outcome family is $C_{t} = (\\log C_{t}^{fee}, \\log S_{t})$ with utilization $u_{t}$ reported as exploratory. The adjustment vector $X_{t} = \\{D_{t}^{*}, \\mathbf{R}_{t}, \\mathbf{Cal}_{t}, \\mathbf{Shock}_{t}\\}$ matches the covariates introduced in Section 3. For brevity in figures we occasionally write $A_{t}$ ; throughout this section $A_{t} \\equiv A_{t}^{clean}$ , the posting-clean adoption share defined in Section 3.2. Construction details, PCA loadings, and validation diagnostics remain in the methodology appendix and the public replication package (Appendix A).\n\n# 4.1.2 DAG and identification logic\n\nFigure 2 summarizes the causal structure we assume.\n\nFigure 2: Directed Acyclic Graph for Total-Effect Identification \n![](images/7415a9cf8ddaf762a5e941a988e60f2cc1a8aaca533b506c1f4cc14bf659eaa5.jpg) \nPaths: Solid = primary causal; dashed = confounding; dash-dotted = mediation; dotted = dynamic feedback. Nodes: Light grey = confounders; medium grey = treatment; darker grey = mediator; darkest grey = outcome.\n\nNote: The DAG encodes treatment $A_{t}^{clean}$ (posting-clean L2 adoption share; labeled $A_{t}$ in the graphic for brevity), outcomes $C_{t}$ (congestion metrics), confounders $D_{t}^{*}$ (latent demand) and $U_{t}$ (protocol regimes), mediator $P_{t}$ (posting load), and dynamic feedback $C_{t-1}$ . Conditioning on $\\{D_{t}^{*}, U_{t}, \\mathbf{Cal}_{t}, \\mathbf{Shock}_{t}\\}$ blocks the main back-door paths while the mediator-exclusion principle keeps posting activity out of the control set. 
Dynamic feedback is addressed via deterministic trends and robustness checks.\n\nConcretely, $A_{t}^{\\text{clean}}$ is the daily posting-clean adoption share from Section 3.2, $C_{t}$ stacks the congestion metrics introduced in Section 3.3, $D_{t}^{*}$ is the off-chain latent demand factor in Section 3.4, $U_{t}$ corresponds to the regime indicators $\\mathbf{R}_{t}$ in Section 3.1, and $P_{t}$ denotes the posting load on the $A_{t}^{\\text{clean}} \\to P_{t} \\to C_{t}$ path.\n\nIntuitively, both adoption and congestion respond to underlying demand shocks—ETH price moves, DeFi/NFT cycles, and macro news—summarized by $D_{t}^{*}$ together with regime, calendar, and targeted-shock indicators. Higher adoption raises posting load $P_{t}$ through data-availability transactions, which in turn pushes up congestion $C_{t}$ . Because our target is the total effect of adoption on congestion, we adjust for these common shocks while deliberately leaving the $A_{t}^{\\mathrm{clean}} \\rightarrow P_{t} \\rightarrow C_{t}$ path open. The posting-clean\n\nconstruction subtracts L2 posting transactions from both numerator and denominator when forming $A_{t}^{\\mathrm{clean}}$ , so the treatment reflects end-user execution rather than sequencer posting burden and we avoid \"bad-control\" contamination of the total-effect estimand (Wang et al., 2025).\n\nOperationally, the adjustment set $X_{t} = \\{D_{t}^{*},\\mathbf{R}_{t},\\mathbf{Cal}_{t},\\mathbf{Shock}_{t}\\}$ is built to support the identification assumptions listed below using three design choices, backed by diagnostics in the methodology appendix. First, the latent-demand factor uses only off-chain proxies so that mediator pathways (such as L2 posting) are excluded by construction. Second, deterministic regime and calendar structure capture discontinuities from protocol upgrades and recurring seasonality, preventing them from contaminating $A_{t}^{\\mathrm{clean}}$ . Third, targeted shock dummies isolate large day-specific shocks (NFT mega-mints, macro turmoil, sequencer outages) that would otherwise spill into both treatment and outcomes. With these controls active, the remaining identifying variation is slow-moving adoption intensity that is plausibly less contaminated by concurrent demand shocks, conditional on $X_{t}$ .\n\nIdentification assumptions. These design choices are intended to make the following assumptions plausible:\n\n1. Conditional exchangeability: Sequential ignorance holds once we condition on $X_{t}$ ; the covariate definitions and targeted-event coverage tables in the measurement appendix document how each covariate maps to the back-door paths in Figure 2. \n2. Positivity within regimes: Treatment-support diagnostics (Appendix B) show wide support across the [0, 1] domain during London and Merge, but post-Dencun days concentrate in a 0.86-0.91 band. Minimum-detectable-effect calculations therefore label post-Dencun slope estimates as exploratory, consistent with Section 5.3. \n3. SUTVA / stable interventions: The posting-clean construction keeps $A_{t}^{\\mathrm{clean}}$ within the [0, 1] simplex even when L2 posting volumes swell and defines a single aggregate adoption measure per day. Together with daily aggregation, this maintains a stable notion of the treatment (no hidden versions of $A_{t}^{\\mathrm{clean}}$ ) and limits cross-day interference, in line with the Stable Unit Treatment Value Assumption (SUTVA).\n\nDiagnostics summary. 
Exchangeability is probed via placebo regressions of $A_{t}^{\\mathrm{clean}}$ on lagged outcomes and on leads of $D_{t}^{*}$ ; coefficients cluster near zero in the diagnostics archive. Positivity is reinforced by trimming pre-London outliers where $A_{t}^{\\mathrm{clean}} < 0.05$ and by flagging post-Dencun estimates as exploratory whenever coverage collapses. Stability is evaluated through split-sample tests that compare pre- and post-Merge coefficients;\n\nthe absence of sign flips in the local-projection responses (Figure 3) suggests that the estimand retains meaning across hardware and software upgrades, though we continue to report regime-specific precision.\n\n# 4.1.3 Relation to existing empirical work\n\nConceptually, our design complements upgrade-focused empirical analyses of the fee market such as Liu et al. (2022), who compare pre- and post-London behavior, and transaction-level rollup studies such as Gogol et al. (2024), who analyze arbitrage and fee dynamics within specific L2s. Upgrade-focused studies treat London or Dencun as discrete interventions and rely on event-study or regression-discontinuity-in-time designs anchored on those dates. In contrast, our question concerns how continuous variation in aggregate L2 adoption affects L1 congestion across and within regimes, motivating an interrupted time-series design with a continuous treatment rather than a pure event-study/RDiT framework.\n\n# 4.2 Main Estimators: ITS Levels and ECM\n\nWe summarize the confirmatory estimators once here; derivations and additional estimator variants appear in the methodology appendix.\n\n# 4.2.1 Long-run levels specification\n\nThe long-run benchmark is a levels ITS specification,\n\n$$\n\\log C _ {t} ^ {f e e} = \\beta_ {0} + \\beta_ {1} A _ {t} ^ {c l e a n} + \\gamma D _ {t} ^ {*} + \\pmb {\\delta^ {\\prime}} \\mathbf {R} _ {t} + \\pmb {\\theta^ {\\prime}} \\mathbf {C a l} _ {t} + \\pmb {\\eta^ {\\prime}} \\mathbf {S h o c k} _ {t} + \\varepsilon_ {t}, \\tag {1}\n$$\n\nwhere $\\pmb{\\eta}$ stacks the targeted event controls and $\\varepsilon_{t}$ may exhibit serial dependence. Here, $\\beta_{1}$ captures the semi-elasticity of congestion with respect to adoption. Because $A_{t}^{clean}$ is scaled on [0, 1], a 1 percentage point increase corresponds to a 0.01 change in $A_{t}^{clean}$ . We report effects for a 10 percentage point increase in adoption, computed as\n\n$$\n\\% \\text {Change in Fees for} 10 \\mathrm {pp} = 100 \\times \\left[ \\exp \\left(0.10 \\times \\beta_ {1}\\right) - 1 \\right]. \\tag{2}\n$$\n\nReporting effects for a 10 percentage point change makes the magnitude directly comparable to typical movements in L2 market share. Boldface terms denote stacked indicator vectors (regimes $\\mathbf{R}_t$ , calendar $\\mathbf{Cal}_t$ , shocks $\\mathbf{Shock}_t$ ); primes on the corresponding coefficient blocks ( $\\delta', \\theta', \\eta'$ ) indicate row-vector transposes so that, for example, $\\delta' \\mathbf{R}_t = \\sum_j \\delta_j R_{j,t}$ .\n\n# 4.2.2 Short-run dynamics via error-correction model\n\nWe test for cointegration between $\\log C_t^{fee}$ and $A_t^{clean}$ using Engle-Granger residual unit-root tests and Johansen rank tests (Appendix B). In both cases we reject the null of no cointegration over the pre-Dencun window (Section 5.1), supporting the presence of a stable long-run relation. 
This motivates an Error-Correction Model (ECM) for short-run inference:\n\n$$\n\\Delta \\log C _ {t} ^ {f e e} = \\phi E C T _ {t - 1} + \\psi \\Delta A _ {t} ^ {c l e a n} + \\kappa \\Delta D _ {t} ^ {*} + \\boldsymbol {\\lambda} ^ {\\prime} \\Delta \\mathbf {C a l} _ {t} + \\omega^ {\\prime} \\Delta \\mathbf {S h o c k} _ {t} + \\nu_ {t}, (3)\n$$\n\nwhere $ECT_{t-1}$ is the lagged residual from the long-run relation implied by Equation 1. Here, $\\psi$ is the instantaneous effect of $\\Delta A_t^{clean}$ on the daily change in the log base fee, and $\\phi < 0$ is the speed at which fees adjust back to equilibrium. Estimation proceeds in three steps: (i) fit Equation 1 with HAC covariance to obtain the long-run residual, (ii) form $ECT_{t-1}$ by lagging that residual, and (iii) estimate Equation 3 with HAC or feasible GLS while tracking residual diagnostics. The implied half-life $t_{1/2} = \\ln(0.5) / \\ln(1 + \\phi)$ summarizes how quickly fees revert after an adoption shock, and the same three-step procedure yields comparable 10pp semi-elasticities from $\\psi$ across confirmatory outcomes. Confirmatory ECM inference uses the full 2021-2024 sample, with post-Dencun days flagged as a separate regime; after differencing and lagging this leaves $N = 1,242$ daily observations, and the primary causal interpretation remains anchored to the pre-Dencun support. Throughout, the ECM reuses the same adjustment set ( $D_t^*, \\mathbf{R}_t, \\mathbf{Cal}_t, \\mathbf{Shock}_t$ ) as the levels specification in Equation 1, so that differences between long-run and short-run estimates reflect dynamics rather than changes in control variables. The confirmatory levels estimator is Prais-Winstein AR(1) FGLS (selected by the residual-dependence diagnostics); ARMA(1,2) is retained solely as a diagnostic alternative.\n\n# 4.2.3 Alternative dynamic specifications (robustness)\n\nFor robustness, we also estimate distributed-lag, Koyck (geometric-lag), first-difference, and local-projection variants, detailed in the methodology appendix. These models share the same adjustment set and are used to check that the sign and magnitude of the adoption effect are not artifacts of the ECM specification. To provide additional evidence on persistence, we include a geometric-lag (Koyck) specification:\n\n$$\n\\log C _ {t} ^ {f e e} = \\alpha + \\rho \\log C _ {t - 1} ^ {f e e} + \\beta_ {0} A _ {t} ^ {c l e a n} + \\gamma D _ {t} ^ {*} + \\pmb {\\delta^ {\\prime}} \\mathbf {R} _ {t} + \\pmb {\\theta^ {\\prime}} \\mathbf {C a l} _ {t} + \\pmb {\\eta^ {\\prime}} \\mathbf {S h o c k} _ {t} + u _ {t}, \\qquad (4)\n$$\n\nwhere the long-run multiplier equals $\\beta_0 / (1 - \\rho)$ whenever $|\\rho| < 1$ . Estimates from this specification are treated as supportive evidence on persistence rather than as primary causal effects; full derivations and diagnostic checks are reported in the methodology appendix.\n\nRegime-aware variants. When sample support permits, we interact $A_{t}^{clean}$ with Merge and Dencun indicators to estimate differential slopes. Because post-Dencun adoption saturates the treatment domain, these interaction coefficients are reported in Section 5.3 and labeled exploratory.\n\n# 4.3 Controls, Regimes, and Inference\n\nThe implementation details that support Equations 1-3 are summarized in three blocks; extended diagnostics remain in the methodology appendix.\n\nAdjustment set and targeted shocks (controls). 
Our adjustment set combines the PCA-based latent demand factor $(D_t^*)$ , regime dummies $(\\mathbf{R}_t)$ , calendar indicators $(\\mathbf{Cal}_t)$ , and a curated set of targeted shock dummies $\\mathbf{Shock}_t$ covering mega NFT mints, sequencer or mainnet outages, large airdrop claim days, and major market-stress episodes (Section 3.4). This set is chosen to block the main back-door paths in Figure 2 while preserving the mediator path from adoption to posting to congestion. We retain an indicator for any sequencer or mainnet outage in both the long-run and short-run equations so that platform outages do not get misattributed as treatment shocks; detailed coverage diagnostics are reported in Appendix B.\n\nSeasonality, regimes, and serial dependence. Deterministic seasonality (weekends, month-ends, quarter turns) and Merge/Dencun regime indicators enter every specification to absorb systematic changes in fee levels and utilization unrelated to L2 adoption. We allow for serially correlated errors and compute heteroskedasticity- and autocorrelation-consistent (HAC) standard errors. In practice, the confirmatory levels run uses Prais-Winstein AR(1) FGLS; compact ARMA corrections are explored as diagnostics and reported alongside Ljung-Box and Breusch-Godfrey checks in the diagnostics appendix. Dynamic feedback is handled by including lagged outcomes when needed (e.g., Koyck, ECM) and by auditing residual autocorrelation in the diagnostics appendix. Kernel choices, bandwidth selection, and spline-based calendar robustness checks live in the diagnostics appendix. The confirmatory window spans the pre-Dencun London $\\rightarrow$ Merge period (Section 3.1); post-Dencun estimates are labeled exploratory because treatment support collapses after the 2024 blob upgrade, as shown in the treatment-support diagnostics in Appendix B.\n\nTiming, instruments, and outcome family. To guard against mechanical same-day co-movement between $A_{t}^{clean}$ and congestion, we also estimate Equation 1 with $A_{t-1}^{clean}$ on the right-hand side. When exogenous variation is available (sequencer outages or blob-cost changes), we deploy it in a shift-share IV using pre-Dencun chain weights and report weak-instrument-robust confidence intervals in the instrumentation appendix.\n\nThe confirmatory outcomes are $\\log C_t^{fee}$ and $\\log S_t$ ; we apply Benjamini-Hochberg corrections at the $5\\%$ level and report the corresponding $q$ -values. Utilization and IV extensions are treated as exploratory and presented without multiple-testing adjustment.\n\n# 4.4 Confirmatory vs. Exploratory Scope\n\nWe fix the main estimand (the 10pp semi-elasticity of $\\log C_t^{fee}$ and $\\log S_t$ with respect to $A_t^{clean}$ ), the adjustment set $(D_t^*,\\mathbf{R}_t,\\mathbf{Cal}_t,\\mathbf{Shock}_t)$ , the levels and ECM specifications in Equations 1-3, and the confirmatory outcome family together with the Benjamini-Hochberg multiple-testing plan. Sections 5.1-5.3 report these confirmatory estimates, including adjustment dynamics and regime heterogeneity, with Benjamini-Hochberg corrections applied across the outcome family. Section 5.5 and the appendices present exploratory diagnostics and post-Dencun extensions that reuse the same adjustment set but fall outside the confirmatory outcome family (e.g., utilization, IV variations, and BSTs counterfactuals).\n\n# 5 Results\n\nWe now present results organized around five questions. 
These cover how much L2 adoption reduces congestion (Section 5.1), how quickly fees adjust after adoption shocks (Section 5.2), and how effects differ across regimes and precision (Section 5.3). We then ask how robust the findings are across congestion metrics (Section 5.4) and what the exploratory diagnostics and welfare bridges suggest (Section 5.5). Sections 5.1-5.5 report these estimates; the appendices provide additional diagnostics and estimator details.\n\n# 5.1 How much does L2 adoption reduce congestion?\n\nKey results at a glance. Over the pre-Dencun (London+Merge) window, a 10 percentage point increase in posting-clean L2 adoption lowers median L1 base fees by about $13\\%$ (roughly 5 Gwei at pre-Dencun levels), with deviations from the long-run relation decaying with an 11-day half-life. Block utilization and a scarcity index show similar relief. After Dencun, adoption is so high and compressed that the same design cannot reliably detect further fee reductions, even if they exist, so blob-era slopes are reported as exploratory only.\n\n# Key empirical results (confirmatory window).\n\n- Short-run ECM (Eq. 3): $\\psi = -1.382$ (SE 0.368) with $N = 1,242$ days from the full 2021–2024 panel (post-Dencun flagged as a separate regime) implies a $-12.9\\%$ change in daily base fees for a 10pp adoption shock. HAC (Bartlett, 7 lags)\n\nstandard errors yield $p < 0.001$ .\n\n- Speed of adjustment: $\\phi = -0.061$ (SE 0.011) maps to an 11.1-day half-life back to the long-run equilibrium, confirming meaningful reversion to the Engle-Granger cointegrating relation ( $p = 0.005$ ). \n- Dynamics: Local projections (Figure 3) show an immediate $-16.2\\%$ response to a 10pp adoption step with a $95\\%$ CI $[-22.7\\%, -9.2\\%]$ , and cumulative point estimates remain negative through 28 days even though the $95\\%$ bands cross zero after the first week. \n- Multiple outcomes: Benjamini-Hochberg corrections over $\\{\\log C^{fee}, \\log S_t\\}$ yield $q_{\\log C^{fee}} = 3.0 \\times 10^{-8}$ and $q_{\\log S_t} = 1.1 \\times 10^{-3}$ ; exploratory outcomes remain unadjusted, with detailed FDR diagnostics reported in Appendix B.\n\nIn sum, a 10pp increase in L2 adoption lowers mainnet fees by roughly $13\\%$ within a few days, and this effect remains statistically precise after false-discovery-rate adjustment over the confirmatory outcome family.\n\nIn the ECM, $\\psi$ is the short-run semi-elasticity: the immediate percentage change in daily base fees from a one-point change in adoption. $\\phi$ is the speed of adjustment: it tells us how quickly fees revert to the long-run relation after an adoption shock. We report both on a 10pp scale to match realistic shifts in L2 market share and to reuse the same units in the welfare translation below.\n\nUnit-root and cointegration tests (ADF, KPSS, Phillips-Perron, Engle-Granger, Johansen) support treating $A_{t}^{\\mathrm{clean}}$ , $\\log C_{t}^{fee}$ , and $D_{t}^{*}$ as $I(1)$ with a stable long-run relation. Section 4 outlines the workflow, and Appendix B lists full $p$ -values. This motivates the ECM as our confirmatory short-run design, with the levels specification retained as a descriptive benchmark for the welfare translation. 
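To make that workflow concrete, the sketch below implements the two-step Engle-Granger levels regression and the short-run ECM of Equation 3 with pandas and statsmodels. The column names (`log_fee`, `a_clean`, `d_star`) and the bare-bones handling of the control block are illustrative assumptions, not the replication-bundle code.

```python
# Minimal Engle-Granger + ECM sketch (assumed column names; not the replication code).
import numpy as np
import pandas as pd
import statsmodels.api as sm


def fit_ecm(df: pd.DataFrame, controls: list, hac_lags: int = 7):
    # Step (i): long-run levels relation with HAC covariance.
    X_lr = sm.add_constant(df[["a_clean", "d_star"] + controls])
    long_run = sm.OLS(df["log_fee"], X_lr).fit(cov_type="HAC", cov_kwds={"maxlags": hac_lags})

    # Step (ii): lag the long-run residual to form the error-correction term.
    ect_lag = long_run.resid.shift(1)

    # Step (iii): short-run ECM in first differences, again with HAC errors.
    short = pd.DataFrame({
        "d_log_fee": df["log_fee"].diff(),
        "ect_lag": ect_lag,
        "d_a_clean": df["a_clean"].diff(),
        "d_d_star": df["d_star"].diff(),
    }).join(df[controls].diff()).dropna()
    X_sr = sm.add_constant(short.drop(columns="d_log_fee"))
    ecm = sm.OLS(short["d_log_fee"], X_sr).fit(cov_type="HAC", cov_kwds={"maxlags": hac_lags})

    phi = ecm.params["ect_lag"]                      # speed of adjustment
    psi = ecm.params["d_a_clean"]                    # short-run semi-elasticity
    half_life = np.log(0.5) / np.log(1.0 + phi)      # half-life formula from the text
    ten_pp = 100 * (np.exp(0.10 * psi) - 1)          # 10pp mapping used throughout
    return long_run, ecm, phi, psi, half_life, ten_pp
```

Pointing the same helper at the scarcity outcome instead of `log_fee` mirrors how the text describes obtaining comparable 10pp semi-elasticities across the confirmatory outcome family.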
Estimation uses the full 5 August 2021–31 December 2024 panel with post-Dencun days encoded as regime dummies so the causal interpretation remains anchored to the pre-Dencun support.

Residual-dependence checks select a Prais-Winsten AR(1) FGLS error for the confirmatory levels specification; an ARMA(1,2) fit is retained as a diagnostic alternative in Table 5 of Appendix B. The ECM uses HAC on first differences, consistent with the confirmatory pipeline.

A 10pp increase in adoption in the levels ITS corresponds to about an $11.3\%$ reduction in median base fees. At the pre-Dencun mean of 38 Gwei (about \$1.02 for a 21k-gas transfer when ETH trades at \$1,285), that is roughly 4-5 Gwei or about \$0.12 for a typical ETH transfer. These Gwei and dollar translations are direct applications of the semi-elasticity estimand: they translate the log-fee semi-elasticity into the change in gas paid by a representative 21k-gas transfer when L2 adoption rises by 10 percentage points. During high-demand episodes, this back-of-the-envelope mapping implies aggregate short-run savings of tens of millions of dollars across a few months.

Table 2: Merged Confirmatory Total-Effect Estimates

| Parameter | Estimate (SE) | 10pp mapping | Notes |
| --- | --- | --- | --- |
| ECM short-run $\psi$ | $-1.382^{***}$ (0.368) | $-12.9\%$ | $\Delta \log C^{fee}$ on $\Delta A_t^{clean}$, $N$ = 1,242 |
| Speed of adjustment $\phi$ | $-0.061^{***}$ (0.011) | Half-life 11.1 days | Engle-Granger residual $p$ = 0.005 |
| Levels benchmark $\beta$ | $-1.194^{***}$ (0.211) | $-11.3\%$ | Prais-Winsten AR(1) FGLS, $N$ = 1,244 |
| Scarcity outcome $\beta_S$ | $-0.062^{**}$ (0.019) | $-0.60\%$ | Same spec, confirmatory outcome 2 |

Notes: Semi-elasticities use $100 \times [\exp(0.10 \cdot \hat{\beta}) - 1]$. Standard errors rely on Newey-West HAC (Bartlett, maxlag 7). Significance markers: $^{***}\,p < 0.001$, $^{**}\,p < 0.01$. All models include the confirmatory adjustment set ($D_t^*$, regime/calendar dummies, targeted shocks, any_outage_t). Benjamini-Hochberg control across the confirmatory outcome family $\{\log C^{fee}, \log S_t\}$ yields $q_{\log C^{fee}} = 3.0 \times 10^{-8}$ and $q_{\log S_t} = 1.1 \times 10^{-3}$. These q-values keep both confirmatory outcomes below the $5\%$ FDR threshold within this table. The levels row corresponds to the Prais-Winsten AR(1) FGLS specification used in the confirmatory pipeline; ARMA(1,2) appears only in the diagnostic grid in Appendix B.

The BSTS welfare bridge (Figure 4) illustrates the counterfactual calculations behind that claim. Demand-factor stability checks using leave-one-out PCA variants and a lagged $D_t^*$ deliver the same sign, reinforcing that the result does not hinge on a particular macro proxy combination.

Taken together, the ECM and levels views tell a consistent story. The ECM captures the "flow" interpretation (immediate reaction of fee growth to adoption growth), while the Prais-Winsten levels specification provides the "stock" interpretation required for this welfare translation. The gap between the two coefficients—roughly two percentage points—primarily reflects the autoregressive error structure rather than a contradiction in economic content. This confirms that the identification strategy developed in Section 4 yields consistent estimates across specifications.

We also benchmark the magnitudes against the fee-market literature.
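Before turning to that benchmarking, the Gwei and dollar translation quoted above can be reproduced from the reported point estimates in a few lines. This is a back-of-the-envelope sketch that hard-codes figures from the text (the levels coefficient, the 38 Gwei mean, and the \$1,285 ETH price); it is not part of the replication pipeline.

```python
# Reproduce the 10pp Gwei/dollar translation from reported point estimates.
import numpy as np

beta_levels = -1.194        # levels semi-elasticity (Table 2)
mean_base_fee_gwei = 38.0   # pre-Dencun mean base fee quoted in the text
eth_usd = 1285.0            # ETH/USD price used for the dollar translation
gas_per_transfer = 21_000   # simple ETH transfer

scale = np.exp(0.10 * beta_levels) - 1                        # ~ -0.113 (-11.3%)
gwei_change = mean_base_fee_gwei * scale                      # ~ -4.3 Gwei
usd_change = gwei_change * 1e-9 * gas_per_transfer * eth_usd  # ~ -$0.12 per transfer

print(f"{100 * scale:.1f}% | {gwei_change:.1f} Gwei | ${usd_change:.2f} per 21k-gas transfer")
```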
Short-run elasticities in centralized exchange congestion studies typically span $-5\\%$ to $-15\\%$ for a ten-percentage-point load shift; our $-13\\%$ effect sits at the upper end of that range, which is intuitive given the lumpy nature of L2 user adoption. The 11-day half-life matches the cadence observed in on-chain mempool reversion after large NFT mints. That alignment suggests the ECM dynamics are economically plausible rather than an artifact of spline controls. Additional robustness diagnostics—instrumental-variable timing tests, placebo shocks, and shuffled-treatment experiments—are cataloged in the IV and diagnostics appendices and retain the same sign pattern even when statistical power dips.\n\nMeasurement alignment. The confirmatory estimand hinges on keeping treatment and outcome definitions synchronized with the DAG in Section 4. We therefore reiterate two checks that underpin the table above. First, $A_{t}^{clean}$ is computed from the exact same daily panel used in the ECM (no reindexing or smoothing), and its exclusion of blob-\n\nposting activity prevents mediator contamination. Second, the log base-fee outcome is benchmarked against the public eth_fee_history RPC as well as the internal BigQuery mirror so replication scripts and policy dashboards quote identical magnitudes. Detailed SQL and schema notes are provided alongside the replication materials to document both constructs consistently.\n\nMacroeconomic context. The confirmatory window spans multiple crypto market regimes—DeFi summer, the Terra/Luna unwind, the Merge, and the run-up to Den-cun—so we stress-tested whether any single macro period drives the headline coefficient. Splitting the sample along these historical boundaries yields semi-elasticities between $-0.9$ and $-1.5$ and the coefficient remains negative even when we drop the 60 most volatile days around Terra/Luna and FTX. These exercises underscore that the causal signal arises from broad-based adoption shifts rather than one-off crises. They also explain why we still include targeted event dummies to soak up short-lived disruptions.\n\nTargeted event controls leave both $\\psi$ and $\\phi$ unchanged, indicating that the latent demand factor is not masking omitted NFT mints, Terra/Luna, FTX, USDC depeg episodes, or sequencer outages. Timing and simultaneity diagnostics likewise return negative coefficients for lagged adoption and control-function IV corrections. Detailed IV tables in the instrumentation appendix document weak first stages (e.g., partial $F \\approx 7.6$ for the pooled outage IV) and Anderson-Rubin intervals that span zero. We therefore classify IV evidence as exploratory support for the ITS design rather than a standalone confirmatory estimator.\n\nDiagnostic cross-checks. Beyond the core diagnostics, we revisit three common concerns raised in protocol-governance reviews. (i) Serial correlation: Ljung-Box tests up to lag 30 reject for the raw levels regression but not for the ECM residuals once the error-correction term is included. This matches the behavior recorded in the residual-dependence diagnostics in the diagnostics appendix. (ii) Multicollinearity: variance-inflation factors for $A_{t}^{clean}$ , $D_{t}^{*}$ , and the regime/calendar block stay below 2.0. Ridge-regression stress tests retain the negative sign, consistent with the demand-factor variants documented in the estimators appendix. 
(iii) Omitted mediator risk: the \"posting-clean\" construction plus the outage dummy ensure that blob-posting costs do not contaminate $A_{t}^{clean}$ . Placebo regressions of $A_{t}^{clean}$ on future congestion deliver coefficients near zero with $p > 0.6$ . Each of these checks has a concise counterpart in Appendices B and G, keeping the core causal claims defensible.\n\nPolicy bridge. Translating coefficients into operational terminology helps protocol stewards reason about scaling targets. A 10pp increase in L2 adoption roughly corresponds to onboarding 2.3 million additional daily L2 user transactions at current volumes.\n\nMapping our semi-elasticity through Equation 2 implies that achieving the EIP-4844 goal of “ $90\\%$ of user activity off L1” would cut base fees by approximately $20\\%$ relative to today's mix. Additional blockspace unlocked by future danksharding upgrades would further amplify that relief. This bridge motivates the welfare analysis later in the section and links Section 5.1's confirmatory focus directly to the policy narratives developed in Section 6.\n\nLink back to Methods. The confirmatory design summarized here inherits the adjustment set and instrument logic laid out in Section 4. Every robustness variant invoked above reuses that adjustment set rather than introducing ad-hoc controls, so the DAG-backed back-door criterion remains satisfied. Exploratory IVs and timing tests are documented in the instrumentation appendix, keeping Table 2 focused on the primary pathway from L2 adoption to fees.\n\nOverall, cointegration-supported ECM estimates and levels benchmarks show that higher L2 adoption delivers double-digit percentage fee relief in the pre-Dencun window, and this conclusion is robust to event controls and alternative demand factors.\n\nThe magnitude of our semi-elasticity is in line with, but distinct from, prior fee-market studies. Liu et al. (2022) document limited changes in average fee levels around London but emphasize shifts in bidding behavior; our $11 - 13\\%$ effect instead captures how aggregate L2 adoption shifts equilibrium fees under fixed protocol rules. Similarly, Gogol et al. (2024) report rollup arbitrage values of roughly $0.03 - 0.25\\%$ of trading volume; at the aggregate level, a 10pp L2 penetration moves median L1 fees by an order of magnitude more in percentage terms.\n\nWe next ask how rapidly these fee reductions materialize and how long they persist.\n\n# 5.2 How quickly do fees adjust after an adoption shock?\n\nA Koyck geometric-lag model (Eq. 4) yields high persistence in congestion ( $\\rho = 0.888$ ) and a modest long-run multiplier ( $\\beta_{\\infty} \\approx 0.13$ ). We therefore rely on Jordà-style local projections to characterize short-run responses. Figure 3 plots horizon-specific responses of $\\Delta \\log C_{t + h}^{fee}$ to a one-time 10pp adoption shock with HAC bands. The $h = 0$ effect is $-16.2\\%$ (95% CI $[-22.7\\%, -9.2\\%]$ ). Point estimates remain negative through four weeks, but the 95% intervals include zero after the first week. Cumulative semi-elasticities stay below zero through 56 days, yet those longer-horizon intervals also cover zero. Appendix B reports the full grid. Excluding $\\pm 7$ -day windows around London, Merge, and Dencun, or adding targeted event controls to the LPs, leaves the $h = 0$ coefficient virtually unchanged. That pattern suggests apparent \"rebound\" blips are tied to known shocks rather than structural sign flips.\n\nTwo additional facts emerge from the LPs. 
First, the cumulative curve begins to flatten after week three but never crosses zero within the 56-day window. The longer-run "sign flip" implied by the geometric-lag algebra would therefore have to materialize beyond two months—a horizon where the data become too noisy for confirmatory claims. Second, the variance of the LP coefficients grows roughly linearly with the horizon, mirroring the variance inflation that we observe when estimating high-order autoregressions. This reinforces the decision to emphasize the short-run ECM rather than chase long-horizon effects with weak precision.

Figure 3: Local-Projection Responses to a 10pp Adoption Shock
Note: Panel A plots $\beta_h$ from regressions of $\Delta \log C_{t+h}^{fee}$ on $\Delta A_t^{clean}$, $\Delta D_t^*$, and the confirmatory adjustment set. Panel B maps cumulative responses back to the level scale via $100 \times [\exp(0.10\sum_{\tau \leq h}\hat{\beta}_{\tau}) - 1]$. Shaded areas denote HAC $95\%$ bands; moving-block bootstrap bands (not shown) are similar for $h \leq 14$. A 10pp adoption shock corresponds, for example, to raising the posting-clean adoption share $A_t^{clean}$ from $40\%$ to $50\%$ of end-user transactions.

We also experiment with counterfactual shock profiles. Replacing the one-time 10pp step with a distributed ramp (five daily 2pp increases) yields nearly identical cumulative responses because adoption growth in practice arrives via multi-day rollouts. Likewise, filtering out the top 10 congestion days (NFT mega-mints plus sequencer outages) barely moves the $h = 0$ point estimate. This underscores that the dynamic profile is not an artifact of a handful of extreme outliers. These sensitivity exercises are logged in the LP diagnostics.

Taken together, these estimates indicate that adoption shocks generate immediate fee relief that persists for roughly one month, while any longer-run reversion lies beyond the horizons that the data can estimate precisely.

These dynamics interact strongly with regime heterogeneity, which we quantify in Section 5.3.

# 5.3 How do effects differ across pre-Dencun vs blob era, and where is power?

These dynamic results also explain the regime-split findings: most of the fee relief arrives in the first few weeks, exactly where pre-Dencun data provide rich variation. Once adoption saturates post-Dencun, incremental gains would have to play out beyond 56 days. That is precisely where LP bands are widest and our MDEs explode (Table 3).

The post-Dencun period compresses adoption into a narrow 0.86-0.91 band (SD $\approx 0.02$), slashing the effective sample size despite 294 calendar days. Power diagnostics summarized in the diagnostics appendix show that the pre-Dencun window can detect semi-elasticities as small as $14\%$ for a 10pp change (effective $N = 147$). Post-Dencun inference has $N_{\mathrm{eff}} \approx 47$ and minimum detectable effects exceeding $240\%$. Local post-Dencun slopes estimated strictly within the observed support are unstable and accompanied by wide partial-identification bounds; a back-of-the-envelope reconstruction of these detectable-effect thresholds is sketched below.
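This reconstruction applies the conventional two-sided 5% size and 80% power multiplier (about 2.8) to the regime-specific HAC standard errors from the power analysis (0.47 and 4.37 log points in the diagnostics appendix). The multiplier is our assumption for illustration rather than a documented setting of that analysis; it nevertheless recovers the quoted thresholds to within rounding.

```python
# Approximate minimum detectable effects from regime-specific HAC standard errors.
import numpy as np

def mde_10pp(hac_se: float, multiplier: float = 2.8):
    """Multiplier ~ 1.96 + 0.84 for 5% two-sided size and 80% power (assumed)."""
    mde_beta = multiplier * hac_se                  # detectable |beta| in log points
    mde_pct = 100 * (np.exp(0.10 * mde_beta) - 1)   # mapped to a 10pp adoption change
    return mde_beta, mde_pct

print(mde_10pp(0.47))  # pre-Dencun: ~1.3 log points, ~14% per 10pp
print(mde_10pp(4.37))  # post-Dencun: ~12.2 log points, ~240% per 10pp
```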
Put differently, even though point estimates remain negative after Dencun, the confidence sets are so wide that we cannot claim confirmatory evidence without additional variation (e.g., future windows with lower L1 share).

Table 3: Regime-Split Estimates and Detectable Effects

| Metric | pre-Dencun | post-Dencun |
| --- | --- | --- |
| Coefficient $\beta$ (log pts) | $-0.706^{***}$ | $-5.906$ |
| HAC SE | 0.203 | 5.060 |
| 10pp semi-elasticity | $-6.8\%$ | $-44.6\%$ |
| Effective $N_{\mathrm{eff}}$ | 147.4 | 47.5 |
| MDE (10pp change) | 14% | 240-325% |

Notes: Coefficients arise from regime-split ITS regressions with the confirmatory adjustment set. Effective sample sizes and MDEs correspond to the power analysis summarized in the diagnostics appendix. Post-Dencun estimates are therefore labeled exploratory in the main text.

We supplement the table with support-aware diagnostics summarized in Appendix B. Within the London+Merge window, semi-elasticities around $-7\%$ per 10pp change are precisely estimated. Post-Dencun slopes are under-powered (MDEs above $240\%$ for a 10pp change). We therefore label blob-era estimates as exploratory and refer readers to the partial-identification and local-support grids in the diagnostics appendix for full details.

In other words, even a $45\%$ semi-elasticity in the blob era would be statistically indistinguishable from zero in our design; we can only say that pre-Dencun slopes of roughly $-7\%$ per 10pp are precisely identified, while post-Dencun slopes are essentially unidentifiable given the compressed adoption range.

These regime-split results imply that pre-Dencun slopes are precisely estimated and economically modest (about a $7\%$ semi-elasticity). Post-Dencun contrasts are underpowered—minimum detectable effects exceed 240-325% for a 10pp change—so they should not be over-interpreted until treatment support widens.

# 5.4 How robust are these results and what happens to other congestion metrics?

The tornado, placebo, and outcome-swap diagnostics collapse into three takeaways:

- **Other congestion metrics.** The scarcity outcome yields $\beta_{S} = -0.062$ (SE 0.019), mapping to roughly a $-0.6\%$ change in congestion for a 10pp adoption increase. Utilization $u_{t}$ moves in the same direction, about $-0.15$ percentage points for a 10pp change in the pre-Dencun window, with $q_{\log S_t} < 0.01$ and exploratory $q_{u_t} = 0.31$.
- **Error processes.** Prais-Winsten/HAC/ARMA sweeps (with ARMA(1,2) as the diagnostic alternative) shift the base-fee coefficient by under 0.15 log points across 15 specifications, matching the stability shown in the robustness grid.
- **Placebos.** Shuffled-treatment and ridgeline-support indicators center on zero with $95\%$ confidence bands roughly $[-0.2, 0.2]$, indicating that the estimated relief is not an artifact of support or calendar alignment.

Appendix B and the public replication repository contain the full Benjamini-Hochberg tables, stationarity and error-process diagnostics, and robustness grids that underpin these claims.

# 5.5 What do exploratory diagnostics and welfare translation suggest?

Event-study and RDiT diagnostics are used solely as checks. Pre-trend F-tests reject parallel trends ($F = 104$, $p < 0.001$). Post-event coefficients briefly spike (about $+6\%$) before decaying.
RDiT level shifts at Merge and Dencun of roughly $-0.78$ and $-0.62$ log points shrink when the boundaries are moved to placebo cutoffs. These patterns align with the confirmatory ITS/ECM story but remain exploratory.

The BSTS welfare bridge (Figure 4) translates the 10pp semi-elasticity into Merge-era fee savings in the \$75-95M range. Appendix F and Table 12 detail the price/adoption sensitivities that underpin this range. We keep this welfare translation exploratory, offering policy context without extending the confirmatory claims.

# 6 Discussion

Key takeaways (confirmatory window: London $\rightarrow$ Merge). (i) An increase of 10 percentage points (pp) in posting-clean L2 adoption is associated with $\approx 13\%$ lower median L1 base fees (about 5 Gwei for a 21k transfer at the window mean). (ii) The response is front-loaded: most adjustment occurs within roughly 2-3 weeks. Beyond about one month uncertainty dominates. (iii) Post-Dencun inference is descriptive because support collapses and regime mechanics change. We do not make causal claims for the blob era.

Figure 4: BSTS Counterfactual: Observed vs. Low-L2 Scenario (Exploratory)
Note: Posterior median and $95\%$ credible interval for $\log C^{fee}$ when fixing $A_{t}^{clean}$ at the window's 10th percentile ($73.0\%$) during 2023-10-28 to 2024-03-12, illustrating the fee-volume gap implied by the 10pp semi-elasticity estimates in Table 2. Post-Dencun days are excluded because extrapolated counterfactual paths become implausible. Detailed sensitivity tables are reported in the supplementary appendix.

# 6.1 Policy Interpretation

We organize the implications into three questions: what the estimate means (and does not), how to use it as a planning curve, and why the mapping weakens in the blob era.

# Policy mapping.

- Effect size: $10\mathrm{pp} \rightarrow \approx 13\%$ lower median L1 base fee.
- Timing: half-life $\approx 11$ days; usable horizon $\approx 1$ month.
- Scope: London $\rightarrow$ Merge confirmatory window.
- Post-Dencun status: descriptive/underpowered until new exogenous variation appears.

# 6.1.1 What the estimate means (and does not)

In the London $\rightarrow$ Merge confirmatory window, a 10pp increase in posting-clean L2 adoption lowers median L1 base fees by about $13\%$ (roughly 5.2 Gwei or \$0.14 for a 21k-gas transfer at the window mean). The adjustment closes half the gap to equilibrium in approximately 11 days. Posting-clean adoption counts end-user execution routed to rollups while netting out sequencer posting traffic (Wang et al., 2025). The estimand therefore captures users leaving L1 execution rather than shifting posting burden. The statement covers median EIP-1559 base fees in that regime. It does not, by itself, pin down tips, total user cost, or blob-era dynamics.

Mechanistically, pre-Dencun fee relief comes from fewer users competing for L1 execution gas. When end-user transactions migrate to rollups and sequencer posting is netted out, EIP-1559 demand falls and the base fee declines. After EIP-4844, L2 data availability migrates to blobs that are priced separately from execution gas (Buterin et al., 2024).
Additional L2 growth can lower calldata pressure yet leave execution-layer congestion—and therefore the base fee—largely unchanged.

To avoid over-reading the estimate, it is not a claim about:

- total user cost (base fee $\neq$ base+tip $\neq$ L2 fees);
- welfare net of subsidies (the welfare bridge remains exploratory);
- blob-era causal effects (support and the mechanism change);
- distributional incidence (median base fee $\neq$ tail events);
- long-run equilibrium beyond roughly one month given widening uncertainty bands.

# 6.1.2 How to use it as a planning curve

Sequencer teams and ecosystem treasuries can treat the ECM semi-elasticity as a planning curve. Let $\hat{\psi} = -1.382$ denote the ECM short-run semi-elasticity from Table 2, which maps a 10pp adoption increase into roughly a $13\%$ fee reduction. If an intervention raises adoption by $\Delta A$ pp for $T$ days, the expected change in the median base fee is $100 \times [\exp(0.10\,\hat{\psi} \cdot (\Delta A / 10)) - 1]$ percent over that horizon, with roughly half the adjustment arriving in 11 days and most within one month (Figure 3).

A break-even rule replaces assertion with calculation: subsidy spend $\leq$ (predicted per-transaction base-fee savings $\times$ affected L1 transaction count). At the window mean, the per-transaction base-fee reduction is about \$0.14 for a 10pp move and scales roughly with $\Delta A / 10$ for larger shifts. Pushing L2 share from $60\%$ to $80\%$ (a 20pp move) would therefore be expected to trim median fees by about $24\%$ using the exponential mapping above. Campaigns launched when adoption already sits above $85\%$ may still be operationally valuable, but the variance of the effect and the confidence bands widen, making causal evaluation harder. This reframes congestion management as a portfolio decision over L2 market share rather than a binary "turn on/off" switch.

# 6.1.3 Regime caveat: Dencun changes the mapping

EIP-4844 routes L2 data availability to blobs and prices it separately from execution gas. Additional L2 adoption can ease calldata pressure. It may not meaningfully reduce L1 execution congestion because the EIP-1559 base fee remains tied to execution demand (Liu et al., 2022). Post-Dencun days also cluster in a narrow 0.86–0.91 adoption band. The effective sample size collapses. Table 3 and the diagnostics appendix therefore label blob-era slopes as underpowered. The post-Dencun estimates in this paper are descriptive signals for monitoring, not confirmatory causal updates. They remain descriptive until quasi-experimental variation appears (e.g., blob-parameter changes or exogenous sequencer outages).

# 6.2 Limitations and Boundary Conditions

Threats to validity fall into five buckets:

- Internal validity (simultaneity / weak instrument). Timing diagnostics summarized in the instrumentation appendix show that lagged adoption has the expected sign but low precision. The control-function first stage ($F = 7.58$) falls short of conventional strength, so we emphasize local identification around the pre-Dencun adoption support rather than claiming full exogeneity.
- Dynamics and horizon. The Koyck parameter ($\rho \approx 0.89$) and the widening LP bands documented in the diagnostics appendix indicate that any rebound beyond 56 days is statistically indistinguishable from zero. Welfare projections longer than about a month remain exploratory.
- Regime validity (post-Dencun).
Regime-split estimates in Table 3 combined with the MDE calculations show that even a $45\\%$ semi-elasticity would be indistinguishable from noise in the blob era. Because blobs price data separately from execution gas, the structural channel linking adoption to the base fee also weakens. We therefore restrict confirmatory claims to the pre-Dencun window. \n- Measurement validity. Posting-clean adoption is constructed by netting sequencer posting from end-user execution. Misclassification, coverage gaps for newer rollups, or relabeling by data providers could introduce level shifts that affect both the instrument and outcome series until detected. \n- External validity. The semi-elasticity may differ across application mixes (DeFi vs NFT vs stablecoin flows) and could be muted if lower fees induce rebound demand. Extrapolating to other EIP-1559 chains requires similar L2 penetration, fee-market mechanics, and monitoring of distributional incidence.\n\nIn practice, these threats encourage a division of labor between engineering experimentation and econometric evaluation. Short-run fee relief and within-regime comparisons can be evaluated with the present ECM and ITS toolkit, provided posting-clean labels are periodically audited for measurement drift. New instruments should avoid introducing additional simultaneity. Longer-run welfare or cross-regime counterfactuals will likely require new sources of quasi-experimental variation. Promising candidates include exogenous outages, parametric changes to blob markets, or natural experiments in sequencer fee rebates. External validity concerns also motivate segmenting outcomes by application mix before extrapolating to other chains. A replication log records these boundary conditions. Future updates—whether from Ethereum or other EIP-1559 chains—can extend the window for causal inference without revising the core identification strategy.\n\nTaken together, residual simultaneity, short-horizon precision limits, regime shifts, and measurement/external scope boundaries delimit where our core causal claims apply. They highlight the need for fresh instruments, monitoring of classification, and longer panels.\n\n# 6.3 Open Questions and Monitoring Playbook\n\nReplication artifacts are in Appendix A; the replication repository carries the full audit log and change history.\n\nThe remaining agenda for L2-L1 congestion research is best framed as concrete, monitorable questions rather than meta-guidance:\n\n1. Post-Dencun identification. What quasi-experimental shocks create exogenous adoption variation now that blobs absorb most L2 data? Candidates include blob fee parameter changes (e.g., target gas adjustments in Buterin et al., 2024), sequencer outages, and forced migrations during prover or bridge upgrades. A running changelog of these events—timestamped and paired with posting-clean adoption—keeps the ECM/ITS designs re-estimable the moment variation appears. \n2. Mechanism split (blobs vs execution gas). Does higher L2 adoption still relieve execution congestion, or only calldata/DA pressure? Monitoring should separate blob pricing from execution-layer base fees. It should also track how sequencer pricing rules respond, leveraging the posting-pricing interaction modeled by Wang et al. (2025). \n3. Heterogeneity and incidence. Which user segments capture the fee relief—DeFi vs NFT vs stablecoin flows? How does it differ for latency-sensitive traders versus routine transfers? Segmenting L2 inflows, bridge mix, and cross-rollup price gaps (cf. 
Gogol et al., 2024) would reveal whether congestion relief accrues to whales, retail users, or MEV searchers.
4. Early-warning monitoring. At what thresholds does the confirmatory design lose power (e.g., adoption sustained above 80-90%) and require fresh instruments? A lightweight playbook is three steps. (i) Maintain daily dashboards for posting-clean adoption, blob utilization, and sequencer incidents. (ii) Rerun the ECM each time a shock hits or the adoption distribution shifts. (iii) Archive the resulting IRFs and diagnostics alongside the replication bundle so the evidence base compounds across upgrades.

These questions turn Section 5 into a live monitoring blueprint. Instead of restating transparency logistics, they specify what new variation to watch for, how to split mechanisms, and which distributional outcomes determine who benefits from the congestion relief.

# 7 Conclusion

Short answer: yes—higher L2 adoption decongests Ethereum's fee market in the short run, but the relief is partial and local in time. A 10 percentage point increase in posting-clean adoption lowers L1 base fees by roughly $13\%$ (about 5 Gwei or \$0.14 for a 21k-gas transfer at the pre-Dencun mean), and deviations from the long-run relation decay with an 11-day half-life. Together with the dynamic profile in Figure 3 and the ECM benchmark in Table 2, these numbers provide regime-aware causal evidence that the rollup-centric roadmap already buys near-term congestion relief.

Conceptually, the paper introduces a posting-clean adoption measure that captures user migration rather than posting load, a demand factor that avoids mediator contamination, and a regime-aware ITS-ECM template for monitoring rollup-centric scaling. Substantively, it delivers the first cross-regime causal estimate of how aggregate L2 adoption decongests Ethereum's mainnet and translates the semi-elasticity into Gwei and dollar savings that are directly interpretable for protocol designers and users.

These claims are bounded. Inference is local to the pre-Dencun regime where adoption still moves, and precision fades beyond roughly a month of horizons. Instrument strength is modest, so simultaneity concerns are handled with cautious timing diagnostics rather than strong exclusion. As summarized in Section 6.2, these boundaries keep confirmatory claims narrow while flagging where additional variation is needed.

For protocol designers and governance bodies, the practical implication is that fee-market reforms and L2 ecosystem support should be evaluated jointly. Moving L2 user share from $60\%$ to $80\%$ would lower median base fees by roughly a quarter at pre-Dencun demand levels, putting adoption subsidies on the same order as the fee changes analyzed around the London upgrade (Liu et al., 2022). In the blob era, incentives that shift activity onto rollups or smooth posting schedules operate alongside the blob-fee parameters in Buterin et al. (2024), making adoption-driven interventions a complementary lever rather than a substitute for base-fee tuning. Future work should extend the confirmatory window as post-Dencun variance widens, seek quasi-experimental shocks in blob pricing or sequencer operations, and map distributional incidence using address-tagged data so that welfare gains from rollup-driven congestion relief can be allocated across user types.

# Acknowledgments

This research benefited from support provided by the Ethereum Foundation academic grant.

# References
Bernal, J. L., Cummins, S., and Gasparrini, A. (2017). Interrupted Time Series Regression for the Evaluation of Public Health Interventions: A Tutorial. International Journal of Epidemiology, 46(1):348-355.
Brodersen, K. H., Gallusser, F., Koehler, J., Remy, N., and Scott, S. L. (2015). Inferring Causal Impact Using Bayesian Structural Time-Series Models. Annals of Applied Statistics, 9(1):247-274.
Buterin, V., Conner, E., Dudley, R., Slipper, M., Norden, I., and Bakhta, A. (2021). EIP-1559: Fee Market Change for ETH 1.0 Chain. https://eips.ethereum.org/EIPS/eip-1559. Accessed 2024. London hardfork implementing the base fee mechanism.
Buterin, V., Dietrichs, A., et al. (2024). EIP-4844: Shard Blob Transactions. https://eips.ethereum.org/EIPS/eip-4844. Accessed 2024.
de Chaisemartin, C. and D'Haultfœuille, X. (2020). Two-Way Fixed Effects Estimators with Heterogeneous Treatment Effects. American Economic Review, 110(9):2964-2996.
Gogol, K., Messias, J., Miori, D., Tessone, C. J., and Livshits, B. (2024). Layer-2 Arbitrage: An Empirical Analysis of Swap Dynamics and Price Disparities on Rollups. arXiv preprint. Empirical study of arbitrage and pricing on Ethereum rollups.
Hausman, C. and Rapson, D. S. (2018). Regression Discontinuity in Time: Considerations for Empirical Applications. Annual Review of Resource Economics, 10:533-552.
Liu, Y., Lu, Y., Nayak, K., Zhang, F., Zhang, L., and Zhao, Y. (2022). Empirical Analysis of EIP-1559: Transaction Fees, Waiting Times, and Consensus Security. In Proceedings of the ACM Conference on Computer and Communications Security, pages 2099-2113. Empirical evaluation of the EIP-1559 fee mechanism.
Penfold, R. B. and Zhang, F. (2013). Use of Interrupted Time Series Analysis in Evaluating Health Care Quality Improvements. Academic Pediatrics, 13(6 Suppl):S38-S44.
Wang, S., Crapis, D., and Moallemi, C. C. (2025). A Framework for Combined Transaction Posting and Pricing for Layer-2 Blockchains. arXiv preprint. Dynamic model of L2 posting and pricing under L1 gas volatility.

# A Data and Code Availability

Appendix road map. To audit or reuse the study, read top-down: (i) Appendix B for unit-root/cointegration tests, residual dependence, and support/MDE diagnostics; (ii) Appendix C for estimator variants, with exploratory extensions in Appendices D-F; (iii) Appendix G for the measurement dictionary, treatment/outcome construction, and the targeted-shock catalog.

All data and code needed to reproduce the empirical results in this paper are available in the public replication repository at github.com/AysajanE/12-11-causal-analysis-repro, mirrored on Zenodo (concept DOI 10.5281/zenodo.17665906; latest version for this arXiv release: 10.5281/zenodo.17832785, tag v1.1.1-arxiv). The archive contains a frozen version of the analysis-ready panel and the exact LaTeX sources used for this manuscript.

The repository README documents the end-to-end workflow—data ingestion and cleaning, estimator scripts, and figure-building routines—together with environment files and reproducibility checklists.
Consistent with replication practices in recent empirical studies of Ethereum's fee market and rollups (e.g., Liu et al., 2022; Gogol et al., 2024), these artifacts are released to support independent verification, robustness extensions, and reuse of the design in related policy and research applications.\n\n# B Statistical Diagnostics and Design Checks\n\n# B.1 Diagnostics and Design Checks\n\nThis appendix reports the diagnostics that justify the ECM/ITS design: integration order, cointegration, residual dependence, treatment support, power, and multiple-outcome control. Tables and plots are reproduced here so readers can audit identification and precision directly in the PDF; code logs remain in the replication bundle for reruns.\n\nNotation used across appendix tables: $A_{t}^{clean}$ denotes the posting-clean adoption share; EG $p$ is the Engle-Granger residual-unit-root test $p$ -value; LB $p@10$ is the Ljung-Box $p$ -value at lag 10; \"10pp\" indicates a 10 percentage point change in $A_{t}^{clean}$ .\n\n# Stationarity, Cointegration, and Error Processes\n\nTable 4 reproduces the unit-root evidence for the pre-Dencun confirmatory window. ADF tests on levels fail to reject a unit root for $A_{t}^{clean}$ , $\\log C_{t}^{fee}$ , $u_{t}$ , $S_{t}$ , and $D_{t}^{*}$ , and KPSS points to non-stationarity; Phillips-Perron tests are more mixed, rejecting a unit root for several level series. All first differences are stationary across ADF, KPSS, and PP, so we continue to treat these variables as $I(1)$ in the confirmatory design. A Phillips-Perron\n\nEngle-Granger residual test on the long-run relation $\\log C_t^{fee} \\sim A_t^{clean} + D_t^* + \\mathbf{R}_t + \\mathbf{Cal}_t$ rejects non-stationarity ( $p = 1.6 \\times 10^{-5}$ ), supporting the ECM formulation used in the confirmatory analysis. 
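For readers who want to rerun this battery, a minimal sketch using statsmodels is shown below. The column names are assumed, the deterministic terms follow the Table 4 notes (trend in levels, intercept in first differences), and the regime and calendar dummies are omitted from the cointegration step for brevity.

```python
# Sketch of the unit-root / cointegration checks behind Table 4 (assumed column names).
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss, coint


def integration_report(series: pd.Series) -> dict:
    """ADF/KPSS on levels (with trend) and ADF on first differences (intercept only)."""
    level = series.dropna()
    diff = series.diff().dropna()
    return {
        "adf_level_p": adfuller(level, regression="ct")[1],
        "kpss_level_p": kpss(level, regression="ct", nlags="auto")[1],
        "adf_diff_p": adfuller(diff, regression="c")[1],
    }


def engle_granger_p(df: pd.DataFrame) -> float:
    """Residual-based cointegration test of log_fee on the long-run regressors."""
    return coint(df["log_fee"], df[["a_clean", "d_star"]], trend="c")[1]
```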
Figure 5 shows the residual ACF/PACF for both the levels and ECM equations; the Prais-Winsten AR(1) FGLS fit is the confirmatory specification, while the ARMA(1,2) alternative materially shrinks short-lag autocorrelation in the diagnostic grid even though Ljung-Box tests still reject at large $N$.

Table 4: Unit-Root and Cointegration Diagnostics (Pre-Dencun Window: London+Merge)

| Series | Transform | ADF stat (p) | KPSS stat (p) | PP stat (p) | I(d) |
| --- | --- | --- | --- | --- | --- |
| $A_t^{clean}$ | level | -1.43 (0.85) | 0.52 (0.01) | -4.92 (0.0003) | I(1) |
| $\log C_t^{fee}$ | level | -1.95 (0.63) | 0.63 (0.01) | -4.92 (0.0003) | I(1) |
| $u_t$ | level | -1.12 (0.93) | 0.51 (0.01) | -39.57 (< 0.001) | I(1) |
| $S_t$ | level | -1.95 (0.63) | 0.63 (0.01) | -4.92 (0.0003) | I(1) |
| $D_t^*$ | level | -2.98 (0.14) | 0.42 (0.01) | -17.79 (< 0.001) | I(1) |
| $\Delta A_t^{clean}$ | first diff | -10.26 (< 0.001) | 0.09 (0.10) | -50.30 (< 0.001) | I(0) |
| $\Delta \log C_t^{fee}$ | first diff | -7.47 (< 0.001) | 0.18 (0.10) | -35.56 (< 0.001) | I(0) |
| $\Delta u_t$ | first diff | -7.09 (< 0.001) | 0.34 (0.10) | -547.78 (< 0.001) | I(0) |
| $\Delta S_t$ | first diff | -7.47 (< 0.001) | 0.18 (0.10) | -35.56 (< 0.001) | I(0) |
| $\Delta D_t^*$ | first diff | -10.07 (< 0.001) | 0.29 (0.10) | -68.47 (< 0.001) | I(0) |

Note: Levels tests include a deterministic trend; first-difference tests include an intercept. KPSS uses the trend-stationary null with automatic lags. An Engle-Granger residual Phillips-Perron test on $\log C_t^{fee} \sim A_t^{clean} + D_t^* + \mathbf{R}_t + \mathbf{Cal}_t$ rejects a unit root ($p = 1.6 \times 10^{-5}$), validating the error-correction setup. All statistics computed on the 2021-08-05 to 2024-03-12 pre-Dencun window ($N = 951$).

Table 5: Residual Dependence Diagnostics (Levels; ARMA Grid as Diagnostic)

| Specification | $\hat{\beta}$ | SE | DW | LB p@10 | max $\lvert\rho_{1\text{-}10}\rvert$ | AIC | N |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OLS-HAC (levels) | 0.1057 | 0.5524 | 0.148 | $< 10^{-6}$ | 0.926 | - | 1244 |
| ARMA(1,2) errors | -1.1610 | 0.2599 | 1.980 | $2.3 \times 10^{-7}$ | 0.148 | -51.03 | 1244 |

Note: The confirmatory levels estimate reported in the main text uses Prais-Winsten AR(1) FGLS; ARMA(1,2) appears here solely as the best-AIC diagnostic alternative. DW moves close to 2 under ARMA(1,2) errors. Ljung-Box still rejects at lag 10 given large $N$, but the maximum residual ACF over lags 1-10 drops from 0.93 to 0.15, providing a robustness check while keeping the confirmatory specification unchanged. HAC (Bartlett, 10 lags) standard errors reported for the OLS line.

# Positivity, Support, and Effective Sample Size

Positivity within regimes is a binding constraint on post-Dencun inference.
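A quick way to audit this constraint is to tabulate the adoption distribution by regime before fitting anything. The sketch below assumes a daily panel with a `post_dencun` indicator and mirrors the support columns of Table 7; it is illustrative rather than replication-bundle code.

```python
# Summarize treatment support by regime (assumed column names).
import pandas as pd

def adoption_support(df: pd.DataFrame) -> pd.DataFrame:
    grouped = df.groupby("post_dencun")["a_clean"]
    summary = grouped.agg(["count", "std", "min", "max"])
    summary["range"] = summary["max"] - summary["min"]
    return summary  # compare against the N, sd, and support columns of Table 7
```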
Table 6 summarizes a spline specification that allows the semi-elasticity to vary across low- and high-adoption regions, while Table 7 reports the implied minimum detectable effects (MDEs).

Figure 5: Residual ACF/PACF for Levels and ECM Equations
Note: Panels plot ACF and PACF up to lag 24 for (i) the levels specification with OLS residuals and (ii) the ECM error-correction residuals. The ARMA(1,2) choice reduces the maximum absolute ACF from 0.93 to 0.15 over lags 1-10, even though Ljung-Box tests still reject at large $N$. See Appendix A for replication scripts.

Table 6: Piecewise Semi-Elasticities for Log Base Fee (Knot at 0.80)

| Regime support | $\beta$ | SE (HAC) | 95% CI | Semi-elasticity (10pp) | Semi-elasticity CI |
| --- | --- | --- | --- | --- | --- |
| $A_t^{clean} \leq 0.80$ | 0.1401 | 0.4912 | [-0.823, 1.103] | 1.41% | [-7.90%, 11.66%] |
| $A_t^{clean} > 0.80$ | -1.0338 | 4.2000 | [-9.266, 7.198] | -9.82% | [-60.41%, 105.41%] |

Artifacts: see Appendix A for data and code paths.

Table 7: Minimum Detectable Effect (MDE) by Regime with Effective Sample Size

| Regime | N | $N_{\mathrm{eff}}$ | sd($A_t^{clean}$) | Max adoption range | HAC SE | MDE (beta units) | MDE (10pp %) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| post-Dencun | 294 | 47.48 | 0.0210 | [0.760, 0.951] | 4.37 | 12.23 | 239.90 |
| pre-Dencun | 950 | 147.43 | 0.3016 | [0.000, 0.923] | 0.47 | 1.31 | 14.01 |

Artifacts: see Appendix A for data and code paths.

# C Estimator Details and Extensions

# C.1 Estimator Details and Variants

This appendix lists the specifications, timing variants, and robustness checks that sit behind Section 4. Full derivations and code live in the replication bundle; the tables summarize the information required to interpret the reported estimates.

# ITS and ECM Workflow

The main text reports a merged set of ITS and ECM coefficients in Table 2. Here we highlight how alternative demand-factor constructions affect the short-run semi-elasticity. Table 8 reports a small grid of ECM runs using "lite" and "full" demand-factor definitions and same-day vs. lagged timing.

Table 8: Demand Factor Variants and Timing Diagnostics

| Demand factor | $\psi$ (10pp) | SE | p-value | EG p | Adj. R² | N |
| --- | --- | --- | --- | --- | --- | --- |
| D*-lite (same-day) | -1.067 | (0.362) | 0.003 | 0.004 | 0.336 | 1241 |
| D*-full (same-day) | -1.379 | (0.368) | 0.000 | 0.005 | 0.322 | 1241 |
| D*-lite (t-1) | -0.857 | (0.418) | 0.040 | 0.005 | 0.162 | 1240 |

Note: $\psi$ is the ECM short-run semi-elasticity for a 10pp change in adoption. All specifications include the confirmatory adjustment set and use HAC (Bartlett) standard errors. Sample sizes are one day smaller than the main ECM in Table 2 ($N = 1,242$) because rebuilding $D_{t}^{*}$ with the "lite"/"full" inputs shortens the overlapping input window by a single day; the $t-1$ variant drops one additional day due to the lag on $D_{t-1}^{*}$. Engle-Granger $p$-values test residual unit roots and confirm cointegration across variants, supporting the robustness claims in Section 5.1.

# Targeted Dummies and Event Adjustments

Targeted-event controls absorb days where congestion and adoption are jointly affected by large structural shocks. The curated catalog, rationale, and window flags are reported in Appendix G.7 (Table 14); this subsection retains only the specification logic used in the ITS/ECM regressions. We include the pooled outage indicator and the full $\mathbf{Shock}_t$ vector in both the long-run and short-run equations so that sequencer/mainnet outages and mega-claim days do not masquerade as adoption shocks.

# Robustness Catalog

The tornado plot, placebo treatments, and alternative outcome runs are part of the robustness replication assets referenced in Appendix A. Each CSV contains metadata (seed, bandwidth, estimator) so that the checks can be re-run without consulting this appendix. The main text cites these diagnostics as exploratory support; the confirmatory interpretation continues to lean on the ECM and ITS specifications documented above.

# D Results Extensions

# D.1 Exploratory Diagnostics and Policy Context

This appendix adds event-study views, regression-discontinuity-in-time (RDiT) snapshots, and a robustness "tornado" summary that sit alongside the main results in Section 5. The goal is to show how the ITS/ECM estimates behave around sharp protocol events and under alternative design choices for audiences focused on governance and fee-market policy.

# Event-Study Diagnostics and RDiT Snapshots

Event-study plots align L2 adoption shocks and congestion outcomes around key protocol and L2 events (e.g., London, Merge, Dencun, major rollup launches). They mainly serve as visual diagnostics: pre-trend checks, anticipation effects, and short-run overshooting. Because pre-trend F-tests reject parallel trends for several events, we treat the event-study coefficients as exploratory and focus on whether the post-event patterns qualitatively match the ITS/ECM estimates (fee relief following L2 adoption surges).

RDiT snapshots at the Merge and Dencun boundaries complement the event studies by estimating local level shifts in log fees. These designs naturally highlight mechanical changes in the base-fee process and blob pricing, which are distinct from the smooth treatment variation exploited by the main ITS/ECM specification.
As a result, we keep RDiT estimates in the exploratory category and use them to bound the magnitude of congestion relief that hard-fork-style interventions can deliver relative to the continuous L2 adoption channel.\n\n# Robustness \"Tornado\" Summary\n\nThe robustness tornado aggregates a grid of alternative specifications—different HAC lag choices, alternative demand-factor constructions, and variations in calendar and regime controls—and visualizes how the semi-elasticity estimates move across this design space. The central message is that the sign and broad magnitude of the short-run semi-elasticity are stable across reasonable alternatives, with only extreme specifications (e.g., dropping demand controls entirely) pushing estimates toward zero. Full tornado CSVs and plots are part of the replication assets referenced in Appendix A.\n\n# E Instrumentation and Timing Diagnostics\n\n# E.1 Instrumentation and Timing Diagnostics\n\nThis appendix records the core instrumental-variable diagnostics that support the weak-instrument caveats in Sections 6.2 and 7.\n\n# Shift-Share IV Design\n\nThe primary shift-share instrument aggregates sequencer outages, fee-rebate programs, and exchange listings into a proxy for exogenous variation in L2 adoption. The design object is\n\n$$\nZ _ {t} = \\sum_ {l \\in \\mathcal {L} \\in} w _ {l} ^ {\\mathrm {p r e}} \\cdot \\mathrm {s h o c k} _ {l, t},\n$$\n\nwhere $w_{l}^{\\mathrm{pre}}$ is the pre-Dencun average share of end-user transactions on chain $l$ (Arbitrum 0.63, Optimism 0.27, Base 0.10) and $\\text{shock}_{l,t}$ is an outage/listing/rebate indicator or outage-hours intensity. Construction steps are scripted in the replication bundle referenced in Appendix A (IV analysis scripts and configuration files). Table 9 documents first-stage strength for the pooled-outage and shift-share variants; Table 10 retains the timing and over-identification diagnostics used in the discussion.\n\nTable 9: Instrument Variants and First-Stage Strength (Adoption on $Z_{t}$ ) \n\n<table><tr><td>Instrument variant</td><td>Coef on Zt</td><td>HAC SE</td><td>First-stage F</td><td>Partial R2</td><td>N</td></tr><tr><td>Pooled outage indicator (∀{any outage})</td><td>0.084</td><td>0.058</td><td>2.10</td><td>0.0017</td><td>1244</td></tr><tr><td>Shift-share outage (indicator)</td><td>0.146</td><td>0.128</td><td>1.30</td><td>0.0010</td><td>1244</td></tr><tr><td>Shift-share outage (hours)</td><td>0.024</td><td>0.047</td><td>0.25</td><td>0.0002</td><td>1244</td></tr><tr><td>Fee-rebate/listing shocks</td><td>0.000</td><td>0.000</td><td>0.00</td><td>0.0000</td><td>1244</td></tr></table>\n\nNote: HAC (Bartlett, 7 lags) standard errors. Weights $w_{l}^{\\mathrm{pre}}$ are computed from pre-Dencun chain shares; no fee-rebate or exchange-listing shocks occur in the confirmatory window, so that row records zeros explicitly. 
Coefficients are in adoption-share units; $F$ and partial $R^2$ use the residualized first stage with regime and calendar controls.\n\nTable 10: Timing and IV Checks for the Adoption Instrument \n\n<table><tr><td>Specification</td><td>β</td><td>SE</td><td>p-value</td><td>Semi-elasticity (10pp)</td><td>N</td><td>First-stage F</td><td>Partial R²</td><td>Instruments</td><td>J-stat</td><td>J-p</td><td>J-df</td></tr><tr><td>OLS-HAC (At clean)</td><td>0.1384</td><td>0.5713</td><td>0.8087</td><td>1.39%</td><td>1244</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>OLS-HAC (At clean)</td><td>0.3133</td><td>0.5850</td><td>0.5924</td><td>3.18%</td><td>1243</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>IV 2SLS</td><td>-0.6942</td><td>3.9681</td><td>0.8612</td><td>-6.71%</td><td>1244</td><td>7.58</td><td>0.0061</td><td>any_outage_t (pooled)</td><td>-</td><td>-</td><td>0</td></tr><tr><td>Control-function</td><td>-0.6942</td><td>1.9614</td><td>0.7235</td><td>-6.71%</td><td>1244</td><td>7.58</td><td>0.0061</td><td>any_outage_t (pooled)</td><td>-</td><td>-</td><td>-</td></tr></table>\n\nNote: The first-stage $F$ -statistic (7.58) and partial $R^2$ indicate weak instrument strength by conventional standards, motivating the cautious language around simultaneity in Sections 6.2 and 7. J-statistics are not reported for single-instrument specifications. Additional AR tests and reduced-form grids are documented in the IV replication assets referenced in Appendix A.\n\nTable 11 complements these diagnostics by reporting second-stage estimates for the shift-share outage variants that correspond to the first-stage metrics in Table 9.\n\nTable 11: Shift-Share IV for ${A}_{t}^{clean}$ Using Pre-Dencun Weights and Outages \n\n<table><tr><td>Specification</td><td>β̂</td><td>(SE)</td><td>p-value</td><td>N</td><td>Partial R²</td><td>First-stage F</td></tr><tr><td>2SLS (SS any)</td><td>-2.476</td><td>(7.506)</td><td>0.742</td><td>1245</td><td>0.0022</td><td>2.76</td></tr><tr><td>2SLS (SS hours)</td><td>-6.029</td><td>(18.198)</td><td>0.740</td><td>1245</td><td>0.0005</td><td>0.56</td></tr></table>\n\nNote: $Z_{t}^{SS} = \\sum_{l} w_{l}^{\\mathrm{pre}} \\cdot \\nVdash \\{\\text{outage}_{l,t}\\}$ uses pre-Dencun end-user shares (Arbitrum 0.63, Optimism 0.27, Base 0.10). An intensity variant replaces the indicator with outage hours. Outcome is log $C^{fee}$ ; controls: $D^{*}$ , regime dummies, calendar, and linear trends with regime interactions. HAC standard errors (Bartlett, 7 lags).\n\n# Timing Tests and Diagnostics Archive\n\nLead/lag timing tests confirm that instrument shocks do not predict pre-treatment outcomes at economically meaningful magnitudes, supporting the exclusion restriction in the narrow window used. AR tests, Anderson-Rubin intervals, and reduced-form grids are documented in the replication materials; this appendix highlights the summary diagnostics most relevant for policy interpretation.\n\n# F BSTS Welfare Bridge and Policy Context\n\n# F.1 BSTS Welfare Bridge\n\nThis appendix summarizes the Bayesian Structural Time Series (BSTS) analysis underlying Figure 4. The text records the design choices and the welfare-sensitivity table that informs the policy discussion; full code and data are included in the replication materials.\n\n# Design Summary\n\n- Window: Merge-era (2023-10-28 to 2024-03-12) with blob-era days excluded, so that treatment variation aligns with the pre-Dencun confirmatory window. 
- Inputs: Log base fee, posting-clean adoption, ETH price, and the PCA demand factor $D_t^*$; priors and sampler settings follow the published BSTS specifications and are documented with the replication materials.
- Outputs: Welfare quantiles, price-sensitivity tables, and posterior predictive checks summarized below; full numerical outputs are available in the replication archive.

# Welfare Mapping

BSTS produces a counterfactual fee path $\mathrm{BF}_t^{cf}$ under low L2 adoption. Per-day dollar savings are computed as

$$
\mathrm{USD}_t = \left(\mathrm{BF}_t^{\mathrm{obs}} - \mathrm{BF}_t^{\mathrm{cf}} + \mathbf{1}_{\mathrm{tip}} \cdot \mathrm{TIP}_t^{\mathrm{obs}}\right) \times \mathrm{GAS}_t \times 10^{-9} \times P_t, \tag{5}
$$

where $\mathrm{BF}_t^{\mathrm{obs}}$ is the observed base fee, $\mathbf{1}_{\mathrm{tip}} = 1$ when the Base+Tip welfare column is used (and 0 otherwise), $\mathrm{TIP}_t^{\mathrm{obs}}$ is the median priority tip, $\mathrm{GAS}_t$ is total gas used, and $P_t$ is either the daily mean or close ETH/USD price. Aggregate welfare is $\sum_{t} \mathrm{USD}_t$ over the Merge-era window; baseline adoption percentiles (p05 vs. p25) anchor the counterfactual $A_t^{clean}$ series.

# Welfare Sensitivity

Anchoring the counterfactual on the pre-Dencun ECM semi-elasticity, the BSTS bridge maps a 10 percentage point increase in posting-clean adoption into aggregate fee savings that are robust across price baselines. A normal approximation over the daily posterior draws yields:

- Mean-price, base only: median \$79.6M; 50% CI [\$74.0M, \$85.3M]; 90% CI [\$65.8M, \$93.2M].
- Mean-price, base+tip: median \$92.2M; 50% CI [\$85.8M, \$98.8M]; 90% CI [\$76.3M, \$107.9M].
- Close-price variants: medians \$79.9M (base) and \$92.6M (base+tip) with comparable intervals (50% CIs [\$74.3M, \$85.6M] and [\$86.1M, \$99.1M]).

Most savings accrue on high-congestion days rather than in quiet periods. Table 12 reports the scenario grid that underpins the exploratory policy range; replication scripts export the full posterior draws for alternative price/adoption baselines.

Table 12: Two-by-Two Welfare Sensitivity (Baseline Percentile $\times$ Price Weighting)

| Baseline (adoption) | Mean price (Base / Base+Tip, \$M) | Close price (Base / Base+Tip, \$M) |
| --- | --- | --- |
| p05 (71.6%) | 149.8 / 173.6 | 150.4 / 174.2 |
| p25 (74.6%) | 78.1 / 90.5 | 78.4 / 90.8 |

Artifacts: see Appendix A for data and code paths.

The full counterfactual bundle is reproducible with the publicly released code and data, and all posterior predictive checks and alternative-prior panels are part of that release. The PDF retains only the tables needed to interpret the policy bridge.

# G Measurement and Operationalization

# G.1 Scope and Conventions

This appendix makes the measurement layer self-contained, mirroring the data-dictionary style in Liu et al. (2022). All variables are daily UTC aggregates; symbols match those in Sections 3 and 4.
# Welfare Sensitivity

Anchoring the counterfactual on the pre-Dencun ECM semi-elasticity, the BSTS bridge maps a 10 percentage point increase in posting-clean adoption into aggregate fee savings that are robust across price baselines. A normal approximation over the daily posterior draws yields:

- Mean-price base only: median $79.6M; 50% CI [$74.0M, $85.3M]; 90% CI [$65.8M, $93.2M].
- Mean-price base+tip: median $92.2M; 50% CI [$85.8M, $98.8M]; 90% CI [$76.3M, $107.9M].
- Close-price variants: medians of $79.9M (base) and $92.6M (base+tip), with comparable intervals (50% CIs [$74.3M, $85.6M] and [$86.1M, $99.1M]).

Most savings accrue on high-congestion days rather than in quiet periods. Table 12 reports the scenario grid that underpins the exploratory policy range; replication scripts export the full posterior draws for alternative price/adoption baselines.

Table 12: Two-by-Two Welfare Sensitivity (Baseline Percentile $\times$ Price Weighting)

<table><tr><td>Baseline (Adoption)</td><td>Mean Price (Base / Base+Tip, $M)</td><td>Close Price (Base / Base+Tip, $M)</td></tr><tr><td>p05 (71.6%)</td><td>149.8 / 173.6</td><td>150.4 / 174.2</td></tr><tr><td>p25 (74.6%)</td><td>78.1 / 90.5</td><td>78.4 / 90.8</td></tr></table>

Artifacts: see Appendix A for data and code paths.

The full counterfactual bundle is reproducible with the publicly released code and data, and all posterior predictive checks and alternative-prior panels are part of that release. The PDF retains only the tables needed to interpret the policy bridge.

# G Measurement and Operationalization

# G.1 Scope and Conventions

This appendix makes the measurement layer self-contained, mirroring the data-dictionary style in Liu et al. (2022). All variables are daily UTC aggregates; symbols match those in Sections 3 and 4. Code pointers refer to tables/files in the replication bundle (Appendix A).

# G.2 Variable Dictionary

Table 13: Variable Dictionary and Construction Summary

<table><tr><td>Symbol</td><td>Name</td><td>Unit</td><td>Construction (daily)</td><td>Source(s)</td><td>Code pointer</td></tr><tr><td>$A_t^{clean}$</td><td>Posting-clean L2 adoption share</td><td>share [0,1]</td><td>L2 end-user tx / (L2 end-user tx + L1 user tx); L2→L1 posting tx identified via inbox registry and removed from both numerator and denominator</td><td>Rollup traces; Ethereum execution traces</td><td>mart_treatment_daily.A_t_clean</td></tr><tr><td>$\log C_t^{fee}$</td><td>Log median base fee</td><td>log(Gwei)</td><td>$\log(\mathrm{median}_{b \in t}\ \text{base fee}_b)$ from the EIP-1559 base-fee field; post-London only</td><td>Ethereum block traces; public fee dashboards</td><td>mart/master_daily.log_basefee</td></tr><tr><td>$u_t$</td><td>Block utilization</td><td>ratio [0,1.5]</td><td>$\mathrm{median}_{b \in t}(\text{gas used}_b / \text{gas limit}_b)$</td><td>Ethereum block traces</td><td>mart/master_daily.utilization</td></tr><tr><td>$S_t$</td><td>Scarcity index</td><td>log fee units</td><td>$\log\big((\text{base fee}_t + \text{tip}_t + \mathbf{1}_{t>\text{Dencun}}\,\text{blob fee}_t) / \tilde{q}_t\big)$, where $\tilde{q}_t$ is the 7-day Tukey-smoothed execution-demand benchmark</td><td>Execution and blob fee data; gas usage</td><td>mart/master_daily.scarcity_index</td></tr><tr><td>$D_t^{*}$</td><td>Latent demand factor</td><td>z-score</td><td>PC1 of standardized ETH log returns, CEX log volumes, realized volatility, Google Trends, and net stablecoin issuance (fit on the pre-Dencun window; sign oriented so higher demand increases congestion)</td><td>Binance/OKX/Coinbase volumes; Google Trends; stablecoin issuance</td><td>factor_daily.D_star</td></tr><tr><td>$R_t$</td><td>Regime dummies</td><td>binary</td><td>London, Merge, and post-Dencun indicators</td><td>Protocol calendar</td><td>mart/master_daily.regime_*</td></tr><tr><td>$\mathrm{Cal}_t$</td><td>Calendar dummies</td><td>binary</td><td>UTC weekend, month-end, quarter-turn indicators</td><td>Calendar</td><td>mart/master_daily.calendar_*</td></tr><tr><td>$\mathrm{Shock}_t$</td><td>Targeted events</td><td>binary</td><td>Event flags for airdrops, sequencer outages, mega NFT mints, market-stress days; catalog in Table 14</td><td>Curated event list</td><td>controls_shock_daily.*</td></tr></table>

# G.3 Treatment Construction: Posting-Clean Adoption

1. Pull daily L2 transaction counts by chain from mart_l2.daily and L1 user transactions from stg_l1_blocks.daily.
2. Identify L2 $\rightarrow$ L1 posting transactions via the rollup inbox registry (l2_inbox_registry); tag them in both datasets.
3. Remove tagged posting transactions from the L2 numerator and the L1 denominator so that the treatment reflects end-user execution, not settlement load.
4. Aggregate the remaining L2 user transactions across tracked rollups (Arbitrum, Optimism, Base, zkSync, Starknet, Linea, Scroll) and compute $A_{t}^{clean}$ on the daily UTC grid; the full registry of inbox contracts and rollup identifiers lives in the replication bundle as l2_inbox_registry.
5. Winsorize $A_{t}^{clean}$ at the $0.5\%$ tails and carry the resulting share into all confirmatory and exploratory designs (a minimal sketch of steps 1-5 follows this list).
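The following sketch mirrors steps 1-5 under stated assumptions about the input layout; the real pipeline operates on the warehouse tables named above, so the column names here are placeholders.

```python
import pandas as pd

def posting_clean_adoption(l2_tx: pd.DataFrame, l1_tx: pd.DataFrame,
                           winsor: float = 0.005) -> pd.Series:
    """Posting-clean adoption share A_t^clean (steps 1-5 above), as an illustrative sketch.

    Assumed inputs, both indexed by UTC date:
      l2_tx : columns ['tx_count', 'posting_tx'] aggregated over tracked rollups
      l1_tx : columns ['tx_count', 'posting_tx'] for L1 transactions
    'posting_tx' counts L2->L1 posting transactions tagged via the inbox registry.
    """
    l2_user = l2_tx["tx_count"] - l2_tx["posting_tx"]   # step 3: drop posting tx from the numerator
    l1_user = l1_tx["tx_count"] - l1_tx["posting_tx"]   # ...and from the L1 denominator
    a_clean = l2_user / (l2_user + l1_user)             # step 4: daily share on the UTC grid
    lo, hi = a_clean.quantile([winsor, 1.0 - winsor])   # step 5: winsorize at the 0.5% tails
    return a_clean.clip(lower=lo, upper=hi).rename("A_t_clean")
```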
# G.4 Outcome Definitions and Units

- Base fee $(\log C_t^{fee})$. Natural log of the median EIP-1559 base fee (Gwei) across blocks in day $t$.
- Utilization $(u_{t})$. Median block-level gas-used-to-gas-limit ratio per day, retaining the post-Merge 1.5 cap.
- Scarcity index $(S_{t})$. Combines execution gas and data-availability fees: the daily median base fee plus priority tip plus (post-Dencun) blob base fee, divided by a 7-day smoothed demand benchmark $\tilde{q}_{t}$ (median gas used, smoothed with a Tukey-Hanning window) and logged. This keeps scarcity comparable across the London, Merge, and blob eras.

# G.5 Demand Factor $D_{t}^{*}$

- Inputs: (i) ETH/USD log returns; (ii) log centralized-exchange spot volume (Binance, Coinbase, OKX aggregate); (iii) realized volatility from 5-minute returns; (iv) the Google Trends “ethereum” index; (v) net stablecoin issuance (USDC + USDT + DAI).
- Standardization and window: Each series is z-scored using its mean and standard deviation over the London $\rightarrow$ Merge window (2021-08-05 to 2024-03-12) to avoid blob-era structural breaks; single-day gaps are forward-filled before standardization.
- PCA fit: Principal components are estimated on the pre-Dencun standardized matrix; PC1 is rescaled to unit variance and sign-flipped so that higher $D_{t}^{*}$ aligns with higher fees.
- Usage: The same $D_{t}^{*}$ enters the ITS, ECM, IV, and BSTS designs; sensitivity checks with "lite" inputs appear in Table 8. A construction sketch follows this list.
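A minimal PCA sketch for $D_t^{*}$, under stated assumptions: a daily DatetimeIndex, illustrative column names for the five inputs, and sign orientation implemented by correlating PC1 with the log base fee (one simple way to enforce "higher $D_t^{*}$ aligns with higher fees"; the released code may orient the sign differently).

```python
import pandas as pd
from sklearn.decomposition import PCA

def fit_demand_factor(inputs: pd.DataFrame, log_basefee: pd.Series,
                      pre_dencun_end: str = "2024-03-12") -> pd.Series:
    """Latent demand factor D_t^* as PC1 of standardized inputs (illustrative sketch).

    `inputs` holds the five G.5 series on a daily UTC DatetimeIndex; column names are
    assumptions, not the replication schema.
    """
    filled = inputs.ffill(limit=1)                            # forward-fill single-day gaps
    pre = filled.loc[:pre_dencun_end]                         # pre-Dencun standardization window
    z = ((filled - pre.mean()) / pre.std(ddof=0)).dropna()
    pca = PCA(n_components=1).fit(z.loc[:pre_dencun_end])     # fit on the pre-Dencun matrix
    d_star = pd.Series(pca.transform(z).ravel(), index=z.index, name="D_star")
    d_star = d_star / d_star.std(ddof=0)                      # rescale PC1 to unit variance
    if d_star.corr(log_basefee.reindex(d_star.index)) < 0:
        d_star = -d_star                                      # sign-orient: higher D* ~ higher fees
    return d_star
```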
# G.6 Quality Control and Harmonization

- Time and aggregation. All variables use UTC calendar days; block-level quantities are aggregated with medians to limit outlier influence.
- Winsorization. $A_{t}^{clean}$, $\log C_{t}^{fee}$, $u_{t}$, and $S_{t}$ are winsorized at the 0.5% tails across the full sample ($N = 1,245$) before entering regressions.
- Missingness. Days with missing treatment or base-fee fields ($< 0.3\%$) are dropped listwise; PCA inputs with single-day gaps are forward-filled prior to z-scoring.
- Smoothing choices. The scarcity benchmark $\tilde{q}_t$ uses a 7-day Tukey-Hanning window; BSTS price baselines use daily mean and close prices as noted in Appendix F.

# G.7 Targeted Shock Catalog

Table 14: Targeted Shock Catalog with Usage Flags

<table><tr><td>Category</td><td>Event</td><td>Date (UTC)</td><td>Used in confirmatory window?</td><td>Duration</td><td>Rationale</td></tr><tr><td colspan="6">Pre-Dencun (used in confirmatory window unless noted)</td></tr><tr><td>Protocol</td><td>London EIP-1559</td><td>2021-08-05</td><td>Y</td><td>1d</td><td>Fee-mechanism activation; sets baseline regime dummy.</td></tr><tr><td>Launch</td><td>Arbitrum One mainnet</td><td>2021-09-01</td><td>Y</td><td>1d</td><td>Major L2 launch; sudden user migration.</td></tr><tr><td>Airdrop</td><td>dYdX airdrop</td><td>2021-09-08</td><td>Y</td><td>1d</td><td>Large claim day; spikes L2+L1 usage.</td></tr><tr><td>Launch</td><td>Polygon Hermez v1</td><td>2021-03-01</td><td>N</td><td>1d</td><td>Pre-sample launch noted for completeness.</td></tr><tr><td>Airdrop</td><td>Immutable X airdrop</td><td>2021-11-05</td><td>Y</td><td>1d</td><td>NFT airdrop; gas spike.</td></tr><tr><td>Launch</td><td>Starknet Alpha mainnet</td><td>2021-11-16</td><td>Y</td><td>1d</td><td>Early Starknet deployment.</td></tr><tr><td>Launch</td><td>Optimism public mainnet</td><td>2021-12-16</td><td>Y</td><td>1d</td><td>Public rollout; user onboarding burst.</td></tr><tr><td>Airdrop</td><td>Optimism airdrop 1</td><td>2022-05-31</td><td>Y</td><td>1d</td><td>First OP distribution; heavy claim traffic.</td></tr><tr><td>Upgrade</td><td>Arbitrum Nitro upgrade</td><td>2022-08-31</td><td>Y</td><td>1d</td><td>Sequencer upgrade; throughput jump.</td></tr><tr><td>Protocol</td><td>Ethereum Merge</td><td>2022-09-15</td><td>Y</td><td>1d</td><td>Consensus shift; volatility control.</td></tr><tr><td>Airdrop</td><td>Optimism airdrop 2</td><td>2023-02-09</td><td>Y</td><td>1d</td><td>Second OP claim event.</td></tr><tr><td>Airdrop</td><td>Arbitrum airdrop</td><td>2023-03-23</td><td>Y</td><td>1d</td><td>ARB token claim; gas surge.</td></tr><tr><td>Launch</td><td>zkSync Era mainnet</td><td>2023-03-24</td><td>Y</td><td>1d</td><td>zkSync Era launch.</td></tr><tr><td>Launch</td><td>Polygon zkEVM mainnet</td><td>2023-03-27</td><td>Y</td><td>1d</td><td>Polygon zkEVM debut.</td></tr><tr><td>Upgrade</td><td>Optimism Bedrock upgrade</td><td>2023-06-06</td><td>Y</td><td>1d</td><td>Bedrock migration; temporary pause/resume.</td></tr><tr><td>Launch</td><td>Linea mainnet</td><td>2023-07-11</td><td>Y</td><td>1d</td><td>Linea mainnet go-live.</td></tr><tr><td>Launch</td><td>Mantle mainnet</td><td>2023-07-17</td><td>Y</td><td>1d</td><td>Mantle mainnet go-live.</td></tr><tr><td>Campaign</td><td>Base Onchain Summer</td><td>2023-08-09</td><td>Y</td><td>7d</td><td>Promo campaign; NFT mint surge.</td></tr><tr><td>Launch</td><td>Base mainnet</td><td>2023-08-09</td><td>Y</td><td>1d</td><td>Base public launch.</td></tr><tr><td>Airdrop</td><td>Optimism airdrop 3</td><td>2023-09-18</td><td>Y</td><td>1d</td><td>Third OP claim wave.</td></tr><tr><td>Launch</td><td>Scroll mainnet</td><td>2023-10-17</td><td>Y</td><td>1d</td><td>Scroll mainnet launch.</td></tr><tr><td>Campaign</td><td>Starknet STRK token launch</td><td>2024-02-14</td><td>Y</td><td>1d</td><td>Token announcement; claim anticipation.</td></tr><tr><td>Airdrop</td><td>Optimism airdrop 4</td><td>2024-02-15</td><td>Y</td><td>1d</td><td>Fourth OP claim day.</td></tr><tr><td>Protocol</td><td>Dencun EIP-4844</td><td>2024-03-13</td><td>N</td><td>1d</td><td>Blob activation; start of exploratory blob era.</td></tr><tr><td colspan="6">Post-Dencun (used in exploratory sensitivity only)</td></tr><tr><td>Airdrop</td><td>zkSync airdrop</td><td>2024-06-17</td><td>N</td><td>1d</td><td>Large airdrop during blob era.</td></tr><tr><td>Upgrade</td><td>Polygon MATIC-to-POL transition</td><td>2024-09-04</td><td>N</td><td>1d</td><td>Token transition; potential bridge congestion.</td></tr><tr><td>Campaign</td><td>Starknet staking launch</td><td>2024-11-26</td><td>N</td><td>1d</td><td>Staking launch; sequencer load risk.</td></tr></table>

Note: Column 4 flags inclusion in the confirmatory London→Dencun window; post-Dencun events are retained for exploratory robustness only. Duration records the anchor day used in regressions (multi-day campaigns are coded with a single start-day dummy). Rationale summarizes why the event could jointly shift adoption and congestion.
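As a final measurement detail, the shock flags can be rebuilt from Table 14 with a few lines of code. The sketch below is illustrative only: it hard-codes a small subset of events, whereas the full curated list lives in controls_shock_daily.* in the replication bundle, and it follows the note's convention of a single start-day dummy per event.

```python
import pandas as pd

# Illustrative subset of Table 14 (category, anchor date); the curated list in
# controls_shock_daily.* is the authoritative source.
SHOCK_EVENTS = [
    ("airdrop", "2023-03-23"),   # Arbitrum airdrop
    ("upgrade", "2023-06-06"),   # Optimism Bedrock upgrade
    ("launch",  "2023-08-09"),   # Base mainnet
]

def shock_dummies(index: pd.DatetimeIndex, events=SHOCK_EVENTS) -> pd.DataFrame:
    """Daily 0/1 shock flags; multi-day campaigns get a single start-day dummy (Table 14 note)."""
    out = pd.DataFrame(0, index=index, columns=sorted({cat for cat, _ in events}))
    for category, date in events:
        ts = pd.Timestamp(date)
        if ts in index:
            out.loc[ts, category] = 1
    return out

# Example: flags over the confirmatory window
# flags = shock_dummies(pd.date_range("2021-08-05", "2024-03-12", freq="D"))
```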
Crisis looms in Israel over ultra-Orthodox conscription bill An impending crisis over conscripting ultra-Orthodox Jews into the Israeli army is threatening to undermine Israel's government and split the country. Public opinion on the issue has shifted dramatically in Israel after two years of war, and this is now perhaps the most explosive political risk facing Prime Minister Benjamin Netanyahu. Lawmakers are currently considering a draft bill to end the exemption granted to ultra-Orthodox men enrolled in full-time religious study, established when the State of Israel was declared in 1948. That exemption was ruled illegal by Israel's High Court of Justice almost 20 years ago. Temporary arrangements to continue it were formally ended by the court last year, forcing the government to begin drafting the community. Some 24,000 draft notices were issued last year, but only around 1,200 ultra-Orthodox - or Haredi - draftees reported for duty, according to military testimony given to lawmakers. Tensions are erupting onto the streets, with lawmakers now debating a new draft bill to force ultra-Orthodox men into military service alongside other Israeli Jews. Two Haredi politicians were targeted this month by some extreme ultra-Orthodox protesters, who are furious with parliament's discussion of the proposed law. And last week, a special Border Police unit had to rescue Military Police officers who were targeted by a large crowd of Haredi men as they tried to arrest a suspected draft-evader. These arrests have sparked the creation of a new messaging system called "Black Alert" to spread word quickly through ultra-Orthodox communities and summon protesters to prevent arrests taking place. The push to conscript more ultra-Orthodox also triggered a vast protest by tens of thousands of Haredi men in Jerusalem last month - with the issue seen by many as part of a wider conflict around the identity of the Jewish state, and the place of religion within it. "We're a Jewish country," said Shmuel Orbach, one of the protesters. "You can't fight against Judaism in a Jewish country. It doesn't work." But the changes blowing through Israel have not yet breached the walls of the Kisse Rahamim yeshiva - or Jewish seminary - in Bnei Brak, an ultra-Orthodox city on the outskirts of Tel Aviv. Inside the classroom, teenage boys sit in pairs to discuss Judaism's religious laws, their brightly coloured school notebooks popping against the rows of white shirts and small black kippahs (traditional skullcaps). "Come at one in the morning, and you will see half the guys are studying Torah," the head of the yeshiva, Rabbi Tzemach Mazuz, told me, in what his office said was his first interview with foreign media, or with any female journalist. "By studying Torah, we protect the soldiers wherever they are. This is our army." Ultra-Orthodox believe continuous prayer and religious study protect Israel's soldiers, and are as crucial to its military success as its tanks and air force. That belief was accepted by Israel's politicians in the past, Rabbi Mazuz said, but he acknowledged that Israel was changing. "Today, many in the government and the Knesset [parliament] have distanced themselves from religion. They say yeshiva students are lazy, which is not true," he said. "In Tel Aviv, there are tens of thousands of draft-dodgers - why don't they take them? Why are they attacking yeshiva students?" Despite attacks from the right, Tel Aviv was a top contributor of soldiers during the war. 
And the pressure felt by Israeli conscripts and reservists over the past two years has thrown a spotlight on those who do not serve. The ultra-Orthodox population has more than doubled its share of Israel's population over the past seven decades, and now accounts for 14%. What began as an exemption for several hundred religious students became, by the start of the Gaza war, a cohort of some 60,000 men left out of the draft. Opinion polls suggest support for ultra-Orthodox conscription is rising. A survey in July by the Israel Democracy Institute think tank found that 85% of non-Haredi Jews - including almost three-quarters in Netanyahu's own right-wing Likud party - supported sanctions for those who refused a draft order, with a firm majority in favour of withdrawing benefits, passports, or the right to vote. "It makes me feel there are people who live in this country without giving anything back," one off-duty soldier in Tel Aviv explained. "I don't think, however religious you are, [it] should be an excuse not to go and serve your country," said Gabby, a young woman also in Tel Aviv. "If you're born here, I find it quite ridiculous that you want to exempt yourself just to study Torah all day." Support for extending the draft is also coming from religious Jews outside the Haredi community, like Dorit Barak, who lives near the yeshiva in Bnei Brak and points to non-Haredi religious Jews who do serve in the military while also studying Torah. "I'm very angry that ultra-Orthodox people don't serve in the army," she said. "It's unfair. I also believe in the Torah, but there's a saying in Hebrew - "Safra and Saifa" [The Book and the Sword] – it means the Torah and the guns together. That's the way forward, until the days of peace." Ms Barak runs a small memorial in Bnei Brak to local soldiers, both religious and secular, who were killed in battle during Israel's wars. Long columns of faces peer out from the black and white photographs lining the back wall. The last soldier from the neighbourhood died in 1983 - a sign, she says, of Israel's shifting demographics. "It's completely changed," she said. "When I was a child, almost half the residents here were not religious, and a small percentage were ultra-Orthodox. Today, almost everyone is ultra-Orthodox, and since 1983 no soldiers were killed, because no one is serving in the army." There are special army and police units for the small number of ultra-Orthodox men who currently choose to serve. But Benjamin Netanyahu said at the opening of parliament's winter session in October that the new draft bill would see 10,000 yeshiva students drafted within two years - something he described as "a real revolution". Ultra-Orthodox parties are crucial allies in Netanyahu's governing coalition, and also in his bid for political survival while standing trial on corruption charges, which he denies. A key demand in return for their loyalty is continued exemption for their supporters from the military draft. The issue has twice brought down Netanyahu's governments in the past. The draft bill now going through parliament is an attempt to find a way through the issue, or at least to buy time ahead of elections due next year. "A balanced law, a good law, a law that is good for the army, good for the yeshiva students, good for the people of Israel [and] good for the state," said lawmaker Boaz Bismuth, a Netanyahu loyalist charged with shepherding the bill through parliament. 
But many lawmakers, including those from the governing coalition, said this week the current draft of the bill was far too lenient, and that neither they nor the courts would approve it. The current text appears to largely maintain the status quo by conscripting only those ultra-Orthodox men not in full-time religious study, and lifting all sanctions on draft-dodgers once they turn 26. Opposition leader Yair Lapid, who heads the centrist Yesh Atid party, called the draft text a "disgrace" and a "betrayal", and vowed it would not pass. Even some within Netanyahu's own Likud party have refused to support it. Tzachi Hanegbi, a former National Security Adviser recently dismissed by Netanyahu, described it as "an instrument of evasion [that] endangers the future of the state", adding that he and his four sons had all served significant time in the military. Israel's ultra-Orthodox parties have been split over whether to concede to the growing pressure for change, but in a move seen as evidence of the bill's leniency, the hardline Degel HaTorah party - part of the governing coalition - is reportedly considering supporting the current text. Asked whether it was better to back this version of the bill, or risk toppling Benjamin Netanyahu completely, Rabbi Mazuz avoided giving a concrete answer. "The world is guided by God," he said. "When [US President Donald] Trump didn't win a second term [in 2020], I and many millions were hurt. Why did God do this?" "But He knew the future, and He knew the Hamas plan. God wanted Trump [in power] during this period," he added, referring to the Hamas-led attack on Israel on 7 October 2023, which triggered the Gaza war. Rabbi Mazuz gestured to the religious manuscripts lining his office - hundreds of years old, he said. "Between us, Israeli prisons are not like the ones in Russia, thank God. We will get through this too. But I hope we don't get to that." The Haredi way of life has changed little in centuries, but they and their political allies are now locked in a debate over what it means to be Jewish and Israeli, and whether that means fighting for Israel, or fighting for their way of life against the modern demands of war. Additional reporting by Oren Rosenfeld and Samantha Granville
bbc_news
2025-12-03T06:06:49Z
https://www.bbc.com/news/articles/cly580gkd9ro
{"title": "Crisis looms in Israel over ultra-Orthodox conscription bill"}
Iranian director given jail sentence while on trip to collect US awards Award-winning Iranian film-maker Jafar Panahi has been given a prison sentence on charges of creating propaganda against the political system, his lawyer has said, on the same day his new film won a string of awards in the US. Panahi has been handed a one-year sentence and a travel ban in Iran, his lawyer said on Monday. However, he was in New York to pick up three prizes, including best director, at the Gotham Awards for his latest film, It Was Just An Accident, which he shot illegally in Iran. Panahi, 65, has served two previous spells in prison in his home country, and said in an interview shortly before receiving his latest sentence that he planned to return. Panahi is one of Iran's leading directors but has been subjected to constraints from authorities including a ban on making films in the country as well as the prison sentences and travel restrictions. He didn't refer to the new sentence in his Gotham Awards speeches, but praised "film-makers who keep the camera rolling in silence, without support, and at times, by risking everything they have, only with their faith in truth and humanity". He added: "I hope that this dedication will be considered a small tribute to all film-makers who have been deprived of the right to see and to be seen, but continue to create and to exist." It Was Just An Accident also won best screenplay and best international film, and is expected to be a contender at the Oscars in Hollywood in the spring. Panahi covertly shot the film, which tells the tale of five ordinary Iranians who are confronted with a man they believed tortured some of them in jail. He has said it was partly inspired by his last spell in jail and stories that other prisoners "told me about, the violence and the brutality of the Iranian government". When the film won the top prize at the Cannes Film Festival in France in May, he used his acceptance speech to speak out against the restrictions of the regime. Panahi was jailed in 2022 for protesting against the detention of two fellow film-makers who had been critical of the authorities. He was released after seven months of the six-year sentence. He was previously sentenced to six years in 2010 for supporting anti-government protests and creating "propaganda against the system". He was released on conditional bail after two months. In an interview with the Financial Times conducted in Los Angeles shortly before his latest sentence was delivered, he recalled a recent conversation with an elderly Iranian exile who he had met in the city. "She begged me not to go back," he said. "But I told her I can't live outside Iran. I can't adapt to anywhere else. "And I said she shouldn't worry, because what are the officials going to do that they haven't done already?"
bbc_news
2025-12-02T10:40:12Z
https://www.bbc.com/news/articles/c1m8e8l1mp2o
{"title": "Iranian director given jail sentence while on trip to collect US awards"}
Trump releases fraudster executive days into prison sentence US President Donald Trump has commuted the sentence of former investment manager David Gentile, who was just days into a seven-year prison sentence for fraud. Bureau of Prisons records show that Gentile was released on Wednesday, less than two weeks after he reported to prison. Gentile, the former chief executive and founder of GPB Capital, was convicted last year in what federal prosecutors described as a multi-year scheme to defraud more than 10,000 investors by misrepresenting the performance of private equity funds. He's the latest in a string of white-collar criminals whose sentences Trump has commuted. Gentile was convicted in August last year of securities and wire fraud charges, and sentenced in May. His co-defendant, Jeffry Schneider, was sentenced to six years on the same charges and is due to report to prison in January. US attorney Joseph Nocella said at the time of Gentile's sentencing that GPB Capital was built on a "foundation of lies" and that the company made $1.6bn (£1.2bn) while using investor capital to pay distributions to other investors. "The sentences imposed today are well deserved and should serve as a warning to would-be fraudsters that seek to get rich by taking advantage of investors gets you only a one-way ticket to jail," he said. But the White House says the Department of Justice under former President Joe Biden made multiple missteps - and that investors were aware that their money could be going towards other people's dividends. "Even though this was disclosed to investors the Biden Department of Justice claimed this was a Ponzi scheme," the White House official said. "This claim was profoundly undercut by the fact that GPB had explicitly told investors what would happen." The official also cited concerns from Gentile that prosecutors had elicited false testimony. Trump's commutation of Gentile's sentence does not clear him of his crimes like a full presidential pardon would, and it does not get rid of other potential penalties imposed. So far in his second term, the president has pardoned or commuted the sentences of multiple people convicted of different types of fraud, including wire, securities, tax and healthcare fraud. Last month, he pardoned Tennessee state House Speaker Glen Casada who was convicted of fraud, money laundering and conspiracy charges. Correction 1 December 2025: This article incorrectly stated that Jeffry Schneider "remains behind bars". It has been amended to make clear that he is yet to begin serving his prison sentence.
bbc_news
2025-12-01T22:51:58Z
https://www.bbc.com/news/articles/c7vmn61l75ro
{"title": "Trump releases fraudster executive days into prison sentence"}
Strictly semi-finalists confirmed after musicals week elimination Spoiler warning: This article reveals details from Sunday's elimination Only four couples remain in BBC One's Strictly Come Dancing 2025 after another celebrity and their professional partner were eliminated from the competition. Former Emmerdale star Lewis Cope and his partner Katya Jones were in the bottom two pairs following their performance for musical week, alongside reality TV star Amber Davies and her dance partner Nikita Kuzmin. The judges decided to send Cope, who performed a salsa to Dance At The Gym from West Side Story, home. Davies, who danced a Charleston to Sit Down You're Rockin' The Boat from Guys And Dolls, joins Balvinder Sopal, George Clarke and Karen Carney in the semi-finals. "It's been more than I could have ever wished for," Cope said of the competition following his elimination on Sunday. "If someone would have said that I'd have done 11 weeks on the show at the beginning, I'd have been over the moon." He also paid tribute to his dance partner, of whom he said: "You've literally given me absolutely everything I could wish for as a friend, as a teacher." Jones, in return, described Cope as a "gentleman", as well as "so humble and so kind". "I'm so glad that we had a chance to see you and showcase your talent, and what a beautiful person you are to the world," she added. Davies and Kuzmin topped Saturday's leaderboard for their performance, receiving a perfect score. EastEnders actress Balvinder Sopal and her dance partner, Julian Caillon, came second with their Viennese Waltz to Never Enough from The Greatest Showman. Internet star George Clarke performed an Argentine tango to The Point Of No Return from The Phantom Of The Opera with his partner Alexis Warr. And former Lioness and sports broadcaster Karen Carney and Carlos Gu took on the samba, dancing to The Rhythm Of Life from Sweet Charity. The remaining four couples will perform during next weekend's semi-final, airing on BBC One and BBC iPlayer at 18:35 GMT on Saturday 13 December, with the results show at 19:45 GMT next Sunday. Each couple will perform two new routines and there will be performances from Australian singer Kylie Minogue and boyband Five.
bbc_news
2025-12-07T20:31:12Z
https://www.bbc.com/news/articles/cwyxex7671xo
{"title": "Strictly semi-finalists confirmed after musicals week elimination"}
Israel's PM says second phase of Gaza peace plan is close Israel's Prime Minister Benjamin Netanyahu has said a second phase of the US-brokered plan to end the war in Gaza is close - but that key issues still need to be resolved. Under the second phase of President Donald Trump's plan, Israel should withdraw its troops further from Gaza as a transitional authority is set up and an international security force is deployed. Hamas is meant to disarm and reconstruction to begin. With questions outstanding over Hamas disarmament, one senior official has suggested the group is ready to consider "freezing or storing" its remaining weapons. The US and other mediators have been applying pressure on both sides to advance to the next stages of Trump's plan. According to Arab media reports, a Red Cross team and members of Hamas's armed wing, are resuming searches for the last remaining deceased Israeli hostage, police officer Sergeant Ran Gvili, in the Zeitoun area of Gaza City. Gvili was killed in the Hamas-led 7 October 2023 attacks and his body should be returned under the terms of the initial ceasefire deal between Israel and Hamas. "We'll get him out", Netanyahu said at a news conference on Sunday. Two months after the Gaza ceasefire came into effect, both sides continue to accuse each other of almost daily violations. Israeli forces remain in control of more than half of the Gaza Strip. Hamas has largely re-established itself in the remainder of the territory. Speaking to journalists, Netanyahu said that he would hold important discussions with President Trump at the end of the month on how to ensure the plan's second stage was achieved. An Israeli government spokeswoman announced on Monday that the meeting would take place on 29 December. After meeting German Chancellor Friedrich Merz in Jerusalem on Sunday, Netanyahu reiterated that Hamas rule of Gaza had to end and that the armed group had to follow through on "their commitment" to give up their weapons and for the strip to be demilitarised. Later, when addressing a gathering of Israeli ambassadors and diplomats, he expressed scepticism about whether a planned multinational force would be able to disarm Hamas. "Now there is a question here: our friends in America want to try to establish an international force that will do the job. I said - please. Are there volunteers here? Please, on the contrary," Netanyahu said, seemingly questioning whether foreign troops would be willing to disarm Hamas by force. "And we know that there are certain tasks that this force can do. I don't want to go into detail, they can't do everything, and maybe they can't do the main thing, but we'll see." He went on to reiterate that Israel would ensure disarmament would happen, saying: "It can be done the easy way, it can be done the hard way. But eventually it will be done." Speaking to the Associated Press, a top Hamas official, Bassem Naim, said his group was ready for talks on "freezing or storing" its arsenal of weapons in a possible approach to one of the most challenging issues ahead. "We are open to have a comprehensive approach in order to avoid further escalations or in order to avoid any further clashes or explosions," Naim - a member of the Hamas political bureau - said in an interview in Qatar, where much of the group's leadership is based. Hamas has previously refused to give up its weapons without the creation of an independent Palestinian state. 
Naim also claimed that Israel had failed to carry out key ceasefire pledges, saying Gaza had not been flooded with aid and the Rafah border crossing with Egypt had not reopened. Humanitarian agencies say there has been a dramatic increase in supplies entering the strip, but that they are still facing Israeli restrictions on their work and insecurity. Last week, Israel said it was ready to reopen Rafah - Gaza's main gateway to the world - but only for people to leave. Egypt and the Palestinians did not accept that and insisted that Israel was obliged to open the crossing in both directions. The ceasefire deal stopped a devastating two-year Israeli offensive in Gaza, triggered by the deadly Hamas attacks and mass hostage taking in southern Israel. The first stage of the peace plan involved the return of the 20 living hostages and the remains of the 28 dead hostages still in Gaza. In exchange for the release of the living hostages, Israel handed over nearly 2,000 Palestinian detainees. For each of the Israeli hostages handed over, Israel has been sending back the bodies of 15 Palestinians. Israel has accused Hamas of delaying the return of dead hostages. The Hamas-run health ministry in Gaza says that more than 370 Palestinians have been killed by Israeli fire since the ceasefire took hold. Israel says its strikes have been in response to Palestinian violations, including people entering Israeli-held parts of Gaza. Three Israeli soldiers have also been killed in fighting with dozens of Hamas operatives still said to be holed up in underground tunnels in the very south of Gaza. Last week, Trump said that the second phase of the Gaza plan was "going to happen pretty soon", and on Saturday, the Qatari Prime Minister Sheikh Mohammed bin Abdul Rahman Al Thani said that "a critical moment" had been reached.
bbc_news
2025-12-08T16:05:47Z
https://www.bbc.com/news/articles/c0r90gkzkezo
{"title": "Israel's PM says second phase of Gaza peace plan is close"}
Flood alerts triggered across South East Wet weather has prompted a warning of flooding in areas of south-east England. Flood alerts have been put in place on the River Mole and its tributaries from Kinnersley Manor to South Hersham in Surrey. They have also been triggered on the Western Rother, Climping Seafront, River Adur East Branch, Upper Ouse and Cuckmere River in Sussex, according to the government's flooding alerts' website. Kent's Rivers Eden and Eden Brook, and the Isle of Sheppey and coast from Kemsley to Seasalter are also at risk of flooding. Most alerts are set to remain in place throughout Sunday. Strong winds overnight from Friday to Saturday damaged homes in Seaford in East Sussex, which residents called "a mini tornado". It comes as the Met Office has issued a yellow warning for rain affecting south-east England and London. The government agency said heavy rain may bring travel disruption in places from Monday night into Tuesday. People can expect a "slight chance" of power cuts and loss of other services to some homes and businesses during a yellow rain warning, it detailed. Fast-flowing or deep floodwater is also possible, which the Met Office said could cause a "danger to life", alongside delays or cancellations to train and bus services. Discussing the UK's upcoming weather picture, Met Office's deputy chief meteorologist Steven Keates said the exact track, depth and timings of the low-pressure system were "uncertain". He added this made it "harder to determine where will be most impacted by strong winds and/or heavy rain". The Met Office forecast for the rest of December remains unsettled with further periods of low pressure predicted. Meteorologists said it was too early to provide an accurate forecast for the Christmas period.
bbc_news
2025-12-07T14:34:10Z
https://www.bbc.com/news/articles/cvgj3jeqln0o
{"title": "Flood alerts triggered across South East"}
Heathrow 'pepper spray attack' and 'Harry gun cop U-turn' The Daily Telegraph says the leader of Reform UK, Nigel Farage, has been reported to the police because of claims he broke rules on campaign spending. The paper says that Richard Everett, a former Reform councillor, who helped Farage win his seat in Clacton in Essex at the general election, submitted the documents to the police. They are said to show that Reform came close to the limit of just over £20,000. But Everett alleges the figure excludes some costs including leaflets, utility bills and the refurbishment of a bar in the campaign office. He says he believes Farage was "blissfully unaware" of the omissions, but the Telegraph says that if the claims are found to be accurate he and his election agent could be found personally liable. In response, Reform UK described Everett as a "disgruntled former councillor" and denied any laws had been broken. The decision to strip the Duke of Sussex of the right to 24-hour armed police protection when he is visiting the UK from his home in America is to be reviewed by the Home Office, according to the Sun. The paper says it could mean a reunion for King Charles III with his grandchildren, Archie and Lilibet, who he hasn't seen since 2022. Prince Harry has previously said it is not safe for his family to visit Britain without protection, which ceased when he stopped being a working royal in 2020. The Guardian highlights figures from an NHS watchdog suggesting that one in seven patient hospital referrals in England get lost, rejected or delayed. The paper says Healthwatch England's survey also found that the majority of those patients only discovered they weren't on a waiting list after chasing the NHS themselves. According to the Times' lead, every workplace will be required to tell staff about their right to join a union as part of the government's Employment Rights Bill. The paper says an approved statement will be given to workers in an effort to stop "hostile" employers from discouraging union membership. The Conservatives warn the plan will lead to a collapse of British productivity. And many of the papers carry triumphant pictures of an emotional Lando Norris, after the British driver won his first Formula 1 Championship. The Telegraph says it proves "nice guys win too". The back page headline in the i Paper is "Lando hope and glory".
bbc_news
2025-12-08T01:34:47Z
https://www.bbc.com/news/articles/cvgkpqk01pro
{"title": "Heathrow 'pepper spray attack' and 'Harry gun cop U-turn'"}
Benin coup plot leader hiding in Togo, official tells BBC A senior government official in Benin has told the BBC that the leader of Sunday's failed coup is taking refuge in neighbouring Togo. Speaking on condition of anonymity, the official said that the government would request Lt Col Pascal Tigri's extradition. Togo's government has not yet commented. The failed coup came after a series of military takeovers in West Africa, raising concern that democracy is increasingly under threat in the region. It was thwarted after regional power Nigeria sent fighter jets to dislodge the mutineers from a military base and the offices of state TV following a request from President Patrice Talon's government. A group of soldiers appeared on state TV early on Sunday to announce they had seized power, and gunfire was heard near the presidential residence. The Beninese government official said the authorities knew that Lt Col Pascal Tigri was in Togo's capital, Lomé, in the same area where President Faure Gnassingbé lives. "We don't know how to explain this but we will make an official extradition request and see how the Togolese authorities will react," the official added. There is no independent confirmation of the claim. Togo is part of the West African regional bloc, Ecowas, which condemned the coup attempt. French special forces also helped loyalist troops to thwart the coup, the head of Benin's republican guard, which is in charge of protecting the president, told AFP news agency. Dieudonne Djimon Tevoedjre said Benin's troops were "truly valiant and faced the enemy all day" on Sunday. "French special forces were sent from [Ivory Coast's main city] Abidjan, used for mopping up operations after the Beninese army had done the job," he was quoted as saying. Benin's government spokesman Wilfried Léandre Houngbédji could not confirm the deployment of French forces. He told the BBC that as far as he knew, France had mainly provided intelligence support. Ecowas has deployed troops from Nigeria, Ghana, Sierra Leone, and Ivory Coast to secure key installations in Benin. The deployment signals that Ecowas is no longer willing to watch civilian governments fall without resistance. Benin, a former French colony, has been regarded as one of Africa's more stable democracies. The nation is one of the continent's largest cotton producers, but ranks among the world's poorest countries. Nigeria described the coup attempt as a "direct assault on democracy". Houngbédji told the BBC that a small number of soldiers from the National Guard were behind the coup attempt. "The National Guard is a recent creation within our army, initiated by President Talon as part of our fight against terrorism. It is a land forces unit equipped with significant resources, following major investments in recent years, and its personnel are well trained," he said. Houngbédji added that Talon asked Ecowas to carry out airstrikes to neutralise the mutineers following indications that they had planned to attack the main airport in Cotonou, Benin's largest city, putting at risk the lives of civilians living in the area. "This led to the strategy of carrying out targeted airstrikes to immobilise their equipment, including the armored vehicles they threatened to use," he said. The rebel soldiers justified their actions by criticising Talon's management of the country, complaining first about his handling of the "continuing deterioration of the security situation in northern Benin".
Benin's army has suffered losses near its northern border with insurgency-hit Niger and Burkina Faso in recent years, as jihadist militants linked to Islamic State and al-Qaeda spread southwards. The soldiers' statement cited "the ignorance and neglect of the situation of our brothers in arms who have fallen at the front and, above all, that of their families, abandoned to their sad fate by Mr Patrice Talon's policies". The rebels also hit out at cuts in health care, including the cancellation of state-funded kidney dialysis, and tax rises, as well as curbs on political activities. Talon, who is regarded as a close ally of the West, is due to step down next year after completing his second term in office, with elections scheduled for April. A businessman known as the "king of cotton", he first came to power in 2016. He has endorsed Finance Minister Romuald Wadagni as his successor. Talon has been praised by his supporters for overseeing economic development, but his government has also been criticised for suppressing dissenting voices. In October, Benin's electoral commission barred the main opposition candidate from contesting the election. The attempted coup came just over a week after Guinea-Bissau's President Umaro Sissoco Embaló was overthrown - though some regional figures have questioned whether this was staged. In recent years, West Africa has also seen coups in Burkina Faso, Guinea, Mali and Niger, prompting concerns about the region's stability. Russia has strengthened its ties with these Sahel countries over recent years - and Burkina Faso, Mali and Niger have left the West African regional bloc Ecowas to form their own group, the Alliance of Sahel States. News of the attempted takeover in Benin was hailed by several pro-Russian social media accounts, according to BBC Monitoring. Go to BBCAfrica.com for more news from the African continent.
bbc_news
2025-12-10T17:09:21Z
https://www.bbc.com/news/articles/cwyln60219qo
{"title": "Benin coup plot leader hiding in Togo, official tells BBC"}
Fears grow that world's rarest apes were swept away in Sumatran floods An unusual silence in the forests of north Sumatra in Indonesia is worrying wildlife experts and conservationists. Here, in the mountainous forests of Batang Toru, is where they had always seen and heard the world's rarest ape, the Tapanuli orangutan. But ever since Cyclone Senyar devastated Sumatra on 25 November, the critically endangered primates have not been seen in the area, conservation workers say. Their absence has fuelled speculation as to whether the great apes were swept away by floods and landslides. And while some believe the animals may have travelled to a safer location, a carcass found in the area, said to be that of an orangutan, is fuelling conservationists' fears. Fewer than 800 Tapanuli orangutans remain and any loss would have a serious impact on the species, conservationists say. Humanitarian workers told the BBC they found the dead animal semi-buried in the debris of mud and logs in Pulo Pakkat village in central Tapanuli district earlier this week. "When I first saw it I was not sure what it was, because it was kind of defaced, perhaps because it was buried underneath by the sludge and logs," said Deckey Chandra, who has been working with a humanitarian team in the area. He previously worked in the conservation of the Tapanuli orangutans. "I have seen several dead bodies of humans in the past few days but this was the first dead wildlife," he said. "They used to come to this place to eat fruits. But now it seems to have become their graveyard." Mr Chandra shared with the BBC pictures he took of the carcass, some of which show him with the dead animal. Conservationists working in the region believe it is that of a Tapanuli orangutan, a species that was only discovered in 2017. The other two species are Bornean and Sumatran orangutans. More than 900 people have died as a result of heavy rain, floods and landslides since Cyclone Senyar ravaged parts of Indonesia in late November. Hundreds are still missing, with many villages in Sumatra completely destroyed as the storm swept across the island. Professor Erik Meijaard, managing director of Borneo Futures in Brunei, is now studying the disaster's impact on the orangutans with the help of satellite images. He said 4,800 hectares (11,860 acres) of forest on the mountain slopes can be seen as destroyed by landslides - but since part of the satellite image is cloud covered, he's extrapolated the destruction figure to 7,200 hectares in his preliminary observation. "The destroyed areas would have contained some 35 orangutans, and considering the violence of the destruction it wouldn't surprise us if they are all dead. That's a major blow to the population," he told the BBC. "These areas show as bare soil on satellite imagery where two weeks ago it was primary forest. Complete destruction. Many patches of several hectares completely denuded. It must have been hellish in the forest at the time." Prof Meijaard said he too has seen the picture of the dead orangutan shared by Chandra. "What struck me is that all the flesh had been ripped off the face," he said. "If a few hectares of forest comes down in massive landslides, even powerful orangutans are helpless and just get mangled." Panut Hadisiswoyo, founder of the Orangutan Information Centre, which works for the conservation of the primates in the region, said the carcass meant it was highly possible some Tapanuli orangutans were unable to escape as rushing waters and landslides swept through their habitat.
Pictures showing the carcass of a Sumatran elephant, another critically endangered species, being swept away by floods in Aceh in northern Sumatra went viral on social media last week. The island hosts a range of endangered species like Sumatran tigers, elephants and rhinos. But conservation workers say there are particular concerns for orangutans and other primates, like gibbons, because huge parts of the mountainous forest in the Tapanuli district saw massive landslides due to Cyclone Senyar's extreme rainfall. Some locals say the primates must have escaped before the disaster struck, as they can sense danger beforehand. But some primate experts say that may not have been the case. "During heavy rains orangutans either just sit in a tree or gather branches and leaves to use as an umbrella and then wait for the rain to stop," said Serge Wich, professor of primate biology at Liverpool John Moores University, who has conducted research on Tapanuli orangutans. "But this time, by the time the rain stopped it was too late: parts of their habitat - the slopes of valleys - were wiped out by landslides, which means there must have been consequences for them." The recent floods have also damaged a number of orangutan research centres in Sumatra - including at Ketambe, the world's first orangutan research centre, in Aceh. Dr Ian Singleton, scientific director for the Sumatran Orangutan Conservation Programme, said the Ketambe centre is now almost completely destroyed. "It needs to be rebuilt as soon as possible so it can continue to play that role in protecting the forests in that area and its orangutans."
bbc_news
2025-12-12T00:08:17Z
https://www.bbc.com/news/articles/cj4q1l0ly7wo
{"title": "Fears grow that world's rarest apes were swept away in Sumatran floods"}
Tanzania crackdown on planned protest leaves streets deserted Security was tightened across Tanzania on Tuesday with police and military seen patrolling major cities ahead of anticipated anti-government protests called to coincide with independence day. By sunset, however, no major demonstrations had taken place. Residents in Dar es Salaam, Arusha, Mbeya, Mwanza and several other urban centres reported an unusually slow start to the day, with many people choosing to remain indoors amid uncertainty over whether protests would happen. The demonstrations were called to demand political reforms in the wake of October's post-election unrest which left an unknown number of people dead. The authorities have admitted using force against protesters, claiming that some groups were attempting to overthrow the regime. On Tuesday, BBC reporters observed nearly empty streets in the commercial capital, Dar es Salaam. This was a stark contrast to the city's usual weekday bustle. Although quiet, the atmosphere remained tense. In a statement, police spokesperson David Misime assured the public of their safety and the protection of their property, saying the situation remained calm nationwide. He also urged citizens to dismiss old photos and video clips circulating on social media that falsely suggest protests are taking place. Security vehicles were seen driving along major roads and intersections, while officers took up positions at strategic locations, including around key public infrastructure. Public transport stopped operating entirely, the AFP news agency reported. On social media, activists and campaigners urged supporters to stay alert, suggesting any demonstrations were unlikely to begin until the afternoon. The messaging echoed previous protest calls in Tanzania, when turnout increased later in the day. "We will move out, it is our right to protest... I know police are everywhere in the town and even in the street where I live... we have plans so wait, you will see what will happen," a resident of Arusha told the BBC earlier on Tuesday. "I am scared for my children, if these protests happen, it will create a bad atmosphere. Like now my husband is hospitalised, how am I going to attend to him? I feel protesters should call off plans to move to the streets, we need to live in peace," said a resident of Mwanza in northern Tanzania. Motorists who ventured out reported frequent checks at roadblocks, where officers questioned drivers about their destinations. The government has not issued detailed comments on the heightened security measures or on the planned protests. Tanzanian authorities have banned the planned protests and cancelled independence day celebrations, urging citizens to stay indoors. Meanwhile, in neighbouring Kenya several activists were arrested on Tuesday as they were holding a solidarity protest outside the Tanzanian high commission in the capital, Nairobi. Go to BBCAfrica.com for more news from the African continent.
bbc_news
2025-12-09T16:19:00Z
https://www.bbc.com/news/articles/cx2e3j819eqo
{"title": "Tanzania crackdown on planned protest leaves streets deserted"}
#include <iostream>
#include <string>
using namespace std;

int main() {
    double balance = 25000.2;  // starting account balance in rupees
    int option;
    char again;

    do {
        // Main menu
        cout << "\n---Welcome to AUCT Bank---" << endl;
        cout << "1. My Account" << endl;
        cout << "2. Funds Transfer" << endl;
        cout << "3. Load Mobile Bundles" << endl;
        cout << "4. Exit" << endl;
        cout << "Enter your option: ";
        cin >> option;

        switch (option) {
        case 1:
            // Balance enquiry
            cout << "Your current balance is: Rs." << balance << endl;
            break;
        case 2: {
            // Funds transfer: only FAST Bank beneficiaries are accepted,
            // and a minimum balance of Rs.5000 must remain after the transfer.
            string beneficiary;
            char bankChoice;
            cout << "Enter beneficiary name: ";
            cin >> beneficiary;
            cout << "Select Bank of Beneficiary:" << endl;
            cout << "a. Alfalah Bank" << endl;
            cout << "b. Bank of Punjab" << endl;
            cout << "f. FAST Bank" << endl;
            cin >> bankChoice;
            if (bankChoice != 'f' && bankChoice != 'F') {
                cout << "Beneficiary does not have FAST Bank account." << endl;
            } else {
                double amount;
                if (balance <= 5000) {
                    cout << "Low balance! Cannot transfer funds." << endl;
                } else {
                    cout << "Enter transfer amount: ";
                    cin >> amount;
                    if (amount <= 0) {
                        cout << "Invalid amount entered!" << endl;
                    } else if (balance - amount < 5000) {
                        cout << "Insufficient balance after transfer. Maintain Rs.5000 minimum." << endl;
                    } else {
                        balance -= amount;
                        cout << "Rs. " << amount << " have been transferred to " << beneficiary
                             << ". Your new balance is Rs. " << balance << endl;
                    }
                }
            }
            break;
        }
        case 3: {
            // Mobile bundle loading for Ufone or Telenor packages
            if (balance <= 5000) {
                cout << "Low Balance! Cannot load bundles." << endl;
            } else {
                char operatorChoice;
                int packageChoice;
                cout << "u. Ufone" << endl;
                cout << "t. Telenor" << endl;
                cout << "Select your operator: ";
                cin >> operatorChoice;
                if (operatorChoice == 'u' || operatorChoice == 'U') {
                    cout << "1. Super Card Plus (Rs.699)" << endl;
                    cout << "2. Super Card Gold (Rs.1099)" << endl;
                    cout << "Enter your choice: ";
                    cin >> packageChoice;
                    if (packageChoice == 1) {
                        balance -= 699;
                        cout << "Super Card Plus has been loaded. Your new balance is Rs." << balance << endl;
                    } else if (packageChoice == 2) {
                        balance -= 1099;
                        cout << "Super Card Gold has been loaded. Your new balance is Rs." << balance << endl;
                    } else {
                        cout << "Invalid choice!" << endl;
                    }
                } else if (operatorChoice == 't' || operatorChoice == 'T') {
                    cout << "1. Monthly Easy Card (Rs.700)" << endl;
                    cout << "2. Weekly Easy Card (Rs.300)" << endl;
                    cout << "Enter your choice: ";
                    cin >> packageChoice;
                    if (packageChoice == 1) {
                        balance -= 700;
                        cout << "Monthly Easy Card has been loaded. Your new balance is Rs." << balance << endl;
                    } else if (packageChoice == 2) {
                        balance -= 300;
                        cout << "Weekly Easy Card has been loaded. Your new balance is Rs." << balance << endl;
                    } else {
                        cout << "Invalid choice!" << endl;
                    }
                } else {
                    cout << "Invalid operator selected!" << endl;
                }
            }
            break;
        }
        case 4:
            cout << "Thank you for using AUCT Bank!" << endl;
            return 0;
        default:
            cout << "Invalid option selected!" << endl;
        }

        cout << "\nDo you want to go back to the main menu? (Y/N): ";
        cin >> again;
    } while (again == 'Y' || again == 'y');

    cout << "Thank you for using AUCT Bank. Goodbye!" << endl;
    return 0;
}
github_cpp
2025-12-04T21:17:11Z
https://github.com/ayyansaqib7o5/BANKING-SYSTEM-CPP/blob/d19403526c293298bfa6d77fea729d26985dc6b0/Desktop/AUCT BANK/backend/pr.cpp
{}
/* This source file must have a .cpp extension so that all C++ compilers recognize the extension without flags. Borland does not know .cxx for example. */ #ifndef __cplusplus # error "A C compiler has been selected for C++." #endif #if !defined(__has_include) /* If the compiler does not have __has_include, pretend the answer is always no. */ # define __has_include(x) 0 #endif /* Version number components: V=Version, R=Revision, P=Patch Version date components: YYYY=Year, MM=Month, DD=Day */ #if defined(__COMO__) # define COMPILER_ID "Comeau" /* __COMO_VERSION__ = VRR */ # define COMPILER_VERSION_MAJOR DEC(__COMO_VERSION__ / 100) # define COMPILER_VERSION_MINOR DEC(__COMO_VERSION__ % 100) #elif defined(__INTEL_COMPILER) || defined(__ICC) # define COMPILER_ID "Intel" # if defined(_MSC_VER) # define SIMULATE_ID "MSVC" # endif # if defined(__GNUC__) # define SIMULATE_ID "GNU" # endif /* __INTEL_COMPILER = VRP prior to 2021, and then VVVV for 2021 and later, except that a few beta releases use the old format with V=2021. */ # if __INTEL_COMPILER < 2021 || __INTEL_COMPILER == 202110 || __INTEL_COMPILER == 202111 # define COMPILER_VERSION_MAJOR DEC(__INTEL_COMPILER/100) # define COMPILER_VERSION_MINOR DEC(__INTEL_COMPILER/10 % 10) # if defined(__INTEL_COMPILER_UPDATE) # define COMPILER_VERSION_PATCH DEC(__INTEL_COMPILER_UPDATE) # else # define COMPILER_VERSION_PATCH DEC(__INTEL_COMPILER % 10) # endif # else # define COMPILER_VERSION_MAJOR DEC(__INTEL_COMPILER) # define COMPILER_VERSION_MINOR DEC(__INTEL_COMPILER_UPDATE) /* The third version component from --version is an update index, but no macro is provided for it. */ # define COMPILER_VERSION_PATCH DEC(0) # endif # if defined(__INTEL_COMPILER_BUILD_DATE) /* __INTEL_COMPILER_BUILD_DATE = YYYYMMDD */ # define COMPILER_VERSION_TWEAK DEC(__INTEL_COMPILER_BUILD_DATE) # endif # if defined(_MSC_VER) /* _MSC_VER = VVRR */ # define SIMULATE_VERSION_MAJOR DEC(_MSC_VER / 100) # define SIMULATE_VERSION_MINOR DEC(_MSC_VER % 100) # endif # if defined(__GNUC__) # define SIMULATE_VERSION_MAJOR DEC(__GNUC__) # elif defined(__GNUG__) # define SIMULATE_VERSION_MAJOR DEC(__GNUG__) # endif # if defined(__GNUC_MINOR__) # define SIMULATE_VERSION_MINOR DEC(__GNUC_MINOR__) # endif # if defined(__GNUC_PATCHLEVEL__) # define SIMULATE_VERSION_PATCH DEC(__GNUC_PATCHLEVEL__) # endif #elif (defined(__clang__) && defined(__INTEL_CLANG_COMPILER)) || defined(__INTEL_LLVM_COMPILER) # define COMPILER_ID "IntelLLVM" #if defined(_MSC_VER) # define SIMULATE_ID "MSVC" #endif #if defined(__GNUC__) # define SIMULATE_ID "GNU" #endif /* __INTEL_LLVM_COMPILER = VVVVRP prior to 2021.2.0, VVVVRRPP for 2021.2.0 and * later. Look for 6 digit vs. 8 digit version number to decide encoding. * VVVV is no smaller than the current year when a version is released. 
*/ #if __INTEL_LLVM_COMPILER < 1000000L # define COMPILER_VERSION_MAJOR DEC(__INTEL_LLVM_COMPILER/100) # define COMPILER_VERSION_MINOR DEC(__INTEL_LLVM_COMPILER/10 % 10) # define COMPILER_VERSION_PATCH DEC(__INTEL_LLVM_COMPILER % 10) #else # define COMPILER_VERSION_MAJOR DEC(__INTEL_LLVM_COMPILER/10000) # define COMPILER_VERSION_MINOR DEC(__INTEL_LLVM_COMPILER/100 % 100) # define COMPILER_VERSION_PATCH DEC(__INTEL_LLVM_COMPILER % 100) #endif #if defined(_MSC_VER) /* _MSC_VER = VVRR */ # define SIMULATE_VERSION_MAJOR DEC(_MSC_VER / 100) # define SIMULATE_VERSION_MINOR DEC(_MSC_VER % 100) #endif #if defined(__GNUC__) # define SIMULATE_VERSION_MAJOR DEC(__GNUC__) #elif defined(__GNUG__) # define SIMULATE_VERSION_MAJOR DEC(__GNUG__) #endif #if defined(__GNUC_MINOR__) # define SIMULATE_VERSION_MINOR DEC(__GNUC_MINOR__) #endif #if defined(__GNUC_PATCHLEVEL__) # define SIMULATE_VERSION_PATCH DEC(__GNUC_PATCHLEVEL__) #endif #elif defined(__PATHCC__) # define COMPILER_ID "PathScale" # define COMPILER_VERSION_MAJOR DEC(__PATHCC__) # define COMPILER_VERSION_MINOR DEC(__PATHCC_MINOR__) # if defined(__PATHCC_PATCHLEVEL__) # define COMPILER_VERSION_PATCH DEC(__PATHCC_PATCHLEVEL__) # endif #elif defined(__BORLANDC__) && defined(__CODEGEARC_VERSION__) # define COMPILER_ID "Embarcadero" # define COMPILER_VERSION_MAJOR HEX(__CODEGEARC_VERSION__>>24 & 0x00FF) # define COMPILER_VERSION_MINOR HEX(__CODEGEARC_VERSION__>>16 & 0x00FF) # define COMPILER_VERSION_PATCH DEC(__CODEGEARC_VERSION__ & 0xFFFF) #elif defined(__BORLANDC__) # define COMPILER_ID "Borland" /* __BORLANDC__ = 0xVRR */ # define COMPILER_VERSION_MAJOR HEX(__BORLANDC__>>8) # define COMPILER_VERSION_MINOR HEX(__BORLANDC__ & 0xFF) #elif defined(__WATCOMC__) && __WATCOMC__ < 1200 # define COMPILER_ID "Watcom" /* __WATCOMC__ = VVRR */ # define COMPILER_VERSION_MAJOR DEC(__WATCOMC__ / 100) # define COMPILER_VERSION_MINOR DEC((__WATCOMC__ / 10) % 10) # if (__WATCOMC__ % 10) > 0 # define COMPILER_VERSION_PATCH DEC(__WATCOMC__ % 10) # endif #elif defined(__WATCOMC__) # define COMPILER_ID "OpenWatcom" /* __WATCOMC__ = VVRP + 1100 */ # define COMPILER_VERSION_MAJOR DEC((__WATCOMC__ - 1100) / 100) # define COMPILER_VERSION_MINOR DEC((__WATCOMC__ / 10) % 10) # if (__WATCOMC__ % 10) > 0 # define COMPILER_VERSION_PATCH DEC(__WATCOMC__ % 10) # endif #elif defined(__SUNPRO_CC) # define COMPILER_ID "SunPro" # if __SUNPRO_CC >= 0x5100 /* __SUNPRO_CC = 0xVRRP */ # define COMPILER_VERSION_MAJOR HEX(__SUNPRO_CC>>12) # define COMPILER_VERSION_MINOR HEX(__SUNPRO_CC>>4 & 0xFF) # define COMPILER_VERSION_PATCH HEX(__SUNPRO_CC & 0xF) # else /* __SUNPRO_CC = 0xVRP */ # define COMPILER_VERSION_MAJOR HEX(__SUNPRO_CC>>8) # define COMPILER_VERSION_MINOR HEX(__SUNPRO_CC>>4 & 0xF) # define COMPILER_VERSION_PATCH HEX(__SUNPRO_CC & 0xF) # endif #elif defined(__HP_aCC) # define COMPILER_ID "HP" /* __HP_aCC = VVRRPP */ # define COMPILER_VERSION_MAJOR DEC(__HP_aCC/10000) # define COMPILER_VERSION_MINOR DEC(__HP_aCC/100 % 100) # define COMPILER_VERSION_PATCH DEC(__HP_aCC % 100) #elif defined(__DECCXX) # define COMPILER_ID "Compaq" /* __DECCXX_VER = VVRRTPPPP */ # define COMPILER_VERSION_MAJOR DEC(__DECCXX_VER/10000000) # define COMPILER_VERSION_MINOR DEC(__DECCXX_VER/100000 % 100) # define COMPILER_VERSION_PATCH DEC(__DECCXX_VER % 10000) #elif defined(__IBMCPP__) && defined(__COMPILER_VER__) # define COMPILER_ID "zOS" /* __IBMCPP__ = VRP */ # define COMPILER_VERSION_MAJOR DEC(__IBMCPP__/100) # define COMPILER_VERSION_MINOR DEC(__IBMCPP__/10 % 10) # define COMPILER_VERSION_PATCH 
DEC(__IBMCPP__ % 10) #elif defined(__ibmxl__) && defined(__clang__) # define COMPILER_ID "XLClang" # define COMPILER_VERSION_MAJOR DEC(__ibmxl_version__) # define COMPILER_VERSION_MINOR DEC(__ibmxl_release__) # define COMPILER_VERSION_PATCH DEC(__ibmxl_modification__) # define COMPILER_VERSION_TWEAK DEC(__ibmxl_ptf_fix_level__) #elif defined(__IBMCPP__) && !defined(__COMPILER_VER__) && __IBMCPP__ >= 800 # define COMPILER_ID "XL" /* __IBMCPP__ = VRP */ # define COMPILER_VERSION_MAJOR DEC(__IBMCPP__/100) # define COMPILER_VERSION_MINOR DEC(__IBMCPP__/10 % 10) # define COMPILER_VERSION_PATCH DEC(__IBMCPP__ % 10) #elif defined(__IBMCPP__) && !defined(__COMPILER_VER__) && __IBMCPP__ < 800 # define COMPILER_ID "VisualAge" /* __IBMCPP__ = VRP */ # define COMPILER_VERSION_MAJOR DEC(__IBMCPP__/100) # define COMPILER_VERSION_MINOR DEC(__IBMCPP__/10 % 10) # define COMPILER_VERSION_PATCH DEC(__IBMCPP__ % 10) #elif defined(__NVCOMPILER) # define COMPILER_ID "NVHPC" # define COMPILER_VERSION_MAJOR DEC(__NVCOMPILER_MAJOR__) # define COMPILER_VERSION_MINOR DEC(__NVCOMPILER_MINOR__) # if defined(__NVCOMPILER_PATCHLEVEL__) # define COMPILER_VERSION_PATCH DEC(__NVCOMPILER_PATCHLEVEL__) # endif #elif defined(__PGI) # define COMPILER_ID "PGI" # define COMPILER_VERSION_MAJOR DEC(__PGIC__) # define COMPILER_VERSION_MINOR DEC(__PGIC_MINOR__) # if defined(__PGIC_PATCHLEVEL__) # define COMPILER_VERSION_PATCH DEC(__PGIC_PATCHLEVEL__) # endif #elif defined(_CRAYC) # define COMPILER_ID "Cray" # define COMPILER_VERSION_MAJOR DEC(_RELEASE_MAJOR) # define COMPILER_VERSION_MINOR DEC(_RELEASE_MINOR) #elif defined(__TI_COMPILER_VERSION__) # define COMPILER_ID "TI" /* __TI_COMPILER_VERSION__ = VVVRRRPPP */ # define COMPILER_VERSION_MAJOR DEC(__TI_COMPILER_VERSION__/1000000) # define COMPILER_VERSION_MINOR DEC(__TI_COMPILER_VERSION__/1000 % 1000) # define COMPILER_VERSION_PATCH DEC(__TI_COMPILER_VERSION__ % 1000) #elif defined(__CLANG_FUJITSU) # define COMPILER_ID "FujitsuClang" # define COMPILER_VERSION_MAJOR DEC(__FCC_major__) # define COMPILER_VERSION_MINOR DEC(__FCC_minor__) # define COMPILER_VERSION_PATCH DEC(__FCC_patchlevel__) # define COMPILER_VERSION_INTERNAL_STR __clang_version__ #elif defined(__FUJITSU) # define COMPILER_ID "Fujitsu" # if defined(__FCC_version__) # define COMPILER_VERSION __FCC_version__ # elif defined(__FCC_major__) # define COMPILER_VERSION_MAJOR DEC(__FCC_major__) # define COMPILER_VERSION_MINOR DEC(__FCC_minor__) # define COMPILER_VERSION_PATCH DEC(__FCC_patchlevel__) # endif # if defined(__fcc_version) # define COMPILER_VERSION_INTERNAL DEC(__fcc_version) # elif defined(__FCC_VERSION) # define COMPILER_VERSION_INTERNAL DEC(__FCC_VERSION) # endif #elif defined(__ghs__) # define COMPILER_ID "GHS" /* __GHS_VERSION_NUMBER = VVVVRP */ # ifdef __GHS_VERSION_NUMBER # define COMPILER_VERSION_MAJOR DEC(__GHS_VERSION_NUMBER / 100) # define COMPILER_VERSION_MINOR DEC(__GHS_VERSION_NUMBER / 10 % 10) # define COMPILER_VERSION_PATCH DEC(__GHS_VERSION_NUMBER % 10) # endif #elif defined(__SCO_VERSION__) # define COMPILER_ID "SCO" #elif defined(__ARMCC_VERSION) && !defined(__clang__) # define COMPILER_ID "ARMCC" #if __ARMCC_VERSION >= 1000000 /* __ARMCC_VERSION = VRRPPPP */ # define COMPILER_VERSION_MAJOR DEC(__ARMCC_VERSION/1000000) # define COMPILER_VERSION_MINOR DEC(__ARMCC_VERSION/10000 % 100) # define COMPILER_VERSION_PATCH DEC(__ARMCC_VERSION % 10000) #else /* __ARMCC_VERSION = VRPPPP */ # define COMPILER_VERSION_MAJOR DEC(__ARMCC_VERSION/100000) # define COMPILER_VERSION_MINOR DEC(__ARMCC_VERSION/10000 
% 10) # define COMPILER_VERSION_PATCH DEC(__ARMCC_VERSION % 10000) #endif #elif defined(__clang__) && defined(__apple_build_version__) # define COMPILER_ID "AppleClang" # if defined(_MSC_VER) # define SIMULATE_ID "MSVC" # endif # define COMPILER_VERSION_MAJOR DEC(__clang_major__) # define COMPILER_VERSION_MINOR DEC(__clang_minor__) # define COMPILER_VERSION_PATCH DEC(__clang_patchlevel__) # if defined(_MSC_VER) /* _MSC_VER = VVRR */ # define SIMULATE_VERSION_MAJOR DEC(_MSC_VER / 100) # define SIMULATE_VERSION_MINOR DEC(_MSC_VER % 100) # endif # define COMPILER_VERSION_TWEAK DEC(__apple_build_version__) #elif defined(__clang__) && defined(__ARMCOMPILER_VERSION) # define COMPILER_ID "ARMClang" # define COMPILER_VERSION_MAJOR DEC(__ARMCOMPILER_VERSION/1000000) # define COMPILER_VERSION_MINOR DEC(__ARMCOMPILER_VERSION/10000 % 100) # define COMPILER_VERSION_PATCH DEC(__ARMCOMPILER_VERSION % 10000) # define COMPILER_VERSION_INTERNAL DEC(__ARMCOMPILER_VERSION) #elif defined(__clang__) # define COMPILER_ID "Clang" # if defined(_MSC_VER) # define SIMULATE_ID "MSVC" # endif # define COMPILER_VERSION_MAJOR DEC(__clang_major__) # define COMPILER_VERSION_MINOR DEC(__clang_minor__) # define COMPILER_VERSION_PATCH DEC(__clang_patchlevel__) # if defined(_MSC_VER) /* _MSC_VER = VVRR */ # define SIMULATE_VERSION_MAJOR DEC(_MSC_VER / 100) # define SIMULATE_VERSION_MINOR DEC(_MSC_VER % 100) # endif #elif defined(__GNUC__) || defined(__GNUG__) # define COMPILER_ID "GNU" # if defined(__GNUC__) # define COMPILER_VERSION_MAJOR DEC(__GNUC__) # else # define COMPILER_VERSION_MAJOR DEC(__GNUG__) # endif # if defined(__GNUC_MINOR__) # define COMPILER_VERSION_MINOR DEC(__GNUC_MINOR__) # endif # if defined(__GNUC_PATCHLEVEL__) # define COMPILER_VERSION_PATCH DEC(__GNUC_PATCHLEVEL__) # endif #elif defined(_MSC_VER) # define COMPILER_ID "MSVC" /* _MSC_VER = VVRR */ # define COMPILER_VERSION_MAJOR DEC(_MSC_VER / 100) # define COMPILER_VERSION_MINOR DEC(_MSC_VER % 100) # if defined(_MSC_FULL_VER) # if _MSC_VER >= 1400 /* _MSC_FULL_VER = VVRRPPPPP */ # define COMPILER_VERSION_PATCH DEC(_MSC_FULL_VER % 100000) # else /* _MSC_FULL_VER = VVRRPPPP */ # define COMPILER_VERSION_PATCH DEC(_MSC_FULL_VER % 10000) # endif # endif # if defined(_MSC_BUILD) # define COMPILER_VERSION_TWEAK DEC(_MSC_BUILD) # endif #elif defined(__VISUALDSPVERSION__) || defined(__ADSPBLACKFIN__) || defined(__ADSPTS__) || defined(__ADSP21000__) # define COMPILER_ID "ADSP" #if defined(__VISUALDSPVERSION__) /* __VISUALDSPVERSION__ = 0xVVRRPP00 */ # define COMPILER_VERSION_MAJOR HEX(__VISUALDSPVERSION__>>24) # define COMPILER_VERSION_MINOR HEX(__VISUALDSPVERSION__>>16 & 0xFF) # define COMPILER_VERSION_PATCH HEX(__VISUALDSPVERSION__>>8 & 0xFF) #endif #elif defined(__IAR_SYSTEMS_ICC__) || defined(__IAR_SYSTEMS_ICC) # define COMPILER_ID "IAR" # if defined(__VER__) && defined(__ICCARM__) # define COMPILER_VERSION_MAJOR DEC((__VER__) / 1000000) # define COMPILER_VERSION_MINOR DEC(((__VER__) / 1000) % 1000) # define COMPILER_VERSION_PATCH DEC((__VER__) % 1000) # define COMPILER_VERSION_INTERNAL DEC(__IAR_SYSTEMS_ICC__) # elif defined(__VER__) && (defined(__ICCAVR__) || defined(__ICCRX__) || defined(__ICCRH850__) || defined(__ICCRL78__) || defined(__ICC430__) || defined(__ICCRISCV__) || defined(__ICCV850__) || defined(__ICC8051__) || defined(__ICCSTM8__)) # define COMPILER_VERSION_MAJOR DEC((__VER__) / 100) # define COMPILER_VERSION_MINOR DEC((__VER__) - (((__VER__) / 100)*100)) # define COMPILER_VERSION_PATCH DEC(__SUBVERSION__) # define COMPILER_VERSION_INTERNAL 
DEC(__IAR_SYSTEMS_ICC__) # endif /* These compilers are either not known or too old to define an identification macro. Try to identify the platform and guess that it is the native compiler. */ #elif defined(__hpux) || defined(__hpua) # define COMPILER_ID "HP" #else /* unknown compiler */ # define COMPILER_ID "" #endif /* Construct the string literal in pieces to prevent the source from getting matched. Store it in a pointer rather than an array because some compilers will just produce instructions to fill the array rather than assigning a pointer to a static array. */ char const* info_compiler = "INFO" ":" "compiler[" COMPILER_ID "]"; #ifdef SIMULATE_ID char const* info_simulate = "INFO" ":" "simulate[" SIMULATE_ID "]"; #endif #ifdef __QNXNTO__ char const* qnxnto = "INFO" ":" "qnxnto[]"; #endif #if defined(__CRAYXT_COMPUTE_LINUX_TARGET) char const *info_cray = "INFO" ":" "compiler_wrapper[CrayPrgEnv]"; #endif #define STRINGIFY_HELPER(X) #X #define STRINGIFY(X) STRINGIFY_HELPER(X) /* Identify known platforms by name. */ #if defined(__linux) || defined(__linux__) || defined(linux) # define PLATFORM_ID "Linux" #elif defined(__MSYS__) # define PLATFORM_ID "MSYS" #elif defined(__CYGWIN__) # define PLATFORM_ID "Cygwin" #elif defined(__MINGW32__) # define PLATFORM_ID "MinGW" #elif defined(__APPLE__) # define PLATFORM_ID "Darwin" #elif defined(_WIN32) || defined(__WIN32__) || defined(WIN32) # define PLATFORM_ID "Windows" #elif defined(__FreeBSD__) || defined(__FreeBSD) # define PLATFORM_ID "FreeBSD" #elif defined(__NetBSD__) || defined(__NetBSD) # define PLATFORM_ID "NetBSD" #elif defined(__OpenBSD__) || defined(__OPENBSD) # define PLATFORM_ID "OpenBSD" #elif defined(__sun) || defined(sun) # define PLATFORM_ID "SunOS" #elif defined(_AIX) || defined(__AIX) || defined(__AIX__) || defined(__aix) || defined(__aix__) # define PLATFORM_ID "AIX" #elif defined(__hpux) || defined(__hpux__) # define PLATFORM_ID "HP-UX" #elif defined(__HAIKU__) # define PLATFORM_ID "Haiku" #elif defined(__BeOS) || defined(__BEOS__) || defined(_BEOS) # define PLATFORM_ID "BeOS" #elif defined(__QNX__) || defined(__QNXNTO__) # define PLATFORM_ID "QNX" #elif defined(__tru64) || defined(_tru64) || defined(__TRU64__) # define PLATFORM_ID "Tru64" #elif defined(__riscos) || defined(__riscos__) # define PLATFORM_ID "RISCos" #elif defined(__sinix) || defined(__sinix__) || defined(__SINIX__) # define PLATFORM_ID "SINIX" #elif defined(__UNIX_SV__) # define PLATFORM_ID "UNIX_SV" #elif defined(__bsdos__) # define PLATFORM_ID "BSDOS" #elif defined(_MPRAS) || defined(MPRAS) # define PLATFORM_ID "MP-RAS" #elif defined(__osf) || defined(__osf__) # define PLATFORM_ID "OSF1" #elif defined(_SCO_SV) || defined(SCO_SV) || defined(sco_sv) # define PLATFORM_ID "SCO_SV" #elif defined(__ultrix) || defined(__ultrix__) || defined(_ULTRIX) # define PLATFORM_ID "ULTRIX" #elif defined(__XENIX__) || defined(_XENIX) || defined(XENIX) # define PLATFORM_ID "Xenix" #elif defined(__WATCOMC__) # if defined(__LINUX__) # define PLATFORM_ID "Linux" # elif defined(__DOS__) # define PLATFORM_ID "DOS" # elif defined(__OS2__) # define PLATFORM_ID "OS2" # elif defined(__WINDOWS__) # define PLATFORM_ID "Windows3x" # elif defined(__VXWORKS__) # define PLATFORM_ID "VxWorks" # else /* unknown platform */ # define PLATFORM_ID # endif #elif defined(__INTEGRITY) # if defined(INT_178B) # define PLATFORM_ID "Integrity178" # else /* regular Integrity */ # define PLATFORM_ID "Integrity" # endif #else /* unknown platform */ # define PLATFORM_ID #endif /* For windows compilers MSVC 
and Intel we can determine the architecture of the compiler being used. This is because the compilers do not have flags that can change the architecture, but rather depend on which compiler is being used */ #if defined(_WIN32) && defined(_MSC_VER) # if defined(_M_IA64) # define ARCHITECTURE_ID "IA64" # elif defined(_M_ARM64EC) # define ARCHITECTURE_ID "ARM64EC" # elif defined(_M_X64) || defined(_M_AMD64) # define ARCHITECTURE_ID "x64" # elif defined(_M_IX86) # define ARCHITECTURE_ID "X86" # elif defined(_M_ARM64) # define ARCHITECTURE_ID "ARM64" # elif defined(_M_ARM) # if _M_ARM == 4 # define ARCHITECTURE_ID "ARMV4I" # elif _M_ARM == 5 # define ARCHITECTURE_ID "ARMV5I" # else # define ARCHITECTURE_ID "ARMV" STRINGIFY(_M_ARM) # endif # elif defined(_M_MIPS) # define ARCHITECTURE_ID "MIPS" # elif defined(_M_SH) # define ARCHITECTURE_ID "SHx" # else /* unknown architecture */ # define ARCHITECTURE_ID "" # endif #elif defined(__WATCOMC__) # if defined(_M_I86) # define ARCHITECTURE_ID "I86" # elif defined(_M_IX86) # define ARCHITECTURE_ID "X86" # else /* unknown architecture */ # define ARCHITECTURE_ID "" # endif #elif defined(__IAR_SYSTEMS_ICC__) || defined(__IAR_SYSTEMS_ICC) # if defined(__ICCARM__) # define ARCHITECTURE_ID "ARM" # elif defined(__ICCRX__) # define ARCHITECTURE_ID "RX" # elif defined(__ICCRH850__) # define ARCHITECTURE_ID "RH850" # elif defined(__ICCRL78__) # define ARCHITECTURE_ID "RL78" # elif defined(__ICCRISCV__) # define ARCHITECTURE_ID "RISCV" # elif defined(__ICCAVR__) # define ARCHITECTURE_ID "AVR" # elif defined(__ICC430__) # define ARCHITECTURE_ID "MSP430" # elif defined(__ICCV850__) # define ARCHITECTURE_ID "V850" # elif defined(__ICC8051__) # define ARCHITECTURE_ID "8051" # elif defined(__ICCSTM8__) # define ARCHITECTURE_ID "STM8" # else /* unknown architecture */ # define ARCHITECTURE_ID "" # endif #elif defined(__ghs__) # if defined(__PPC64__) # define ARCHITECTURE_ID "PPC64" # elif defined(__ppc__) # define ARCHITECTURE_ID "PPC" # elif defined(__ARM__) # define ARCHITECTURE_ID "ARM" # elif defined(__x86_64__) # define ARCHITECTURE_ID "x64" # elif defined(__i386__) # define ARCHITECTURE_ID "X86" # else /* unknown architecture */ # define ARCHITECTURE_ID "" # endif #elif defined(__TI_COMPILER_VERSION__) # if defined(__TI_ARM__) # define ARCHITECTURE_ID "ARM" # elif defined(__MSP430__) # define ARCHITECTURE_ID "MSP430" # elif defined(__TMS320C28XX__) # define ARCHITECTURE_ID "TMS320C28x" # elif defined(__TMS320C6X__) || defined(_TMS320C6X) # define ARCHITECTURE_ID "TMS320C6x" # else /* unknown architecture */ # define ARCHITECTURE_ID "" # endif #else # define ARCHITECTURE_ID #endif /* Convert integer to decimal digit literals. */ #define DEC(n) \ ('0' + (((n) / 10000000)%10)), \ ('0' + (((n) / 1000000)%10)), \ ('0' + (((n) / 100000)%10)), \ ('0' + (((n) / 10000)%10)), \ ('0' + (((n) / 1000)%10)), \ ('0' + (((n) / 100)%10)), \ ('0' + (((n) / 10)%10)), \ ('0' + ((n) % 10)) /* Convert integer to hex digit literals. */ #define HEX(n) \ ('0' + ((n)>>28 & 0xF)), \ ('0' + ((n)>>24 & 0xF)), \ ('0' + ((n)>>20 & 0xF)), \ ('0' + ((n)>>16 & 0xF)), \ ('0' + ((n)>>12 & 0xF)), \ ('0' + ((n)>>8 & 0xF)), \ ('0' + ((n)>>4 & 0xF)), \ ('0' + ((n) & 0xF)) /* Construct a string literal encoding the version number. */ #ifdef COMPILER_VERSION char const* info_version = "INFO" ":" "compiler_version[" COMPILER_VERSION "]"; /* Construct a string literal encoding the version number components. 
*/ #elif defined(COMPILER_VERSION_MAJOR) char const info_version[] = { 'I', 'N', 'F', 'O', ':', 'c','o','m','p','i','l','e','r','_','v','e','r','s','i','o','n','[', COMPILER_VERSION_MAJOR, # ifdef COMPILER_VERSION_MINOR '.', COMPILER_VERSION_MINOR, # ifdef COMPILER_VERSION_PATCH '.', COMPILER_VERSION_PATCH, # ifdef COMPILER_VERSION_TWEAK '.', COMPILER_VERSION_TWEAK, # endif # endif # endif ']','\0'}; #endif /* Construct a string literal encoding the internal version number. */ #ifdef COMPILER_VERSION_INTERNAL char const info_version_internal[] = { 'I', 'N', 'F', 'O', ':', 'c','o','m','p','i','l','e','r','_','v','e','r','s','i','o','n','_', 'i','n','t','e','r','n','a','l','[', COMPILER_VERSION_INTERNAL,']','\0'}; #elif defined(COMPILER_VERSION_INTERNAL_STR) char const* info_version_internal = "INFO" ":" "compiler_version_internal[" COMPILER_VERSION_INTERNAL_STR "]"; #endif /* Construct a string literal encoding the version number components. */ #ifdef SIMULATE_VERSION_MAJOR char const info_simulate_version[] = { 'I', 'N', 'F', 'O', ':', 's','i','m','u','l','a','t','e','_','v','e','r','s','i','o','n','[', SIMULATE_VERSION_MAJOR, # ifdef SIMULATE_VERSION_MINOR '.', SIMULATE_VERSION_MINOR, # ifdef SIMULATE_VERSION_PATCH '.', SIMULATE_VERSION_PATCH, # ifdef SIMULATE_VERSION_TWEAK '.', SIMULATE_VERSION_TWEAK, # endif # endif # endif ']','\0'}; #endif /* Construct the string literal in pieces to prevent the source from getting matched. Store it in a pointer rather than an array because some compilers will just produce instructions to fill the array rather than assigning a pointer to a static array. */ char const* info_platform = "INFO" ":" "platform[" PLATFORM_ID "]"; char const* info_arch = "INFO" ":" "arch[" ARCHITECTURE_ID "]"; #if defined(__INTEL_COMPILER) && defined(_MSVC_LANG) && _MSVC_LANG < 201403L # if defined(__INTEL_CXX11_MODE__) # if defined(__cpp_aggregate_nsdmi) # define CXX_STD 201402L # else # define CXX_STD 201103L # endif # else # define CXX_STD 199711L # endif #elif defined(_MSC_VER) && defined(_MSVC_LANG) # define CXX_STD _MSVC_LANG #else # define CXX_STD __cplusplus #endif const char* info_language_standard_default = "INFO" ":" "standard_default[" #if CXX_STD > 202002L "23" #elif CXX_STD > 201703L "20" #elif CXX_STD >= 201703L "17" #elif CXX_STD >= 201402L "14" #elif CXX_STD >= 201103L "11" #else "98" #endif "]"; const char* info_language_extensions_default = "INFO" ":" "extensions_default[" /* !defined(_MSC_VER) to exclude Clang's MSVC compatibility mode. */ #if (defined(__clang__) || defined(__GNUC__) || \ defined(__TI_COMPILER_VERSION__)) && \ !defined(__STRICT_ANSI__) && !defined(_MSC_VER) "ON" #else "OFF" #endif "]"; /*--------------------------------------------------------------------------*/ int main(int argc, char* argv[]) { int require = 0; require += info_compiler[argc]; require += info_platform[argc]; #ifdef COMPILER_VERSION_MAJOR require += info_version[argc]; #endif #ifdef COMPILER_VERSION_INTERNAL require += info_version_internal[argc]; #endif #ifdef SIMULATE_ID require += info_simulate[argc]; #endif #ifdef SIMULATE_VERSION_MAJOR require += info_simulate_version[argc]; #endif #if defined(__CRAYXT_COMPUTE_LINUX_TARGET) require += info_cray[argc]; #endif require += info_language_standard_default[argc]; require += info_language_extensions_default[argc]; (void)argv; return require; }
github_cpp
2025-12-04T04:05:59Z
https://github.com/omarfaysal1111/smart_focus_image/blob/89642c3ebd3cd2aa0073e0cdd0707e63defa5a73/example/android/app/.cxx/Debug/2t525o26/armeabi-v7a/CMakeFiles/3.22.1-g37088a8/CompilerIdCXX/CMakeCXXCompilerId.cpp
{}
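The CompilerId source above leans on one compact trick: each version macro is pushed through DEC (or HEX) so that it expands into a fixed number of character literals, and the whole "INFO:compiler_version[...]" marker therefore ends up as plain bytes inside the compiled object file, where CMake can recover it by scanning the binary instead of running it. Below is a minimal sketch of the same idea outside CMake; the FAKE_VERSION_* macros and the shortened "INFO:version[" prefix are illustrative stand-ins, not names taken from the file above.

#include <cstdio>

// Expand an integer into eight zero-padded decimal digit character literals,
// mirroring the DEC macro used by the CompilerId source.
#define DEC(n)                     \
  ('0' + (((n) / 10000000) % 10)), \
  ('0' + (((n) / 1000000) % 10)),  \
  ('0' + (((n) / 100000) % 10)),   \
  ('0' + (((n) / 10000) % 10)),    \
  ('0' + (((n) / 1000) % 10)),     \
  ('0' + (((n) / 100) % 10)),      \
  ('0' + (((n) / 10) % 10)),       \
  ('0' + ((n) % 10))

// Illustrative values standing in for compiler-provided macros such as
// __GNUC__ / __GNUC_MINOR__ or the _MSC_VER components.
#define FAKE_VERSION_MAJOR 13
#define FAKE_VERSION_MINOR 2

// Because the array is built from character literals, the digits are stored
// directly in the object file's data and can be found with a string scan.
static const char info_version[] = {
    'I', 'N', 'F', 'O', ':', 'v', 'e', 'r', 's', 'i', 'o', 'n', '[',
    DEC(FAKE_VERSION_MAJOR), '.', DEC(FAKE_VERSION_MINOR),
    ']', '\0'};

int main() {
  std::puts(info_version); // prints INFO:version[00000013.00000002]
  return 0;
}

Building the marker out of individual literals rather than one big string literal is also why the real file assembles it "in pieces": the comment in the source notes this prevents the marker text from appearing verbatim in the source file and being matched there instead of in the compiled object.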
/*This file is part of FFB Arcade Plugin. FFB Arcade Plugin is free software : you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. FFB Arcade Plugin is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with FFB Arcade Plugin.If not, see < https://www.gnu.org/licenses/>. */ #include <string> #include "LGI3D.h" #include "SDL.h" #include <Windows.h> extern int joystick_index1; extern int joystick_index2; extern SDL_Haptic* haptic2; extern SDL_Joystick* GameController2; extern SDL_Haptic* ControllerHaptic2; static bool init = false; void LGI3D::FFBLoop(EffectConstants *constants, Helpers *helpers, EffectTriggers* triggers) { int ff = helpers->ReadIntPtr(0x0065DA20, /* isRelativeOffset */ true); UINT8 ff1 = helpers->ReadByte(ff + 0x44, /* isRelativeOffset */ false); INT_PTR health1p1 = helpers->ReadIntPtr(0x008429F4, /* isRelativeOffset*/ true); INT_PTR health1p2 = helpers->ReadIntPtr(health1p1 + 0x4, /* isRelativeOffset */ false); INT_PTR health1p3 = helpers->ReadIntPtr(health1p2 + 0x74, /* isRelativeOffset */ false); INT_PTR health2p3 = helpers->ReadIntPtr(health1p2 + 0x78, /* isRelativeOffset */ false); float health1p = helpers->ReadFloat32(health1p3 + 0x14, /* isRelativeOffset */ false); //1p health float health2p = helpers->ReadFloat32(health2p3 + 0x14, /* isRelativeOffset */ false); //2p health helpers->log("got value: "); std::string ffs = std::to_string(ff1); helpers->log((char *)ffs.c_str()); float static oldFloat1 = 0.0; float static oldFloat2 = 0.0; float newFloat1 = health1p; float newFloat2 = health2p; wchar_t *settingsFilename = TEXT(".\\FFBPlugin.ini"); int configFeedbackLength = GetPrivateProfileInt(TEXT("Settings"), TEXT("FeedbackLength"), 120, settingsFilename); int HowtoRumbleKnockEffect = GetPrivateProfileInt(TEXT("Settings"), TEXT("HowtoRumbleKnockEffect"), 0, settingsFilename); int HowtoRumbleMotorEffect = GetPrivateProfileInt(TEXT("Settings"), TEXT("HowtoRumbleMotorEffect"), 0, settingsFilename); int HowtoRumbleHealthEffect = GetPrivateProfileInt(TEXT("Settings"), TEXT("HowtoRumbleHealthEffect"), 0, settingsFilename); int Knock1pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Knock1pStrength"), 0, settingsFilename); int Motor1pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Motor1pStrength"), 0, settingsFilename); int Health1pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Health1pStrength"), 0, settingsFilename); int Knock2pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Knock2pStrength"), 0, settingsFilename); int Motor2pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Motor2pStrength"), 0, settingsFilename); int Health2pStrength = GetPrivateProfileInt(TEXT("Settings"), TEXT("Health2pStrength"), 0, settingsFilename); if (!init) { for (int i = 0; i < SDL_NumJoysticks(); i++) { wchar_t* deviceGUIDString2 = new wchar_t[256]; int Device2GUID = GetPrivateProfileString(TEXT("Settings"), TEXT("Device2GUID"), NULL, deviceGUIDString2, 256, settingsFilename); char joystick_guid[256]; sprintf(joystick_guid, "%S", deviceGUIDString2); SDL_JoystickGUID guid, dev_guid; int numJoysticks = SDL_NumJoysticks(); std::string njs = std::to_string(numJoysticks); 
helpers->log((char *)njs.c_str()); for (int i = 0; i < SDL_NumJoysticks(); i++) { extern int joystick1Index; if (i == joystick1Index) { continue; } SDL_Joystick* js2 = SDL_JoystickOpen(i); SDL_JoystickGUID guid = SDL_JoystickGetGUID(js2); char guid_str[1024]; SDL_JoystickGetGUIDString(guid, guid_str, sizeof(guid_str)); const char* name = SDL_JoystickName(js2); char text[256]; sprintf(text, "Joystick: %d / Name: %s / GUID: %s\n", i, name, guid_str); guid = SDL_JoystickGetGUIDFromString(joystick_guid); dev_guid = SDL_JoystickGetGUID(js2); if (!memcmp(&guid, &dev_guid, sizeof(SDL_JoystickGUID))) { GameController2 = SDL_JoystickOpen(i); joystick_index2 = SDL_JoystickInstanceID(GameController2); ControllerHaptic2 = SDL_HapticOpenFromJoystick(GameController2); break; } SDL_JoystickClose(js2); } haptic2 = ControllerHaptic2; if ((SDL_HapticRumbleSupported(haptic2) == SDL_TRUE)) { SDL_HapticRumbleInit(ControllerHaptic2); } } init = true; } if ((oldFloat1 != newFloat1) && (health1p != 0x01)) { if (HowtoRumbleHealthEffect == 0) { double percentForce = ((Health1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, percentForce, percentLength); } else if (HowtoRumbleHealthEffect == 1) { double percentForce = ((Health1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(0, percentForce, percentLength); } else if (HowtoRumbleHealthEffect == 2) { double percentForce = ((Health1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, 0, percentLength); } } if ((oldFloat2 != newFloat2) && (health2p != 0x01)) { if (HowtoRumbleHealthEffect == 0) { double percentForce = ((Health2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(percentForce, percentForce, percentLength); } else if (HowtoRumbleHealthEffect == 1) { double percentForce = ((Health2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(0, percentForce, percentLength); } else if (HowtoRumbleHealthEffect == 2) { double percentForce = ((Health2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(percentForce, 0, percentLength); } } if (ff1 & 0x20) { if (HowtoRumbleKnockEffect == 0) { double percentForce = ((Knock1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, percentForce, percentLength); } else if (HowtoRumbleKnockEffect == 1) { double percentForce = ((Knock1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(0, percentForce, percentLength); } else if (HowtoRumbleKnockEffect == 2) { double percentForce = ((Knock1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, 0, percentLength); } } if (ff1 & 0x40) { if (HowtoRumbleMotorEffect == 0) { double percentForce = ((Motor1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, percentForce, percentLength); } else if (HowtoRumbleMotorEffect == 1) { double percentForce = ((Motor1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(0, percentForce, percentLength); } else if (HowtoRumbleMotorEffect == 2) { double percentForce = ((Motor1pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->Rumble(percentForce, 0, percentLength); } } if (ff1 & 0x04) { if (HowtoRumbleKnockEffect == 0) { double percentForce = ((Knock2pStrength) / 100.0); double percentLength = 
configFeedbackLength; triggers->RumbleDevice2(percentForce, percentForce, percentLength); } else if (HowtoRumbleKnockEffect == 1) { double percentForce = ((Knock2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(0, percentForce, percentLength); } else if (HowtoRumbleKnockEffect == 2) { double percentForce = ((Knock2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(percentForce, 0, percentLength); } } if (ff1 & 0x08) { if (HowtoRumbleMotorEffect == 0) { double percentForce = ((Motor2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(percentForce, percentForce, percentLength); } else if (HowtoRumbleMotorEffect == 1) { double percentForce = ((Motor2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(0, percentForce, percentLength); } else if (HowtoRumbleMotorEffect == 2) { double percentForce = ((Motor2pStrength) / 100.0); double percentLength = configFeedbackLength; triggers->RumbleDevice2(percentForce, 0, percentLength); } } oldFloat1 = newFloat1; oldFloat2 = newFloat2; }
github_cpp
2025-12-14T18:42:17Z
https://github.com/Endprodukt/FFBPluginRacerMAME/blob/6a1921acbf2200f760cace7718e5a040eec25fe0/Game Files/LGI3D.cpp
{}
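The plugin's init block above repeats a device-selection pattern common to these FFB game files: read a GUID string from FFBPlugin.ini, walk every joystick SDL reports, compare GUIDs with memcmp, and open a haptic handle for the first match. A self-contained sketch of that pattern using plain SDL2 calls follows; kConfiguredGuid is a hypothetical placeholder for the Device2GUID value the plugin reads from the INI file, and the rumble test at the end exists only to show the handle works.

#include <SDL.h>
#include <cstdio>
#include <cstring>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_JOYSTICK | SDL_INIT_HAPTIC) != 0) {
        std::printf("SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    // Placeholder GUID string; in the plugin this comes from FFBPlugin.ini.
    const char* kConfiguredGuid = "030000005e0400008e02000000007200";
    SDL_JoystickGUID wanted = SDL_JoystickGetGUIDFromString(kConfiguredGuid);

    SDL_Joystick* match = nullptr;
    SDL_Haptic* haptic = nullptr;

    for (int i = 0; i < SDL_NumJoysticks(); ++i) {
        SDL_Joystick* js = SDL_JoystickOpen(i);
        if (!js) continue;

        SDL_JoystickGUID guid = SDL_JoystickGetGUID(js);
        char guid_str[64];
        SDL_JoystickGetGUIDString(guid, guid_str, sizeof(guid_str));
        const char* name = SDL_JoystickName(js);
        std::printf("Joystick %d: %s / %s\n", i, name ? name : "unknown", guid_str);

        if (std::memcmp(&wanted, &guid, sizeof(SDL_JoystickGUID)) == 0) {
            match = js;                              // keep the matching device open
            haptic = SDL_HapticOpenFromJoystick(js); // may be NULL if it has no FFB
            break;
        }
        SDL_JoystickClose(js); // not the configured device, release it
    }

    if (haptic && SDL_HapticRumbleSupported(haptic) == SDL_TRUE &&
        SDL_HapticRumbleInit(haptic) == 0) {
        SDL_HapticRumblePlay(haptic, 0.5f, 200); // 50% strength for 200 ms
        SDL_Delay(250);
    }

    if (haptic) SDL_HapticClose(haptic);
    if (match) SDL_JoystickClose(match);
    SDL_Quit();
    return 0;
}

Closing every non-matching joystick inside the loop avoids the handle leak that is easy to introduce when only the matched device is tracked.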
#include <iostream> #include <fstream> #include <cmath> #include<unistd.h> #include <SFML/Graphics.hpp> #include <SFML/Audio.hpp> #include <SFML/Window.hpp> #include <cstdlib> using namespace sf; using namespace std; int screen_x = 1136; int screen_y = 896; void display_level(RenderWindow& window, char**lvl, Texture& bgTex,Sprite& bgSprite,Texture& blockTexture,Sprite& blockSprite,Texture& blockLTexture,Sprite& blockLSprite,Texture& blockRTexture,Sprite& blockRSprite, const int height, const int width, const int cell_size) { window.draw(bgSprite); for (int i = 0; i < height; i += 1) { for (int j = 0; j < width; j += 1) { if (lvl[i][j] == '#') { blockSprite.setPosition(j * cell_size, i * cell_size); window.draw(blockSprite); } if(lvl[i][j] == 'L'){ blockLSprite.setPosition(j * cell_size, i * cell_size); window.draw(blockLSprite); } if(lvl[i][j] == 'R'){ blockRSprite.setPosition(j * cell_size, i * cell_size); window.draw(blockRSprite); } } } } void platform_collision_y(char** lvl,float& offset_x,float& speed_x,float &postion_x,float& position_y,const int cell_size,int& height,int &width){ if (speed_x == 0) return; offset_x = postion_x; offset_x += speed_x; char left_up = lvl[(int)(position_y + 10) / cell_size][(int)(offset_x) / cell_size]; char left_mid = lvl[(int)(position_y + height/2) / cell_size][(int)(offset_x) / cell_size]; char left_down = lvl[(int)(position_y + height - 10) / cell_size][(int)(offset_x) / cell_size]; char right_up = lvl[(int)(position_y + 10) / cell_size][(int)(offset_x + width) / cell_size]; char right_mid = lvl[(int)(position_y + height /2) / cell_size][(int)(offset_x + width) / cell_size]; char right_down = lvl[(int)(position_y + height - 10) / cell_size][(int)(offset_x + width) / cell_size]; if (speed_x > 0){ if(right_up == '#' || right_mid == '#' || right_down == '#') speed_x = 0; } else if(speed_x < 0){ if ( left_down == '#' || left_up == '#' || left_mid == '#'){ speed_x = 0; } } } void player_gravity(char** lvl, float& offset_y, float& velocityY, bool& onGround, const float& gravity, float& terminal_Velocity, float& player_x, float& player_y, const int cell_size, int& Pheight, int& Pwidth,bool isfacingleft) { offset_y = player_y; offset_y += velocityY; int x = 0; if(!isfacingleft){//if player is facing right then it means the hitbox is diff from visual so we manually adjust the collision x = 100; } onGround = false; char bottom_left_down = lvl[(int)(offset_y + Pheight) / cell_size][(int)(player_x-x) / cell_size]; char bottom_right_down = lvl[(int)(offset_y + Pheight) / cell_size][(int)(player_x-x + Pwidth) / cell_size]; char bottom_mid_down = lvl[(int)(offset_y + Pheight) / cell_size][(int)(player_x -x+ Pwidth / 2) / cell_size]; bool touching_flat = (bottom_left_down == '#' || bottom_right_down == '#' || bottom_mid_down == '#'); // 2. 
Check for SLOPED Platform collision (L or R) bool touching_L = (bottom_left_down == 'L' || bottom_right_down == 'L' || bottom_mid_down == 'L'); bool touching_R = (bottom_left_down == 'R' || bottom_right_down == 'R' || bottom_mid_down == 'R'); if (velocityY >= 0)//if going down check for collision and above platform { // Calculate the grid row where the feet are currently(taken from above) int feet_row = (int)(offset_y + Pheight) / cell_size; float platform_top = feet_row * cell_size;//same as feet_row because feet is on platform float old_feet_y = player_y + Pheight; if (old_feet_y <= platform_top + 5) //if sprite was above the platform ( the lower we go the y coordinate increases) { if(touching_flat){ player_y = platform_top - Pheight;//teleports to top of platform to avoid sticking in middle {player_y is the top left corner of sprite hitbox so subtract // P_height to appear at top, in y axis subtract means to go up} velocityY = 0; onGround = true; return; // Exit function so it doesnot go down } else if (touching_L || touching_R) { float slide_speed = 5.0f; if (touching_R) { player_x += slide_speed; onGround = true; } else if (touching_L) { player_x -= slide_speed; onGround = true; } return; } } } //if going up ignore collision player_y = offset_y; velocityY += gravity; if (velocityY >= terminal_Velocity) velocityY = terminal_Velocity; } bool checkcollision(float x1, float y1, float w1, float h1, float x2,float y2, float w2, float h2,float speed1,float speed2){ //formula to check collision between two rectangles by checking 9 points of first rectangle if they lie within second rectangle //adjust the hitbox if (speed1 > 0){ x1 -= w1; } if (speed2 > 0){ x2 -= w2; } y2 += 40;//shifting the position of ghost a bit down to avoid unnecessary collision while jumping //top left corner if(x1 >= x2 && x1<= x2+w2 && y1 >= y2 && y1 <= y2+h2){ return true; } //top middle point if (x1 + w1/2 >= x2 && x1+w1/2 <= x2+w2 && y1 >= y2&& y1 <= y2+h2){ return true; } //top right corner if(x1 + w1 >= x2 && x1+w1 <= x2+w2 && y1 >= y2 && y1 <= y2+h2){ return true; } //left middle point if(x1>=x2 && x1 <= x2 + w2 && y1+h1/2 >= y2 && y1+h1/2 <= y2+h2){ return true; } //right middle point if(x1 + w1 >= x2 && x1+w1 <= x2+w2 && y1+h1/2 >= y2 && y1+h1/2 <= y2+h2){ return true; } //bottom left corner if(x1 >= x2 && x1<= x2+w2 && y1+h1 >= y2 && y1+h1 <= y2+h2){ return true; } //bottom middle poiint if (x1 + w1/2 >= x2 && x1+w1/2 <= x2+w2 && y1+h1 >= y2&& y1+h1 <= y2+h2){ return true; } //bottom right corner if(x1 + w1 >= x2 && x1+w1 <= x2+w2 && y1+h1 >= y2 && y1+h1 <= y2+h2){ return true; } return false; } void jump(char** lvl, float& offset_y, float& velocityY, bool& onGround, const float& gravity, float& terminal_Velocity, float& player_x, float& player_y, const int cell_size, int& Pheight, int& Pwidth) { velocityY -= 22*gravity; } //sucking mechanism //when an enemy comes in the range of the vacuum then it will start being sucked and when it touches the player then it will disappear and go into the bag void suck(float speed,float& enemy_x,float &enemy_y,int enemy_w,int enemy_h, float e_speed,int player_x,int player_y,int pwidth, int pheight,Sprite& enemysprite,bool &isenemyalive){ if(Keyboard::isKeyPressed(Keyboard::A)){ if (checkcollision(player_x-15,player_y,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) isenemyalive = false; enemy_x += 10; } else if (Keyboard::isKeyPressed(Keyboard::D)){ if (checkcollision(player_x+15,player_y,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) 
isenemyalive = false; enemy_x -= 10; } else if ((speed > 0) && !(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::S))){ if (checkcollision(player_x+15,player_y,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) isenemyalive = false; enemy_x -= 10; } else if (speed < 0 && !(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::S))){ if (checkcollision(player_x-15,player_y,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) isenemyalive = false; enemy_x += 10; } if (Keyboard::isKeyPressed(Keyboard::W)){ if (checkcollision(player_x,player_y -15,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) isenemyalive = false; enemy_y += 10; } if (Keyboard::isKeyPressed(Keyboard::S)){//+10 is because the vaccum is below the player so we adjust the hitbox accordingly if (checkcollision(player_x,player_y +15,pwidth,pheight,enemy_x,enemy_y,enemy_w,enemy_h,speed,e_speed)) isenemyalive = false; enemy_y -= 10; } } void moveright(float &player_x,float& speed,Sprite& playerSprite,int& frame,int& timer,bool Greenplayer){ if (Greenplayer) speed = 5; else if (!Greenplayer) speed = 5*1.2; if(speed < 0){ speed *= -1; } player_x += speed; if (player_x == 1150){ player_x = 0; } timer++; if (timer > 8){ if (Greenplayer) playerSprite.setTextureRect(IntRect(317-(32*frame),36,32,45)); if(!Greenplayer) playerSprite.setTextureRect(IntRect(317-(32*frame),224,32,45)); if (frame > 2) frame = 0; frame++; timer=0; } } void moveleft(float &player_x,float& speed,Sprite& playerSprite,int& frame, int& timer,bool Greenplayer){ if (Greenplayer) speed = 5; else if (!Greenplayer) speed = 5*1.2; if (speed > 0){ speed *= -1; } player_x += speed; if (player_x == 0){ player_x = 1150; } timer++; playerSprite.setScale(3,3); if (timer > 8){ if (Greenplayer) playerSprite.setTextureRect(IntRect(317-(32*frame),36,32,45)); if(!Greenplayer) playerSprite.setTextureRect(IntRect(317-(32*frame),224,32,45)); if (frame > 2) frame = 0; frame++; timer=0; } } void ghosts(float ghost_x[],int ghost_speed[],int n,Sprite ghostsprite[],bool isfacingleft[],int ghost_state[],int ghost_timer[]){ //n are the no of ghost. for (int i = 0 ; i < n; i++){ ghost_timer[i]--; if(ghost_timer[i]<=0){ if(ghost_state[i]==1){//is moving ghost_state[i]=0; ghost_timer[i] = rand()%60+30;//wait between 0.5 and 1.5 seconds } else{ ghost_state[i]=1; ghost_timer[i]=rand()%180+300;//walk btween 3 to 5 if( rand()%10 + 2){ //generates a random no beteen 2 and 10 and 50/50 chance of odd/even ghost_speed[i] *= -1; } } } if (ghost_state[i]==1){ if (ghost_x[i] >= 1130){ ghost_speed[i] *= -1; } if (ghost_x[i] <= 0){ ghost_speed[i] *= -1; } if (ghost_speed[i]< 0){ if (!isfacingleft[i] ){ ghost_x[i] -= 96; isfacingleft[i] = true; } ghostsprite[i].setScale(3,3); } if (ghost_speed[i]> 0){ if (isfacingleft[i] == true ){ ghost_x[i] += 96; isfacingleft[i] = false; } ghostsprite[i].setScale(-3,3); } ghost_x[i] += ghost_speed[i];} } } //function to handle skeleton movement and behavior void skeletons(float skeleton_x[],float skeleton_y[],int skeleton_speed[],int n,Sprite skeletonSprite[],bool isfacingleft[],int skeleton_state[],int skeleton_timer[],int cell_size){ //n are the no of skeletons. 
//for level one float upperPlatform = 3*cell_size - 225; float middlePlatform = 9*cell_size - 225; float floor = 13*cell_size -225; float platform_left = 3*cell_size; float platform_right = 15*cell_size; for (int i = 0 ; i < n; i++){ skeleton_timer[i]--; if(skeleton_timer[i]<=0){ if((rand()%10)<3 && skeleton_state[i] == 1){//chance is 30% if(skeleton_y[i] == floor){ skeleton_y[i] = middlePlatform; if (skeleton_x[i] < platform_left) skeleton_x[i] = platform_left; if ( skeleton_x[i] > platform_right) skeleton_x[i] = platform_right; } if(skeleton_y[i] == middlePlatform){ if(rand()%2==0)// 50 % chance of up or down skeleton_y[i] = upperPlatform; else skeleton_y[i] = floor; if (skeleton_x[i] < platform_left) skeleton_x[i] = platform_left; if ( skeleton_x[i] > platform_right) skeleton_x[i] = platform_right; } if(skeleton_y[i] == upperPlatform){ skeleton_y[i] = middlePlatform; if (skeleton_x[i] < platform_left) skeleton_x[i] = platform_left; if ( skeleton_x[i] > platform_right) skeleton_x[i] = platform_right; } } else{ if(skeleton_state[i]==1){//is moving skeleton_state[i]=0; skeleton_timer[i] = rand()%60+30;//wait between 0.5 and 1.5 seconds } else{ skeleton_state[i]=1; skeleton_timer[i]=rand()%180+300;//walk btween 3 to 5 } } } if (skeleton_state[i]==1){ if (skeleton_x[i] >= 1130){ skeleton_speed[i] *= -1; } if (skeleton_x[i] <= 0){ skeleton_speed[i] *= -1; } if (skeleton_speed[i]< 0){ if (!isfacingleft[i] ){ skeleton_x[i] -= 96; isfacingleft[i] = true; } skeletonSprite[i].setScale(3,3); } if (skeleton_speed[i]> 0){ if (isfacingleft[i] == true ){ skeleton_x[i] += 96; isfacingleft[i] = false; } skeletonSprite[i].setScale(-3,3); } skeleton_x[i] += skeleton_speed[i];} } } void invisible_man(float invis_x[],float invis_y[],float player_x,float player_y,bool isvisible[],int n,Sprite invisprite[],float invis_speed[],bool isfacingleft[],int invis_timer[]){ for (int i = 0; i < 3; i++){ invis_timer[i]--; if (invis_timer[i]<=0){ if(!isvisible[i]){ isvisible[i] = true; invis_timer[i] = rand() %300 + 300; //visible for 5 to 10 seconds } else{ isvisible[i] = false; invis_timer[i] = rand() %300+60; //invisible for 1 to 5 seconds } } if (invis_timer[i] > 200 && invis_timer[i] < 300){ invis_x[i] = player_x - 300; invis_y[i] = player_y; } invis_x[i] += invis_speed[i]; if (invis_x[i] >= 1130){ invis_x[i] = 1130; invis_speed[i] *= -1; } if (invis_x[i] <= 0){ invis_speed[i] *= -1; } if (invis_speed[i]< 0){ if (!isfacingleft[i] ){ invis_x[i] -= 96; isfacingleft[i] = true; } invisprite[i].setScale(3,3); } if (invis_speed[i]> 0){ if (isfacingleft[i] == true ){ invis_x[i] += 96; isfacingleft[i] = false; } invisprite[i].setScale(-3,3); } } } //function to check if ghost is on platform bool onplatform(char **lvl,float width, float height,float posx, float posy, const int cell_size,int speed){ float offset = posx + speed; char bottomleft = lvl[((int)(posy + height)/cell_size)][(int)(offset)/cell_size]; char bottommiddle = lvl[((int)(posy + height)/cell_size)][(int)(offset + width/2)/cell_size]; char bottomright = lvl[((int)(posy + height)/cell_size)][(int)(offset+width)/cell_size]; if (bottomleft == '#' || bottomleft == 'L' || bottomleft == 'R'){//only check for bottom left since we added the width in ghost func so it is the left bottom of sprite. 
return true; } return false; } void chelnovs(char** lvl, float chelnov_x[], float chelnov_y[], int chelnov_speed[], int n, Sprite chelnovSprite[], bool isfacingleft[], int chelnov_state[], int chelnov_timer[], int cell_size, int height_limit) { int c_width = 120; int c_height = 135; for (int i = 0; i < n; i++) { chelnov_timer[i]--; if (chelnov_timer[i] <= 0) { // 30% Chance to try changing platforms if currently moving if ((rand() % 10) < 3 && chelnov_state[i] == 1) { int current_col = (int)(chelnov_x[i] + c_width / 2) / cell_size; int current_row = (int)(chelnov_y[i] + c_height) / cell_size; int direction = rand() % 2; bool moved = false; if (direction == 0) { // Scan 5 blocks upward for (int r = current_row - 2; r > current_row - 7; r--) { if (r < 0) break; // Don't look off screen // If we find a block '#' if (lvl[r][current_col] == '#') { chelnov_y[i] = (r - 1) * cell_size; // Teleport to top of that block moved = true; break; } } } else { // Scan 5 blocks downward for (int r = current_row + 1; r < height_limit; r++) { // If we find a block '#' if (lvl[r][current_col] == '#') { // Teleport to top of that block chelnov_y[i] = (r-1) * cell_size; // -1 to stand ON it, not IN it moved = true; break; } } } // If we didn't move just turn around instead if (!moved) { chelnov_speed[i] *= -1; } } else { // Toggle Moving/Idle state if (chelnov_state[i] == 1) { chelnov_state[i] = 0; chelnov_timer[i] = rand() % 60 + 30; } else { chelnov_state[i] = 1; chelnov_timer[i] = rand() % 180 + 300; } } } if (chelnov_state[i] == 1) { // Screen edge bounce if (chelnov_x[i] >= 1130) chelnov_speed[i] *= -1; if (chelnov_x[i] <= 0) chelnov_speed[i] *= -1; // flip sprite if (chelnov_speed[i] < 0) { if (!isfacingleft[i]) { chelnov_x[i] -= 96; isfacingleft[i] = true; } chelnovSprite[i].setScale(3, 3); } if (chelnov_speed[i] > 0) { if (isfacingleft[i] == true) { chelnov_x[i] += 96; isfacingleft[i] = false; } chelnovSprite[i].setScale(-3, 3); } chelnov_x[i] += chelnov_speed[i]; } } } //function to handle player death animation void playerdies(Sprite &playersprite,int& frame,int& timer){ timer++; if(timer>10){ playersprite.setTextureRect(IntRect(19+(frame*32+frame*15),85,32,34)); frame++; //frame is not reset to zero so that it does not repeat the animation timer = 0; } } //function to handle vacuum animation and position void getvacuum(Sprite& vacuumsprite,Sprite& vacupsprite,float& player_x,float& player_y,int& frame,int& timer,float &speed,float& vac_x,float& vac_y,int& vacwidth, int&vacheight){ if(Keyboard :: isKeyPressed(Keyboard::A) || Keyboard :: isKeyPressed(Keyboard::D) || Keyboard :: isKeyPressed(Keyboard::Space)){ timer++; if (timer > 0){//height and width of three frames are different so we manually check all frames instead of general formula timer = 0; //reset the timer if (frame > 2){ frame = 0;//reset it to 0 so that the animation keeps repeating } if(frame == 0){ vacuumsprite.setTextureRect(IntRect(470,179,12,17)); //check the direction of the player and then adjust the position of the vacuum accordingly if (speed < 0){ vac_x = player_x-27; vac_y = player_y + 55; vacuumsprite.setPosition(vac_x,vac_y); } if(speed > 0){ vac_x = player_x+27; vac_y = player_y + 55; vacuumsprite.setPosition(vac_x,vac_y); } } else if (frame == 1 ){ vacuumsprite.setTextureRect(IntRect(440,177,24,20)); if (speed < 0){ vac_x = player_x-62; vac_y = player_y + 53; vacuumsprite.setPosition(vac_x,vac_y); } if (speed > 0){ vac_x = player_x+62; vac_y = player_y + 55; vacuumsprite.setPosition(vac_x,vac_y); } } else if (frame == 2){ 
vacuumsprite.setTextureRect(IntRect(400,176,31,24)); if (speed<0){ vac_x = player_x-83; vac_y = player_y + 51; vacuumsprite.setPosition(vac_x,vac_y); } if (speed > 0){ vac_x = player_x+81; vac_y = player_y + 51; vacuumsprite.setPosition(vac_x,vac_y); } } if(speed > 0 ){ vacuumsprite.setScale(-3,3); } if(speed < 0){ vacuumsprite.setScale(3,3); } //make the vacuum usable in 4 directions using WASD if (Keyboard::isKeyPressed(Keyboard::D)){ if (speed < 0){ vacuumsprite.setScale(-3,3); vac_x = player_x+83+120; vac_y = player_y + 51; vacuumsprite.setPosition(vac_x,vac_y); vacwidth = 93; vacheight = 72; vacupsprite.setPosition(-10000,10000);//move the up vacuum out of screen } } if(Keyboard::isKeyPressed(Keyboard::A)){ if(speed > 0){ vacuumsprite.setScale(3,3); vac_x = player_x-83-120; vac_y = player_y + 55; vacuumsprite.setPosition(vac_x,vac_y); vacwidth = 93; vacheight = 72; vacupsprite.setPosition(-10000,10000);//move the up vacuum out of screen } } if (Keyboard::isKeyPressed(Keyboard::W)){ vacupsprite.setScale(1,1); vac_x = speed > 0 ?player_x - vacwidth:player_x;//we subtrac vac_y = player_y - vacheight;//adjust the postition of the vaccum according to the visuals of the player vacwidth = 50; vacheight = 97; vacupsprite.setPosition(vac_x,vac_y); vacuumsprite.setPosition(-10000,10000);//move the side vacuum out of screen } if (Keyboard::isKeyPressed(Keyboard::S)){ vacupsprite.setScale(1,-1); vac_x = speed > 0 ?player_x - vacwidth:player_x;//adjust the postition of the vaccum according to the visuals of the player vac_y = player_y + vacheight +120; vacupsprite.setPosition(vac_x,vac_y); vacwidth = 50; vacheight = 97; vacuumsprite.setPosition(-10000,10000);//move the side vacuum out of screen } //if only the space is pressed and no direction keys are pressed then hide the up vacuum if (Keyboard :: isKeyPressed(Keyboard::Space) && !(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::A) || Keyboard::isKeyPressed(Keyboard::S) || Keyboard::isKeyPressed(Keyboard::D))){ vacupsprite.setPosition(-10000,10000);//move the up vacuum out of screen } frame++;//goto next frame }} } void shoot(Sprite bulletsprite[],int& captured,int player_x,int player_y, float speed,int bulletx[],int bullety[],bool isbulletactive[],int bulletspeedx[],int bulletspeeedy[],int& shoottimer,Texture& bursttex){ if (shoottimer > 0) { shoottimer--; return; // cant shoot yet } if (Keyboard::isKeyPressed(Keyboard::B) && captured > 0) { int idx = 0; bulletsprite[0].setTexture(bursttex); bulletsprite[0].setTextureRect(IntRect(6, 15, 79, 65)); bulletsprite[0].setScale(1, 1); bulletx[idx] = player_x; bullety[idx] = player_y; bulletsprite[idx].setPosition(bulletx[idx], bullety[idx]); isbulletactive[idx] = true; if (Keyboard::isKeyPressed(Keyboard::D) && speed > 0) { bulletspeedx[idx] = 5; bulletspeeedy[idx] = 0; } else if (Keyboard::isKeyPressed(Keyboard::A) && speed < 0) { bulletspeedx[idx] = -5; bulletspeeedy[idx] = 0; } else if (Keyboard::isKeyPressed(Keyboard::W)) { bulletspeedx[idx] = 0; bulletspeeedy[idx] = -5; } else if (Keyboard::isKeyPressed(Keyboard::S)) { bulletspeedx[idx] = 0; bulletspeeedy[idx] = 5; } else { bulletspeedx[idx] = (speed > 0) ? 
5 : -5; bulletspeeedy[idx] = 0; } shoottimer = 10; // consume all captured captured = 0; return; } if (captured <= 0) return; //place bullet where player is bulletx[captured -1] = player_x; bullety[captured-1] = player_y; bool fired = false; if(Keyboard::isKeyPressed(Keyboard::D) && speed > 0){ bulletsprite[captured -1].setPosition(bulletx[captured-1], bullety[captured-1]); isbulletactive[captured -1] = true; fired = true; bulletspeedx[captured -1] = 5; bulletspeeedy[captured -1] = 0; } else if (Keyboard::isKeyPressed(Keyboard::A) && speed < 0){ bulletsprite[captured -1].setPosition(bulletx[captured -1], bullety[captured -1]); isbulletactive[captured -1] = true; fired = true; bulletspeedx[captured -1] = -5; bulletspeeedy[captured -1] = 0; } else if (Keyboard::isKeyPressed(Keyboard::W)){ bulletsprite[captured -1].setPosition(bulletx[captured -1], bullety[captured -1]); isbulletactive[captured -1] = true; fired = true; bulletspeedx[captured -1] = 0; bulletspeeedy[captured -1] = -5; } else if (Keyboard::isKeyPressed(Keyboard::S)){ bulletsprite[captured -1].setPosition(bulletx[captured -1], bullety[captured -1]); isbulletactive[captured -1] = true; fired = true; bulletspeedx[captured -1] = 0; bulletspeeedy[captured -1] = 5; } if (fired){ shoottimer = 10; //reset timer afer shooting captured--;//decrease the captured count only if a shot was fired } } void updatebullets(char** lvl,int levelWidth,int levelHeight,int cell_size,int bulletx[],int bullety[],bool bulletactive[],int speedx[],int speedy[],Sprite bulletsprite[],int bullettype[],int maxbullets,int gravity){ for (int i = 0; i < maxbullets; i++){ if (!bulletactive[i]) continue; bulletx[i] += speedx[i]; bullety[i] += speedy[i]; bulletsprite[i].setPosition(bulletx[i], bullety[i]); int bw = 96; int bh = 96; // ceiling bounce for upward shots if (speedy[i] < 0 && bullety[i] <= 0){ bullety[i] = 0; speedy[i] = -(speedy[i]); // reverse direction to make it fall } //if bullet is moving down, check for platform collision beneath it if (speedy[i] >= 0){ int bottomrow = (bullety[i] + bh) / cell_size;// int midCol = (bulletx[i] + bw/2) / cell_size;//midcol if (bottomrow >= 0 && bottomrow < levelHeight && midCol >= 0 && midCol < levelWidth){ if (lvl[bottomrow][midCol] == '#' || lvl[bottomrow][midCol] == 'L' || lvl[bottomrow][midCol] == 'R'){ //adjust bullet y to be on top of platform bullety[i] = bottomrow * cell_size - bh; speedy[i] = 0; if (speedx[i] == 0) speedx[i] = (rand() % 2 == 0) ? 
-3 : 3;//choose random direction } else { if (speedy[i] == 0){ int belowRow = (bullety[i] + bh + 1) / cell_size; if (belowRow >= levelHeight || (lvl[belowRow][midCol] != '#' && lvl[belowRow][midCol] != 'L' && lvl[belowRow][midCol] != 'R' )){ speedy[i] = gravity; // start falling } } else { speedy[i] += gravity; if (speedy[i] > 10) speedy[i] = 10; } } } } // make it fall if (speedy[i] == 0 && speedx[i] != 0){ int belowRow = (bullety[i] + (int)bh + 1) / cell_size; int midCol = (bulletx[i] + (int)(bw/2)) / cell_size; if (belowRow >= levelHeight || midCol < 0 || midCol >= levelWidth || (lvl[belowRow][midCol] != '#' && lvl[belowRow][midCol] != 'L' && lvl[belowRow][midCol] != 'R')){ speedy[i] = gravity; } } int levelPixelW = levelWidth* cell_size;//calculate level widht in terms of pixels int levelPixelH = levelHeight * cell_size; if (bulletx[i] <= 0){ bulletx[i] = 0; speedx[i] = -speedx[i]; } else if (bulletx[i] + bw >= levelPixelW){//add bw to check the right edge bulletx[i] = levelPixelW - bw; speedx[i] = -speedx[i]; } // deactivate only in bottom corners if ((bullety[i] + bh >= levelPixelH+cell_size) && (bulletx[i] <= 0 || bulletx[i] + bw >= levelPixelW)){ bulletactive[i] = false; continue; } } } void animbullet(){ } void menu(Sprite& selectsprite,Sprite& arrow,bool& Greenplayer,int& current_level,int& arrowx){ selectsprite.setPosition(200,300); if(Keyboard::isKeyPressed(Keyboard::Left)){ arrowx = 200;//only setting x bc we need to check which character is selected arrow.setPosition(arrowx,70); } else if (Keyboard::isKeyPressed(Keyboard::Right)){ arrowx = 650; arrow.setPosition(arrowx,70); } if (Keyboard::isKeyPressed(Keyboard::Enter) && arrowx == 200){ Greenplayer = false; current_level = 1; } if (Keyboard::isKeyPressed(Keyboard::Enter) && arrowx == 650){ Greenplayer = true; current_level = 1; } } bool enemy_gravity(char** lvl, float& x, float& y, int width, int height, int cell_size) { // Check the bottom left and bottomright corners int feet_y = (int)(y + height) / cell_size; int left_x = (int)(x + 10) / cell_size; // +10 to avoid edge clipping int right_x = (int)(x + width - 10) / cell_size; //prevent segmentation fault if (left_x < 0) left_x = 0; if (right_x >= 18) right_x = 17; // Clamp to the last valid column if (feet_y >= 14) feet_y = 13; // Clamp to the last valid row // Check blocks below char block_left = lvl[feet_y][left_x]; char block_right = lvl[feet_y][right_x]; if (block_left != '#' && block_right != '#') { // No ground below either foot soo fall y += 5.0f; return false; } else { // movee to top of block float ground_top = feet_y * cell_size; if (y + height > ground_top && y + height < ground_top + 15) { y = ground_top - height; } return true; } } void resize_arrays(float*& x, float*& y, float*& spd, int*& type, int*& state, int*& timer,bool*& face, bool*& alive, bool*& vis, Sprite*& spr, int& capacity,bool*& issucked) { int new_cap = capacity + 2; // make new arrays float* new_x = new float[new_cap]; float* new_y = new float[new_cap]; float* new_spd = new float[new_cap]; int* new_type = new int[new_cap]; int* new_state = new int[new_cap]; int* new_timer = new int[new_cap]; bool* new_face = new bool[new_cap]; bool* new_alive = new bool[new_cap]; bool* new_vis = new bool[new_cap]; Sprite* new_spr = new Sprite[new_cap]; bool* new_issucked = new bool[new_cap]; //copy old data for (int i = 0; i < capacity; i++) { new_x[i] = x[i]; new_y[i] = y[i]; new_spd[i] = spd[i]; new_type[i] = type[i]; new_state[i] = state[i]; new_timer[i] = timer[i]; new_face[i] = face[i]; new_alive[i] = 
alive[i]; new_vis[i] = vis[i]; new_spr[i] = spr[i]; new_issucked[i] = issucked[i]; } //Delete old arrays delete[] x; delete[] y; delete[] spd; delete[] type; delete[] state; delete[] timer; delete[] face; delete[] alive; delete[] vis; delete[] spr; delete[] new_issucked; // Update x = new_x; y = new_y; spd = new_spd; type = new_type; state = new_state; timer = new_timer; face = new_face; alive = new_alive; vis = new_vis; spr = new_spr; issucked = new_issucked; capacity = new_cap; } void spawn_dynamic(int& count, int& capacity, float spawn_x, float spawn_y,float*& x, float*& y, float*& spd, int*& type, int*& state, int*& timer, bool*& face, bool*& alive, bool*& vis, Sprite*& spr,Texture& ghostT, Texture& skelT, Texture& invisT, Texture& chelT,bool*& issucked) { // Resize if full if (count >= capacity) { resize_arrays(x, y, spd, type, state, timer, face, alive, vis, spr, capacity,issucked); } // Initialize Index int i = count; x[i] = spawn_x; y[i] = spawn_y; alive[i] = true; vis[i] = true; // Default visible face[i] = (rand() % 2); state[i] = 1; // Moving timer[i] = rand() % 120; // Random Type (0-3) type[i] = rand() % 4; if (type[i] == 0) { // Ghost spr[i].setTexture(ghostT); spr[i].setTextureRect(IntRect(0,0,40,40)); spd[i] = 2; } else if (type[i] == 1) { // Skeleton spr[i].setTexture(skelT); spr[i].setTextureRect(IntRect(0,0,40,75)); spd[i] = 3; } else if (type[i] == 2) { // Invisible Man spr[i].setTexture(invisT); spr[i].setTextureRect(IntRect(8,16,32,45)); spd[i] = 3; } else if (type[i] == 3) { // Chelnov spr[i].setTexture(chelT); spr[i].setTextureRect(IntRect(0,0,32,45)); spd[i] = 4; } spr[i].setScale(3,3); count++; } void spawnpower (float x, float y,bool isactive[], int powerupx[], int powerupy[]){ if (rand()%100 < 10){//10% chance to spawn powerup for (int i = 0; i < 4; i++){ if (!isactive[i]){ powerupx[i] = x ; powerupy[i] = y; isactive[i] = true; cout<<"Works\n"; break; } } } } void level_two(char** lvl,int width,int height,float ghost_x[8],float ghost_y[8],int ghost_speed[8],float skeleton_x[4],float skeleton_y[4],int skeleton_speed[4],float player_x,float player_y,int &lives,const int cell_size,int pwidth,int pheight,float &speed, Sprite ghostsprite[],bool isghostfacingleft[],int ghost_state[],int ghost_timer[],Sprite skeletonSprite[],bool isskeletonfacngleft[],int skeleton_state[],int skeleton_timer[], float& vac_x,float& vac_y,int& vacwidth,int& vacheight,bool isghostalive[],bool isskeletonalive[],int& captured,Texture& ghosttex,Texture& skeletonTex,Sprite bulletsprite[],int bullettype[],int bulletx[],int bullety[],bool bulletactive[],int speedx[],int speedy[],int maxbullets,int& shoottimer,int& gspawntimer,int&sspawntimer,int& skeleton_spawned,int&ghost_spawned,float chelnov_x[4], float chelnov_y[4], int chelnov_speed[4], Sprite chelnovSprite[], bool ischelnovfacingleft[], int chelnov_state[], int chelnov_timer[], bool ischelnovalive[], int& chelnov_spawned, int& cspawntimer, Texture& chelnovtex,int& invis_spawned,int invis_timer[],float invis_x[],float invis_y[],float invis_speed[],bool isvisible[],Sprite invisprite[],bool isinvisfacingleft[],bool isinvisalive[],int& invis_spawntimer,Texture& invisTex,bool ispoweractive[], int powerupx[], int powerupy[], Sprite powerupsprite[]){ gspawntimer++; if (gspawntimer > 240){//240 will spawn a ghost every 4 seconds if (ghost_spawned< 4){ isghostalive[ghost_spawned] = true; ghost_spawned++; gspawntimer = 0; } } sspawntimer++; if (sspawntimer > 240){ if(skeleton_spawned < 9){ isskeletonalive[skeleton_spawned] = true; 
skeleton_spawned++; sspawntimer = 0; } } cspawntimer++; if (cspawntimer > 360) { // 360 will spawn a Chelnov every 6 seconds if (chelnov_spawned < 4) { ischelnovalive[chelnov_spawned] = true; chelnov_spawned++; cspawntimer = 0; } } invis_spawntimer++; if (invis_spawntimer > 300){ if (invis_spawned < 3){ isinvisalive[invis_spawned] = true; invis_spawned++; invis_spawntimer = 0; } } //call and control ghost ghosts(ghost_x,ghost_speed,4,ghostsprite,isghostfacingleft,ghost_state,ghost_timer); skeletons(skeleton_x,skeleton_y,skeleton_speed,9,skeletonSprite,isskeletonfacngleft,skeleton_state,skeleton_timer,cell_size); chelnovs(lvl,chelnov_x,chelnov_y,chelnov_speed,4,chelnovSprite,ischelnovfacingleft,chelnov_state,chelnov_timer,cell_size, height); invisible_man(invis_x,invis_y,player_x,player_y,isvisible,3,invisprite,invis_speed,isinvisfacingleft,invis_timer); for(int i = 0; i < 4; i++){ bool enemyonground = enemy_gravity(lvl, ghost_x[i], ghost_y[i], 96, 120, cell_size); if (enemyonground){ if(onplatform(lvl,96,120,ghost_x[i],ghost_y[i],cell_size,ghost_speed[i]) == false){ ghost_speed[i] *= -1; } } } for (int i = 0; i < 4; i++){ if(checkcollision(player_x,player_y,pwidth,pheight,ghost_x[i],ghost_y[i],60,80,speed,ghost_speed[i]) && isghostalive[i])//smaller dimensions of ghost passed to fix collision when jumping lives--; } for(int i = 0; i < 9; i++){ bool enemyonground = enemy_gravity(lvl, skeleton_x[i], skeleton_y[i], 120, 225, cell_size); if (enemyonground){ if(onplatform(lvl,120,225,skeleton_x[i],skeleton_y[i],cell_size,skeleton_speed[i]) == false){ skeleton_speed[i] *= -1; } } } for (int i = 0; i < 9; i++){ if(checkcollision(player_x,player_y,pwidth,pheight,skeleton_x[i],skeleton_y[i],60,175,speed,skeleton_speed[i]) && isskeletonalive[i])//smaller dimensions of skeleton passed to fix collision when jumping lives--; } for (int i = 0; i < 3; i++){ bool enemyonground = enemy_gravity(lvl, invis_x[i], invis_y[i], 96, 120, cell_size); if (enemyonground){ if(onplatform(lvl,96,120,invis_x[i],invis_y[i],cell_size,invis_speed[i]) == false){ invis_speed[i] *= -1; } } if(checkcollision(player_x,player_y,pwidth,pheight,invis_x[i],invis_y[i],96,120,speed,invis_speed[i]) && isinvisalive[i]){ lives--; }//smaller dimensions } for (int i = 0; i < 4; i++){ if (ischelnovalive[i]) { bool enemyonground = enemy_gravity(lvl, chelnov_x[i], chelnov_y[i], 120, 135, cell_size); if (enemyonground) { // If about to fall off platform, turn around if (onplatform(lvl, 120, 135, chelnov_x[i], chelnov_y[i], cell_size, chelnov_speed[i]) == false) { chelnov_speed[i] *= -1; } } } if(checkcollision(player_x, player_y, pwidth, pheight, chelnov_x[i], chelnov_y[i], 60, 75, speed, chelnov_speed[i]) && ischelnovalive[i]) { lives=0; } } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i < 4; i++){ //check collision between vacuum and ghost if(checkcollision(vac_x,vac_y,vacwidth,vacheight,ghost_x[i],ghost_y[i],120,120,speed,ghost_speed[i]) && captured < 5 ){ bool isalive = isghostalive[i]; suck(speed,ghost_x[i],ghost_y[i],120,120,ghost_speed[i],player_x,player_y,pwidth,pheight,ghostsprite[i],isghostalive[i]); if (isghostalive[i]){ bullettype[captured] = 0;//0 for ghost bulletsprite[captured].setTexture(ghosttex); bulletsprite[captured].setScale(3,3); bulletsprite[captured].setTextureRect(IntRect(936,9,32,30)); } if (!isghostalive[i] && isalive){ captured++;//only increment captured when the ghost was alive before and is dead now } } } } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i< 9; i++){ //check 
collision between vacuum and skeleton if (checkcollision(vac_x,vac_y,vacwidth,vacheight,skeleton_x[i],skeleton_y[i],60,175,speed,skeleton_speed[i]) && captured < 5){ bool isalive = isskeletonalive[i]; suck(speed,skeleton_x[i],skeleton_y[i],60,175,skeleton_speed[i],player_x,player_y,pwidth,pheight,skeletonSprite[i],isskeletonalive[i]); if(isskeletonalive[i]){ bullettype[captured] = 1;//1 for skeleton bulletsprite[captured].setTexture(skeletonTex); bulletsprite[captured].setScale(3,3); bulletsprite[captured].setTextureRect(IntRect(1039,39,32,32)); } if(!isskeletonalive[i] && isalive){ captured++;//only increment captured when the skeleton was alive before and is dead now } } } } if (Keyboard::isKeyPressed(Keyboard::Space)) { for (int i = 0; i < 4; i++) { if (checkcollision(vac_x, vac_y, vacwidth, vacheight, chelnov_x[i], chelnov_y[i], 96, 120, speed, chelnov_speed[i]) && captured < 3) { bool isalive = ischelnovalive[i]; suck(speed, chelnov_x[i], chelnov_y[i], 96, 120, chelnov_speed[i], player_x, player_y, pwidth, pheight, chelnovSprite[i], ischelnovalive[i]); if (ischelnovalive[i]) { bullettype[captured] = 2; //2 for chelnov bulletsprite[captured].setTexture(chelnovtex); bulletsprite[captured].setScale(3, 3); bulletsprite[captured].setTextureRect(IntRect(0, 130, 40, 45)); } if (!ischelnovalive[i] && isalive) { captured++; } } } } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i< 3; i++){ //check collision between vacuum and invisible man if (checkcollision(vac_x,vac_y,vacwidth,vacheight,invis_x[i],invis_y[i],60,175,speed,invis_speed[i]) && captured < 5){ bool isalive = isinvisalive[i]; suck(speed,invis_x[i],invis_y[i],60,175,invis_speed[i],player_x,player_y,pwidth,pheight,invisprite[i],isinvisalive[i]); if(isinvisalive[i]){ bullettype[captured] = 1;//1 for invisible man bulletsprite[captured].setTexture(invisTex); bulletsprite[captured].setScale(3,3); bulletsprite[captured].setTextureRect(IntRect(849,24,32,32)); } if(!isinvisalive[i] && isalive){ captured++;//only increment captured when the invisible man was alive before and is dead now } } } } for (int i = 0;i < 4; i++){ if(isghostalive[i]) ghostsprite[i].setPosition(ghost_x[i],ghost_y[i]); } for (int i = 0; i < 9; i++){ if (isskeletonalive[i]){ skeletonSprite[i].setPosition(skeleton_x[i],skeleton_y[i]); } } for (int i = 0; i < 4; i++){ if (ischelnovalive[i]){ chelnovSprite[i].setPosition(chelnov_x[i],chelnov_y[i]); } } for (int i = 0; i < 3; i++){ if (isinvisalive[i]){ invisprite[i].setPosition(invis_x[i],invis_y[i]); } } //collision with power ups if (ispoweractive[0]){ if (checkcollision(player_x,player_y,pwidth,pheight,powerupx[0],powerupy[0],50,50,speed,0)){ lives++; ispoweractive[0] = false; cout<<"Powerup collected"<<endl; } } else if (ispoweractive[1]){ if (checkcollision(player_x,player_y,pwidth,pheight,powerupx[1],powerupy[1],50,50,speed,0)){ speed += 4; ispoweractive[1] = false; cout<<"Powerup collected"<<endl; } } else if (ispoweractive[2]){ if (checkcollision(player_x,player_y,pwidth,pheight,powerupx[2],powerupy[2],50,50,speed,0)){ vacheight += 20; ispoweractive[2] = false; cout<<"Powerup collected"<<endl; } } else if (ispoweractive[3]){ if (checkcollision(player_x,player_y,pwidth,pheight,powerupx[3],powerupy[3],50,50,speed,0)){ vacwidth += 20; ispoweractive[3] = false; cout<<"Powerup collected"<<endl; } } for (int b = 0; b < maxbullets; b++){ if (bulletactive[b]) { // ghosts for (int g = 0; g < 4; g++){ if (isghostalive[g]) { int bw = 96; int bh = 96; int gw = 120; int gh = 120; float bx = bulletx[b]; float 
by = bullety[b]; float gx = ghost_x[g]; float gy = ghost_y[g]; if (!(bx + bw < gx || bx > gx + gw || by + bh < gy || by > gy + gh)){ isghostalive[g] = false; bulletactive[b] = false; spawnpower(gx,gy,ispoweractive,powerupx,powerupy); break; } } } if (bulletactive[b]) { // skeletons for (int s = 0; s < 9; s++){ if (isskeletonalive[s]) { if (checkcollision(bulletx[b], bullety[b], 96, 96, skeleton_x[s], skeleton_y[s], 60, 175, speedx[b], skeleton_speed[s])){ isskeletonalive[s] = false; bulletactive[b] = false; spawnpower(skeleton_x[s],skeleton_y[s],ispoweractive,powerupx,powerupy); break; } } } //invisible man for (int i = 0; i < 3;i++){ if (isinvisalive[i]){ if (checkcollision(bulletx[b], bullety[b], 96, 96, invis_x[i], invis_y[i], 96, 120, speedx[b], invis_speed[i])){ isinvisalive[i] = false; bulletactive[b] = false; spawnpower(invis_x[i],invis_y[i],ispoweractive,powerupx,powerupy); break; } } } } } } for (int i = 0; i < 4; i++){ if(ispoweractive[i]){ powerupsprite[i].setPosition(powerupx[i],powerupy[i]); } } } void level_one(char **lvl,int width,int height,float ghost_x[8],float ghost_y[8],int ghost_speed[8],float skeleton_x[4],float skeleton_y[4],int skeleton_speed[4],float player_x,float player_y,int &lives,const int cell_size,int pwidth,int pheight,float &speed, Sprite ghostsprite[],bool isghostfacingleft[],int ghost_state[],int ghost_timer[],Sprite skeletonSprite[],bool isskeletonfacngleft[],int skeleton_state[],int skeleton_timer[], float& vac_x,float& vac_y,int& vacwidth,int& vacheight,bool isghostalive[],bool isskeletonalive[],int& captured,Texture& ghosttex,Texture& skeletonTex,Sprite bulletsprite[],int bullettype[],int bulletx[],int bullety[],bool bulletactive[],int speedx[],int speedy[],int maxbullets,int& shoottimer){ //platform to spawn player for(int i=8 ; i < 10; i++){ lvl[6][i] = '#'; } for(int i=3;i<width-3;i++){//upper horizontal lvl[3][i] = '#'; } for(int i=3;i<width-3;i++){//lower horizontal on vertical stand lvl[9][i] = '#'; } //left platforms for(int i=0;i<5;i++){//upper left lvl[6][i] = '#'; } //right platforms for(int i=18;i>width-6;i--){//upper right lvl[6][i] = '#'; } ghosts(ghost_x,ghost_speed,8,ghostsprite,isghostfacingleft,ghost_state,ghost_timer); skeletons(skeleton_x,skeleton_y,skeleton_speed,4,skeletonSprite,isskeletonfacngleft,skeleton_state,skeleton_timer,cell_size); //lowest platform for(int i=0;i<width;i++){ lvl[13][i] = '#'; } for(int i = 0; i < 8; i++){ if(onplatform(lvl,96,120,ghost_x[i],ghost_y[i],cell_size,ghost_speed[i]) == false){ ghost_speed[i] *= -1; } } for (int i = 0; i < 8; i++){ if(checkcollision(player_x,player_y,pwidth,pheight,ghost_x[i],ghost_y[i],60,80,speed,ghost_speed[i]) && isghostalive[i])//smaller dimensions of ghost passed to fix collision when jumping lives = 0; } for(int i = 0; i < 4; i++){ if(onplatform(lvl,120,225,skeleton_x[i],skeleton_y[i],cell_size,skeleton_speed[i]) == false){ skeleton_speed[i] *= -1; } } for (int i = 0; i < 4; i++){ if(checkcollision(player_x,player_y,pwidth,pheight,skeleton_x[i],skeleton_y[i],60,175,speed,skeleton_speed[i]) && isskeletonalive[i])//smaller dimensions of skeleton passed to fix collision when jumping lives = 0; } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i < 8; i++){ //check collision between vacuum and ghost if(checkcollision(vac_x,vac_y,vacwidth,vacheight,ghost_x[i],ghost_y[i],120,120,speed,ghost_speed[i]) && captured < 3 ){ bool isalive = isghostalive[i]; 
suck(speed,ghost_x[i],ghost_y[i],120,120,ghost_speed[i],player_x,player_y,pwidth,pheight,ghostsprite[i],isghostalive[i]); if (isghostalive[i]){ bullettype[captured] = 0;//0 for ghost bulletsprite[captured].setTexture(ghosttex); bulletsprite[captured].setScale(3,3); bulletsprite[captured].setTextureRect(IntRect(936,9,32,30)); } if (!isghostalive[i] && isalive){ captured++;//only increment captured when the ghost was alive before and is dead now } } } } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i< 4; i++){ //check collision between vacuum and skeleton if (checkcollision(vac_x,vac_y,vacwidth,vacheight,skeleton_x[i],skeleton_y[i],60,175,speed,skeleton_speed[i]) && captured < 3){ bool isalive = isskeletonalive[i]; suck(speed,skeleton_x[i],skeleton_y[i],60,175,skeleton_speed[i],player_x,player_y,pwidth,pheight,skeletonSprite[i],isskeletonalive[i]); if(isskeletonalive[i]){ bullettype[captured] = 1;//1 for skeleton bulletsprite[captured].setTexture(skeletonTex); bulletsprite[captured].setScale(3,3); bulletsprite[captured].setTextureRect(IntRect(1039,39,32,32)); } if(!isskeletonalive[i] && isalive){ captured++;//only increment captured when the skeleton was alive before and is dead now } } } } for (int i = 0;i < 8; i++){ if(isghostalive[i]) ghostsprite[i].setPosition(ghost_x[i],ghost_y[i]); } for (int i = 0; i < 4; i++){ if (isskeletonalive[i]){ skeletonSprite[i].setPosition(skeleton_x[i],skeleton_y[i]); } } for (int b = 0; b < maxbullets; b++){ if (bulletactive[b]) { // ghosts for (int g = 0; g < 8; g++){ if (isghostalive[g]) { int bw = 96; int bh = 96; int gw = 120; int gh = 120; float bx = (float)bulletx[b]; float by = (float)bullety[b]; float gx = ghost_x[g]; float gy = ghost_y[g]; if (!(bx + bw < gx || bx > gx + gw || by + bh < gy || by > gy + gh)){ isghostalive[g] = false; bulletactive[b] = false; break; } } } if (bulletactive[b]) { // skeletons for (int s = 0; s < 4; s++){ if (isskeletonalive[s]) { if (checkcollision(bulletx[b], bullety[b], 96, 96, skeleton_x[s], skeleton_y[s], 60, 175, speedx[b], skeleton_speed[s])){ isskeletonalive[s] = false; bulletactive[b] = false; break; } } } } } } } bool check_level_completion(bool isghostalive[],bool isskeletonalive[],bool ischelnovalive[],bool isinvisalive[],int current_level,int ghost_spawned,int skeleton_spawned,int chelnov_spawned,int invis_spawned){ int ghost_num,skeleton_num,chelnov_num,invis_num; if(current_level==1){ ghost_num = 8; skeleton_num = 4; } if(current_level==2){ ghost_num = 4; skeleton_num = 9; chelnov_num = 4; invis_num = 3; if (ghost_spawned < ghost_num) return false; if (skeleton_spawned < skeleton_num) return false; if (chelnov_spawned < chelnov_num) return false; if (invis_spawned < invis_num) return false; } // Check Ghosts for (int i = 0; i < ghost_num; i++) { if (isghostalive[i]) return false; } // Check Skeletons for (int i = 0; i < skeleton_num; i++) { if (isskeletonalive[i]) return false; } // check Chelnovs for (int i = 0; i < chelnov_num; i++) { if (ischelnovalive[i]) return false; } // check Invisible for (int i = 0; i < invis_num; i++) { if (isinvisalive[i]) return false;} return true; // Everyone is dead } void initialize_level2(char** lvl,int width,int height){ //clear map for(int i=0;i<height;i++){ for(int j=0;j<width;j++) lvl[i][j] = '\0'; } //lowest platform for(int i=0;i<width;i++){ lvl[13][i] = '#'; } if (rand() % 2==0){ // main diagonal (\) for (int i = 4; i < 12; i++) { for (int j = 4; j < 11; j++) if (i == j) lvl[j][i] = 'R'; } for (int c = 1; c <= 3; ++c) { lvl[4][c] = '#'; } // 
lower pad below row 10: row 11, cols 8..10 for (int c = 10; c <= 12; ++c) { lvl[10][c] = '#'; } } else{ // secondary diagonal (/) for (int i = 4; i < 12; i++) { for (int j = 4; j < 11; j++) if (i + j == 15) lvl[j][i] = 'L'; } for (int c = 12; c <= 14; ++c) { lvl[4][c] = '#'; } // lower pad below row 10: row 11, cols 5..7 for (int c = 2; c <= 4; ++c) { lvl[11][c] = '#'; } } int platforms_needed = (rand() % 2) + 4; // Generates 4 or 5 int attempts = 0; while (platforms_needed > 0 && attempts < 5000) { attempts++; // random Length 4 to 6 int len = (rand() % 3) + 4; // Random Row: 3 to 11 (Avoids very top ceiling and very bottom floor) int r = (rand() % 9) + 3; // random column ensure platform fits within width int c = (rand() % (width - len - 2)) + 1; bool can_place = true; for (int k = 0; k < len; k++) { // If the spot is not empty,we cant build here if (lvl[r][c + k] != '\0') { can_place = false; break; } //also check 2 rows below and above to ensure no overlapping for (int i = -2; i <= 2; i++) { if (r + i < height) { // Ensure we don't look outside the map if (lvl[r + i][c + k] != '\0') { can_place = false; break; } } } } // If clear, place the '#' blocks if (can_place) { for (int k = 0; k < len; k++) { lvl[r][c + k] = '#'; } platforms_needed--; } } } void initialize_level3(char** lvl,int width,int height){ //clear map for(int i=0;i<height;i++){ for(int j=0;j<width;j++) lvl[i][j] = '\0'; } //lowest platform for(int i=0;i<width;i++){ lvl[13][i] = '#'; } //left platforms for(int i=0;i<4;i++){ lvl[3][i] = '#'; } for(int i=0;i<3;i++){ lvl[6][i] = '#'; } for(int i=0;i<6;i++){ lvl[10][i] = '#'; } //right platforms for(int i=width-1;i>width-4;i--){ lvl[3][i] = '#'; } for(int i=width-1;i>width-7;i--){ lvl[6][i] = '#'; } for(int i=width-1;i>width-5;i--){ lvl[10][i] = '#'; } } void initialize_level4(char** lvl,int width,int height){ //clear map for(int i=0;i<height;i++){ for(int j=0;j<width;j++) lvl[i][j] = '\0'; } for (int i = 0; i < width; i++) { lvl[height - 1][i] = '#'; } //left platforms for(int i=0;i<6;i++){ lvl[5][i] = '#'; } for(int i=0;i<5;i++){ lvl[9][i] = '#'; } for(int i=0;i<9;i++){ lvl[15][i] = '#'; } //right platforms for(int i=width-1;i>width-6;i--){ lvl[5][i] = '#'; } for(int i=width-1;i>width-11;i--){ lvl[9][i] = '#'; } for(int i=width-1;i>width-8;i--){ lvl[15][i] = '#'; } } int main() { int lives = 1; int score = 0; RenderWindow window(VideoMode(screen_x, screen_y), "Tumble-POP", Style::Resize); window.setVerticalSyncEnabled(true); window.setFramerateLimit(60); //level specifics int cell_size = 64; int height = 14; int width = 18; char** lvl; //level and background textures and sprites Texture bgTex; Sprite bgSprite; Texture blockTexture; Sprite blockSprite; Texture blockRTexture; Sprite blockRSprite; Texture blockLTexture; Sprite blockLSprite; Texture chelnovtex; Sprite chelnovSprite[4]; Texture ghosttex; Sprite ghostsprite[8]; Texture vacuumtex; Sprite vacuumsprite; Texture skeletonTex; Sprite skeletonSprite[9]; Texture vacuptex; Sprite vacupsprite; Texture bursttex; Sprite arrowsprite; Texture arrowtex; Sprite selectsprite; Texture selecttex; Texture invistex; Sprite invisprite[3]; Sprite powerup[4]; Texture poweruptex; Texture octotex; Sprite octosprite; Texture minitex; Sprite minisprite; Texture cloudtex; Sprite cloudsprite; Texture pottex; Sprite potsprite; int octo_x = 500; int octo_y = 1000; octotex.loadFromFile("Assets/Octopus.png"); minitex.loadFromFile("Assets/min1.png"); octosprite.setTexture(octotex); octosprite.setScale(7,7); minisprite.setTexture(minitex); 
minisprite.setScale(3,3); octosprite.setPosition(octo_x,octo_y); pottex.loadFromFile("Assets/pot.png"); potsprite.setTexture(pottex); cloudtex.loadFromFile("Assets/cloud.png"); cloudsprite.setTexture(cloudtex); cloudsprite.setTextureRect(IntRect(15,90,320,180)); cloudsprite.setPosition(400,500); cloudsprite.setScale(1,1); int cloudy = cell_size*3; float cloudspeed = 1; poweruptex.loadFromFile("Assets/tumblepoppers.png"); for (int i = 0; i < 4; i++){ powerup[i].setTexture(poweruptex); powerup[i].setScale(3,3); } invistex.loadFromFile("Assets/invisible_man.png"); for (int i = 0; i < 3; i++){ invisprite[i].setTexture(invistex); invisprite[i].setScale(3,3); invisprite[i].setTextureRect(IntRect(8,16,32,45)); } bursttex.loadFromFile("Assets/burst.png"); selecttex.loadFromFile("Assets/select.png"); arrowtex.loadFromFile("Assets/arrow.png"); selectsprite.setTexture(selecttex); arrowsprite.setTexture(arrowtex); arrowsprite.setPosition(200,70); arrowsprite.setScale(0.5,0.5); ghosttex.loadFromFile("Assets/ghost.png"); for (int i = 0; i < 8; i++){ ghostsprite[i].setTexture(ghosttex); ghostsprite[i].setTextureRect(IntRect(0,0,40,40)); } skeletonTex.loadFromFile("Assets/skeleton.png"); for (int i = 0; i < 9; i++){ skeletonSprite[i].setTexture(skeletonTex); skeletonSprite[i].setTextureRect(IntRect(0,0,40,75)); } chelnovtex.loadFromFile("Assets/chelnov.png"); for(int i=0;i<4;i++){ chelnovSprite[i].setTexture(chelnovtex); chelnovSprite[i].setTextureRect(IntRect(0,0,40,45)); chelnovSprite[i].setScale(3,3); } bgTex.loadFromFile("Data/bg1.png"); bgSprite.setTexture(bgTex); bgSprite.setPosition(0,0); blockTexture.loadFromFile("Data/block1.png"); blockSprite.setTexture(blockTexture); blockLTexture.loadFromFile("Data/blockL.png"); blockLSprite.setTexture(blockLTexture); blockRTexture.loadFromFile("Data/blockR.png"); blockRSprite.setTexture(blockRTexture); vacuumtex.loadFromFile("Assets/tumblepoppers.png"); vacuumsprite.setTexture(vacuumtex); vacuumsprite.setScale(3,3); vacuumsprite.setTextureRect(IntRect(470,179,12,17)); vacuptex.loadFromFile("Assets/vacup.png"); vacupsprite.setTexture(vacuptex); //Music initialisation Music lvlMusic; // lvlMusic.openFromFile("Data/mus.ogg"); // lvlMusic.setVolume(20); // lvlMusic.play(); // lvlMusic.setLoop(true); //random time every time game is opened srand(time(0)); //player data float player_x = 650; float player_y = 150; float speed = 5; float invis_x[3]={300,300,300}; float invis_y[3]={700,700,700}; bool isvisible[3]={false,false,false}; int invis_timer[3]; for(int i=0;i<3;i++) invis_timer[i]=rand() % (300); bool isinvisalive[3]={false,false,false}; float invis_speed[3]={2,2,2}; float ghost_x[8]={4*cell_size,15*cell_size,30,1000,4*cell_size,15*cell_size,30,15*cell_size}; float ghost_y[8]={3*cell_size-120,3*cell_size-120,6*cell_size-120,6*cell_size-120,9*cell_size-120,9*cell_size-120,13*cell_size-120,13*cell_size-120}; int ghost_state[8]={1,1,1,1,1,1,1,1}; //1 means moving int ghost_timer[8]; for(int i = 0; i < 8; i++) ghost_timer[i] = rand() % (120); int ghost_speed[8] = {2,2,2,2,2,2,2,2}; float skeleton_x[9]={5*cell_size,5*cell_size,4*cell_size,12*cell_size}; float skeleton_y[9]={3*cell_size-225,9*cell_size-225,13*cell_size-225,13*cell_size-225}; int skeleton_state[9]={1,1,1,1,1,1,1,1,1}; int skeleton_timer[9]; for(int i=0; i< 9; i++) skeleton_timer[i] = rand () % 120; int skeleton_speed[9]={3,3,3,3,3,3,3,3,3}; float chelnov_x[4] = {4*cell_size, 12*cell_size, 5*cell_size, 10*cell_size}; float chelnov_y[4] = {3*cell_size-120, 3*cell_size-120, 9*cell_size-120, 
13*cell_size-120}; int chelnov_speed[4] = {3, 3, 3, 3}; int chelnov_state[4] = {1, 1, 1, 1}; int chelnov_timer[4]; for(int i=0; i<4; i++) chelnov_timer[i] = rand() % 180; float vac_x; float vac_y; const float jumpStrength = -200; // Initial jump velocity const float gravity = 1; // Gravity acceleration bool isJumping = false; // Track if jumping bool up_collide = false; bool left_collide = false; bool right_collide = false; Texture PlayerTexture; Sprite PlayerSprite; bool onGround = false; float offset_x = 0; float offset_y = 0; float velocityY = 0; float terminal_Velocity = 20; int PlayerHeight = 135; int PlayerWidth = 96; int vacwidth = 93; int vacheight = 72; bool up_button = false; char top_left = '\0'; char top_right = '\0'; char top_mid = '\0'; char left_mid = '\0'; char right_mid = '\0'; char bottom_left = '\0'; char bottom_right = '\0'; char bottom_mid = '\0'; char bottom_left_down = '\0'; char bottom_right_down = '\0'; char bottom_mid_down = '\0'; char top_right_up = '\0'; char top_mid_up = '\0'; char top_left_up = '\0'; PlayerTexture.loadFromFile("Assets/tumblepoppers.png"); PlayerSprite.setTexture(PlayerTexture); PlayerSprite.setScale(-3,3); PlayerSprite.setPosition(player_x, player_y); PlayerSprite.setTextureRect(IntRect(12,36,32,45)); powerup[0].setTextureRect(IntRect(21,389,20,21));//speed up powerup[1].setTextureRect(IntRect(489,385,25,25));//extra life powerup[2].setTextureRect(IntRect(270,392,28,20));//vacuum range powerup[3].setTextureRect(IntRect(124,388,20,22));//vacuum power bool ispoweractive[4]={false,false,false,false}; int powerupx[4]={0,0,0,0}; int powerupy[4]={0,0,0,0}; for (int i = 0; i< 8; i++){ ghostsprite[i].setScale(3,3); } for (int i = 0; i< 9; i++){ skeletonSprite[i].setScale(3,3); } //creating level array lvl = new char* [height]; for (int i = 0; i < height; i += 1) { lvl[i] = new char[width]; } RectangleShape hitbox; hitbox.setSize(Vector2f(96,96)); int gspawntimer = 0; int ghost_spawned = 0; int sspawntimer = 0; int skeleton_spawned = 0; int chelnov_spawned = 0; int cspawntimer = 0; int invisspawntimer = 0; int invis_spawned = 0; float hitx = player_x; int arrowx = 200; bool Greenplayer = true;//to check which tumble popper to use int frame = 0; int timer = 0; int shoottimer = 10; int vacframe = 0; int vactim = 0; bool isdead = false; bool isfacingleft = false; bool isskeletonfacingleft[8]; bool isghostalive[8]; bool isskeletonalive[9]; int captured = 0;//no of enemies captured const int maxbullets = 3; Sprite bullets[maxbullets]; int bulletx[maxbullets]; int bullety[maxbullets]; bool bulletactive[maxbullets]; int bullettype[maxbullets];//0 for ghost, 1 for skeleton int speedx[maxbullets]; int speedy[maxbullets]; bool isinvisfacingleft[3]; for (int i = 0; i < 3; i++){ isinvisfacingleft[i] = false; } // initialize bullet pool for (int i = 0; i < maxbullets; i++){ bulletactive[i] = false; bulletx[i] = 0; bullety[i] = 0; speedx[i] = 0; speedy[i] = 0; bullettype[i] = -1; // moves out of screen bullets[i].setPosition(-1000, -1000); } for (int i = 0; i<8; i++){ isskeletonfacingleft[i] = false; isskeletonalive[i] = true; } bool isghostfacingleft[8]; for (int i = 0; i<8; i++){ isghostalive[i] = true; isghostfacingleft[i] = false; } bool ischelnovfacingleft[4] = {false, false, false, false}; bool ischelnovalive[4] = {true, true, true, true}; //levels int current_level=0; int level2Loaded = false; int level3Loaded = false; int level4Loaded = false; // --- DYNAMIC ARRAYS (Pointers) --- int dy_capacity = 5; // Initial size int dy_count = 0; // Current number of 
enemies // Allocate memory for each property float* dy_x = new float[dy_capacity]; float* dy_y = new float[dy_capacity]; float* dy_speed = new float[dy_capacity]; int* dy_type = new int[dy_capacity]; // 0=Ghost, 1=Skel, 2=Invis, 3=Chelnov int* dy_state = new int[dy_capacity]; int* dy_timer = new int[dy_capacity]; bool* dy_facingLeft = new bool[dy_capacity]; bool* dy_alive = new bool[dy_capacity]; bool* dy_visible = new bool[dy_capacity]; // For invisible man Sprite* dy_sprite = new Sprite[dy_capacity]; //Sprite array bool* dy_is_sucked = new bool[dy_capacity]; // Pot (level 3) state int potHealth = 5; bool potDestroyed = false; int potSpawnTimer = 0; int potSpawnInterval = 120; // frames between spawns Event ev; //main loop while (window.isOpen()) { while (window.pollEvent(ev)) { if (ev.type == Event::Closed) { window.close(); } if (ev.type == Event::KeyPressed) { } if (Keyboard :: isKeyPressed(Keyboard::Up) && onGround){ jump(lvl,offset_y,velocityY,onGround,gravity,terminal_Velocity, player_x, player_y, cell_size, PlayerHeight, PlayerWidth); onGround = false; } } if (speed < 0) isfacingleft = true; //presing escape to close if (Keyboard::isKeyPressed(Keyboard::Escape)) { window.close(); } window.clear(); if(current_level == 0){ menu(selectsprite,arrowsprite,Greenplayer,current_level,arrowx); window.draw(bgSprite); window.draw(selectsprite); window.draw(arrowsprite); } else{ display_level(window, lvl, bgTex, bgSprite, blockTexture, blockSprite, blockLTexture, blockLSprite, blockRTexture, blockRSprite, height, width, cell_size); player_gravity(lvl,offset_y,velocityY,onGround,gravity,terminal_Velocity, player_x, player_y, cell_size, PlayerHeight, PlayerWidth,isfacingleft); PlayerSprite.setPosition(player_x, player_y); if (lives <= 0){ playerdies(PlayerSprite,frame,timer); isdead = true; } else{ if (Keyboard::isKeyPressed(Keyboard::Right) && !isdead) { if (isfacingleft){ player_x+=96; isfacingleft = false; } PlayerSprite.setPosition(player_x, player_y); PlayerSprite.setScale(-3,3); moveright(player_x,speed,PlayerSprite,frame,timer,Greenplayer); } else if (Keyboard::isKeyPressed(Keyboard::Left) && !isdead){ if (isfacingleft == false){ player_x -= 96; isfacingleft = true; } PlayerSprite.setPosition(player_x, player_y); moveleft(player_x,speed,PlayerSprite,frame,timer,Greenplayer); } else if (!onGround){//jumping animation if (Greenplayer) PlayerSprite.setTextureRect(IntRect(525,30,30,42)); if (!Greenplayer) PlayerSprite.setTextureRect(IntRect(524,219,30,42)); } else {//stand still animation if(Greenplayer) PlayerSprite.setTextureRect(IntRect(12,36,32,45)); if(!Greenplayer) PlayerSprite.setTextureRect(IntRect(12,224,32,45)); frame = 0; } //vaccum //vaccum if (Keyboard::isKeyPressed(Keyboard::Space)){ getvacuum(vacuumsprite,vacupsprite,player_x,player_y,vacframe,vactim,speed,vac_x,vac_y,vacwidth,vacheight); if(Keyboard::isKeyPressed(Keyboard::A) || Keyboard::isKeyPressed(Keyboard::D) || Keyboard::isKeyPressed(Keyboard::Space) && !(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::S)) ) window.draw(vacuumsprite); if(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::S)) window.draw(vacupsprite); } } if (PlayerSprite.getScale().x < 0){ hitx = player_x - 100; } else if (PlayerSprite.getScale().x >0){ hitx = player_x; } hitbox.setPosition(hitx,player_y); hitbox.setFillColor(Color::Transparent); hitbox.setOutlineColor(Color::Red); hitbox.setOutlineThickness(2); window.draw(hitbox); if(!(isdead== true && frame > 5)) window.draw(PlayerSprite); 
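// Per-level update/draw branches follow: levels 1 and 2 run their enemy, vacuum and bullet logic; level 3 spawns dynamic enemies from the pot riding the cloud; level 4 reallocates the tile grid at a smaller cell size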
if(current_level==1){ // Player shooting try to fire captured enemy as bullet if (Keyboard::isKeyPressed(Keyboard::F)) shoot(bullets, captured, player_x, player_y, speed, bulletx, bullety, bulletactive, speedx, speedy, shoottimer,bursttex); // Update bullets (physics only) before level logic so level can check collisions updatebullets(lvl, width, height, cell_size, bulletx, bullety, bulletactive, speedx, speedy, bullets, bullettype, maxbullets, (int)gravity); // Levelspecific logic and now level-side bullet collision checks level_one(lvl,width,height,ghost_x,ghost_y,ghost_speed,skeleton_x,skeleton_y,skeleton_speed,player_x,player_y,lives,cell_size,PlayerWidth,PlayerHeight,speed,ghostsprite,isghostfacingleft,ghost_state,ghost_timer,skeletonSprite,isskeletonfacingleft,skeleton_state,skeleton_timer,vac_x,vac_y,vacwidth,vacheight,isghostalive,isskeletonalive,captured,ghosttex,skeletonTex,bullets,bullettype,bulletx,bullety,bulletactive,speedx,speedy,maxbullets,shoottimer); // Draw active bullets for (int b = 0; b < maxbullets; b++){ if (bulletactive[b]){ bullets[b].setPosition(bulletx[b], bullety[b]); window.draw(bullets[b]); } } for (int i = 0;i < 8; i++){ ghostsprite[i].setPosition(ghost_x[i],ghost_y[i]); } for (int i = 0;i < 4; i++){ skeletonSprite[i].setPosition(skeleton_x[i],skeleton_y[i]); } float current_speed_x = 0; platform_collision_y(lvl,offset_x,current_speed_x,player_x,player_y,cell_size,PlayerHeight,PlayerWidth); player_x += current_speed_x; for(int i = 0; i < 8; i++){ if(isghostalive[i]) window.draw(ghostsprite[i]);} for(int i = 0; i < 4; i++){ if(isskeletonalive[i]) window.draw(skeletonSprite[i]);} if(check_level_completion(isghostalive,isskeletonalive,ischelnovalive,isinvisalive,current_level,ghost_spawned,skeleton_spawned,chelnov_spawned,invis_spawned)){ current_level = 2;} } else if(current_level==2){ if(!level2Loaded){ initialize_level2(lvl,width,height); level2Loaded = true; //reset the variables from level one ghost_spawned = 0; skeleton_spawned = 0; chelnov_spawned = 0; invis_spawned = 0; for (int i = 0; i<4; i++) isghostalive[i] = false; for (int i = 0 ; i<9; i++) isskeletonalive[i] = false; for (int i = 0; i < 4; i++) ischelnovalive[i] = false; ghost_x[0] = 110; ghost_y[0] = 700; ghost_x[1] = 110; ghost_y[1] = 700; ghost_x[2] = 100; ghost_y[2] = 700; ghost_x[3] = 15*cell_size; ghost_y[3] = 13*cell_size - 120; skeleton_x[0] = 3 * cell_size; skeleton_y[0] = 3 * cell_size; skeleton_x[1] = 5 * cell_size; skeleton_y[1] = 3 * cell_size; // Top Right area skeleton_x[2] = 12 * cell_size; skeleton_y[2] = 3 * cell_size; skeleton_x[3] = 14 * cell_size; skeleton_y[3] = 3 * cell_size; // Bottom Left area skeleton_x[4] = 3 * cell_size; skeleton_y[4] = 10 * cell_size; skeleton_x[5] = 5 * cell_size; skeleton_y[5] = 10 * cell_size; // Bottom Right area skeleton_x[6] = 12 * cell_size; skeleton_y[6] = 10 * cell_size; skeleton_x[7] = 14 * cell_size; skeleton_y[7] = 10 * cell_size; // Middle Top skeleton_x[8] = 9 * cell_size; skeleton_y[8] = 2 * cell_size; invis_x[0] = 7 * cell_size; invis_y[0] = 7 * cell_size; invis_x[1] = 9 * cell_size; invis_y[1] = 7 * cell_size; invis_x[2] = 11 * cell_size; invis_y[2] = 7 * cell_size; } // Player shooting try to fire captured enemy as bullet if (Keyboard::isKeyPressed(Keyboard::F)) shoot(bullets, captured, player_x, player_y, speed, bulletx, bullety, bulletactive, speedx, speedy, shoottimer,bursttex); updatebullets(lvl, width, height, cell_size, bulletx, bullety, bulletactive, speedx, speedy, bullets, bullettype, maxbullets, gravity); 
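// level_two runs this level's spawn timers, enemy movement, vacuum capture, power-up pickups and bullet-vs-enemy collisions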
level_two(lvl,width,height,ghost_x,ghost_y,ghost_speed,skeleton_x,skeleton_y,skeleton_speed,player_x,player_y,lives,cell_size,PlayerWidth,PlayerHeight,speed,ghostsprite,isghostfacingleft,ghost_state,ghost_timer,skeletonSprite,isskeletonfacingleft,skeleton_state,skeleton_timer,vac_x,vac_y,vacwidth,vacheight,isghostalive,isskeletonalive,captured,ghosttex,skeletonTex,bullets,bullettype,bulletx,bullety,bulletactive,speedx,speedy,maxbullets,shoottimer,gspawntimer,sspawntimer,ghost_spawned,skeleton_spawned,chelnov_x,chelnov_y,chelnov_speed,chelnovSprite,ischelnovfacingleft,chelnov_state,chelnov_timer,ischelnovalive,chelnov_spawned,cspawntimer,chelnovtex,invis_spawned,invis_timer,invis_x,invis_y,invis_speed,isvisible,invisprite,isinvisfacingleft,isinvisalive,invisspawntimer,invistex,ispoweractive,powerupx,powerupy,powerup); for (int i =0 ; i < 4; i++){ if(isghostalive[i]){ window.draw(ghostsprite[i]); enemy_gravity(lvl, ghost_x[i], ghost_y[i], 96, 120, cell_size); } } for(int i = 0 ; i < 9 ; i++){ if (isskeletonalive[i]){ enemy_gravity(lvl, skeleton_x[i], skeleton_y[i], 120, 225, cell_size); window.draw(skeletonSprite[i]); } } for (int i = 0; i < 3 ; i++){ if (isinvisalive[i]&& isvisible[i]){ enemy_gravity(lvl, invis_x[i], invis_y[i], 96, 120, cell_size); window.draw(invisprite[i]); } } for (int b = 0; b < 3; b++){ if (bulletactive[b]){ bullets[b].setPosition(bulletx[b], bullety[b]); window.draw(bullets[b]); } } for (int i = 0 ; i < 4; i++){ if(ispoweractive[i]){ window.draw(powerup[i]); powerup[i].setPosition(powerupx[i],powerupy[i]); } } for (int i = 0; i < 4 ; i++){ if (ischelnovalive[i]){ enemy_gravity(lvl, chelnov_x[i], chelnov_y[i], 120, 135, cell_size); window.draw(chelnovSprite[i]); } } if(check_level_completion(isghostalive,isskeletonalive,ischelnovalive,isinvisalive,current_level,ghost_spawned,skeleton_spawned,chelnov_spawned,invis_spawned)){ current_level = 3;} } else if(current_level==3){ if(!level3Loaded){ initialize_level3(lvl,width,height); level3Loaded = true; //reset variables from level 2 lives = 1; } cloudy += cloudspeed; if (cloudy >= cell_size*8) cloudspeed = -cloudspeed; if (cloudy <= cell_size*3) cloudspeed = -cloudspeed; cloudy += cloudspeed; cloudsprite.setPosition(400,cloudy); potsprite.setPosition(500,cloudy ); window.draw(cloudsprite); window.draw(potsprite); float pot_w = 40; // Assuming pot size float pot_h = 40; if (!potDestroyed) { // bullets hitting the pot for (int b = 0; b < maxbullets; b++) { if (bulletactive[b]) { float bx = bulletx[b]; float by = bullety[b]; float bw = 96; float bh = 96; if (checkcollision(bx, by, bw, bh, 500, cloudy, pot_w, pot_h, 0, 0)) { // Hit the pot potHealth--; bulletactive[b] = false; // Move bullet away bulletx[b] = -1000; bullety[b] = -1000; if (potHealth <= 0) { potDestroyed = true; break; } } } } // Spawning Timer potSpawnTimer++; if (potSpawnTimer >= 180) { float spawn_center_x = 500 + 20; float sx = spawn_center_x + (rand() % 41 - 20); float sy = cloudy - 10.0f; // Slightly above pot spawn_dynamic(dy_count, dy_capacity, sx, sy, dy_x, dy_y, dy_speed, dy_type, dy_state, dy_timer, dy_facingLeft, dy_alive, dy_visible, dy_sprite, ghosttex, skeletonTex, invistex, chelnovtex,dy_is_sucked); potSpawnTimer = 0; } } for(int i = 0;i < dy_count; i++) { enemy_gravity(lvl, dy_x[i], dy_y[i], 96, 120, cell_size); } if (Keyboard :: isKeyPressed(Keyboard::Space)){ for (int i = 0; i< dy_count; i++){ //check collision between vacuum and skeleton if (checkcollision(vac_x,vac_y,vacwidth,vacheight,dy_x[i],dy_y[i],60,175,speed,dy_speed[i])){ 
dy_is_sucked[i] = true; bool isalive = dy_alive[i]; suck(speed,dy_x[i],dy_y[i],60,175,dy_speed[i],player_x,player_y,PlayerWidth,PlayerHeight,dy_sprite[i],dy_alive[i]); if(dy_alive[i]){ bullets[captured].setTexture(skeletonTex); bullets[captured].setScale(3,3); bullets[captured].setTextureRect(IntRect(1039,39,32,32)); } if(!dy_alive[i] && isalive){ captured++;//only increment captured when the skeleton was alive before and is dead now } } } } //player collision with enemy for(int i = 0;i < dy_count; i++){ float bx = bulletx[i]; float by = bullety[i]; float bw = 96; float bh = 96; // Determine enemy hitbox based on type float ex = dy_x[i]; float ey = dy_y[i]; float ew = 0; float eh = 0; if (dy_type[i] == 0) { // Ghost ew = 96; eh = 120; // Approx scaled size } else if (dy_type[i] == 1) { // Skeleton ew = 120; eh = 225; } else { // Others ew = 96; eh = 120; } if(checkcollision(player_x,player_y,PlayerWidth,PlayerHeight,dy_x[i],dy_y[i],40,40,speed,dy_speed[i]) && dy_is_sucked[i]== false && dy_alive[i]== true){ // Handle player collision with enemy cout<<"Works"<<endl; lives--; } } // Update and draw dynamic enemies for (int i = 0; i < dy_count; i++) { if (!dy_alive[i]) continue; if (dy_facingLeft[i]) dy_x[i] -= dy_speed[i]; else dy_x[i] += dy_speed[i]; // screen Bounds flip if (dy_x[i] < 0) { dy_x[i] = 0; dy_facingLeft[i] = false; } else if (dy_x[i] > width * cell_size - 40) { // Assuming width is screen edge dy_x[i] = width * cell_size - 40; dy_facingLeft[i] = true; } // Sync Sprite Position dy_sprite[i].setPosition(dy_x[i], dy_y[i]); window.draw(dy_sprite[i]); //collision between bullet and enemy for (int b = 0; b < maxbullets; b++) { if (!bulletactive[b]) continue; float bx = (float)bulletx[b]; float by = (float)bullety[b]; float bw = 96.0f; float bh = 96.0f; // Determine enemy hitbox based on type float ex = dy_x[i]; float ey = dy_y[i]; float ew = 0; float eh = 0; if (dy_type[i] == 0) { // Ghost ew = 96.0f; eh = 120.0f; // Approx scaled size } else if (dy_type[i] == 1) { // Skeleton ew = 120.0f; eh = 225.0f; } else { // Others ew = 96.0f; eh = 120.0f; } // collision check between bullet and enemy if (checkcollision(bx, by, bw, bh, ex, ey, ew, eh, 0, 0)) { dy_alive[i] = false; // Enemy killed bulletactive[b] = false; bulletx[b] = -1000; bullety[b] = -1000; break; } } } } if (current_level == 4) { if (!level4Loaded) { for (int i = 0; i < height; i++) { delete[] lvl[i]; } delete[] lvl; int old_cell_size = cell_size; // Save old size to calculate scale factor width = width * 1.5; height = height * 1.5; cell_size = cell_size / 1.5; //new array for level 4 lvl = new char* [height]; for (int i = 0; i < height; i += 1) { lvl[i] = new char[width]; } initialize_level4(lvl, width, height); float scaleFactor = (float)cell_size / (float)old_cell_size; // Apply scale to block sprite so they don't overlap blockSprite.setScale(scaleFactor, scaleFactor); // Reset Player Position to a safe spot player_x = 2 * cell_size; player_y = 2 * cell_size; level4Loaded = true; } display_level(window, lvl, bgTex, bgSprite, blockTexture, blockSprite, blockLTexture, blockLSprite, blockRTexture, blockRSprite, height, width, cell_size); // Update Player Physics PlayerSprite.setPosition(player_x, player_y); player_gravity(lvl, offset_y, velocityY, onGround, gravity, terminal_Velocity, player_x, player_y, cell_size, PlayerHeight, PlayerWidth, isfacingleft); window.draw(PlayerSprite); octosprite.setPosition(10 * cell_size, 15 * cell_size - 120); window.draw(octosprite); } int ghosts_left = 0; int skels_left = 0; for(int i=0; 
i<8; i++) if(isghostalive[i]) ghosts_left++; for(int i=0; i<9; i++) if(isskeletonalive[i]) skels_left++; } window.display(); } //stopping music and deleting level array lvlMusic.stop(); for (int i = 0; i < height; i++) { delete[] lvl[i]; } delete[] lvl; delete[] dy_x; delete[] dy_y; delete[] dy_speed; delete[] dy_type; delete[] dy_state; delete[] dy_timer; delete[] dy_facingLeft; delete[] dy_alive; delete[] dy_visible; delete[] dy_sprite; delete[] dy_is_sucked; return 0; }
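resize_arrays above grows every parallel enemy array by two slots and copies the old contents into fresh buffers before repointing the caller's pointers. The sketch below shows that grow-and-copy pattern in isolation for a single float array; the function name and demo values are illustrative, and only the +2 growth step is taken from the original.

```cpp
// Standalone sketch of the grow-and-copy resizing pattern (names are illustrative).
#include <iostream>

static void grow_by_two(float*& data, int& capacity) {
    int new_cap = capacity + 2;            // grow in fixed +2 steps, like resize_arrays
    float* new_data = new float[new_cap];
    for (int i = 0; i < capacity; i++)     // copy the existing elements across
        new_data[i] = data[i];
    delete[] data;                         // release the old buffer only after copying
    data = new_data;                       // repoint the caller's pointer
    capacity = new_cap;
}

int main() {
    int cap = 2;
    float* xs = new float[cap]{1.f, 2.f};
    grow_by_two(xs, cap);
    xs[2] = 3.f; xs[3] = 4.f;
    for (int i = 0; i < cap; i++) std::cout << xs[i] << " ";
    std::cout << "\n";                     // prints: 1 2 3 4
    delete[] xs;
}
```

The order matters: copy first, release the old buffer, then repoint, so the caller's pointer always refers to live storage.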
github_cpp
2025-12-07T15:21:21Z
https://github.com/Xakak/PF-Project/blob/daa78fb5f01e719778f9462c6561ec27d18a9595/tumblepop.cpp
{}
/**************************************************************************** ** Meta object code from reading C++ file 'loginwindow.h' ** ** Created by: The Qt Meta Object Compiler version 68 (Qt 6.7.1) ** ** WARNING! All changes made in this file will be lost! *****************************************************************************/ #include "../../../../include/ui/loginwindow.h" #include <QtGui/qtextcursor.h> #include <QtCore/qmetatype.h> #include <QtCore/qtmochelpers.h> #include <memory> #include <QtCore/qxptype_traits.h> #if !defined(Q_MOC_OUTPUT_REVISION) #error "The header file 'loginwindow.h' doesn't include <QObject>." #elif Q_MOC_OUTPUT_REVISION != 68 #error "This file was generated using the moc from 6.7.1. It" #error "cannot be used with the include files from this version of Qt." #error "(The moc has changed too much.)" #endif #ifndef Q_CONSTINIT #define Q_CONSTINIT #endif QT_WARNING_PUSH QT_WARNING_DISABLE_DEPRECATED QT_WARNING_DISABLE_GCC("-Wuseless-cast") namespace { #ifdef QT_MOC_HAS_STRINGDATA struct qt_meta_stringdata_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS_t {}; constexpr auto qt_meta_stringdata_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS = QtMocHelpers::stringData( "PMS::UI::LoginWindow", "loginSuccess", "", "onLoginClicked", "onCancelClicked", "onRegisterClicked", "onRegistrationSuccess", "username" ); #else // !QT_MOC_HAS_STRINGDATA #error "qtmochelpers.h not found or too old." #endif // !QT_MOC_HAS_STRINGDATA } // unnamed namespace Q_CONSTINIT static const uint qt_meta_data_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS[] = { // content: 12, // revision 0, // classname 0, 0, // classinfo 5, 14, // methods 0, 0, // properties 0, 0, // enums/sets 0, 0, // constructors 0, // flags 1, // signalCount // signals: name, argc, parameters, tag, flags, initial metatype offsets 1, 0, 44, 2, 0x06, 1 /* Public */, // slots: name, argc, parameters, tag, flags, initial metatype offsets 3, 0, 45, 2, 0x08, 2 /* Private */, 4, 0, 46, 2, 0x08, 3 /* Private */, 5, 0, 47, 2, 0x08, 4 /* Private */, 6, 1, 48, 2, 0x08, 5 /* Private */, // signals: parameters QMetaType::Void, // slots: parameters QMetaType::Void, QMetaType::Void, QMetaType::Void, QMetaType::Void, QMetaType::QString, 7, 0 // eod }; Q_CONSTINIT const QMetaObject PMS::UI::LoginWindow::staticMetaObject = { { QMetaObject::SuperData::link<QWidget::staticMetaObject>(), qt_meta_stringdata_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS.offsetsAndSizes, qt_meta_data_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS, qt_static_metacall, nullptr, qt_incomplete_metaTypeArray<qt_meta_stringdata_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS_t, // Q_OBJECT / Q_GADGET QtPrivate::TypeAndForceComplete<LoginWindow, std::true_type>, // method 'loginSuccess' QtPrivate::TypeAndForceComplete<void, std::false_type>, // method 'onLoginClicked' QtPrivate::TypeAndForceComplete<void, std::false_type>, // method 'onCancelClicked' QtPrivate::TypeAndForceComplete<void, std::false_type>, // method 'onRegisterClicked' QtPrivate::TypeAndForceComplete<void, std::false_type>, // method 'onRegistrationSuccess' QtPrivate::TypeAndForceComplete<void, std::false_type>, QtPrivate::TypeAndForceComplete<const QString &, std::false_type> >, nullptr } }; void PMS::UI::LoginWindow::qt_static_metacall(QObject *_o, QMetaObject::Call _c, int _id, void **_a) { if (_c == QMetaObject::InvokeMetaMethod) { auto *_t = static_cast<LoginWindow *>(_o); (void)_t; switch (_id) { case 0: _t->loginSuccess(); break; case 1: _t->onLoginClicked(); break; case 2: _t->onCancelClicked(); break; case 3: 
_t->onRegisterClicked(); break; case 4: _t->onRegistrationSuccess((*reinterpret_cast< std::add_pointer_t<QString>>(_a[1]))); break; default: ; } } else if (_c == QMetaObject::IndexOfMethod) { int *result = reinterpret_cast<int *>(_a[0]); { using _t = void (LoginWindow::*)(); if (_t _q_method = &LoginWindow::loginSuccess; *reinterpret_cast<_t *>(_a[1]) == _q_method) { *result = 0; return; } } } } const QMetaObject *PMS::UI::LoginWindow::metaObject() const { return QObject::d_ptr->metaObject ? QObject::d_ptr->dynamicMetaObject() : &staticMetaObject; } void *PMS::UI::LoginWindow::qt_metacast(const char *_clname) { if (!_clname) return nullptr; if (!strcmp(_clname, qt_meta_stringdata_CLASSPMSSCOPEUISCOPELoginWindowENDCLASS.stringdata0)) return static_cast<void*>(this); return QWidget::qt_metacast(_clname); } int PMS::UI::LoginWindow::qt_metacall(QMetaObject::Call _c, int _id, void **_a) { _id = QWidget::qt_metacall(_c, _id, _a); if (_id < 0) return _id; if (_c == QMetaObject::InvokeMetaMethod) { if (_id < 5) qt_static_metacall(this, _c, _id, _a); _id -= 5; } else if (_c == QMetaObject::RegisterMethodArgumentMetaType) { if (_id < 5) *reinterpret_cast<QMetaType *>(_a[0]) = QMetaType(); _id -= 5; } return _id; } // SIGNAL 0 void PMS::UI::LoginWindow::loginSuccess() { QMetaObject::activate(this, &staticMetaObject, 0, nullptr); } QT_WARNING_POP
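The generated meta-object table above encodes the shape of PMS::UI::LoginWindow: a QWidget subclass with one public signal, loginSuccess(), and four private slots, the last taking a const QString& named username. The header itself is not included here, so the sketch below is only a plausible reconstruction of that declaration; the constructor signature and any widget members are assumptions.

```cpp
// Hypothetical reconstruction of the class shape implied by this moc output;
// the real loginwindow.h is not shown, so details beyond the method list are assumed.
#include <QWidget>
#include <QString>

namespace PMS { namespace UI {

class LoginWindow : public QWidget {
    Q_OBJECT                       // required for moc to emit the meta-object code
public:
    explicit LoginWindow(QWidget *parent = nullptr);   // assumed constructor

signals:
    void loginSuccess();           // SIGNAL 0 in the generated table

private slots:                     // the four private slots listed in the string data
    void onLoginClicked();
    void onCancelClicked();
    void onRegisterClicked();
    void onRegistrationSuccess(const QString &username);
};

} } // namespace PMS::UI
```

In the implementation these slots would typically be wired with the new-style connect syntax, e.g. connect(someButton, &QPushButton::clicked, this, &LoginWindow::onLoginClicked), where someButton is a hypothetical member widget.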
github_cpp
2025-12-13T15:51:43Z
https://github.com/wz-intel-coding/Enterprise-Project-Management-System/blob/c73913b63a8f484c77a7131aa75de7a350944baa/build_windows/PMS_autogen/include_Debug/JCP2AIONHY/moc_loginwindow.cpp
{}
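Usage sketch (not part of the record above): a minimal program wiring up the signal that the generated meta-object exposes. Only the class name PMS::UI::LoginWindow, its QWidget base, and the public signal loginSuccess() are taken from the moc data; the include path, the default constructor, and the QApplication scaffolding are assumptions for illustration.

#include <QApplication>
#include <QDebug>
#include "ui/loginwindow.h"                        // assumed include path

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    PMS::UI::LoginWindow login;                    // assumes a default-constructible widget
    // SIGNAL 0 in the moc output activates loginSuccess(); connect to it the usual way.
    QObject::connect(&login, &PMS::UI::LoginWindow::loginSuccess,
                     []() { qDebug() << "login succeeded"; });
    login.show();
    return app.exec();
}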
#include <esp_attr.h> #include <esp_heap_caps.h> #include <esp_log.h> #include <stddef.h> #include <string.h> #include "esp_jpeg_common.h" #include "esp_jpeg_enc.h" #if CONFIG_XIAOZHI_ENABLE_HARDWARE_JPEG_ENCODER #include "driver/jpeg_encode.h" #endif #include "image_to_jpeg.h" #define TAG "image_to_jpeg" static void* malloc_psram(size_t size) { void* p = malloc(size); if (p) return p; #if (CONFIG_SPIRAM_SUPPORT && (CONFIG_SPIRAM_USE_CAPS_ALLOC || CONFIG_SPIRAM_USE_MALLOC)) return heap_caps_malloc(size, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT); #else return NULL; #endif } static __always_inline uint8_t expand_5_to_8(uint8_t v) { return (uint8_t)((v << 3) | (v >> 2)); } static __always_inline uint8_t expand_6_to_8(uint8_t v) { return (uint8_t)((v << 2) | (v >> 4)); } static uint8_t* convert_input_to_encoder_buf(const uint8_t* src, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, jpeg_pixel_format_t* out_fmt, int* out_size) { // Directly supported formats: GRAY, RGB888, YCbYCr (YUYV) if (format == V4L2_PIX_FMT_GREY) { int sz = (int)width * (int)height; uint8_t* buf = (uint8_t*)jpeg_calloc_align(sz, 16); if (!buf) return NULL; memcpy(buf, src, sz); if (out_fmt) *out_fmt = JPEG_PIXEL_FORMAT_GRAY; if (out_size) *out_size = sz; return buf; } // V4L2 YUYV (Y Cb Y Cr) can be used directly as JPEG_PIXEL_FORMAT_YCbYCr input if (format == V4L2_PIX_FMT_YUYV) { int sz = (int)width * (int)height * 2; uint8_t* buf = (uint8_t*)jpeg_calloc_align(sz, 16); if (!buf) return NULL; memcpy(buf, src, sz); if (out_fmt) *out_fmt = JPEG_PIXEL_FORMAT_YCbYCr; if (out_size) *out_size = sz; return buf; } // V4L2 UYVY (Cb Y Cr Y) -> reorder to YUYV, then feed as YCbYCr if (format == V4L2_PIX_FMT_UYVY) { int sz = (int)width * (int)height * 2; const uint8_t* s = src; uint8_t* buf = (uint8_t*)jpeg_calloc_align(sz, 16); if (!buf) return NULL; uint8_t* d = buf; for (int i = 0; i < sz; i += 4) { // src: Cb, Y0, Cr, Y1 -> dst: Y0, Cb, Y1, Cr d[0] = s[1]; d[1] = s[0]; d[2] = s[3]; d[3] = s[2]; s += 4; d += 4; } if (out_fmt) *out_fmt = JPEG_PIXEL_FORMAT_YCbYCr; if (out_size) *out_size = sz; return buf; } // V4L2 YUV422P (YUV422 planar) -> repack to YUYV (YCbYCr) if (format == V4L2_PIX_FMT_YUV422P) { int sz = (int)width * (int)height * 2; const uint8_t* y_plane = src; const uint8_t* u_plane = y_plane + (int)width * (int)height; const uint8_t* v_plane = u_plane + ((int)width / 2) * (int)height; uint8_t* buf = (uint8_t*)jpeg_calloc_align(sz, 16); if (!buf) return NULL; uint8_t* dst = buf; for (int y = 0; y < height; y++) { const uint8_t* y_row = y_plane + y * (int)width; const uint8_t* u_row = u_plane + y * ((int)width / 2); const uint8_t* v_row = v_plane + y * ((int)width / 2); for (int x = 0; x < width; x += 2) { uint8_t y0 = y_row[x + 0]; uint8_t y1 = y_row[x + 1]; uint8_t cb = u_row[x / 2]; uint8_t cr = v_row[x / 2]; dst[0] = y0; dst[1] = cb; dst[2] = y1; dst[3] = cr; dst += 4; } } if (out_fmt) *out_fmt = JPEG_PIXEL_FORMAT_YCbYCr; if (out_size) *out_size = sz; return buf; } // All remaining formats are converted to RGB888 int rgb_size = (int)width * (int)height * 3; uint8_t* rgb = (uint8_t*)jpeg_calloc_align(rgb_size, 16); if (!rgb) return NULL; if (format == V4L2_PIX_FMT_RGB24) { // V4L2_RGB24 is already RGB888 memcpy(rgb, src, rgb_size); } else if (format == V4L2_PIX_FMT_RGB565) { // RGB565, little-endian; convert to RGB888 const uint8_t* p = src; uint8_t* d = rgb; int pixels = (int)width * (int)height; for (int i = 0; i < pixels; i++) { uint8_t lo = p[0]; // low byte (LSB) uint8_t hi = p[1]; // high byte (MSB) p += 2; uint8_t r5 = (hi >> 3) & 0x1F; uint8_t g6 = ((hi & 0x07) << 3) | ((lo & 0xE0) >> 5); uint8_t b5 = lo & 0x1F; d[0] = expand_5_to_8(r5); d[1] 
= expand_6_to_8(g6); d[2] = expand_5_to_8(b5); d += 3; } } else { // Other, unhandled formats: zero the buffer memset(rgb, 0, rgb_size); } if (out_fmt) *out_fmt = JPEG_PIXEL_FORMAT_RGB888; if (out_size) *out_size = rgb_size; return rgb; } #if CONFIG_XIAOZHI_ENABLE_HARDWARE_JPEG_ENCODER static jpeg_encoder_handle_t s_hw_jpeg_handle = NULL; static bool hw_jpeg_ensure_inited(void) { if (s_hw_jpeg_handle) { return true; } jpeg_encode_engine_cfg_t eng_cfg = { .intr_priority = 0, .timeout_ms = 100, }; esp_err_t er = jpeg_new_encoder_engine(&eng_cfg, &s_hw_jpeg_handle); if (er != ESP_OK) { ESP_LOGE(TAG, "jpeg_new_encoder_engine failed: %d", (int)er); s_hw_jpeg_handle = NULL; return false; } return true; } static uint8_t* convert_input_to_hw_encoder_buf(const uint8_t* src, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, jpeg_enc_input_format_t* out_fmt, int* out_size) { if (format == V4L2_PIX_FMT_GREY) { int sz = (int)width * (int)height; uint8_t* buf = (uint8_t*)malloc_psram(sz); if (!buf) return NULL; memcpy(buf, src, sz); if (out_fmt) *out_fmt = JPEG_ENCODE_IN_FORMAT_GRAY; if (out_size) *out_size = sz; return buf; } if (format == V4L2_PIX_FMT_RGB24) { int sz = (int)width * (int)height * 3; uint8_t* buf = (uint8_t*)malloc_psram(sz); if (!buf) { ESP_LOGE(TAG, "malloc_psram failed"); return NULL; } memcpy(buf, src, sz); if (out_fmt) *out_fmt = JPEG_ENCODE_IN_FORMAT_RGB888; if (out_size) *out_size = sz; return buf; } if (format == V4L2_PIX_FMT_RGB565) { int sz = (int)width * (int)height * 2; uint8_t* buf = (uint8_t*)malloc_psram(sz); if (!buf) return NULL; memcpy(buf, src, sz); if (out_fmt) *out_fmt = JPEG_ENCODE_IN_FORMAT_RGB565; if (out_size) *out_size = sz; return buf; } if (format == V4L2_PIX_FMT_YUYV) { // The hardware expects the "big-endian" | Y1 V Y0 U | layout, hence the bswap16 int sz = (int)width * (int)height * 2; uint16_t* buf = (uint16_t*)malloc_psram(sz); if (!buf) return NULL; const uint16_t* bsrc = (const uint16_t*)src; for (int i = 0; i < sz / 2; i++) { buf[i] = __builtin_bswap16(bsrc[i]); } if (out_fmt) *out_fmt = JPEG_ENCODE_IN_FORMAT_YUV422; if (out_size) *out_size = sz; return (uint8_t*)buf; } return NULL; } static bool encode_with_hw_jpeg(const uint8_t* src, size_t src_len, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, uint8_t quality, uint8_t** jpg_out, size_t* jpg_out_len, jpg_out_cb cb, void* cb_arg) { if (quality < 1) quality = 1; if (quality > 100) quality = 100; jpeg_enc_input_format_t enc_src_type = JPEG_ENCODE_IN_FORMAT_RGB888; int enc_in_size = 0; uint8_t* enc_in = convert_input_to_hw_encoder_buf(src, width, height, format, &enc_src_type, &enc_in_size); if (!enc_in) { ESP_LOGW(TAG, "hw jpeg: unsupported format, fallback to sw"); return false; } if (!hw_jpeg_ensure_inited()) { free(enc_in); return false; } jpeg_encode_cfg_t enc_cfg = {0}; enc_cfg.width = width; enc_cfg.height = height; enc_cfg.src_type = enc_src_type; enc_cfg.image_quality = quality; enc_cfg.sub_sample = (enc_src_type == JPEG_ENCODE_IN_FORMAT_GRAY) ? 
JPEG_DOWN_SAMPLING_GRAY : JPEG_DOWN_SAMPLING_YUV422; size_t out_cap = (size_t)width * (size_t)height * 3 / 2 + 64 * 1024; if (out_cap < 128 * 1024) out_cap = 128 * 1024; jpeg_encode_memory_alloc_cfg_t jpeg_enc_output_mem_cfg = { .buffer_direction = JPEG_ENC_ALLOC_OUTPUT_BUFFER }; size_t out_cap_aligned = 0; uint8_t* outbuf = (uint8_t*)jpeg_alloc_encoder_mem(out_cap, &jpeg_enc_output_mem_cfg, &out_cap_aligned); if (!outbuf) { free(enc_in); ESP_LOGE(TAG, "alloc out buffer failed"); return false; } uint32_t out_len = 0; esp_err_t er = jpeg_encoder_process(s_hw_jpeg_handle, &enc_cfg, enc_in, (uint32_t)enc_in_size, outbuf, (uint32_t)out_cap_aligned, &out_len); free(enc_in); if (er != ESP_OK) { free(outbuf); ESP_LOGE(TAG, "jpeg_encoder_process failed: %d", (int)er); return false; } if (cb) { cb(cb_arg, 0, outbuf, (size_t)out_len); cb(cb_arg, 1, NULL, 0); free(outbuf); if (jpg_out) *jpg_out = NULL; if (jpg_out_len) *jpg_out_len = 0; return true; } if (jpg_out && jpg_out_len) { *jpg_out = outbuf; *jpg_out_len = (size_t)out_len; return true; } free(outbuf); return true; } #endif // CONFIG_XIAOZHI_ENABLE_HARDWARE_JPEG_ENCODER static bool encode_with_esp_new_jpeg(const uint8_t* src, size_t src_len, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, uint8_t quality, uint8_t** jpg_out, size_t* jpg_out_len, jpg_out_cb cb, void* cb_arg) { if (quality < 1) quality = 1; if (quality > 100) quality = 100; jpeg_pixel_format_t enc_src_type = JPEG_PIXEL_FORMAT_RGB888; int enc_in_size = 0; uint8_t* enc_in = convert_input_to_encoder_buf(src, width, height, format, &enc_src_type, &enc_in_size); if (!enc_in) { ESP_LOGE(TAG, "alloc/convert input failed"); return false; } jpeg_enc_config_t cfg = DEFAULT_JPEG_ENC_CONFIG(); cfg.width = width; cfg.height = height; cfg.src_type = enc_src_type; cfg.subsampling = (enc_src_type == JPEG_PIXEL_FORMAT_GRAY) ? 
JPEG_SUBSAMPLE_GRAY : JPEG_SUBSAMPLE_420; cfg.quality = quality; cfg.rotate = JPEG_ROTATE_0D; cfg.task_enable = false; jpeg_enc_handle_t h = NULL; jpeg_error_t ret = jpeg_enc_open(&cfg, &h); if (ret != JPEG_ERR_OK) { jpeg_free_align(enc_in); ESP_LOGE(TAG, "jpeg_enc_open failed: %d", (int)ret); return false; } // Estimate the output buffer: width*height*1.5 + 64 KB size_t out_cap = (size_t)width * (size_t)height * 3 / 2 + 64 * 1024; if (out_cap < 128 * 1024) out_cap = 128 * 1024; uint8_t* outbuf = (uint8_t*)malloc_psram(out_cap); if (!outbuf) { jpeg_enc_close(h); jpeg_free_align(enc_in); ESP_LOGE(TAG, "alloc out buffer failed"); return false; } int out_len = 0; ret = jpeg_enc_process(h, enc_in, enc_in_size, outbuf, (int)out_cap, &out_len); jpeg_enc_close(h); jpeg_free_align(enc_in); if (ret != JPEG_ERR_OK) { free(outbuf); ESP_LOGE(TAG, "jpeg_enc_process failed: %d", (int)ret); return false; } if (cb) { cb(cb_arg, 0, outbuf, (size_t)out_len); cb(cb_arg, 1, NULL, 0); // end-of-stream signal free(outbuf); if (jpg_out) *jpg_out = NULL; if (jpg_out_len) *jpg_out_len = 0; return true; } if (jpg_out && jpg_out_len) { *jpg_out = outbuf; *jpg_out_len = (size_t)out_len; return true; } free(outbuf); return true; } bool image_to_jpeg(uint8_t* src, size_t src_len, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, uint8_t quality, uint8_t** out, size_t* out_len) { #if CONFIG_XIAOZHI_ENABLE_HARDWARE_JPEG_ENCODER if (encode_with_hw_jpeg(src, src_len, width, height, format, quality, out, out_len, NULL, NULL)) { return true; } // Fallback to esp_new_jpeg #endif return encode_with_esp_new_jpeg(src, src_len, width, height, format, quality, out, out_len, NULL, NULL); } bool image_to_jpeg_cb(uint8_t* src, size_t src_len, uint16_t width, uint16_t height, v4l2_pix_fmt_t format, uint8_t quality, jpg_out_cb cb, void* arg) { #if CONFIG_XIAOZHI_ENABLE_HARDWARE_JPEG_ENCODER if (encode_with_hw_jpeg(src, src_len, width, height, format, quality, NULL, NULL, cb, arg)) { return true; } // Fallback to esp_new_jpeg #endif return encode_with_esp_new_jpeg(src, src_len, width, height, format, quality, NULL, NULL, cb, arg); }
github_cpp
2025-12-06T08:42:39Z
https://github.com/vqtuan789/Xiaozhi-NTC-SDCARD/blob/5fdf02ae2dc44154eb5d83d611b298c9541ccf5e/main/display/lvgl_display/jpg/image_to_jpeg.cpp
{}
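Worked check (not from the repository above): the RGB565 -> RGB888 unpacking used in convert_input_to_encoder_buf(), run on a single little-endian pixel. The bit-expansion helpers are copied from the file; main() and the sample value are illustrative only.

#include <cstdint>
#include <cstdio>

static uint8_t expand_5_to_8(uint8_t v) { return (uint8_t)((v << 3) | (v >> 2)); }
static uint8_t expand_6_to_8(uint8_t v) { return (uint8_t)((v << 2) | (v >> 4)); }

int main() {
    // RGB565 pixel 0xF800 (pure red), stored little-endian: low byte first.
    uint8_t lo = 0x00, hi = 0xF8;
    uint8_t r5 = (hi >> 3) & 0x1F;                          // 0x1F
    uint8_t g6 = ((hi & 0x07) << 3) | ((lo & 0xE0) >> 5);   // 0x00
    uint8_t b5 = lo & 0x1F;                                 // 0x00
    // Bit replication maps a full-scale 5/6-bit value to 255 and zero to zero,
    // so this prints R=255 G=0 B=0.
    printf("R=%d G=%d B=%d\n", expand_5_to_8(r5), expand_6_to_8(g6), expand_5_to_8(b5));
    return 0;
}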
/* * METRIC --- Mode expansion modeling in integrated optics / photonics * http://metric.computational-photonics.eu/ */ /* * veims.cpp * Variational Effective Index Mode Solver VEIMS * mode analysis for waveguides with 2D rectangular cross section * --- the simplest version, separabale mode profile approximations */ #include<stdio.h> #include<stdlib.h> #include<math.h> #include"inout.h" #include"complex.h" #include"matrix.h" #include"structure.h" #include"gengwed.h" #include"matlvis.h" #include"slamode.h" #include"integral.h" #include"slamarr.h" #include"slams.h" #include"veims.h" /* error message */ void eimodeerror(const char *s) { fprintf(stderr, "\nveims: %s.\n", s); exit(1); } /* VEIMS mode profile representation: separable fields */ void EIMode::fldrep(Fcomp cp, double &fac, FldorDer &vfod, FldorDer &hfod, double x, double y) const { double iomu0, ioep0, epsr, epseff, epsc, q; int rly = st(rsegi).layeridx(x); int seg = st.segidx(y); if(pol == TE) { switch(cp) { case EX: vfod = FLD; hfod = FLD; fac = 0.0; break; case EY: epseff = hwg.eps(seg); vfod = FLD; hfod = FLD; fac = beta*vmp.beta/k0/k0/epseff; break; case EZ: epseff = hwg.eps(seg); vfod = FLD; hfod = DER; fac = -vmp.beta/k0/k0/epseff; break; case HX: iomu0 = vmp.invommu0; vfod = FLD; hfod = FLD; fac = -vmp.beta*iomu0; break; case HY: iomu0 = vmp.invommu0; epseff = hwg.eps(seg); vfod = DER; hfod = DER; fac = -vmp.beta*iomu0/k0/k0/epseff; break; case HZ: iomu0 = vmp.invommu0; epseff = hwg.eps(seg); vfod = DER; hfod = FLD; fac = vmp.beta*beta*iomu0/k0/k0/epseff; break; default: eimodeerror("setfldrep: illegal cp"); break; } return; } switch(cp) { case EX: ioep0 = vmp.invomep0; epsr = st(rsegi).eps(rly); vfod = FLD; hfod = FLD; fac = vmp.beta*ioep0/epsr; break; case EY: ioep0 = vmp.invomep0; epsr = st(rsegi).eps(rly); epsc = hwg_ec(seg); q = hwg_q(seg); vfod = DER; hfod = DER; fac = vmp.beta*ioep0/epsr/k0/k0/epsc*q; break; case EZ: ioep0 = vmp.invomep0; epsr = st(rsegi).eps(rly); epsc = hwg_ec(seg); q = hwg_q(seg); vfod = DER; hfod = FLD; fac = -vmp.beta*beta*ioep0/epsr/k0/k0/epsc*q; break; case HX: vfod = FLD; hfod = FLD; fac = 0.0; break; case HY: epsc = hwg_ec(seg); vfod = FLD; hfod = FLD; fac = beta*vmp.beta/k0/k0/epsc; break; case HZ: epsc = hwg_ec(seg); vfod = FLD; hfod = DER; fac = -vmp.beta/k0/k0/epsc; break; default: eimodeerror("setfldrep: illegal cp"); break; } return; } void EIMode::fldrep(Fcomp cp, double &fac, FldorDer &vfod, FldorDer &hfod, int l, int m) const { Rect r = wg.rectbounds(l, m); fldrep(cp, fac, vfod, hfod, 0.5*(r.x0+r.x1), 0.5*(r.y0+r.y1)); return; } /* mode profile, field values at position (x, y) cp: EX - HZ, SZ */ double EIMode::field(Fcomp cp, double x, double y) const { double vf, hf, fac; FldorDer hfod, vfod; switch(cp) { case EX: case EY: case EZ: case HX: case HY: case HZ: fldrep(cp, fac, vfod, hfod, x, y); vf = ((vfod == FLD) ? vmp.field(x) : vmp.d_field(x)); hf = ((hfod == FLD) ? 
hmp.field(y) : hmp.d_field(y)); return fac*vf*hf; break; case SZ: return 0.5*(field(EX, x, y)*field(HY, x, y) -field(HX, x, y)*field(EY, x, y)); break; default: return 0.0; break; } return 0.0; } /* store num values of component cp between (x0, y0) and (x1, y1) in a vector cp: EX - HZ, SZ */ Dvector EIMode::field(Fcomp cp, int num, double x0, double y0, double x1, double y1) const { double x, y; double dx, dy; int j; if(num <= 0) eimodeerror("field: num <= 0"); Dvector f(num); if(num == 1) { x = 0.5*(x0+x1); y = 0.5*(y0+y1); f(0) = field(cp, x, y); return f; } dx = (x1-x0)/(num-1); dy = (y1-y0)/(num-1); for(j=0; j<=num-1; ++j) { x = x0+j*dx; y = y0+j*dy; f(j) = field(cp, x, y); } return f; } /* evaluate component cp on a rectangular npx x npy mesh on area disp cp: EX - HZ, SZ foa: ORG, MOD, SQR */ Dmatrix EIMode::field(Rect disp, int npx, int npy, Fcomp cp, Afo foa) const { double x, y; double dx, dy; int i, j; if(npx <= 1) eimodeerror("field: npx <= 1"); if(npy <= 1) eimodeerror("field: npy <= 1"); Dmatrix f(npx, npy); double ft; dx = (disp.x1-disp.x0)/(npx-1); dy = (disp.y1-disp.y0)/(npy-1); for(i=0; i<=npx-1; ++i) { x = disp.x0+i*dx; for(j=0; j<=npy-1; ++j) { y = disp.y0+j*dy; ft = field(cp, x, y); switch(foa) { case MOD: ft = fabs(ft); break; case SQR: ft = ft*ft; break; case ORG: break; case REP: if(cp == EZ || cp == HZ) ft = 0.0; break; case IMP: if(cp != EZ && cp != HZ) ft = 0.0; break; } f(i, j) = ft; } } return f; } /* --- integrals of mode profiles ------------------------------ */ /* helper function */ double EIMode::prodint(int l, int m, Fcomp cp1, Fcomp cp2, Rect r) const { FldorDer vfod1, vfod2, hfod1, hfod2; double fac1, fac2; fldrep(cp1, fac1, vfod1, hfod1, l, m); fldrep(cp2, fac2, vfod2, hfod2, l, m); return fac1*fac2*vmp.integrate(vfod1, vfod2, Interval(r.x0, r.x1)) *hmp.integrate(hfod1, hfod2, Interval(r.y0, r.y1)); } /* integration of a field product over rectangle r */ double EIMode::recint(Fcomp cp1, Fcomp cp2, Rect r) const { double x, y; double xt, yr; double s; int l, m, dum; Rect rp; if(r.x0 > r.x1) { x=r.x1; r.x1=r.x0; r.x0=x; } if(r.y0 > r.y1) { y=r.y1; r.y1=r.y0; r.y0=y; } s = 0.0; y = r.y0; while(fabs(r.y1-y) > HDIST) { wg.rectidx(0.0, y+HDIST/2.0, dum, m); yr = wg.rectbounds(0,m).y1; if(r.y1<yr) yr = r.y1; x = r.x0; while(fabs(r.x1-x) > HDIST) { wg.rectidx(x+HDIST/2.0, 0.0, l, dum); xt = wg.rectbounds(l,0).x1; if(r.x1<xt) xt = r.x1; rp.x0 = x; rp.y0 = y; rp.x1 = xt; rp.y1 = yr; s += prodint(l, m, cp1, cp2, rp); x = xt; } y = yr; } return s; } /* z component of the Poynting vector, integrated over the entire x-y-plane */ /* TE part */ double EIMode::tepower() const { return -0.5*recint(HX, EY, XYplane); } /* TM part */ double EIMode::tmpower() const { return 0.5*recint(EX, HY, XYplane); } /* total power */ double EIMode::power() const { return tepower()+tmpower(); } /* --- integrals of mode profile product along lines ------------- */ /* integration of field products along horizontal lines */ double EIMode::horlineint(int l, int m, Fcomp cp1, Fcomp cp2, double x, double ya, double yb) const { FldorDer vfod1, vfod2, hfod1, hfod2; double fac1, fac2; fldrep(cp1, fac1, vfod1, hfod1, l, m); fldrep(cp2, fac2, vfod2, hfod2, l, m); return fac1*fac2*((vfod1 == FLD) ? vmp.field(x) : vmp.d_field(x)) *((vfod2 == FLD) ? 
vmp.field(x) : vmp.d_field(x)) *hmp.integrate(hfod1, hfod2, Interval(ya, yb)); } /* integration of field products along vertical lines */ double EIMode::verlineint(int l, int m, Fcomp cp1, Fcomp cp2, double xa, double xb, double y) const { FldorDer vfod1, vfod2, hfod1, hfod2; double fac1, fac2; fldrep(cp1, fac1, vfod1, hfod1, l, m); fldrep(cp2, fac2, vfod2, hfod2, l, m); return fac1*fac2*vmp.integrate(vfod1, vfod2, Interval(xa, xb)) *((hfod1 == FLD) ? hmp.field(y) : hmp.d_field(y)) *((hfod2 == FLD) ? hmp.field(y) : hmp.d_field(y)); } /* -------------------------------------------------------------------- */ /* set maximum values for E, H, Sz (only a coarse approximation !) */ void EIMode::setfieldmax() { int l, m; double x, y, w, s; int np = 10; Rect r; minE = 1.0e+300; maxE = -1.0e+300; minH = 1.0e+300; maxH = -1.0e+300; minS = 1.0e+300; maxS = -1.0e+300; for(l=1; l<=wg.nx; ++l) { for(m=1; m<=wg.ny;++m) { r = wg.rectbounds(l, m); s = (r.x1-r.x0)/np; y = (r.y1+r.y0)/2.0; for(x=r.x0; x<=r.x1; x+=s) { w = field(EX, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; w = field(EY, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; w = field(EZ, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; } s = (r.y1-r.y0)/np; x = (r.x1+r.x0)/2.0; for(y=r.y0; y<=r.y1; y+=s) { w = field(EX, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; w = field(EY, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; w = field(EZ, x, y); if(w > maxE) maxE = w; if(w < minE) minE = w; } s = (r.x1-r.x0)/np; y = (r.y1+r.y0)/2.0; for(x=r.x0; x<=r.x1; x+=s) { w = field(HX, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; w = field(HY, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; w = field(HZ, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; } s = (r.y1-r.y0)/np; x = (r.x1+r.x0)/2.0; for(y=r.y0; y<=r.y1; y+=s) { w = field(HX, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; w = field(HY, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; w = field(HZ, x, y); if(w > maxH) maxH = w; if(w < minH) minH = w; } s = (r.x1-r.x0)/np; y = (r.y1+r.y0)/2.0; for(x=r.x0; x<=r.x1; x+=s) { w = field(SZ, x, y); if(w > maxS) maxS = w; if(w < minS) minS = w; } s = (r.y1-r.y0)/np; x = (r.x1+r.x0)/2.0; for(y=r.y0; y<=r.y1; y+=s) { w = field(SZ, x, y); if(w > maxS) maxS = w; if(w < minS) minS = w; } } } return; } /* coordinates of the maximum mode intensity on rectangle r, average 1/e^2-radius in x- and y-direction only a coarse approximation ! 
*/ double EIMode::maxintensity(Rect r, double& xm, double& ym, double& rx, double& ry) const { double x, y, w, sx, sy; int np = 50; double max; double a, b; max = -1.0e+300; xm = 0.0; ym = 0.0; sx = (r.x1-r.x0)/np; sy = (r.y1-r.y0)/np; for(x=r.x0; x<=r.x1; x+=sx) { for(y=r.y0; y<=r.y1; y+=sy) { w = fabs(field(SZ, x, y)); if(w > max) { max = w; xm = x; ym = y; } } } x = xm; y = ym; w = fabs(field(SZ, x, y)); while(w > max*0.1353) { x -= sx; w = fabs(field(SZ, x, y)); } a = x; x = xm; y = ym; w = fabs(field(SZ, x, y)); while(w > max*0.1353) { x += sx; w = fabs(field(SZ, x, y)); } b = x; rx = 0.5*(b-a); x = xm; y = ym; w = fabs(field(SZ, x, y)); while(w > max*0.1353) { y -= sy; w = fabs(field(SZ, x, y)); } a = y; x = xm; y = ym; w = fabs(field(SZ, x, y)); while(w > max*0.1353) { y += sy; w = fabs(field(SZ, x, y)); } b = y; ry = 0.5*(b-a); return max; } /* normalize mode to power() == nrm */ void EIMode::normalize(double nrm) { double p = power(); p = 1.0/p; hmp.normalize(p); setfieldmax(); return; } /* translate mode by (dx, dy) */ void EIMode::translate(double dx, double dy) { wg.translate(dx, dy); st.xtranslate(dx); st.ytranslate(dy); return; } /* write profile section along line (x0,y0) -> (x1,y1) cp: EX - HZ, SZ ext0, ext1: filename id characters np: number of output points */ void EIMode::writesec(Fcomp cp, char ext0, char ext1, int np, double x0, double y0, double x1, double y1) const { FILE *dat; int i; double x, y, dx, dy; char name[13] = "t_____.xyf"; switch(pol) { case TE: name[1] = 'e'; break; case TM: name[1] = 'm'; break; } name[2] = fldchr(cp); name[3] = cpchr(cp); name[4] = ext0; name[5] = ext1; fprintf(stderr, ">> %s\n", name); dat = fopen(name, "w+"); dx = x1-x0; dy = y1-y0; for(i=0; i<=np; ++i) { x = x0+i*dx/np; y = y0+i*dy/np; if(dx == 0.0) fprintf(dat, "%g %g\n", y, field(cp, x, y)); else { if(dy == 0.0) fprintf(dat, "%g %g\n", x, field(cp, x, y)); else { fprintf(dat, "%g %g\n", i*sqrt(dx*dx+dy*dy)/np, field(cp, x, y)); } } } fclose(dat); return; } /* ------------------------------------------------------------------------ */ /* permittivity perturbation: propagation constant shift */ Complex EIMode::phaseshift(ChWgPert p) const { Complex pt; double nrm; Complex db; db = CC0; // xx pt = p.e(0, 0); if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EX, EX, p.r); // yy pt = p.e(1, 1); if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EY, EY, p.r); // zz pt = p.e(2, 2); if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EZ, EZ, p.r); // xy, yx pt = p.e(0, 1) + p.e(1, 0); if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EX, EY, p.r); // xz, zx pt = p.e(0, 2) - p.e(2, 0); pt = CCI*pt; if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EX, EZ, p.r); // yz, zy pt = p.e(1, 2) - p.e(2, 1); pt = CCI*pt; if(pt.re != 0.0 || pt.im != 0.0) db = db+pt*recint(EY, EZ, p.r); nrm = recint(HY, EX, XYplane); nrm -= recint(HX, EY, XYplane); nrm *= 2.0; db = db*(val_k0(wg.lambda)*INVSQRTMU0/INVSQRTEP0/nrm); return db; } /* geometry variations: propagation constant shift due to moving the horizontal boundary between rectangles [l,m] and [l+1,m] ( moving hx(l) for hy(m-1) < y < hy(m) ) */ double EIMode::horgeovar(int l, int m) const { double pt; double nrm; double y0, y1; if(l<0) return 0.0; if(m<0) return 0.0; if(l>wg.nx+1) return 0.0; if(m>wg.ny) return 0.0; if(m <= 0) y0 = XYplane.y0; else y0 = wg.hy(m-1); if(m >= wg.ny+1) y1 = XYplane.y1; else y1 = wg.hy(m); pt = 0.5*horlineint(l+1, m, EX, EX, wg.hx(l), y0, y1) *wg.eps(l+1,m)/wg.eps(l,m); pt += 0.5*horlineint(l, m, EX, EX, wg.hx(l), y0, y1) 
*wg.eps(l,m)/wg.eps(l+1,m); pt += 0.5*(horlineint(l+1, m, EY, EY, wg.hx(l), y0, y1) +horlineint(l, m, EY, EY, wg.hx(l), y0, y1)); pt += 0.5*(horlineint(l+1, m, EZ, EZ, wg.hx(l), y0, y1) +horlineint(l, m, EZ, EZ, wg.hx(l), y0, y1)); pt *= (wg.eps(l,m)-wg.eps(l+1,m)); nrm = recint(HY, EX, XYplane); nrm -= recint(HX, EY, XYplane); nrm *= 2.0; pt *= val_k0(wg.lambda)*INVSQRTMU0/INVSQRTEP0/nrm; return pt; } /* geometry variations: propagation constant shift due to moving the vertical boundary between rectangles [l,m] and [l,m+1] ( moving hy(m) for hx(l-1) < x < hx(l) ) */ double EIMode::vergeovar(int l, int m) const { double pt; double nrm; double x0, x1; if(m<0) return 0.0; if(l<0) return 0.0; if(m>wg.ny) return 0.0; if(l>wg.nx+1) return 0.0; if(l == 0) x0 = XYplane.x0; else x0 = wg.hx(l-1); if(l == wg.nx+1) x1 = XYplane.x1; else x1 = wg.hx(l); if(pol == TM) { pt = 0.5*(verlineint(l, m+1, HY, HY, x0, x1, wg.hy(m)) +verlineint(l, m, HY, HY, x0, x1, wg.hy(m))); pt *= (wg.eps(l,m)-wg.eps(l,m+1)); nrm = recint(HY, HY, XYplane); pt *= val_k0(wg.lambda)*val_k0(wg.lambda)/2.0/beta/nrm; return pt; } pt = 0.5*(verlineint(l, m+1, EX, EX, x0, x1, wg.hy(m)) +verlineint(l, m, EX, EX, x0, x1, wg.hy(m))); pt += 0.5*verlineint(l, m+1, EY, EY, x0, x1, wg.hy(m)) *wg.eps(l,m+1)/wg.eps(l,m); pt += 0.5*verlineint(l, m, EY, EY, x0, x1, wg.hy(m)) *wg.eps(l,m)/wg.eps(l,m+1); pt += 0.5*(verlineint(l, m+1, EZ, EZ, x0, x1, wg.hy(m)) +verlineint(l, m, EZ, EZ, x0, x1, wg.hy(m))); pt *= (wg.eps(l,m)-wg.eps(l,m+1)); nrm = recint(HY, EX, XYplane); nrm -= recint(HX, EY, XYplane); nrm *= 2.0; pt *= val_k0(wg.lambda)*INVSQRTMU0/INVSQRTEP0/nrm; return pt; } /* ------------------------------------------------------------------------ */ /* overlap 0.5*intint mode_EX*inif_HY - mode_EY*inif_HX dxdy with an initial field, with real HX and HY components, integrals are evaluated by gaussian quadrature formulas, separately on the fundamental rectangles, inside the domain rectangle r */ /* helper functions */ double EIMode::ovleval(double (*inif)(Fcomp cp, double x, double y), double x, double y) const { double fex, fhy, fey, fhx; fhy = inif(HY, x, y); fex = 0; if(fhy != 0.0) fex = field(EX, x, y); fhx = inif(HX, x, y); fey = 0; if(fhx != 0.0) fey = field(EY, x, y); return 0.5*(fex*fhy-fey*fhx); } /* based on: Numerical Recipes in C --- The Art of Scientific Computing Press, Teukolsky, Vetterling, Flannery, Cambridge University Press, 1994 */ double EIMode::ovlpix(double (*inif)(Fcomp cp, double x, double y), double x0, double x1, double y, int numx) const { int j; double xr,xm,dx,s; static double x[]={0.0,0.1488743389,0.4333953941, 0.6794095682,0.8650633666,0.9739065285}; static double w[]={0.0,0.2955242247,0.2692667193, 0.2190863625,0.1494513491,0.0666713443}; double ovl = 0.0; double a, b, step; int i; step = (x1-x0)/((double) numx); for(i=0; i<=numx-1; ++i) { a = x0+((double) i)*step; b = a+step; xm=0.5*(b+a); xr=0.5*(b-a); s=0.0; for (j=1;j<=5;j++) { dx=xr*x[j]; s += w[j]*(ovleval(inif, xm+dx, y) +ovleval(inif, xm-dx, y)); } ovl += s*xr; } return ovl; } double EIMode::ovlpint(double (*inif)(Fcomp cp, double x, double y), Rect r, int numx, int numy) const { double ovl = 0.0; double a, b, step; int i; int j; double yr,ym,dy,s; static double y[]={0.0,0.1488743389,0.4333953941, 0.6794095682,0.8650633666,0.9739065285}; static double w[]={0.0,0.2955242247,0.2692667193, 0.2190863625,0.1494513491,0.0666713443}; step = (r.y1-r.y0)/((double) numy); for(i=0; i<=numy-1; ++i) { a = r.y0+((double) i)*step; b = a+step; ym=0.5*(b+a); yr=0.5*(b-a); 
s=0.0; for (j=1;j<=5;j++) { dy=yr*y[j]; s += w[j]*(ovlpix(inif, r.x0, r.x1, ym+dy, numx) +ovlpix(inif, r.x0, r.x1, ym-dy, numx)); } ovl += s*yr; } return ovl; } #define HDIST 1.0e-8 double EIMode::ovlp(double (*inif)(Fcomp cp, double x, double y), Rect r, int numx, int numy) const { double ovl = 0.0; double xb, yl; double xt, yr; int l, m, dum; Rect rp; if(r.x0 > r.x1) { xb=r.x1; r.x1=r.x0; r.x0=xb; } if(r.y0 > r.y1) { yl=r.y1; r.y1=r.y0; r.y0=yl; } yl = r.y0; while(fabs(r.y1-yl) > HDIST) { wg.rectidx(0.0, yl+HDIST/2.0, dum, m); yr = wg.rectbounds(0,m).y1; if(r.y1<yr) yr = r.y1; xb = r.x0; while(fabs(r.x1-xb) > HDIST) { wg.rectidx(xb+HDIST/2.0, 0.0, l, dum); xt = wg.rectbounds(l,0).x1; if(r.x1<xt) xt = r.x1; rp.x0 = xb; rp.y0 = yl; rp.x1 = xt; rp.y1 = yr; ovl += ovlpint(inif, rp, numx, numy); xb = xt; } yl = yr; } return ovl; } /* - Output to MATLAB .m-files -------------------------------------- */ /* write single component to MATLAB .m file cp: EX - HZ, SZ foa: MOD, ORG, SQR disp: output region on the x-y-plane npx, npy: number of points in output mesh ext0, ext1: filename id characters pltype: 'C': contour plot 'S': surface plot 'I': intensity image 'N': field + mesh only, no plot commands (default) */ void EIMode::plot(Fcomp cp, Afo foa, Rect disp, int npx, int npy, char ext0, char ext1, char pltype) const { FILE *dat; char name[13] = "t_______.m"; double minf, maxf; name[1] = polchr(pol); name[2] = afochr(foa); name[3] = fldchr(cp); name[4] = cpchr(cp); name[5] = ext0; name[6] = ext1; name[7] = pltype; fprintf(stderr, "%c%c >> %s\n", fldchr(cp), cpchr(cp), name); dat = fopen(name, "w+"); mlout_title(dat, name, "VEIMS mode profile"); maxf = 1.0; minf = -1.0; switch(fldchr(cp)) { case 'E': switch(foa) { case ORG: case REP: case IMP: maxf = fabs(maxE); if(fabs(minE)>maxf) maxf=fabs(minE); minf = -maxf; break; case MOD: maxf = fabs(maxE); if(fabs(minE)>maxf) maxf=fabs(minE); minf = 0.0; break; case SQR: maxf = maxE*maxE; if(minE*minE>maxf) maxf=minE*minE; minf = 0.0; break; } break; case 'H': switch(foa) { case ORG: case REP: case IMP: maxf = fabs(maxH); if(fabs(minH)>maxf) maxf=fabs(minH); minf = -maxf; break; case MOD: maxf = fabs(maxH); if(fabs(minH)>maxf) maxf=fabs(minH); minf = 0.0; break; case SQR: maxf = maxH*maxH; if(minH*minH>maxf) maxf=minH*minH; minf = 0.0; break; } break; case 'S': switch(foa) { case ORG: case REP: case MOD: maxf = maxS; minf = 0.0; break; case SQR: // not reasonable maxf = maxS*maxS; minf = 0.0; break; case IMP: // not reasonable maxf = 1.0; minf = 0.0; break; } break; } if(pltype == 'I') mlout_gengeoxy(dat, st, disp); else mlout_geo(dat, wg, minf, maxf); mlout_meshxy(dat, disp, npx, npy); Dmatrix fld; fld = field(disp, npx, npy, cp, foa); mlout_fld(dat, npx, npy, cp, fld); name[8] = 0; switch(pltype) { case 'C': mlout_contour(name, dat, cp, foa); break; case 'S': mlout_surface(name, dat, cp, foa); break; case 'I': mlout_image(name, dat, cp, foa, minf, maxf); if(foa == MOD || foa == SQR) mlout_lincolormap(dat); else mlout_magcolormap(dat); break; default: break; } mlout_print(dat, name, 'e'); fclose(dat); return; } /* write all components to MATLAB .m file, surface plots ext0, ext1: filename id characters disp: output region on the x-y-plane npx, npy: number of points in output mesh */ void EIMode::acplot(Rect disp, int npx, int npy, char ext0, char ext1) const { FILE *dat; Dmatrix fld; char name[13] = "t___A.m"; name[1] = polchr(pol); name[2] = ext0; name[3] = ext1; fprintf(stderr, "Ex - Hz >> %s\n", name); dat = fopen(name, "w+"); mlout_title(dat, name, 
"VEIMS mode profile"); name[5] = 0; mlout_geo(dat, wg, 0.0, 1.0); mlout_meshxy(dat, disp, npx, npy); fld = field(disp, npx, npy, EX, ORG); mlout_fld(dat, npx, npy, EX, fld); fld = field(disp, npx, npy, EY, ORG); mlout_fld(dat, npx, npy, EY, fld); fld = field(disp, npx, npy, EZ, ORG); mlout_fld(dat, npx, npy, EZ, fld); fld = field(disp, npx, npy, HX, ORG); mlout_fld(dat, npx, npy, HX, fld); fld = field(disp, npx, npy, HY, ORG); mlout_fld(dat, npx, npy, HY, fld); fld = field(disp, npx, npy, HZ, ORG); mlout_fld(dat, npx, npy, HZ, fld); mlout_acmfile(name, dat, minE, maxE, minH, maxH); mlout_print(dat, name, 'e'); fclose(dat); return; } /* write profile section along line (x0,y0) -> (x1,y1) to MATLAB .m file cp: EX - HZ, SZ ext0, ext1: filename id characters np: number of output points pltype: 'L': geometry information + plot commands (default) 'V': field, mesh, and plot command, to be included into *L.m --- at present implemented for horizontal or vertical lines only --- */ #define HDISTF 1.0e-8 void EIMode::secplot(Fcomp cp, char ext0, char ext1, int np, char pltype, double x0, double y0, double x1, double y1) const { FILE *dat; int i, j; double x, y, dx, dy, t; double xbd, ybd; double epsold, epsnew; double minf, maxf, f; int nsec, nbd; char name[13] = "t______.m"; char ori; name[1] = polchr(pol); name[2] = fldchr(cp); name[3] = cpchr(cp); name[4] = ext0; name[5] = ext1; if(pltype != 'V') pltype = 'L'; name[6] = pltype; dx = fabs(x1-x0); dy = fabs(y1-y0); if(dx > 1.0e-10 && dy > 1.0e-10) { fprintf(stderr, "%c%c [%g, %g] -> [%g, %g] >> file \n", fldchr(cp), cpchr(cp), x0, y0, x1, y1); fprintf(stderr, " not implemented for tilted lines, use EIMode.writesec().\n"); return; } if(dx < 1.0e-10 && dy < 1.0e-10) { fprintf(stderr, "EIMode.secplot(): Nothing to do !\n"); return; } if(dx < 1.0e-10) ori = 'h'; else ori = 'v'; if(ori == 'v') { if(x0 > x1) { t = x0; x0 = x1; x1 = t; }} else { if(y0 > y1) { t = y0; y0 = y1; y1 = t; }} fprintf(stderr, "%c%c [%g, %g] -> [%g, %g] >> %s\n", fldchr(cp), cpchr(cp), x0, y0, x1, y1, name); Dmatrix fld(wg.nx+2+wg.ny+2, np+1); Dmatrix pos(wg.nx+2+wg.ny+2, np+1); Ivector nump(wg.nx+2+wg.ny+2); Dvector of(np+1); Dvector op(np+1); nsec = 0; Dvector bd(wg.nx+1+wg.ny+1); Dvector ri(wg.nx+2+wg.ny+2); nbd = 0; dx = x1-x0; dy = y1-y0; nump(nsec) = 0; x = x0; y = y0; epsold = wg.eps(x, y); ri(nbd) = sqrt(epsold); if(ori=='h') pos(nsec, nump(nsec)) = y; else pos(nsec, nump(nsec)) = x; fld(nsec, nump(nsec)) = field(cp, x, y); ++nump(nsec); for(i=1; i<=np; ++i) { x = x0+i*dx/np; y = y0+i*dy/np; epsnew = wg.eps(x, y); if(fabs(epsnew-epsold)<1.0e-10) { if(ori=='h') pos(nsec, nump(nsec)) = y; else pos(nsec, nump(nsec)) = x; fld(nsec, nump(nsec)) = field(cp, x, y); ++nump(nsec); } else { if(ori=='h') { xbd = x0; ybd = (wg.rectbounds(x, y)).y0; } else { xbd = (wg.rectbounds(x, y)).x0; ybd = y0; } if(ori=='h') pos(nsec, nump(nsec)) = ybd; else pos(nsec, nump(nsec)) = xbd; fld(nsec, nump(nsec)) = field(cp, xbd-dx*HDISTF, ybd-dy*HDISTF); ++nump(nsec); if(ori=='h') bd(nbd) = ybd; else bd(nbd) = xbd; ++nbd; epsold = epsnew; ri(nbd) = sqrt(epsold); ++nsec; nump(nsec) = 0; if(ori=='h') pos(nsec, nump(nsec)) = ybd; else pos(nsec, nump(nsec)) = xbd; fld(nsec, nump(nsec)) = field(cp, xbd+dx*HDISTF, ybd+dy*HDISTF); ++nump(nsec); if(ori=='h') pos(nsec, nump(nsec)) = y; else pos(nsec, nump(nsec)) = x; fld(nsec, nump(nsec)) = field(cp, x, y); ++nump(nsec); } } ++nsec; dat = fopen(name, "w+"); mlout_title(dat, name, "VEIMS mode field section"); name[7] = 0; minf = -1.0; maxf = 1.0; if(pltype == 
'L') { switch(fldchr(cp)) { case 'E': minf = minE; maxf = maxE; break; case 'H': minf = minH; maxf = maxH; break; case 'S': minf = minS; maxf = maxS; break; } mlout_geo(dat, wg, minf, maxf); mlout_Lsecgeo(dat, x0, y0, x1, y1, nbd, bd, ri); } minf = fld(0, 0); maxf = fld(0, 0); for(j=0; j<=nsec-1; ++j) { for(i=0; i<=nump(j)-1; ++i) { f = fld(j, i); if(f < minf) minf = f; if(f > maxf) maxf = f; of(i) = f; op(i) = pos(j, i); } if(pltype == 'L') mlout_sec1D(dat, cp, dig10(j), dig1(j), ' ', ' ', nump(j), of, op); else mlout_sec1D(dat, cp, dig10(j), dig1(j), ext0, ext1, nump(j), of, op); } if(pltype == 'L') { mlout_Lsecplot(name, dat, ori, minf, maxf, cp, nbd, nsec); mlout_print(dat, name, 'e'); } else mlout_Vsecplot(dat, cp, nsec, ext0, ext1); fclose(dat); return; } /* write single component to MATLAB .m file, fancy style :-) cp: EX - HZ, SZ disp: output region on the x-y-plane npx, npy: number of points in output mesh ext0, ext1: filename id characters */ void EIMode::fplot(Fcomp cp, Rect disp, int npx, int npy, char ext0, char ext1) const { FILE *dat; int np, l, m; double x0, x1, xp, y0, y1, yp; int numc; char name[13] = "t_____F.m"; name[1] = polchr(pol); name[2] = fldchr(cp); name[3] = cpchr(cp); name[4] = ext0; name[5] = ext1; fprintf(stderr, "%c%c >> %s\n", fldchr(cp), cpchr(cp), name); dat = fopen(name, "w+"); mlout_title(dat, name, "VEIMS mode profile :-)"); name[7] = 0; switch(fldchr(cp)) { case 'E': mlout_geo(dat, wg, minE, maxE); break; case 'H': mlout_geo(dat, wg, minH, maxH); break; case 'S': mlout_geo(dat, wg, minS, maxS); break; } mlout_meshxy(dat, disp, npx, npy); Dmatrix fld; fld = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, fld); Dvector fv; numc = 0; for(l=0; l<=wg.nx+1; ++l) { if(l==0) x0 = disp.x0; else x0 = wg.hx(l-1); if(l==wg.nx+1) x1 = disp.x1; else x1 = wg.hx(l); for(m=0; m<=wg.ny; ++m) { if(fabs(wg.n(l,m)-wg.n(l,m+1)) > 1.0e-10) { yp = wg.hy(m); np = (int)(((double) npx) *(x1-x0)/(disp.x1-disp.x0)); if(np >= 2) { fv = field(cp, np, x0+HDIST, yp-HDIST, x1-HDIST, yp-HDIST); mlout_sec(dat, x0+HDIST, yp-HDIST, x1-HDIST, yp-HDIST, np, cp, dig10(numc), dig1(numc), fv); ++numc; fv = field(cp, np, x0+HDIST, yp+HDIST, x1-HDIST, yp+HDIST); mlout_sec(dat, x0+HDIST, yp+HDIST, x1-HDIST, yp+HDIST, np, cp, dig10(numc), dig1(numc), fv); ++numc; } } } } for(m=0; m<=wg.ny+1; ++m) { if(m==0) y0 = disp.y0; else y0 = wg.hy(m-1); if(m==wg.ny+1) y1 = disp.y1; else y1 = wg.hy(m); for(l=0; l<=wg.nx; ++l) { if(fabs(wg.n(l,m)-wg.n(l+1,m)) > 1.0e-10) { xp = wg.hx(l); np = (int)(((double)npy) *(y1-y0)/(disp.y1-disp.y0)); if(np >= 2) { fv = field(cp, np, xp-HDIST, y0+HDIST, xp-HDIST, y1-HDIST); mlout_sec(dat, xp-HDIST, y0+HDIST, xp-HDIST, y1-HDIST, np, cp, dig10(numc), dig1(numc), fv); ++numc; fv = field(cp, np, xp+HDIST, y0+HDIST, xp+HDIST, y1-HDIST); mlout_sec(dat, xp+HDIST, y0+HDIST, xp+HDIST, y1-HDIST, np, cp, dig10(numc), dig1(numc), fv); ++numc; } } } } fv = field(cp, npx, disp.x0, disp.y0, disp.x1, disp.y0); mlout_sec(dat, disp.x0, disp.y0, disp.x1, disp.y0, npx, cp, dig10(numc), dig1(numc), fv); ++numc; fv = field(cp, npy, disp.x1, disp.y0, disp.x1, disp.y1); mlout_sec(dat, disp.x1, disp.y0, disp.x1, disp.y1, npy, cp, dig10(numc), dig1(numc), fv); ++numc; fv = field(cp, npx, disp.x1, disp.y1, disp.x0, disp.y1); mlout_sec(dat, disp.x1, disp.y1, disp.x0, disp.y1, npx, cp, dig10(numc), dig1(numc), fv); ++numc; fv = field(cp, npy, disp.x0, disp.y1, disp.x0, disp.y0); mlout_sec(dat, disp.x0, disp.y1, disp.x0, disp.y0, npy, cp, dig10(numc), dig1(numc), fv); ++numc; 
mlout_fancy(name, dat, cp, numc); mlout_print(dat, name, 'p'); fclose(dat); return; } /* export full mode profile data ("all") into a viewer m-file disp: the plot window npx, npy: number of plot points in the x and y directions ext0, ext1: filename id characters */ void EIMode::viewer(Rect disp, int npx, int npy, char ext0, char ext1) const { FILE *dat; char name[13] = "prf__A.m"; double wl = wg.lambda; name[3] = ext0; name[4] = ext1; double xbeg, xend, ybeg, yend; xbeg = disp.x0; xend = disp.x1; ybeg = disp.y0; yend = disp.y1; fprintf(stderr, "viewer([%g (%d) %g] x [%g (%d) %g]) >> %s\n", xbeg, npx, xend, ybeg, npy, yend, name); dat = fopen(name, "w+"); name[6] = 0; mlout_viewertopxy(dat, name, pol, wl); mlout_gengeoxy(dat, st, disp); mlout_meshxy(dat, disp, npx, npy); Dmatrix f; Fcomp cp; if(pol == TE) { cp = EX; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = EY; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = EZ; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtoim(dat, cp); cp = HX; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = HY; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = HZ; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtoim(dat, cp); } else { cp = EX; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = EY; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = EZ; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtoim(dat, cp); cp = HX; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = HY; f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtore(dat, cp); mlout_0fldxy(dat, cp); mlout_fldtoim(dat, cp); cp = HZ; mlout_0fldxy(dat, cp); mlout_fldtore(dat, cp); f = field(disp, npx, npy, cp, ORG); mlout_fld(dat, npx, npy, cp, f); mlout_fldtoim(dat, cp); } Dmatrix n(npx, npy); double dx = (xend-xbeg)/((double)(npx-1)); double dy = (yend-ybeg)/((double)(npy-1)); double x, y; y = ybeg; for(int j=0; j<=npy-1; ++j) { x = xbeg; for(int i=0; i<=npx-1; ++i) { n(i, j) = st.n(x, y); x += dx; } y += dy; } mlout_n(dat, npx, npy, n); mlout_fldviewerxy(dat, name); fclose(dat); return; } /* ------------------------------------------------------------------------ */ /* initialize */ EIModeArray::EIModeArray() : num(0) { mvec = NULL; } /* destroy */ EIModeArray::~EIModeArray() { if(mvec != NULL) delete[] mvec; mvec = NULL; num = 0; } /* copy constructor */ EIModeArray::EIModeArray(const EIModeArray& ma) : num(ma.num) { mvec = new EIMode[num]; EIMode* ap = ma.mvec; EIMode* mp = mvec; for(int i=0; i<=num-1; ++i) *mp++ = *ap++; } /* assignment */ EIModeArray& EIModeArray::operator=(const EIModeArray& ma) { if(this != &ma) { if(mvec != NULL) delete[] mvec; num = ma.num; mvec = new EIMode[num]; EIMode *ap = ma.mvec; EIMode *mp = mvec; for(int i=0; i<=num-1; ++i) *mp++ = *ap++; } return 
*this; } /* delete all Mode entries */ void EIModeArray::clear() { if(mvec != NULL) delete[] mvec; mvec = NULL; num = 0; } /* subscripting */ EIMode& EIModeArray::operator() (int i) { if(i<0 || i>=num) eimodeerror("EIModeArray: () out of range"); return mvec[i]; } EIMode EIModeArray::operator() (int i) const { if(i<0 || i>=num) eimodeerror("EIModeArray: () out of range"); return mvec[i]; } /* add a mode */ void EIModeArray::add(EIMode m) { EIMode *avec; avec = new EIMode[num+1]; EIMode* ap = avec; EIMode* mp = mvec; for(int i=0; i<=num-1; ++i) *ap++ = *mp++; *ap = m; if(mvec != NULL) delete[] mvec; mvec = avec; ++num; return; } /* delete a mode entry */ void EIModeArray::remove(int i) { if(i<0 || i>=num) eimodeerror("EIModeArray: remove: argument out of range"); if(num == 1) { delete[] mvec; mvec = NULL; num = 0; return; } EIMode *avec; avec = new EIMode[num-1]; EIMode* ap = avec; EIMode* mp = mvec; for(int j=0; j<=i-1; ++j) *ap++ = *mp++; mp++; for(int j=i+1; j<=num-1; ++j) *ap++ = *mp++; if(mvec != NULL) delete[] mvec; mvec = avec; --num; return; } /* add an entire EIModeArray nma */ void EIModeArray::merge(EIModeArray& ma) { EIMode *avec; avec = new EIMode[num+ma.num]; EIMode* ap = avec; EIMode* mp = mvec; for(int i=0; i<=num-1; ++i) *ap++ = *mp++; EIMode* np = ma.mvec; for(int i=0; i<=ma.num-1; ++i) *ap++ = *np++; if(mvec != NULL) delete[] mvec; mvec = avec; num += ma.num; return; } /* sort the array by propagation constants, highest first */ void EIModeArray::sort() { int j, k, maxi; double maxb; EIMode t; if(num<=1) return; for(j=0; j<=num-2; ++j) { maxi = j; maxb = (mvec[j]).beta; for(k=j+1; k<=num-1; ++k) { if(maxb<(mvec[k]).beta) { maxb = (mvec[k]).beta; maxi = k; } } t = mvec[j]; mvec[j] = mvec[maxi]; mvec[maxi] = t; } return; } /* ................................................................... */ /* * mode interference evaluation and visualization * all modes are assumed to belong to the same waveguide ! 
*/ /* field superposition at point (x, y, z), amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) cp: EX, EY, EZ, HX, HY, HZ, SZ */ Complex EIModeArray::field(Cvector amp, Cvector pert, Fcomp cp, double x, double y, double z) const { Complex s, iphase; double f, sum; Cvector a; int i, j, k; a = amp; if(amp.nel < num) { a = Cvector(num); a.init(CC0); for(i=0; i<=amp.nel-1; ++i) a(i) = amp(i); } Cvector cbet(num); for(i=0; i<=num-1; ++i) cbet(i) = pert(i)+(mvec[i]).beta; if(z != 0.0) { for(i=0; i<=num-1; ++i) { iphase = CCI*cbet(i)*(-z); a(i) = a(i)*exp(iphase); } } s = CC0; if(cp != SZ) { for(i=0; i<=num-1; ++i) { f = (mvec[i]).field(cp, x, y); if(cp != EZ && cp != HZ) s = s+a(i)*f; else { a(i) = CCI*a(i); s = s+a(i)*f; } } return s; } sum = 0.0; Dvector fEx(num); Dvector fEy(num); Dvector fHx(num); Dvector fHy(num); for(j=0; j<=num-1; ++j) fEx(j) = (mvec[j]).field(EX, x, y); for(j=0; j<=num-1; ++j) fEy(j) = (mvec[j]).field(EY, x, y); for(j=0; j<=num-1; ++j) fHx(j) = (mvec[j]).field(HX, x, y); for(j=0; j<=num-1; ++j) fHy(j) = (mvec[j]).field(HY, x, y); for(j=0; j<=num-1; ++j) { for(k=0; k<=num-1; ++k) { f = (a(j).re*a(k).re + a(j).im*a(k).im); sum += f*(fEx(j)*fHy(k)-fEy(j)*fHx(k)); } } s = Complex(0.5*sum); return s; } /* evaluate component cp on a rectangular npx x npy mesh on area disp at position z amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) cp: EX - HZ, SZ foa: MOD, SQR, REP, IMP */ Dmatrix EIModeArray::field(Cvector amp, Cvector pert, double z, Rect disp, int npx, int npy, Fcomp cp, Afo foa) const { double x, y; double dx, dy; int i, j; if(npx <= 1) eimodeerror("field: npx <= 1"); if(npy <= 1) eimodeerror("field: npy <= 1"); Dmatrix f(npx, npy); Complex ft; double ftd; dx = (disp.x1-disp.x0)/(npx-1); dy = (disp.y1-disp.y0)/(npy-1); for(i=0; i<=npx-1; ++i) { x = disp.x0+i*dx; for(j=0; j<=npy-1; ++j) { y = disp.y0+j*dy; ft = field(amp, pert, cp, x, y, z); ftd = 0.0; switch(foa) { case MOD: case ORG: ftd = ft.abs(); break; case SQR: ftd = ft.sqabs(); break; case REP: ftd = ft.re; break; case IMP: ftd = ft.im; break; } f(i, j) = ftd; } } return f; } /* store nump values of component cp between (x0, y0) and (x1, y1) in a vector at position z amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) cp: EX - HZ, SZ foa: MOD, SQR, REP, IMP */ Dvector EIModeArray::field(Cvector amp, Cvector pert, double z, Fcomp cp, Afo foa, int nump, double x0, double y0, double x1, double y1) const { double x, y; double dx, dy; int j; if(nump <= 1) eimodeerror("field: nump <= 1"); Dvector f(nump); Complex ft; double ftd; dx = (x1-x0)/(nump-1); dy = (y1-y0)/(nump-1); for(j=0; j<=nump-1; ++j) { x = x0+j*dx; y = y0+j*dy; ft = field(amp, pert, cp, x, y, z); ftd = 0.0; switch(foa) { case MOD: case ORG: ftd = ft.abs(); break; case SQR: ftd = ft.sqabs(); break; case REP: ftd = ft.re; break; case IMP: ftd = ft.im; break; } f(j) = ftd; } return f; } /* - Three segment coupler, power transfer evaluation -------------------- imode: input mode, excites the modes this(j) with rel. power 1 at z=0 omode: output mode, relative power is returned pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) (!) this(j) are assumed to be normalized. 
*/ /* weights for the power evaluation: w = ( omode | this(m) ) ( this(m) | imode ) */ double EIModeArray::pweight(const EIMode& imode, int m, const EIMode& omode) { return lscalprod(omode, (mvec[m]))*lscalprod((mvec[m]), imode); } /* single relative power level for device length l */ double EIModeArray::iopower(const EIMode& imode, const EIMode& omode, Cvector pert, double l) { Complex a, am, cbet; int m; a=CC0; for(m=0; m<=num-1; ++m) { cbet = pert(m)+(mvec[m]).beta; cbet = CCI*cbet*(-l); am = exp(cbet)*pweight(imode, m, omode); a = a+am; } return a.sqabs(); } /* output power for numl devices of lengths between lmin and lmax */ Dvector EIModeArray::iopower(const EIMode& imode, const EIMode& omode, Cvector pert, int numl, double lbeg, double lend) { Complex a, am, cbet; a=CC0; Dvector w(num); double l, dl; int li, m; if(numl <= 0) eimodeerror("iopower: numl <= 0"); Dvector p(numl); if(numl == 1) { p(0) = iopower(imode, omode, pert, 0.5*(lbeg+lend)); return p; } for(m=0; m<=num-1; ++m) { w(m) = pweight(imode, m, omode); } dl = (lend-lbeg)/(numl-1.0); for(li=0; li<=numl-1; ++li) { a=CC0; l = lbeg+li*dl; for(m=0; m<=num-1; ++m) { cbet = pert(m)+(mvec[m]).beta; cbet = CCI*cbet*(-l); am = exp(cbet)*w(m); a = a+am; } p(li) = a.sqabs(); } return p; } /* ... write to file */ void EIModeArray::writeiopower(const EIMode& imode, const EIMode& omode, Cvector pert, int numl, double lbeg, double lend, char ext0, char ext1) { Dvector p; int i; double l, dl; char name[13] = "__pow__.xyf"; FILE *dat; if(numl <= 0) eimodeerror("iopower: numl <= 0"); switch((mvec[0]).pol) { case TE: name[0] = 't'; name[1] = 'e'; break; case TM: name[0] = 't'; name[1] = 'm'; break; } name[5] = ext0; name[6] = ext1; fprintf(stderr, "P(L = %g -> %g) >> %s\n", lbeg, lend, name); p = iopower(imode, omode, pert, numl, lbeg, lend); dat = fopen(name, "w+"); if(p.nel == 1) { fprintf(dat, "%g %.10g\n", 0.5*(lbeg+lend), p(0)); fclose(dat); return; } dl = (lend-lbeg)/(p.nel-1.0); for(i=0; i<=p.nel-1; ++i) { l = lbeg+i*dl; fprintf(dat, "%g %.10g\n", l, p(i)); } fclose(dat); return; } /* - Output to MATLAB .m files ---------------------------------------- */ /* write single component of the interference field at position z to MATLAB .m file amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) cp: EX - HZ, SZ foa: MOD, ORG, SQR, REP, IMP disp: output region on the x-y-plane npx, npy: number of points in output mesh ext0, ext1: filename id characters pltype: 'C': contour plot 'S': surface plot 'I': intensity image 'N': field + mesh only, no plot commands (default) */ void EIModeArray::plot(Cvector amp, Cvector pert, double z, Fcomp cp, Afo foa, Rect disp, int npx, int npy, char ext0, char ext1, char pltype) const { FILE *dat; char name[13] = "int______.m"; double rz; double max, f, maxsum, minf, maxf; int i; rz = floor(z*10.0+0.5)/10.0; name[3] = afochr(foa); name[4] = fldchr(cp); name[5] = cpchr(cp); name[6] = ext0; name[7] = ext1; name[8] = pltype; fprintf(stderr, "%c%c(z=%g) >> %s\n", fldchr(cp), cpchr(cp), z, name); dat = fopen(name, "w+"); mlout_title(dat, name, "WMM interference field"); maxf = 1.0; minf = -1.0; switch(fldchr(cp)) { case 'E': maxsum = 0.0; for(i=0; i<=num-1; ++i) { max = fabs((mvec[i]).maxE)*amp(i).abs(); f = fabs((mvec[i]).minE)*amp(i).abs(); if(max < f) max = f; maxsum += max; } switch(foa) { case MOD: maxf = maxsum; minf = 0.0; break; case SQR: maxf = maxsum*maxsum; minf = 0.0; break; case ORG: 
case REP: case IMP: maxf = maxsum; minf = -maxsum; break; } break; case 'H': maxsum = 0.0; for(i=0; i<=num-1; ++i) { max = fabs((mvec[i]).maxH)*amp(i).abs(); f = fabs((mvec[i]).minH)*amp(i).abs(); if(max < f) max = f; maxsum += max; } switch(foa) { case MOD: maxf = maxsum; minf = 0.0; break; case SQR: maxf = maxsum*maxsum; minf = 0.0; break; case ORG: case REP: case IMP: maxf = maxsum; minf = -maxsum; break; } break; case 'S': maxsum = 0.0; for(i=0; i<=num-1; ++i) { max = fabs((mvec[i]).maxS)*amp(i).abs()*amp(i).abs(); f = fabs((mvec[i]).minS)*amp(i).abs()*amp(i).abs(); if(max < f) max = f; maxsum += sqrt(max); } switch(foa) { case MOD: case ORG: case REP: maxf = maxsum*maxsum; minf = 0.0; break; case SQR: // not reasonable maxf = maxsum*maxsum*maxsum*maxsum; minf = 0.0; break; case IMP: // not reasonable maxf = 1.0; minf = 0.0; break; } break; } if(pltype == 'I') mlout_gengeoxy(dat, (mvec[0]).st, disp); else mlout_geo(dat, (mvec[0]).wg, minf, maxf); mlout_meshxy(dat, disp, npx, npy); Dmatrix fld; fld = field(amp, pert, z, disp, npx, npy, cp, foa); mlout_fld(dat, npx, npy, cp, fld); name[9] = 0; switch(pltype) { case 'C': mlout_contour(name, dat, cp, foa); mlout_annotatezpos(dat, rz); break; case 'S': mlout_surface(name, dat, cp, foa); break; case 'I': mlout_image(name, dat, cp, foa, minf, maxf); if(foa == MOD || foa == SQR) mlout_lincolormap(dat); else mlout_magcolormap(dat); mlout_annotatezpos(dat, rz); break; default: break; } mlout_print(dat, name, 'e'); fclose(dat); return; } /* write interference field at position z to MATLAB .m file, fancy style :-) amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) disp: output region on the x-y-plane npx, npy: number of points in output mesh ext0, ext1: filename id characters */ #define HDIST 1.0e-8 void EIModeArray::fplot(Cvector amp, Cvector pert, double z, Rect disp, int npx, int npy, char ext0, char ext1) const { FILE *dat; int np, l, m; double x0, x1, xp, y0, y1, yp; int numc; double max, f, maxsum, minf, maxf; int i; WgCrs wg; char name[13] = "intfSz__F.m"; name[6] = ext0; name[7] = ext1; fprintf(stderr, "Sz(z=%g) >> %s\n", z, name); dat = fopen(name, "w+"); mlout_title(dat, name, "WMM interference field :-)"); name[9] = 0; wg = (mvec[0]).wg; maxsum = 0.0; for(i=0; i<=num-1; ++i) { max = fabs((mvec[i]).maxS)*amp(i).abs()*amp(i).abs(); f = fabs((mvec[i]).minS)*amp(i).abs()*amp(i).abs(); if(max < f) max = f; maxsum += sqrt(max); } maxf = maxsum*maxsum; minf = 0.0; mlout_geo(dat, wg, minf, maxf); mlout_meshxy(dat, disp, npx, npy); Dmatrix fld; fld = field(amp, pert, z, disp, npx, npy, SZ, ORG); mlout_fld(dat, npx, npy, SZ, fld); Dvector fv; numc = 0; for(l=0; l<=wg.nx+1; ++l) { if(l==0) x0 = disp.x0; else x0 = wg.hx(l-1); if(l==wg.nx+1) x1 = disp.x1; else x1 = wg.hx(l); for(m=0; m<=wg.ny; ++m) { if(fabs(wg.n(l,m)-wg.n(l,m+1)) > 1.0e-10) { yp = wg.hy(m); np = (int)(((double) npx) *(x1-x0)/(disp.x1-disp.x0)); if(np >= 2) { fv = field(amp, pert, z, SZ, ORG, np, x0+HDIST, yp-HDIST, x1-HDIST, yp-HDIST); mlout_sec(dat, x0+HDIST, yp-HDIST, x1-HDIST, yp-HDIST, np, SZ, dig10(numc), dig1(numc), fv); ++numc; fv = field(amp, pert, z, SZ, ORG, np, x0+HDIST, yp+HDIST, x1-HDIST, yp+HDIST); mlout_sec(dat, x0+HDIST, yp+HDIST, x1-HDIST, yp+HDIST, np, SZ, dig10(numc), dig1(numc), fv); ++numc; } } } } for(m=0; m<=wg.ny+1; ++m) { if(m==0) y0 = disp.y0; else y0 = wg.hy(m-1); if(m==wg.ny+1) y1 = disp.y1; else y1 = wg.hy(m); for(l=0; l<=wg.nx; ++l) 
{ if(fabs(wg.n(l,m)-wg.n(l+1,m)) > 1.0e-10) { xp = wg.hx(l); np = (int)(((double)npy) *(y1-y0)/(disp.y1-disp.y0)); if(np >= 2) { fv = field(amp, pert, z, SZ, ORG, np, xp-HDIST, y0+HDIST, xp-HDIST, y1-HDIST); mlout_sec(dat, xp-HDIST, y0+HDIST, xp-HDIST, y1-HDIST, np, SZ, dig10(numc), dig1(numc), fv); ++numc; fv = field(amp, pert, z, SZ, ORG, np, xp+HDIST, y0+HDIST, xp+HDIST, y1-HDIST); mlout_sec(dat, xp+HDIST, y0+HDIST, xp+HDIST, y1-HDIST, np, SZ, dig10(numc), dig1(numc), fv); ++numc; } } } } fv = field(amp, pert, z, SZ, ORG, npx, disp.x0, disp.y0, disp.x1, disp.y0); mlout_sec(dat, disp.x0, disp.y0, disp.x1, disp.y0, npx, SZ, dig10(numc), dig1(numc), fv); ++numc; fv = field(amp, pert, z, SZ, ORG, npy, disp.x1, disp.y0, disp.x1, disp.y1); mlout_sec(dat, disp.x1, disp.y0, disp.x1, disp.y1, npy, SZ, dig10(numc), dig1(numc), fv); ++numc; fv = field(amp, pert, z, SZ, ORG, npx, disp.x1, disp.y1, disp.x0, disp.y1); mlout_sec(dat, disp.x1, disp.y1, disp.x0, disp.y1, npx, SZ, dig10(numc), dig1(numc), fv); ++numc; fv = field(amp, pert, z, SZ, ORG, npy, disp.x0, disp.y1, disp.x0, disp.y0); mlout_sec(dat, disp.x0, disp.y1, disp.x0, disp.y0, npy, SZ, dig10(numc), dig1(numc), fv); ++numc; mlout_fancy(name, dat, SZ, numc); mlout_print(dat, name, 'p'); fclose(dat); return; } /* animate the interference field amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) cp: EX - HZ, SZ foa: MOD, ORG, SQR, REP, IMP disp: output region on the x-y-plane npx, npy: number of points in output mesh pltype: 'C': contour plot 'S': surface plot 'I': intensity image 'F': fancy plot, cp and foa are set to SZ, MOD (default) z0, z1, numz: .m files are generated for z-positions z0+i*(z1-z0)/numz, i=0, ..., numz-1 */ void EIModeArray::movie(Cvector amp, Cvector pert, Fcomp cp, Afo foa, Rect disp, int npx, int npy, char pltype, int numz, double z0, double z1) const { int i; double z; FILE *dat; char name[13] = "int___plM.m"; if(numz >= 100) numz = 100; if(pltype == 'F') { cp = SZ; foa = ORG; } name[3] = afochr(foa); name[4] = fldchr(cp); name[5] = cpchr(cp); for(i=0; i<=numz-1; ++i) { z = z0+i*(z1-z0)/numz; switch(pltype) { case 'C': case 'S': case 'I': plot(amp, pert, z, cp, foa, disp, npx, npy, dig10(i), dig1(i), pltype); break; default: fplot(amp, pert, z, disp, npx, npy, dig10(i), dig1(i)); break; } } fprintf(stderr, ">> %s\n", name); dat = fopen(name, "w+"); mlout_title(dat, name, "WMM interference field animation"); name[8] = pltype; name[9] = 0; mlout_play(dat, name, numz); fclose(dat); return; } /* interference pattern on the horizontal plane ybeg <= y <= yend, zbeg <= z <= zend at position x image plot corresponding to the squareroot of the local intensity (SZ) amp: complex amplitudes at z=0 amplitudes evolve according to amp_j(z) = amp_i(0)*exp(-i cbet_j*z) pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) npy, npz: number of points in output mesh ext0, ext1: filename id characters */ void EIModeArray::prop(Cvector amp, Cvector pert, double x, double ybeg, double yend, int npy, double zbeg, double zend, int npz, char ext0, char ext1) const { FILE *dat; char name[13] = "propSz__I.m"; double max, f, maxsum, minf, maxf; double y, dy, z, dz; Complex ph; int i, j, k, m; name[6] = ext0; name[7] = ext1; fprintf(stderr, "prop(z=%g -> %g) >> %s\n", zbeg, zend, name); dat = fopen(name, "w+"); mlout_title(dat, name, "WMM interference field"); maxsum = 0.0; for(i=0; i<=num-1; ++i) { max = 
fabs((mvec[i]).maxS)*amp(i).abs()*amp(i).abs(); f = fabs((mvec[i]).minS)*amp(i).abs()*amp(i).abs(); if(max < f) max = f; maxsum += sqrt(max); } maxf = maxsum; minf = 0.0; mlout_geoyz(dat, (mvec[0]).wg, minf, maxf); mlout_meshyz(dat, ybeg, yend, npy, zbeg, zend, npz); Dmatrix fld(npy, npz); dy = (yend-ybeg)/(npy-1); dz = (zend-zbeg)/(npz-1); Dmatrix pmprofEx(num, npy); Dmatrix pmprofEy(num, npy); Dmatrix pmprofHx(num, npy); Dmatrix pmprofHy(num, npy); for(i=0; i<=npy-1; ++i) { y = ybeg+i*dy; for(m=0; m<=num-1; ++m) { pmprofEx(m, i) = (mvec[m]).field(EX, x, y); pmprofEy(m, i) = (mvec[m]).field(EY, x, y); pmprofHx(m, i) = (mvec[m]).field(HX, x, y); pmprofHy(m, i) = (mvec[m]).field(HY, x, y); } } Cvector cbet(num); Cvector ma(num); for(m=0; m<=num-1; ++m) cbet(m) = pert(m)+(mvec[m]).beta; for(j=0; j<=npz-1; ++j) { z = zbeg+j*dz; for(m=0; m<=num-1; ++m) { ph = CCI*cbet(m)*(-z); ma(m) = amp(m)*exp(ph); } for(i=0; i<=npy-1; ++i) { y = ybeg+i*dy; f = 0.0; for(m=0; m<=num-1; ++m) { for(k=0; k<=num-1; ++k) { f += (ma(m).re*ma(k).re+ma(m).im*ma(k).im) *(pmprofEx(m,i)*pmprofHy(k,i)-pmprofEy(m,i)*pmprofHx(k,i)); } } fld(i, j) = sqrt(0.5*fabs(f)); } } mlout_fld(dat, npy, npz, SZ, fld); name[9] = 0; mlout_propimage(name, dat); mlout_print(dat, name, 'e'); fclose(dat); return; } /* Three segment coupler, interference pattern on the horizontal plane ybeg <= y <= yend, zbeg <= z <= zend at elevation x image plot corresponding to the squareroot of the local intensity (SZ) z < 0: intensity profile of the input modes 0 < z < l: mode interference pattern l < z: intensity profiles of the output modes, scaled by the output amplitudes (!) this(j) are assumed to be normalized. iamp: complex input mode amplitudes imodes: input modes, excite the modes this(j) at z=0, belonging to well separated port waveguides iwg: the geometry of the combined input waveguides omodes: output modes, belonging to well separated port waveguides owg: the geometry of the combined output waveguides iwg and owg are for displaying purposes only pert: propagation constant perturbations, cbet(j) = this(j).beta+pert(j) npy, npz: number of points in output mesh ext0, ext1: filename id characters */ #define ZDIST 1.0e-8 void EIModeArray::ioprop(Cvector iamp, const EIModeArray& imodes, WgCrs iwg, const EIModeArray& omodes, WgCrs owg, Cvector pert, double x, double l, double ybeg, double yend, int npy, double zbeg, double zend, int npz, char ext0, char ext1) const { FILE *dat; char name[13] = "propSz__I.m"; double max, f, maxsum, minf, maxf; int i, j, m, k; double y, z, dy, dz; Complex a, aj; Complex ph; double s; name[6] = ext0; name[7] = ext1; if(zbeg >= zend) eimodeerror("ioprop: invalid z-range"); fprintf(stderr, "ioprop(z=%g -> %g) >> %s\n", zbeg, zend, name); dat = fopen(name, "w+"); mlout_title(dat, name, "WMM interference field"); maxsum = 0.0; for(i=0; i<=imodes.num-1; ++i) { max = fabs(imodes(i).maxS)*iamp(i).abs()*iamp(i).abs(); f = fabs(imodes(i).minS)*iamp(i).abs()*iamp(i).abs(); if(max < f) max = f; maxsum += sqrt(max); } maxf = maxsum; minf = 0.0; mlout_iopropgeo(dat, iwg, (mvec[0]).wg, owg, l, minf, maxf); mlout_meshyz(dat, ybeg, yend, npy, zbeg, zend, npz); s = 0.0; for(i=0; i<=imodes.num-1; ++i) { fprintf(stderr, " A^I_%d = %g + i %g, |A|^2: %g\n", i, iamp(i).re, iamp(i).im, iamp(i).sqabs()); s += iamp(i).sqabs(); } fprintf(stderr, " total input power: %g\n", s); Cvector pamp(num); for(i=0; i<=num-1; ++i) { a = CC0; for(j=0; j<=imodes.num-1; ++j) { aj = iamp(j)*lscalprod((mvec[i]), imodes(j)); a = a+aj; } pamp(i) = a; } s=0.0; 
for(i=0; i<=num-1; ++i) { fprintf(stderr, " A^II_%d = %g + i %g, |A|^2: %g\n", i, pamp(i).re, pamp(i).im, pamp(i).sqabs()); s += pamp(i).sqabs(); } fprintf(stderr, " propagating power: %g\n", s); Cvector oamp(omodes.num); for(i=0; i<=omodes.num-1; ++i) { a = CC0; for(j=0; j<=num-1; ++j) { aj = pamp(j)*lscalprod(omodes(i), (mvec[j])); ph = pert(j)+(mvec[j]).beta; ph = CCI*ph*(-l); aj = aj*exp(ph); a = a+aj; } oamp(i) = a; } s=0.0; for(i=0; i<=omodes.num-1; ++i) { fprintf(stderr, " A^III_%d = %g + i %g, |A|^2: %g\n", i, oamp(i).re, oamp(i).im, oamp(i).sqabs()); s += oamp(i).sqabs(); } fprintf(stderr, " total output power: %g\n", s); Dmatrix fld(npy, npz); dy = (yend-ybeg)/(npy-1); dz = (zend-zbeg)/(npz-1); Dvector improf(npy); Dvector omprof(npy); for(i=0; i<=npy-1; ++i) { y = ybeg+i*dy; improf(i) = 0.0; for(m=0; m<=imodes.num-1; ++m) { improf(i) += imodes(m).field(SZ, x, y) *iamp(m).sqabs(); } omprof(i) = 0.0; for(m=0; m<=omodes.num-1; ++m) { omprof(i) += omodes(m).field(SZ, x, y) *oamp(m).sqabs(); } } Dmatrix pmprofEx(num, npy); Dmatrix pmprofEy(num, npy); Dmatrix pmprofHx(num, npy); Dmatrix pmprofHy(num, npy); for(i=0; i<=npy-1; ++i) { y = ybeg+i*dy; for(m=0; m<=num-1; ++m) { pmprofEx(m, i) = (mvec[m]).field(EX, x, y); pmprofEy(m, i) = (mvec[m]).field(EY, x, y); pmprofHx(m, i) = (mvec[m]).field(HX, x, y); pmprofHy(m, i) = (mvec[m]).field(HY, x, y); } } Cvector cbet(num); Cvector ma(num); for(m=0; m<=num-1; ++m) cbet(m) = pert(m)+(mvec[m]).beta; for(j=0; j<=npz-1; ++j) { z = zbeg+j*dz; if(z<= 0.0) { for(i=0; i<=npy-1; ++i) fld(i, j) = sqrt(improf(i)); } else { if(z>=l) { for(i=0; i<=npy-1; ++i) fld(i, j) = sqrt(omprof(i)); } else { for(m=0; m<=num-1; ++m) { ph = CCI*cbet(m)*(-z); ma(m) = pamp(m)*exp(ph); } for(i=0; i<=npy-1; ++i) { y = ybeg+i*dy; f = 0.0; for(m=0; m<=num-1; ++m) { for(k=0; k<=num-1; ++k) { f += (ma(m).re*ma(k).re+ma(m).im*ma(k).im) *(pmprofEx(m,i)*pmprofHy(k,i)-pmprofEy(m,i)*pmprofHx(k,i)); } } fld(i, j) = sqrt(0.5*fabs(f)); } } } } mlout_fld(dat, npy, npz, SZ, fld); name[9] = 0; mlout_iopropimage(name, dat); mlout_print(dat, name, 'e'); fclose(dat); return; } /* ... as before, but with a single input mode imode only */ void EIModeArray::ioprop(const EIMode& imode, const EIModeArray& omodes, WgCrs owg, Cvector pert, double x, double l, double ybeg, double yend, int npy, double zbeg, double zend, int npz, char ext0, char ext1) const { EIModeArray imodes; imodes.add(imode); Cvector iamp(1); iamp(0) = CC1; ioprop(iamp, imodes, imode.wg, omodes, owg, pert, x, l, ybeg, yend, npy, zbeg, zend, npz, ext0, ext1); return; } /* ... 
as before, with single input and output modes imode and omode */ void EIModeArray::ioprop(const EIMode& imode, const EIMode& omode, Cvector pert, double x, double l, double ybeg, double yend, int npy, double zbeg, double zend, int npz, char ext0, char ext1) const { EIModeArray omodes; omodes.add(omode); ioprop(imode, omodes, omode.wg, pert, x, l, ybeg, yend, npy, zbeg, zend, npz, ext0, ext1); return; } /* ---------------------------------------------------------------------- */ /* scalar product 0.5\int\int(E_1x H_2y - E_1y H_2x)dxdy of two modes : */ /* helper function */ double twomprodint(const EIMode& mode1, int l1, int m1, Fcomp cp1, const EIMode& mode2, int l2, int m2, Fcomp cp2, Rect r) { FldorDer vfod1, vfod2, hfod1, hfod2; double fac1, fac2; mode1.fldrep(cp1, fac1, vfod1, hfod1, l1, m1); mode2.fldrep(cp2, fac2, vfod2, hfod2, l2, m2); return fac1*fac2 *mode1.vmp.integrate(vfod1, mode2.vmp, vfod2, Interval(r.x0, r.x1)) *mode1.hmp.integrate(hfod1, mode2.hmp, hfod2, Interval(r.y0, r.y1)); } /* integration of a fieldproduct between two modes over rectangle r */ double twomrecint(const EIMode& mode1, Fcomp cp1, const EIMode& mode2, Fcomp cp2, Rect r) { double x, y; double xt, yr; double yr1, yr2; double xt1, xt2; double s, z; int l1, m1; int l2, m2; int dum; Rect rp; if(r.x0 > r.x1) { x=r.x1; r.x1=r.x0; r.x0=x; } if(r.y0 > r.y1) { y=r.y1; r.y1=r.y0; r.y0=y; } rp = XYplane; if(r.x0 < rp.x0) r.x0=rp.x0; if(r.x1 > rp.x1) r.x1=rp.x1; if(r.y0 < rp.y0) r.y0=rp.y0; if(r.y1 > rp.y1) r.y1=rp.y1; s = 0.0; y = r.y0; while(fabs(r.y1-y) > HDIST) { mode1.wg.rectidx(0.0, y+HDIST/2.0, dum, m1); mode2.wg.rectidx(0.0, y+HDIST/2.0, dum, m2); yr1 = mode1.wg.rectbounds(0,m1).y1; yr2 = mode2.wg.rectbounds(0,m2).y1; yr = yr1; if(yr2<yr) yr = yr2; if(r.y1<yr) yr = r.y1; x = r.x0; while(fabs(r.x1-x) > HDIST) { mode1.wg.rectidx(x+HDIST/2.0, 0.0, l1, dum); mode2.wg.rectidx(x+HDIST/2.0, 0.0, l2, dum); xt1 = mode1.wg.rectbounds(l1,0).x1; xt2 = mode2.wg.rectbounds(l2,0).x1; xt = xt1; if(xt2<xt) xt = xt2; if(r.x1<xt) xt = r.x1; rp.x0 = x; rp.y0 = y; rp.x1 = xt; rp.y1 = yr; z = twomprodint(mode1, l1, m1, cp1, mode2, l2, m2, cp2, rp); s += z; x = xt; } y = yr; } return s; } /* scalar product 0.5\int\int(E_1x H_2y - E_1y H_2x)dxdy of two modes */ double scalprod(const EIMode& m1, const EIMode& m2) { Rect r; r = XYplane; /* int l, m; double s; if(m1.pol == TE && m2.pol == TE) return 0.5*m2.beta*val_invommu0(m1.wg.lambda) *twomrecint(m1, EY, m2, HY, r); if(m1.pol == TM && m2.pol == TM) { s = 0.0; for(l=0; l<=m2.wg.nx+1; ++l) { for(m=0; m<=m2.wg.ny+1; ++m) s += twomrecint(m1, HY, m2, HY, m2.wg.rectbounds(l, m))/m2.wg.eps(l,m); } return 0.5*m2.beta*val_invomep0(m2.wg.lambda)*s; } */ return 0.5*( twomrecint(m1, EX, m2, HY, r) -twomrecint(m1, EY, m2, HX, r)); } /* scalar product 0.25\int\int(E_1x H_2y - E_1y H_2x + E_2x H_1y - E_2y H_1x)dxdy of two modes */ double lscalprod(const EIMode& m1, const EIMode& m2) { return 0.5*(scalprod(m1,m2)+scalprod(m2,m1)); } /* for a superposition of two EIModes m1, m2 with coefficients c1, c2: intensity behind a polarizer at an angle alpha versus TE-polarization */ double polarizeroutput(Complex c1, const EIMode& m1, Complex c2, const EIMode& m2, double alpha) { double p, pt; p = 0.0; pt = 0.0; if(sin(alpha) != 0.0) pt += sin(alpha)*sin(alpha)*m1.recint(EX, EX, XYplane); if(cos(alpha) != 0.0) pt += cos(alpha)*cos(alpha)*m1.recint(EY, EY, XYplane); if(sin(alpha) != 0.0 && cos(alpha) != 0.0) pt += 2.0*sin(alpha)*cos(alpha)*m1.recint(EX, EY, XYplane); pt *= c1.sqabs(); p += pt; pt = 0.0; 
if(sin(alpha) != 0.0) pt += sin(alpha)*sin(alpha)*m2.recint(EX, EX, XYplane); if(cos(alpha) != 0.0) pt += cos(alpha)*cos(alpha)*m2.recint(EY, EY, XYplane); if(sin(alpha) != 0.0 && cos(alpha) != 0.0) pt += 2.0*sin(alpha)*cos(alpha)*m2.recint(EX, EY, XYplane); pt *= c2.sqabs(); p += pt; pt = 0.0; if(sin(alpha) != 0.0) pt += sin(alpha)*sin(alpha)*twomrecint(m1, EX, m2, EX, XYplane); if(cos(alpha) != 0.0) pt += cos(alpha)*cos(alpha)*twomrecint(m1, EY, m2, EY, XYplane); if(sin(alpha) != 0.0 && cos(alpha) != 0.0) pt += sin(alpha)*cos(alpha)*twomrecint(m1, EX, m2, EY, XYplane); if(sin(alpha) != 0.0 && cos(alpha) != 0.0) pt += sin(alpha)*cos(alpha)*twomrecint(m1, EY, m2, EX, XYplane); pt *= 2.0*(c1.re*c2.re + c1.im*c2.im); p += pt; p *= 0.5*INVSQRTMU0/INVSQRTEP0; return p; } /* --- VEIMS mode solver, helper functions, mode identification ------- */ /* trap propagation constants */ #define MAXNUMSCF 100 // record this many suspicious configurations for the // further propagation constant search int trapX(ModeArray& ma, Mode& m, double bqmin, double bqmax, double sf, int sm, Dmatrix& susp, int& nsc, int lfsc, int verb) { int newv; double step; double bq0, bq1, bq2, f0, f1, f2, mbq; double err; if(bqmax <= bqmin) return 0; if(bqmax-bqmin < bqSep(bqmax)) return 0; newv = 0; double nst = 10.0*pow(10.0, sf); step = (bqmax-bqmin)/nst; if(step < bqTol(bqmax)) step = bqTol(bqmax); if(step < bqTol(bqmin)) step = bqTol(bqmin); // fprintf(stderr, "trapX(%.15g#%d <--T%c-- %.15g#%d <%.15g>, %d)\n", // bqmin, zmax, polCHR(m.pol), bqmax, zmin, step, sm); bq1 = bqmax; f1 = m.travres(bq1); bq0 = bqmax; f0 = f1; for(double bq=bqmax-step; bq>=bqmin; bq-=step) { bq2 = bq; f2 = m.travres(bq2); if(f1*f2 <= 0.0) { mbq = m.localize(bq2, bq1, bqTol(bq2)); m.polish(mbq); err = m.checkcontinuity(); // fprintf(stderr, "\n< %s >\n", m.ids); // fprintf(stderr, "n_eff = %.16g\n", m.neff); // fprintf(stderr, "betaq = %.16g\n", m.betaq); // fprintf(stderr, "spec = %d\n", m.special); // fprintf(stderr, "disc = %.16g\n", m.checkcontinuity()); // fprintf(stderr, "dtol = %.16g\n", SLAMS_DISCERRTOL); if(err < SLAMS_DISCERRTOL) { ma.add(m); if(verb == 2) fprintf(stderr, "X "); if(verb == 1) fprintf(stderr, "."); if(sm == 1) return 1; } } if(lfsc != 0) { if(f1 > 0.0 && f0 > f1 && f1 < f2) { if(nsc < MAXNUMSCF-1) { susp(nsc, 0) = bq2; susp(nsc, 1) = bq0; ++nsc; } } if(f1 < 0.0 && f0 < f1 && f1 > f2) { if(nsc < MAXNUMSCF-1) { susp(nsc, 0) = bq2; susp(nsc, 1) = bq0; ++nsc; } } } bq0 = bq1; f0 = f1; bq1 = bq2; f1 = f2; } return newv; } int trapX(ModeArray& ma, Mode& m, double bqmin, double bqmax, double sf, int sm, int lfsc, int verb) { int newv, nsc = 0; Dmatrix susp(MAXNUMSCF, 2); newv = trapX(ma, m, bqmin, bqmax, sf, sm, susp, nsc, 1, verb); if(newv > 0) return newv; if(lfsc == 0) return newv; double minbq, maxbq; while(nsc >= 1) { minbq = susp(nsc-1, 0); maxbq = susp(nsc-1, 1); --nsc; newv += trapX(ma, m, minbq, maxbq, sf, sm, susp, nsc, 0, verb); if(sm == 1 && newv > 0) return newv; } return newv; } /* solve nonstandard effective VEIMS problem, determine all modes with squared propagation constants between bqmin and bqmax, for the problem specified by wg, pol, boundaries of type bdtb at bpb, of type bdtt at bpt; verb: 0: no control output, 1: progress, 2: all, ec: VEIMS effective permittivity */ void findmodesX(Waveguide wg, Polarization pol, Boundary_type bdtb, double bpb, Boundary_type bdtt, double bpt, double bqmin, double bqmax, ModeArray& ma, int verb, Dvector ec) { double k0 = val_k0(wg.lambda); double tbq, elim; int cl, split, 
nsm; // fprintf(stderr, "\nFM_I( wg.nx = %d, pol = T%c\n", wg.nx, polCHR(pol)); // fprintf(stderr, " bdtb = %d, bpb = %g, bdtt = %d, bpt = %g\n", // bdtb, bpb, bdtt, bpt); // fprintf(stderr, " bqmin = %g, bqmax = %g )\n", bqmin, bqmax); // verb = 2; // wg.write(stderr); if(bqmax < bqmin+bqSep(bqmax)) return; // if(bqmax < bqmin+bqSep(bqmax)) eimodeerror("findmodesX: bqmin, bqmax"); cl = wg.checksymmetry(); split = 1; if(cl <= 1 || cl >= wg.nx) split = 0; if(bdtb != bdtt) split = 0; if(bdtb != OPEN && fabs((wg.hx(0)-bpb)-(bpt-wg.hx(wg.nx)))>COMPTOL_HX) split = 0; if(split == 1) { Waveguide wg0, wg1; wg.split(cl, wg0, wg1); ModeArray sm; double cp = (wg.hx(cl-1)+wg.hx(cl))/2.0; if(verb == 1) fprintf(stderr, "s"); if(verb == 2) fprintf(stderr, "sym "); sm.clear(); findmodesX(wg0, pol, bdtb, bpb, LIMN, cp, bqmin, bqmax, sm, verb, ec); nsm = sm.num; for(int j=0; j<=nsm-1; ++j) { if(sm(j).special==2 && sm(j).bdt_t==OPEN) ; else sm(j).expand(); } ma.merge(sm); if(verb == 1) fprintf(stderr, "a"); if(verb == 2) fprintf(stderr, "asy "); sm.clear(); findmodesX(wg0, pol, bdtb, bpb, LIMD, cp, bqmin, bqmax, sm, verb, ec); nsm = sm.num; for(int j=0; j<=nsm-1; ++j) { if(sm(j).special==2 && sm(j).bdt_t==OPEN) sm(j).mirror(cp); else sm(j).expand(); } ma.merge(sm); ma.sort(); return; } elim = MAXREFIND*MAXREFIND; cl = -1; for(int l=1; l<=wg.nx; ++l) { double te = wg.decoupepseff(l); if(te < elim) { elim = te; cl = l; } } tbq = k0*k0*elim; if(tbq < bqmin) tbq = bqmin; if(bqmax > tbq+bqSep(tbq)) { Waveguide wg0, wg1; wg.split(cl, wg0, wg1); ModeArray sm; if(verb == 1) fprintf(stderr, "d-"); if(verb == 2) fprintf(stderr, "d^(0, %d) ", cl); sm.clear(); findmodesX(wg0, pol, bdtb, bpb, OPEN, AWAY, tbq, bqmax, sm, verb, ec); for(int j=0; j<=sm.num-1; ++j) sm(j).special = 2; ma.merge(sm); if(verb == 1) fprintf(stderr, "d+"); if(verb == 2) fprintf(stderr, "d^(%d, %d) ", cl, wg.nx+1); sm.clear(); findmodesX(wg1, pol, OPEN, -AWAY, bdtt, bpt, tbq, bqmax, sm, verb, ec); for(int j=0; j<=sm.num-1; ++j) sm(j).special = 2; ma.merge(sm); if(tbq > bqmin+bqSep(bqmin)) { if(verb == 1) fprintf(stderr, "dr"); if(verb == 2) fprintf(stderr, "d_(0, %d) ", wg.nx+1); sm.clear(); findmodesX(wg, pol, bdtb, bpb, bdtt, bpt, bqmin, tbq, sm, verb, ec); ma.merge(sm); } ma.sort(); return; } if(bdtb != OPEN) { elim = wg.lbdecepseff(bpb); tbq = k0*k0*elim; if(tbq < bqmin) tbq = bqmin; if(bqmax > tbq+bqSep(tbq)) { ModeArray sm; if(verb == 1) fprintf(stderr, "l^"); if(verb == 2) fprintf(stderr, "l^ "); sm.clear(); findmodesX(wg, pol, OPEN, -AWAY, bdtt, bpt, tbq, bqmax, sm, verb, ec); for(int j=0; j<=sm.num-1; ++j) sm(j).special = 2; ma.merge(sm); if(bqmin < tbq-bqSep(tbq)) { if(verb == 1) fprintf(stderr, "l_"); if(verb == 2) fprintf(stderr, "l_ "); sm.clear(); findmodesX(wg, pol, bdtb, bpb, bdtt, bpt, bqmin, tbq, sm, verb, ec); ma.merge(sm); } ma.sort(); return; } } if(bdtt != OPEN) { elim = wg.ubdecepseff(bpt); tbq = k0*k0*elim; if(tbq < bqmin) tbq = bqmin; if(bqmax > tbq+bqSep(tbq)) { ModeArray sm; if(verb == 1) fprintf(stderr, "u^"); if(verb == 2) fprintf(stderr, "u^ "); sm.clear(); findmodesX(wg, pol, bdtb, bpb, OPEN, AWAY, tbq, bqmax, sm, verb, ec); for(int j=0; j<=sm.num-1; ++j) sm(j).special = 2; ma.merge(sm); if(bqmin < tbq-bqSep(tbq)) { if(verb == 1) fprintf(stderr, "u_"); if(verb == 2) fprintf(stderr, "u_ "); sm.clear(); findmodesX(wg, pol, bdtb, bpb, bdtt, bpt, bqmin, tbq, sm, verb, ec); ma.merge(sm); } ma.sort(); return; } } if(bdtt==OPEN && bdtb==OPEN) { double emin = wg.defaultepseffmin(); double emax = wg.defaultepseffmax(); tbq = 
k0*k0*emin; if(bqmin < tbq) bqmin = tbq; tbq = k0*k0*emax; if(bqmax > tbq) bqmax = tbq; if(bqmax < bqmin+bqSep(bqmin)) return; } double mineps = wg.n.min(); double t = ec.min(); if(t < mineps) mineps = t; if(mineps > 0.0) { double bq, dbq; int zmax, zmin; Mode m(wg, ec, bdtb, bpb, bdtt, bpt); bq = bqmax-bqTol(bqmax); zmin = m.nummodesabove(bq); bq = bqmin+bqTol(bqmin); zmax = m.nummodesabove(bq); if(zmin >= zmax) return; // fprintf(stderr, "\nFM_F( wg.nx = %d, pol = T%c\n", wg.nx, polCHR(pol)); // fprintf(stderr, " bdtb = %d, bpb = %g, bdtt = %d, bpt = %g\n", // bdtb, bpb, bdtt, bpt); // fprintf(stderr, " bqmin = %.15g, zmax = %d, bqmax = %g, zmin= %d)\n", // bqmin, zmax, bqmax, zmin); // m.travresinspect(bqmin+bqTol(bqmin), bqmax-bqTol(bqmax)); Ivector found(zmax); found.init(0); Dvector bqf(zmax); bqf.init(0.0); trap(ma, m, bqmin, bqmax, 2.0, 0, 1, bqf, found, verb); int fa, mn, i, newv; double bqup, bqdn, sf; fa = 0; while(fa == 0) { fa = 1; mn = zmax-1; while(fa == 1 && mn>= zmin) { if(found(mn) != 0) --mn; else { fa = 0; i=mn+1; while(i<zmax && found(i) == 0) ++i; if(i >= zmax) bqdn = bqmin; else bqdn = bqf(i); i=mn-1; while(i>zmin-1 && found(i) == 0) --i; if(i <= zmin-1) bqup = bqmax; else bqup = bqf(i); dbq = bqup-bqdn; sf = 1.0; while((newv = trap(ma, m, bqdn+bqTol(bqdn), bqup-bqTol(bqup), sf, 1, 1, bqf, found, verb)) == 0 && sf < 4.0) sf += 0.5; if(newv >= 1) goto fnd; if(mn < zmax-1 && found(mn+1) != 0) { bqdn = bqf(mn+1); sf = 1.0; tbq = bqdn+dbq/100.0; while((newv = trap(ma, m, bqdn, tbq, sf, 1, 1, bqf, found, verb) ) == 0 && sf < 4.0) sf += 0.5; } if(newv >= 1) goto fnd; if(mn > zmin && found(mn-1) != 0) { bqup = bqf(mn-1); sf = 1.0; tbq = bqup-dbq/100.0; while((newv = trap(ma, m, tbq, bqup, sf, 1, 1, bqf, found, verb) ) == 0 && sf < 4.0) sf += 0.5; } if(newv >= 1) goto fnd; if(mn == zmax-1) { bqdn = bqmin; sf = 1.0; tbq = bqdn+dbq/100.0; while((newv = trap(ma, m, bqdn, tbq, sf, 1, 1, bqf, found, verb) ) == 0 && sf < 4.0) sf += 0.5; } if(newv >= 1) goto fnd; if(mn == zmin) { bqup = bqmax; sf = 1.0; tbq = bqup-dbq/100.0; while((newv = trap(ma, m, tbq, bqup, sf, 1, 1, bqf, found, verb) ) == 0 && sf < 4.0) sf += 0.5; } if(newv >= 1) goto fnd; fprintf(stderr, " T%c%d ?\n", polCHR(m.pol), mn); m.travresinspect(bqmin, bqmax); m.wg.write(stderr); m.wg.plot('e','r'); for(int j=0; j<=ma.num-1; ++j) { fprintf(stderr, " < %s > ", ma(j).ids); ma(j).writeprofile(dig10(j), dig1(j)); } eimodeerror("findmodesX, pos: missing mode"); fnd: ;; } } } } else // min epsilon negative { Mode m(wg, ec, bdtb, bpb, bdtt, bpt); if(bqmin >= bqmax) return; // fprintf(stderr, "\nFM_F( wg.nx = %d, pol = T%c\n", wg.nx, polCHR(pol)); // fprintf(stderr, " bdtb = %d, bpb = %g, bdtt = %d, bpt = %g\n", // bdtb, bpb, bdtt, bpt); // fprintf(stderr, " bqmin = %.15g, bqmax = %g)\n", // bqmin, bqmax); // m.travresinspect(bqmin+bqTol(bqmin), bqmax-bqTol(bqmax)); trapX(ma, m, bqmin, bqmax, 3.0, 0, 1, verb); ma.sort(); int om = 0; while(om <= ma.num-2) { double ov = overlap(ma(om), FORW, ma(om+1), FORW).abs(); if(ov>0.001) { // fprintf(stderr, " <%g|%g> = %g\n", ma(om).betaq, ma(om+1).betaq, ov); ma.remove(om+1); if(verb == 2) fprintf(stderr, "(r) "); } else { // fprintf(stderr, " <%g|%g> = %g\n", ma(om).betaq, ma(om+1).betaq, ov); ++om; } } ma.sort(); for(int o=0; o<=ma.num-1; ++o) ma(o).setids(o); } ma.sort(); if(verb >= 1) fprintf(stderr, "Ok.\n"); return; } /* --- VEIMS mode solver ---------------------------------------------- */ /* VEIMS algorithm: solve the effective 1D problem */ int veims_hsolve(Waveguide hwg, 
Dvector hwg_ec, double eemin, double eemax, ModeArray& ma) { double emin = hwg.defaultepseffmin(); double emax = hwg.defaultepseffmax(); if(emin < 0.0) emin = 0.0; if(emin < eemin) emin = eemin; if(emax > eemax) emax = eemax; double k0 = val_k0(hwg.lambda); double bqmin = k0*k0*emin; double bqmax = k0*k0*emax; bqmin += SLAMS_BQSEP; ma.clear(); if(bqmin >= bqmax) return 0; findmodesX(hwg, TM, OPEN, -AWAY, OPEN, AWAY, bqmin, bqmax, ma, 0, hwg_ec); // findmodesX(hwg, TM, OPEN, -AWAY, OPEN, AWAY, bqmin, bqmax, ma, 2, hwg_ec); ma.sort(); for(int j=0; j<=ma.num-1; ++j) { if(ma(j).special==0 && ma(j).ord!=j) eimodeerror("veims_hsolve: mode set corrupted"); ma(j).setids(j); } return ma.num; } /* VEIMS guided mode analysis, returns the number of found modes wg, st: the structure under consideration pol: polarization type, TE or TM rsn: nmber of the reference slice; -1: determined automatically ma: the modes found (output) quiet == 1: suppress log output */ int veims(SegWgCrs st, Polarization pol, int rsn, EIModeArray& ma, int quiet) { st.consistency(); double lambda = st(0).lambda; double k0 = val_k0(lambda); WgCrs wg = st.wgcrs(); Waveguide rs; int ri = 0; double mn = 0.0; int nxp; if(rsn < 0) { for(int j=1; j<=wg.ny; ++j) { ModeArray a; nxp = modeanalysis(st(j), pol, a, 1); if(nxp >= 1) { if(a(0).neff > mn) { ri = j; mn = a(0).neff; } } } if(ri<=0 || ri>=wg.ny+1) { // eimodeerror("solver: reference slice identification"); ma.clear(); return 0; } } else { ri = rsn; } rs = st(ri); ModeArray xpa; nxp = modeanalysis(rs, pol, xpa, 1); if(quiet != 1) { fprintf(stderr, "\n------------- Metric - VEIMS --------------- 2010 ---\n"); switch(pol) { case TE: fprintf(stderr, "TE, "); break; case TM: fprintf(stderr, "TM, "); break; } fprintf(stderr, "Lambda: %.10g, K0: %g, ", lambda, k0); fprintf(stderr, "RSlice: %d, #VModes: %d\n", ri, nxp); fprintf(stderr, "-----------------------------------------------------\n"); fprintf(stderr, " Nx: %d Ny: %d\n", wg.nx, wg.ny); fprintf(stderr, " Hx: "); for(int j=0; j<=wg.nx; ++j) fprintf(stderr, "%6.10g ", wg.hx(j)); fprintf(stderr, "\n"); fprintf(stderr, " Hy: "); for(int j=0; j<=wg.ny; ++j) fprintf(stderr, "%6.10g ", wg.hy(j)); fprintf(stderr, "\n"); fprintf(stderr, " N: "); for(int j=wg.nx+1; j>=0; --j) { for(int k=0; k<=wg.ny+1; ++k) fprintf(stderr, "%6.10g ", wg.n(j,k)); if(j>0) fprintf(stderr, "\n "); } fprintf(stderr, "\n"); fprintf(stderr, "-----------------------------------------------------\n"); } ModeArray tma, ttma; ttma.clear(); modeanalysis(st(0), TE, tma, 1); ttma.merge(tma); modeanalysis(st(0), TM, tma, 1); ttma.merge(tma); modeanalysis(st(st.ny+1), TE, tma, 1); ttma.merge(tma); modeanalysis(st(st.ny+1), TM, tma, 1); ttma.merge(tma); ttma.sort(); double leakylim = -1.0; if(ttma.num >= 1) leakylim = ttma(0).neff; tma.clear(); ttma.clear(); ma.clear(); for(int xpi=0; xpi<=nxp-1; ++xpi) { if(quiet != 1) fprintf(stderr, "v(%d):\n", xpi); EIMode m; m.wg = wg; m.st = st; m.pol = pol; m.k0 = k0; m.rsegi = ri; Mode xp = xpa(xpi); m.vmp = xp; Waveguide hwg(wg.ny); hwg.hx = wg.hy; hwg.lambda = wg.lambda; hwg.special = 1; Dvector hwg_q(wg.ny+2); Dvector hwg_ec(wg.ny+2); for(int s=0; s<=wg.ny+1; ++s) { if(rs.nx > wg.nx) eimodeerror("veims: nx"); if(pol == TE) { double na = 0.0; double no = 0.0; for(int l=0; l<=wg.nx+1; ++l) { double x0, x1, xc; if(l==0) x0 = -AWAY; else x0 = wg.hx(l-1); if(l==wg.nx+1) x1 = AWAY; else x1 = wg.hx(l); xc = 0.5*(x0+x1); double i = xp.integrate(FLD, FLD, Interval(x0, x1)); no += i; na += i*(wg.eps(l, s) - rs.eps(xc)); } hwg.n(s) = 
xp.neff*xp.neff+na/no; hwg_q(s) = 1.0; hwg_ec(s) = hwg.n(s); } else // TM { double x_1_x = 0.0; double x_1de_x = 0.0; double x_edeq_x = 0.0; double xp_1de_xp = 0.0; double xp_edeq_xp = 0.0; for(int l=0; l<=wg.nx+1; ++l) { double x0, x1, xc; if(l==0) x0 = -AWAY; else x0 = wg.hx(l-1); if(l==wg.nx+1) x1 = AWAY; else x1 = wg.hx(l); xc = 0.5*(x0+x1); double i; i = xp.integrate(FLD, FLD, Interval(x0, x1)); x_1_x += i; x_1de_x += i/rs.eps(xc); x_edeq_x += i*wg.eps(l, s)/rs.eps(xc)/rs.eps(xc); i = xp.integrate(DER, DER, Interval(x0, x1)); xp_1de_xp += i/rs.eps(xc); xp_edeq_xp += i*wg.eps(l, s)/rs.eps(xc)/rs.eps(xc); } hwg_q(s) = xp_1de_xp/xp_edeq_xp; hwg_ec(s) = xp.neff*xp.neff*hwg_q(s)+x_1_x/x_1de_x*(1.0-hwg_q(s)) ; hwg.n(s) = hwg_ec(s)*x_edeq_x/x_1de_x; } } m.hwg = hwg; // if(quiet != 1) hwg.write(stderr); m.hwg_q = hwg_q; m.hwg_ec = hwg_ec; if(quiet != 1) { if(pol == TE) { fprintf(stderr, "eps_eff: "); for(int s=0; s<=wg.ny+1; ++s) fprintf(stderr, "%6.4g ", hwg.eps(s)); fprintf(stderr, "\n"); } else { fprintf(stderr, "eps_eff: "); for(int s=0; s<=wg.ny+1; ++s) fprintf(stderr, "%6.4g ", hwg.eps(s)); fprintf(stderr, "\n"); fprintf(stderr, "eps_c: "); for(int s=0; s<=wg.ny+1; ++s) fprintf(stderr, "%6.4g ", hwg_ec(s)); fprintf(stderr, "\n"); fprintf(stderr, "q: "); for(int s=0; s<=wg.ny+1; ++s) fprintf(stderr, "%6.4g ", hwg_q(s)); fprintf(stderr, "\n"); } } double eemin = xp.wg.defaultepseffmin(); double eemax = xp.wg.defaultepseffmax(); ModeArray hma; veims_hsolve(hwg, hwg_ec, eemin, eemax, hma); for(int j=0; j<=hma.num-1; ++j) { if(hma(j).neff*hma(j).neff > eemin) { m.beta = hma(j).beta; m.neff = hma(j).neff; m.hmp = hma(j); m.normalize(1.0); m.setfieldmax(); m.leaky = 0; if(m.neff <= leakylim) m.leaky = 1; ma.add(m); if(quiet != 1) { fprintf(stderr, "T%c_(%d, %d), neff = %g", polCHR(pol), xpi, j, m.neff); if(m.leaky == 1) fprintf(stderr, " (leaky !)"); fprintf(stderr, "\n"); } } } if(quiet != 1) fprintf(stderr, " - - -\n"); } if(quiet != 1) fprintf(stderr, "\n"); return ma.num; } // with automatically determined number of the reference slice int veims(SegWgCrs st, Polarization pol, EIModeArray& ma, int quiet) { return veims(st, pol, -1, ma, quiet); } int veims(SegWgCrs st, Polarization pol, EIModeArray& ma) { return veims(st, pol, -1, ma, 0); } int veims(WgCrs wg, Polarization pol, EIModeArray& ma, int quiet) { return veims(SegWgCrs(wg), pol, -1, ma, quiet); } int veims(WgCrs wg, Polarization pol, EIModeArray& ma) { return veims(wg, pol, ma, 0); }
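The interference machinery above (EIModeArray::field, iopower, prop, movie) all relies on the evolution rule stated in the comments, amp_j(z) = amp_j(0)*exp(-i*beta_j*z). As a minimal standalone sketch -- not part of the library, using std::complex in place of the code's own Complex/Cvector types and with made-up propagation constants -- the rule and the resulting two-mode beating with beat length L_pi = pi/(beta0-beta1) look like this:

// Standalone sketch (not part of the library above): illustrates the amplitude
// evolution a_j(z) = a_j(0) * exp(-i * beta_j * z) used by EIModeArray::field
// and the resulting two-mode beat length L_pi = pi/(beta0 - beta1).
// std::complex replaces the repository's own Complex/Cvector types.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const std::complex<double> I(0.0, 1.0);
    const double pi = std::acos(-1.0);
    std::vector<double> beta = {12.56, 12.31};          // hypothetical propagation constants [1/um]
    std::vector<std::complex<double>> a0 = {1.0, 1.0};  // equal excitation at z = 0

    double Lpi = pi / (beta[0] - beta[1]);               // beat (coupling) length
    std::printf("beat length L_pi = %g um\n", Lpi);

    for (double z = 0.0; z <= 2.0 * Lpi; z += Lpi / 4.0) {
        // evolve each modal amplitude independently and superpose
        std::complex<double> s = 0.0;
        for (std::size_t j = 0; j < beta.size(); ++j)
            s += a0[j] * std::exp(-I * beta[j] * z);
        // |sum|^2 = 2 + 2*cos((beta0-beta1)*z): oscillates between 4 and 0 with period 2*L_pi
        std::printf("z = %8.3f  |sum a_j(z)|^2 = %g\n", z, std::norm(s));
    }
    return 0;
}

At z = 0 the two modes add in phase; at z = L_pi they cancel, which is exactly the beating along z that the prop/ioprop image plots visualize.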
github_cpp
2025-12-07T00:57:46Z
https://github.com/Xzhang363/BOSIM_v4.0/blob/92769e9dfd138b35614c8ceba3dcb30ac40b86c7/Metric/veims.cpp
{}
#include <iostream> #include <string> #include "encrypt.h" class TestData { public: int num; std::string text; TestData(int n, const std::string& t) : num(n), text(t) { std::cout << "[+] Created: " << text << "\n"; } ~TestData() { std::cout << "[-] Destroyed: " << text << "\n"; } void show() const { std::cout << ">> " << text << " = " << num << "\n"; } }; int main() { try { auto sp1 = make_secure<TestData>(42, "object_1"); sp1->show(); sp1->num = 999; sp1->text = "modified"; sp1->show(); auto sp2 = make_secure<TestData>(100, "object_2"); auto sp3 = make_secure<TestData>(200, "object_3"); sp2->show(); sp3->show(); sp1.destroy(); sp2.destroy(); sp3.destroy(); } catch (const std::exception& e) { std::cerr << "ERROR: " << e.what() << "\n"; return 1; } return 0; }
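The test above only shows the interface it expects from "encrypt.h": make_secure<T>(args...), operator-> access, and an explicit destroy(). Purely as a hypothetical stand-in -- not the repository's implementation, and without any actual pointer encryption -- a wrapper with that surface could look like the following sketch:

// Hypothetical stand-in for "encrypt.h" (the real header is not shown here):
// a minimal owning wrapper exposing the surface the test uses --
// make_secure<T>(args...), operator->, and destroy(). No pointer encryption
// is performed in this sketch.
#include <utility>

template <typename T>
class secure_ptr {
public:
    explicit secure_ptr(T* p) : ptr_(p) {}
    secure_ptr(secure_ptr&& o) noexcept : ptr_(o.ptr_) { o.ptr_ = nullptr; }
    secure_ptr(const secure_ptr&) = delete;
    secure_ptr& operator=(const secure_ptr&) = delete;
    ~secure_ptr() { destroy(); }               // safety net if destroy() is never called

    T* operator->() { return ptr_; }
    void destroy() { delete ptr_; ptr_ = nullptr; }

private:
    T* ptr_;
};

template <typename T, typename... Args>
secure_ptr<T> make_secure(Args&&... args) {
    return secure_ptr<T>(new T(std::forward<Args>(args)...));
}

With a wrapper like this, the destructor covers the case where destroy() is skipped, matching the paired Created/Destroyed messages the test prints.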
github_cpp
2025-12-05T20:30:04Z
https://github.com/agesa1/Pointer-Encryption/blob/7935758a119510324e949546e3150861a088f2d3/main.cpp
{}
#include<iostream> #include<vector> using namespace std; void merge(vector<int>&arr, int low , int mid, int high){ vector<int> temp; int left = low; int right = mid+1; while(left <= mid && right<= high){ if(arr[left] <= arr[right]){ temp.push_back(arr[left]); left++; } else{ temp.push_back(arr[right]); right++; } } while(left <= mid){ temp.push_back(arr[left]); left++; } while(right<= high){ temp.push_back(arr[right]); right++; } for(int i = low;i<=high ; i++){ arr[i] = temp[i-low]; } } void ms(vector<int>&arr, int low , int high){ if(low >= high) return; int mid = (low+high)/2; ms(arr , low , mid); ms(arr , mid+1, high); merge(arr, low ,mid , high); } void MergeSort(vector<int>&arr , int n){ ms(arr , 0 ,n-1); } int main(){ int n; cout << "Enter the number of elements: "; cin >> n; vector<int> arr(n); for(int i = 0 ;i<n;i++){ cout << "Enter " << " Element " << i+1 << ": "; cin >> arr[i]; } MergeSort(arr, n); cout << "The sorted Array is: "; for(int i = 0; i<n ; i++){ cout << arr[i] << " "; } cout << endl; return 0; }
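The merge step copies both halves into a temporary vector, so the sort runs in O(n log n) time with O(n) auxiliary space. Since the file's own main is interactive, a quick non-interactive check could look like the sketch below (assuming MergeSort is made linkable, e.g. by moving the sort functions into a header or removing the original main):

// Small non-interactive check for the MergeSort above: sorts a fixed vector
// and verifies the result with std::is_sorted.
#include <algorithm>
#include <cassert>
#include <iostream>
#include <vector>

void MergeSort(std::vector<int>& arr, int n);   // provided by the file above

int main() {
    std::vector<int> v = {5, 1, 4, 2, 8, 2, -3};
    MergeSort(v, static_cast<int>(v.size()));
    assert(std::is_sorted(v.begin(), v.end()));
    for (int x : v) std::cout << x << ' ';      // prints: -3 1 2 2 4 5 8
    std::cout << '\n';
    return 0;
}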
github_cpp
2025-12-07T14:49:57Z
https://github.com/bobbadikumar/Merge-quick-sort/blob/1a4dd7849236f59a90c280b1f43f0263ec4da53b/Merge Sorting.cpp
{}
#include <iostream> #include <limits> using namespace std; int main() { int factorial = 1, a; bool User_Handle = false; // Loop for user input validation (WHILE loop) while (!User_Handle) { cout << "------------------------------------------\n"; cout << "C++ program to find the factorial.\n"; cout << "------------------------------------------\n"; cout << "Input any number: "; if (cin >> a) { if (a < 0) { cout << "please enter a positive number.\n"; } else { User_Handle = true; } } if (!User_Handle) { cin.clear(); cin.ignore(numeric_limits<streamsize>::max(), '\n'); cout << "Invalid Input! Enter Positive Integer.\n"; } } // Main factorial calculation loop (FOR loop) for (int i = 1; i <= a; i++) { factorial = factorial * i; } cout << "\n----------------------------------------------------------\n"; cout << "The factorial of the given number is = " << factorial << endl; cout << "----------------------------------------------------------\n"; return 0; }
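One limitation worth noting: factorial is an int, and 13! already exceeds a 32-bit int, so results silently overflow for inputs above 12. A sketch of an overflow-aware variant (not a drop-in replacement for the program above) caps the input at 20, the largest n whose factorial still fits in an unsigned 64-bit integer:

// Overflow-aware variant of the calculation above (a sketch, not a drop-in
// replacement): 13! no longer fits in a 32-bit int, so unsigned long long is
// used and the input is capped at 20, the largest n with n! < 2^64.
#include <iostream>

int main() {
    int n;
    std::cout << "Input any number (0..20): ";
    if (!(std::cin >> n) || n < 0 || n > 20) {
        std::cout << "Please enter an integer between 0 and 20.\n";
        return 1;
    }
    unsigned long long factorial = 1;
    for (int i = 2; i <= n; ++i) factorial *= i;
    std::cout << "The factorial of " << n << " is " << factorial << '\n';
    return 0;
}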
github_cpp
2025-12-03T04:48:30Z
https://github.com/RomelSeguira/My-CPP-Learning/blob/f14a7b48dacacfb70b567bbe08ff7ef8dd56ffb3/Assignments/factorial_with_input_validation.cpp
{}
import { useEffect, useRef, useState } from 'react'; /** * Custom hook to detect when user is speaking using Web Audio API * @param {boolean} isMuted - Whether the microphone is muted * @param {number} threshold - Audio level threshold to detect speaking (0-255) * @returns {boolean} isSpeaking - Whether the user is currently speaking */ export function useAudioDetection(isMuted, threshold = 30) { const [isSpeaking, setIsSpeaking] = useState(false); const audioContextRef = useRef(null); const analyserRef = useRef(null); const microphoneRef = useRef(null); const animationFrameRef = useRef(null); const streamRef = useRef(null); useEffect(() => { let mounted = true; async function setupAudioDetection() { try { // Request microphone access const stream = await navigator.mediaDevices.getUserMedia({ audio: { echoCancellation: true, noiseSuppression: true, autoGainControl: true } }); if (!mounted) { stream.getTracks().forEach(track => track.stop()); return; } streamRef.current = stream; // Create audio context and analyser const audioContext = new (window.AudioContext || window.webkitAudioContext)(); const analyser = audioContext.createAnalyser(); const microphone = audioContext.createMediaStreamSource(stream); analyser.smoothingTimeConstant = 0.8; analyser.fftSize = 1024; microphone.connect(analyser); audioContextRef.current = audioContext; analyserRef.current = analyser; microphoneRef.current = microphone; // Start detecting audio levels detectAudioLevel(); } catch (error) { console.error('Error accessing microphone:', error); } } function detectAudioLevel() { if (!mounted || !analyserRef.current) return; const dataArray = new Uint8Array(analyserRef.current.frequencyBinCount); function checkLevel() { if (!mounted || !analyserRef.current) return; analyserRef.current.getByteFrequencyData(dataArray); // Calculate average audio level const average = dataArray.reduce((a, b) => a + b) / dataArray.length; // Update speaking status based on threshold setIsSpeaking(!isMuted && average > threshold); animationFrameRef.current = requestAnimationFrame(checkLevel); } checkLevel(); } if (!isMuted) { setupAudioDetection(); } else { setIsSpeaking(false); } return () => { mounted = false; // Cleanup if (animationFrameRef.current) { cancelAnimationFrame(animationFrameRef.current); } if (audioContextRef.current) { audioContextRef.current.close(); } if (streamRef.current) { streamRef.current.getTracks().forEach(track => track.stop()); } }; }, [isMuted, threshold]); return isSpeaking; } export default useAudioDetection;
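The hook's actual decision is language-agnostic: average the 0-255 frequency bins returned by the analyser and report speaking only while unmuted and above the threshold. The following C++ rendering isolates just that thresholding logic with made-up sample buffers (no Web Audio API involved):

// Core of the hook's decision, isolated from the Web Audio plumbing: average
// the 0..255 frequency bins and report "speaking" when unmuted and above a
// threshold. The sample buffers are made up for illustration.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

bool isSpeaking(const std::vector<std::uint8_t>& bins, bool muted, double threshold = 30.0) {
    if (muted || bins.empty()) return false;
    double avg = std::accumulate(bins.begin(), bins.end(), 0.0) / bins.size();
    return avg > threshold;
}

int main() {
    std::vector<std::uint8_t> quiet(512, 5), loud(512, 80);
    std::cout << std::boolalpha
              << isSpeaking(quiet, false) << ' '   // false: below threshold
              << isSpeaking(loud, false)  << ' '   // true: above threshold
              << isSpeaking(loud, true)   << '\n'; // false: muted
    return 0;
}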
github_javascript
2025-12-07T07:51:52Z
https://github.com/CodinGakpo/DebateIT/blob/1e5341c922fb5b063eb5ef2f9fecbe32c58bfc09/frontend/src/hooks/useAudioDetection.js
{}
const { getRouter } = require('stremio-addon-sdk'); const fetch = require('node-fetch'); const TMDB_API_KEY = process.env.TMDB_API_KEY || ''; const manifest = { id: 'com.trailers.youtube.addon', version: '1.0.1', name: 'YouTube Trailers', description: 'Direct links to YouTube trailers - No buffering!', resources: ['stream'], types: ['movie', 'series'], catalogs: [], idPrefixes: ['tt'], background: 'https://images.unsplash.com/photo-1574267432644-f05f41dc1799?w=1920&h=1080&fit=crop', logo: 'https://cdn-icons-png.flaticon.com/512/1384/1384060.png', // Certificação Stremio Addons stremioAddonsConfig: { issuer: 'https://stremio-addons.net', signature: 'eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..pB-EC9zlZduz6a-zU0OxsQ.R_CydhOhJx12LAA6b5K_c7GxYcxMu0e1FlAGC9elpvhCZJPtVMwdsTEnbMXROVZL9FNBERr9Z2kF45wFQN7uLN5fHXV3MmSqGmO2hHnic-oc3vcbzQ0rl2LUmo8uTXM8.1uu_6hsolyXULB6kmaghdQ' } }; async function getTMDBInfo(imdbId, type) { if (!TMDB_API_KEY) return null; try { const url = `https://api.themoviedb.org/3/find/${imdbId}?api_key=${TMDB_API_KEY}&external_source=imdb_id`; const response = await fetch(url); const data = await response.json(); if (type === 'movie' && data.movie_results && data.movie_results.length > 0) { return { id: data.movie_results[0].id, name: data.movie_results[0].title, type: 'movie' }; } else if (type === 'series' && data.tv_results && data.tv_results.length > 0) { return { id: data.tv_results[0].id, name: data.tv_results[0].name, type: 'tv' }; } return null; } catch (error) { console.error('Error getting TMDB info:', error); return null; } } async function getTMDBTrailer(tmdbId, mediaType) { if (!TMDB_API_KEY) return null; try { const url = `https://api.themoviedb.org/3/${mediaType}/${tmdbId}/videos?api_key=${TMDB_API_KEY}`; const response = await fetch(url); const data = await response.json(); const trailer = data.results.find(v => v.type === 'Trailer' && v.site === 'YouTube' && (v.iso_639_1 === 'en' || v.iso_639_1 === 'pt') ) || data.results.find(v => v.type === 'Trailer' && v.site === 'YouTube'); if (trailer) { return `https://www.youtube.com/watch?v=${trailer.key}`; } return null; } catch (error) { console.error('Error getting TMDB trailer:', error); return null; } } const addonInterface = { manifest, get: async (resource, type, id) => { if (resource === 'stream') { const imdbId = id.split(':')[0]; try { const tmdbInfo = await getTMDBInfo(imdbId, type); if (!tmdbInfo) { return { streams: [{ name: '🎬 Search Trailer on YouTube', title: 'Search for trailer', externalUrl: `https://www.youtube.com/results?search_query=${imdbId}+official+trailer`, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } const trailerUrl = await getTMDBTrailer(tmdbInfo.id, tmdbInfo.type); if (trailerUrl) { return { streams: [{ name: '▶️ Watch Trailer', title: `${tmdbInfo.name} - Official Trailer`, externalUrl: trailerUrl, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } else { return { streams: [{ name: '🔍 Search Trailer', title: `Search for "${tmdbInfo.name}" trailer`, externalUrl: `https://www.youtube.com/results?search_query=${encodeURIComponent(tmdbInfo.name + ' official trailer')}`, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } } catch (error) { console.error('Error:', error); return { streams: [] }; } } return { streams: [] }; } }; const router = getRouter(addonInterface); module.exports = (req, res) => { router(req, res, () => { res.statusCode = 404; res.end(); }); };
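getTMDBTrailer's selection rule, stripped of the TMDB/HTTP plumbing: prefer a YouTube video of type "Trailer" in English or Portuguese, otherwise fall back to any YouTube trailer. A C++ sketch of just that preference order, with a made-up Video struct standing in for the TMDB response items:

// Trailer preference order only (no TMDB calls): prefer a YouTube "Trailer"
// in English or Portuguese, otherwise accept any YouTube "Trailer".
// Video is a made-up stand-in for the TMDB video list entries.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct Video { std::string type, site, lang, key; };

std::optional<std::string> pickTrailer(const std::vector<Video>& vids) {
    auto match = [&](bool langFilter) -> std::optional<std::string> {
        for (const auto& v : vids)
            if (v.type == "Trailer" && v.site == "YouTube" &&
                (!langFilter || v.lang == "en" || v.lang == "pt"))
                return "https://www.youtube.com/watch?v=" + v.key;
        return std::nullopt;
    };
    if (auto url = match(true)) return url;   // preferred languages first
    return match(false);                      // any YouTube trailer as fallback
}

int main() {
    std::vector<Video> vids = {
        {"Teaser",  "YouTube", "en", "aaa"},
        {"Trailer", "YouTube", "de", "bbb"},
        {"Trailer", "YouTube", "pt", "ccc"},
    };
    if (auto url = pickTrailer(vids)) std::cout << *url << '\n';  // picks key "ccc"
    return 0;
}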
const { getRouter } = require('stremio-addon-sdk'); const fetch = require('node-fetch'); const TMDB_API_KEY = process.env.TMDB_API_KEY || ''; const manifest = { id: 'com.trailers.youtube.addon', version: '1.0.1', name: 'YouTube Trailers', description: 'Direct links to YouTube trailers - No buffering!', resources: ['stream'], types: ['movie', 'series'], catalogs: [], idPrefixes: ['tt'], background: 'https://images.unsplash.com/photo-1574267432644-f05f41dc1799?w=1920&h=1080&fit=crop', logo: 'https://cdn-icons-png.flaticon.com/512/1384/1384060.png', // Certificação Stremio Addons stremioAddonsConfig: { issuer: 'https://stremio-addons.net', signature: 'eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..pB-EC9zlZduz6a-zU0OxsQ.R_CydhOhJx12LAA6b5K_c7GxYcxMu0e1FlAGC9elpvhCZJPtVMwdsTEnbMXROVZL9FNBERr9Z2kF45wFQN7uLN5fHXV3MmSqGmO2hHnic-oc3vcbzQ0rl2LUmo8uTXM8.1uu_6hsolyXULB6kmaghdQ' } }; async function getTMDBInfo(imdbId, type) { if (!TMDB_API_KEY) return null; try { const url = `https://api.themoviedb.org/3/find/${imdbId}?api_key=${TMDB_API_KEY}&external_source=imdb_id`; const response = await fetch(url); const data = await response.json(); if (type === 'movie' && data.movie_results && data.movie_results.length > 0) { return { id: data.movie_results[0].id, name: data.movie_results[0].title, type: 'movie' }; } else if (type === 'series' && data.tv_results && data.tv_results.length > 0) { return { id: data.tv_results[0].id, name: data.tv_results[0].name, type: 'tv' }; } return null; } catch (error) { console.error('Error getting TMDB info:', error); return null; } } async function getTMDBTrailer(tmdbId, mediaType) { if (!TMDB_API_KEY) return null; try { const url = `https://api.themoviedb.org/3/${mediaType}/${tmdbId}/videos?api_key=${TMDB_API_KEY}`; const response = await fetch(url); const data = await response.json(); const trailer = data.results.find(v => v.type === 'Trailer' && v.site === 'YouTube' && (v.iso_639_1 === 'en' || v.iso_639_1 === 'pt') ) || data.results.find(v => v.type === 'Trailer' && v.site === 'YouTube'); if (trailer) { return `https://www.youtube.com/watch?v=${trailer.key}`; } return null; } catch (error) { console.error('Error getting TMDB trailer:', error); return null; } } const addonInterface = { manifest, get: async (resource, type, id) => { if (resource === 'stream') { const imdbId = id.split(':')[0]; try { const tmdbInfo = await getTMDBInfo(imdbId, type); if (!tmdbInfo) { return { streams: [{ name: '🎬 Search Trailer on YouTube', title: 'Search for trailer', externalUrl: `https://www.youtube.com/results?search_query=${imdbId}+official+trailer`, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } const trailerUrl = await getTMDBTrailer(tmdbInfo.id, tmdbInfo.type); if (trailerUrl) { return { streams: [{ name: '▶️ Watch Trailer', title: `${tmdbInfo.name} - Official Trailer`, externalUrl: trailerUrl, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } else { return { streams: [{ name: '🔍 Search Trailer', title: `Search for "${tmdbInfo.name}" trailer`, externalUrl: `https://www.youtube.com/results?search_query=${encodeURIComponent(tmdbInfo.name + ' official trailer')}`, behaviorHints: { notWebReady: true, bingeGroup: 'trailer' } }] }; } } catch (error) { console.error('Error:', error); return { streams: [] }; } } return { streams: [] }; } }; const router = getRouter(addonInterface); module.exports = (req, res) => { router(req, res, () => { res.statusCode = 404; res.end(); }); };
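A minimal sketch of how this serverless-style handler could be exercised locally, assuming the file above lives at ./api/index.js and TMDB_API_KEY is set in the environment (the path and port are assumptions, not part of the repository): it simply mounts the exported (req, res) function on Node's built-in HTTP server.

// Local smoke test for the handler exported above (hypothetical runner script).
// Assumptions: the addon file sits at ./api/index.js and TMDB_API_KEY is exported in the env.
const http = require('http');
const addonHandler = require('./api/index.js');

http.createServer(addonHandler).listen(7000, () => {
  // The stremio-addon-sdk router behind the handler serves the manifest and stream routes.
  console.log('Addon running at http://127.0.0.1:7000/manifest.json');
});

With that running, adding http://127.0.0.1:7000/manifest.json as an addon in Stremio should surface the trailer streams produced by the get handler.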
github_javascript
2025-12-12T00:11:17Z
https://github.com/mechanicwb2-hub/stremio-trailer-addon/blob/818d8a229bcbf5e12c899199f3464bf9e5da8093/api/index.js
{}
const { test, describe } = require('node:test'); const assert = require('node:assert'); const UrlAnalyzer = require('../site/js/analyzer.js'); /** * Helper: Extract just the pattern strings from results */ function getPatterns(results) { return results.map(r => r.pattern).sort(); } describe('UrlAnalyzer', () => { const analyzer = new UrlAnalyzer(); describe('Basic functionality', () => { test('empty input returns empty array', () => { const result = analyzer.analyze([]); assert.deepStrictEqual(result, []); }); test('whitespace-only input returns empty array', () => { const result = analyzer.analyze(['', ' ', '\t']); assert.deepStrictEqual(result, []); }); test('invalid URLs are filtered out', () => { const result = analyzer.analyze(['not-a-url', 'https://example.com/valid']); assert.strictEqual(result.length, 1); assert.strictEqual(result[0].pattern, 'https://example.com/valid'); }); test('duplicate URLs are deduplicated', () => { const result = analyzer.analyze([ 'https://example.com/page', 'https://example.com/page', 'https://example.com/page' ]); assert.strictEqual(result.length, 1); assert.strictEqual(result[0].count, 1); }); }); describe('Pattern masking - static vs dynamic', () => { test('single URL - no masking (static)', () => { const result = analyzer.analyze(['https://example.com/foo']); assert.deepStrictEqual(getPatterns(result), ['https://example.com/foo']); }); test('multiple URLs with same path - no masking (static)', () => { const result = analyzer.analyze([ 'https://example.com/products', 'https://example.com/products' ]); // Deduplicated to 1 assert.strictEqual(result.length, 1); assert.strictEqual(result[0].pattern, 'https://example.com/products'); }); test('multiple URLs with different final segments - masking (dynamic)', () => { const result = analyzer.analyze([ 'https://example.com/products/123', 'https://example.com/products/456', 'https://example.com/products/789' ]); assert.deepStrictEqual(getPatterns(result), ['https://example.com/products/…']); }); test('multiple URLs with different first segments - masking', () => { const result = analyzer.analyze([ 'https://example.com/products/123', 'https://example.com/categories/abc' ]); assert.deepStrictEqual(getPatterns(result), ['https://example.com/…/…']); }); test('simple single-segment paths should be masked', () => { const result = analyzer.analyze([ 'https://example.com/a', 'https://example.com/b', 'https://example.com/c' ]); assert.deepStrictEqual(getPatterns(result), ['https://example.com/…']); }); test('mixed static and dynamic segments', () => { const result = analyzer.analyze([ 'https://example.com/blog/a', 'https://example.com/blog/b', 'https://example.com/about' ]); const patterns = getPatterns(result); // 'blog' and 'about' are at same position -> both masked to … // Result: https://example.com/…/… (for blog/a, blog/b) and https://example.com/… (for about) assert.ok(patterns.some(p => p.includes('…'))); }); test('static segment after dynamic segment', () => { const result = analyzer.analyze([ 'https://www.example.com/01/p/001', 'https://www.example.com/02/p/002', 'https://www.example.com/03/p/003', 'https://www.example.com/04/p/004', 'https://www.example.com/05/p/005' ]); assert.deepStrictEqual(getPatterns(result), ['https://www.example.com/…/p/…']); }); }); describe('Host handling', () => { test('tenant-like subdomains are masked', () => { const result = analyzer.analyze([ 'https://tenant1.app.com/dashboard', 'https://tenant2.app.com/dashboard', 'https://tenant3.app.com/dashboard' ]); 
assert.deepStrictEqual(getPatterns(result), ['https://….app.com/dashboard']); }); test('meaningful subdomains like www and blog should NOT be masked', () => { const result = analyzer.analyze([ 'https://www.example.com/page', 'https://www.example.com/about', 'https://blog.example.com/post', 'https://blog.example.com/archive' ]); const patterns = getPatterns(result); // www and blog are meaningful subdomains - they should stay separate assert.ok(patterns.some(p => p.includes('www.example.com')), 'Should keep www subdomain: ' + JSON.stringify(patterns)); assert.ok(patterns.some(p => p.includes('blog.example.com')), 'Should keep blog subdomain: ' + JSON.stringify(patterns)); assert.ok(!patterns.some(p => p.includes('….example.com')), 'Should NOT mask to ….example.com: ' + JSON.stringify(patterns)); }); test('different domains create separate patterns', () => { const result = analyzer.analyze([ 'https://example.com/page', 'https://other.com/page' ]); // Different domains are never masked - they create separate patterns assert.strictEqual(result.length, 2); }); test('www vs non-www are treated as different host structures', () => { const result = analyzer.analyze([ 'https://example.com/page', 'https://www.example.com/page' ]); // www.example.com has 3 host segments, example.com has 2 // They have different structures, so they create separate patterns assert.strictEqual(result.length, 2); }); }); describe('Scheme handling', () => { test('http and https create separate patterns', () => { const result = analyzer.analyze([ 'http://example.com/page', 'https://example.com/page' ]); // Different schemes are never masked - they create separate patterns assert.strictEqual(result.length, 2); }); }); describe('Result structure', () => { test('result contains pattern, count, and urls', () => { const result = analyzer.analyze([ 'https://example.com/products/123', 'https://example.com/products/456' ]); assert.strictEqual(result.length, 1); assert.ok('pattern' in result[0]); assert.ok('count' in result[0]); assert.ok('urls' in result[0]); assert.strictEqual(result[0].count, 2); assert.strictEqual(result[0].urls.length, 2); }); test('results are sorted by group frequency, then hierarchically', () => { const result = analyzer.analyze([ 'https://example.com/a', 'https://example.com/a/1', 'https://example.com/a/2', 'https://example.com/b', 'https://example.com/b/1', 'https://example.com/b/2', 'https://example.com/b/3', 'https://example.com/b/4', 'https://example.com/b/5', 'https://example.com/c' ]); const patterns = result.map(r => r.pattern); // /b group has total 6 (1+5), /a group has total 3 (1+2), /c has 1 // Expected order: /b group first (highest), then /a group, then /c assert.strictEqual(patterns[0], 'https://example.com/b'); assert.strictEqual(patterns[1], 'https://example.com/b/…'); assert.strictEqual(patterns[2], 'https://example.com/a'); assert.strictEqual(patterns[3], 'https://example.com/a/…'); assert.strictEqual(patterns[4], 'https://example.com/c'); }); }); describe('Mixed structures at same level', () => { test('static segment after dynamic when mixed with other paths', () => { // This tests the /01/p/001 pattern when mixed with locale paths const result = analyzer.analyze([ 'https://www.example.com/en-us/home', 'https://www.example.com/en-us/pricing', 'https://www.example.com/fr-fr/home', 'https://www.example.com/01/p/001', 'https://www.example.com/02/p/002', 'https://www.example.com/03/p/003' ]); const patterns = getPatterns(result); // Should have separate locale patterns AND the …/p/… pattern 
assert.ok(patterns.some(p => p.includes('/p/')), 'Should preserve /p/ as static segment: ' + JSON.stringify(patterns)); assert.ok(patterns.some(p => p === 'https://www.example.com/…/p/…'), 'Should produce …/p/… pattern: ' + JSON.stringify(patterns)); }); test('route names vs IDs are distinguished by count', () => { // home/pricing appear multiple times (routes) while IDs appear once each const result = analyzer.analyze([ 'https://example.com/users/home', 'https://example.com/products/home', 'https://example.com/settings/home', 'https://example.com/users/123', 'https://example.com/products/456' ]); const patterns = getPatterns(result); // 'home' appears 3 times -> should be kept as route name // '123', '456' appear once -> should be masked assert.ok(patterns.some(p => p.includes('/home')), 'Should preserve /home as route: ' + JSON.stringify(patterns)); }); }); describe('Edge cases', () => { test('root path only', () => { const result = analyzer.analyze(['https://example.com/']); assert.strictEqual(result.length, 1); assert.strictEqual(result[0].pattern, 'https://example.com/'); }); test('deep nested paths', () => { const result = analyzer.analyze([ 'https://example.com/a/b/c/d/1', 'https://example.com/a/b/c/d/2' ]); assert.deepStrictEqual(getPatterns(result), ['https://example.com/a/b/c/d/…']); }); test('paths with special characters', () => { const result = analyzer.analyze([ 'https://example.com/path%20with%20spaces', 'https://example.com/path-with-dashes' ]); assert.strictEqual(result.length, 1); }); }); });
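For orientation, a small stand-alone sketch of the API these tests exercise; the require path matches the one at the top of the test file, while the sample URLs and the console output are illustrative only.

// Illustrative use of the analyzer under test.
const UrlAnalyzer = require('../site/js/analyzer.js');

const analyzer = new UrlAnalyzer();
const groups = analyzer.analyze([
  'https://example.com/products/123',
  'https://example.com/products/456',
]);

// Per the 'different final segments' test above, this yields a single group:
// pattern 'https://example.com/products/…', count 2, with both source URLs attached.
groups.forEach(g => console.log(`${g.pattern} (${g.count})`));

The suite itself is written against node:test, so it can be run with Node's built-in runner, for example: node --test tests/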
github_javascript
2025-12-07T09:17:59Z
https://github.com/MarekProkop/url-pattern-analyzer/blob/8789f9cb9e406c395cd489a7940f85a58371240c/tests/analyzer.test.js
{}
export const jams = [ { id: 1, title: "Micro Jam 001: Sci-fi", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-001", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/1_rgzo3q.png" }, { id: 2, title: "Micro Jam 002: Renaissance", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-002", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/1_rgzo3q.png" }, { id: 3, title: "Micro Jam 003: Winter", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-003", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/1_rgzo3q.png" }, { id: 4, title: "Micro Jam 004: Light", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-004", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/4_nwdbh2.png" }, { id: 5, title: "Micro Jam 005: Mythical", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-005", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/1_rgzo3q.png" }, { id: 6, title: "Micro Jam 006: Creatures", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-006", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/1_rgzo3q.png" }, { id: 7, title: "Micro Jam 007: Aliens", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-007", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/7_qzewea.png" }, { id: 8, title: "Micro Jam 008: Magic", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-008", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110398/8_g2mnmo.png" }, { id: 9, title: "Micro Jam 009: Music", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-009", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/9_bk3sy3.png" }, { id: 10, title: "Micro Jam 010: Villains", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-010", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110397/10_nj6cqx.png" }, { id: 11, title: "Micro Jam 011: Teleportation", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-011", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110398/11_kxb6ly.png" }, { id: 12, title: "Micro Jam 012: Ice", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-012", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/12_ev3j6u.png" }, { id: 13, title: "Micro Jam 013: Lava", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-013", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/13_dukdx8.png" }, { id: 14, title: "Micro Jam 014: Urban", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-014", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/14_xya0cf.png" }, { id: 15, title: "Micro Jam 015: Ghost", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-015", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/15_zfizup.png" }, { id: 16, title: "Micro Jam 016: Space", status: "completed", 
// "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-016", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765110399/16_j9unaw.png" }, { id: 17, title: "Micro Jam 017: Islands", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-017", img: "https://res.cloudinary.com/draopkta1/image/upload/v1765164702/1_ch4jej.png" }, { id: 18, title: "Micro Jam 018: Water", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-018", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164702/2_glkqtg.png' }, { id: 19, title: "Micro Jam 019: Dimensions", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-019", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164702/3_tsao9c.png' }, { id: 20, title: "Micro Jam 020: Time", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-020", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164702/4_pbqdtm.png' }, { id: 21, title: "Micro Jam 021: Underground", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-021", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/5_xyjqaz.png' }, { id: 22, title: "Micro Jam 022: Desert", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-022", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/6_v81hcn.png' }, { id: 23, title: "Micro Jam 023: Heat", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-023", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/7_honebf.png' }, { id: 24, title: "Micro Jam 024: Wind", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-024", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/8_ql1bnp.png' }, { id: 25, title: "Micro Jam 025: Vampires", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-025", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/9_kkmabl.png' }, { id: 26, title: "Micro Jam 026: Cats", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-026", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164704/10_amjpzp.png' }, { id: 27, title: "Micro Jam 027: Retro", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-027", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164704/11_kjrqet.png' }, { id: 28, title: "Micro Jam 028: Electric", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-028", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/12_rq6zy6.png' }, { id: 29, title: "Micro Jam 029: Poison", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-029", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/13_ap9rvd.png' }, { id: 30, title: "Micro Jam 030: Dreams", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-030", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/14_sot4eb.png' }, { id: 31, title: "Micro Jam 031: Robots", status: "completed", // "upcoming", "active", "completed" itchUrl: 
"https://itch.io/jam/micro-jam-031", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/15_qhtxkm.png' }, { id: 32, title: "Micro Jam 032: Ancient", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-032", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164706/16_ys8bcz.png' }, { id: 33, title: "Micro Jam 033: Void", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-033", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164706/17_xeyetx.png' }, { id: 34, title: "Micro Jam 034: Defense", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-034", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164706/18_jksbxy.png' }, { id: 35, title: "Micro Jam 035: Wizard", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-035", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164702/19_ju1cpr.png' }, { id: 36, title: "Micro Jam 036: Fortune", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-036", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164702/20_mnn6h8.png' }, { id: 37, title: "Micro Jam 037: Food", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-037", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164703/21_k8jzwj.png' }, { id: 38, title: "Micro Jam 038: Zombies", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-038", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164704/22_jz0alo.png' }, { id: 39, title: "Micro Jam 039: Duality", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-039", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164704/23_is3dc8.png' }, { id: 40, title: "Micro Jam 040: Magic²", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-040", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164704/24_d2aura.png' }, { id: 41, title: "Micro Jam 041: Platforms", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-041", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/25_jczdfb.png' }, { id: 42, title: "Micro Jam 042: Frogs", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-042", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765165147/Untitled_design_21_oaymbv.png' }, { id: 43, title: "Micro Jam 043: Colors", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-043", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164705/26_tztj8m.png' }, { id: 44, title: "Micro Jam 044: Space²", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-044", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164706/27_nzezwh.png' }, { id: 45, title: "Micro Jam 045: Miniature", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-045", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164706/28_u6dy78.png' }, { id: 46, title: "Micro Jam 046: Night", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-046", img: 
'https://res.cloudinary.com/draopkta1/image/upload/v1765164707/29_p9pfv2.png' }, { id: 47, title: "Micro Jam 047: Savanna", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-047", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164707/30_vw4yse.png' }, { id: 48, title: "Micro Jam 048: Webs", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-048", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164707/31_nv3mop.png' }, { id: 49, title: "Micro Jam 049: Sky", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-049", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164707/32_ub1gwf.png' }, { id: 50, title: "Micro Jam 050: Aliens²", status: "completed", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-050", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164707/33_crwkqj.png' }, { id: 51, title: "Micro Jam 051: Christmas", status: "upcoming", // "upcoming", "active", "completed" itchUrl: "https://itch.io/jam/micro-jam-051", img: 'https://res.cloudinary.com/draopkta1/image/upload/v1765164708/34_v4uyga.png' }, ]
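A hypothetical consumer of the jams array, sketched under the assumption that it is imported from a sibling module; it relies only on the id/title/status/itchUrl fields defined above.

// Example consumer of the jams list (the import path is an assumption).
import { jams } from './jams.js';

const completed = jams.filter(j => j.status === 'completed');
const upcoming = jams.filter(j => j.status === 'upcoming');

console.log(`${completed.length} completed jams`);
if (upcoming.length > 0) {
  // As of this snapshot, Micro Jam 051: Christmas is the only upcoming entry.
  console.log(`Next up: ${upcoming[0].title} -> ${upcoming[0].itchUrl}`);
}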
github_javascript
2025-12-12T10:47:14Z
https://github.com/t33devv/microjam/blob/1dd32e971b07397dc1d57cb9b6a425671800b076/src/data/jams.js
{}
import { useState, useCallback } from 'react';

const MAX_HISTORY = 50;

export function useHistory(initialState) {
  const [history, setHistory] = useState([initialState]);
  const [currentIndex, setCurrentIndex] = useState(0);

  const currentState = history[currentIndex];

  const pushState = useCallback((newState) => {
    setHistory(prev => {
      // If we are not at the newest entry, drop the redo tail after the current index
      const newHistory = prev.slice(0, currentIndex + 1);
      newHistory.push(newState);
      // Cap the number of stored history entries
      if (newHistory.length > MAX_HISTORY) {
        newHistory.shift();
      }
      return newHistory;
    });
    setCurrentIndex(prev => Math.min(prev + 1, MAX_HISTORY - 1));
  }, [currentIndex]);

  const undo = useCallback(() => {
    if (currentIndex > 0) {
      setCurrentIndex(prev => prev - 1);
      return history[currentIndex - 1];
    }
    return null;
  }, [currentIndex, history]);

  const redo = useCallback(() => {
    if (currentIndex < history.length - 1) {
      setCurrentIndex(prev => prev + 1);
      return history[currentIndex + 1];
    }
    return null;
  }, [currentIndex, history]);

  const canUndo = currentIndex > 0;
  const canRedo = currentIndex < history.length - 1;

  return {
    currentState,
    pushState,
    undo,
    redo,
    canUndo,
    canRedo,
  };
}
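A sketch of how a component might consume this hook; the component and its markup are invented, and only the values returned by useHistory above are assumed.

// Hypothetical consumer: a text area with undo/redo wired to useHistory.
import React from 'react';
import { useHistory } from './useHistory';

export function NoteEditor() {
  const { currentState, pushState, undo, redo, canUndo, canRedo } = useHistory('');

  return (
    <div>
      <textarea value={currentState} onChange={e => pushState(e.target.value)} />
      <button onClick={undo} disabled={!canUndo}>Undo</button>
      <button onClick={redo} disabled={!canRedo}>Redo</button>
    </div>
  );
}

In practice a consumer would likely debounce pushState so that a single edit, rather than every keystroke, consumes one of the 50 history slots.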
github_javascript
2025-12-03T20:46:22Z
https://github.com/24K-GA/App-Mockup-Studio/blob/c644854a68dfe844c4897f0805571947c2b553ce/src/hooks/useHistory.js
{}
// Game State let gameState = { hunger: 100, money: 1000, happiness: 50 }; // DOM Elements const hungerBar = document.getElementById('hungerBar'); const hungerText = document.getElementById('hungerText'); const moneyDisplay = document.getElementById('money'); const happinessDisplay = document.getElementById('happiness'); const speechBubble = document.getElementById('speechBubble'); const character = document.getElementById('character'); const mouth = document.getElementById('mouth'); const notifications = document.getElementById('notifications'); // Funny Messages const messages = { hungry: [ "Đói quá... cho ăn đi anh em ơi! 🥺", "Bụng kêu ọc ọc rồi! 😭", "Anh em đâu rồi? Em đói lắm! 🍚", "Sao không cho em ăn vậy? 😢", "Em muốn ăn... bất cứ thứ gì! 🤤", "Đói đến mức muốn ăn cả màn hình luôn! 😤", "Nhìn cái menu mà nuốt nước bọt! 🤤" ], satisfied: [ "Ơ, có vẻ ổn rồi đó! 😊", "Cảm ơn anh em nhé! 😁", "Ngon quá đi mất! 😋", "Hơi no rồi nè! 🙂", "Vẫn còn ăn được nữa! 😄" ], full: [ "No căng bụng luôn! 🤰", "Ăn no rồi, nghỉ tý! 😴", "Sướng quá đi mất! 🥰", "Cảm ơn anh em đã nuôi! ❤️", "No nê rồi, có thể chiến game! 🎮", "배불러! (Bụng no quá!) 😂" ], veryHungry: [ "ĐÓIIIIII! CHO ĂN ĐI! 😭😭😭", "Sắp chết đói rồi anh em ơi! ☠️", "Cứu em với! Đói lắm rồi! 🆘", "Em sắp ngất đói mất! 😵", "Nhanh lên đi! Bụng em kêu ầm ầm! 🔊" ], noMoney: [ "Hết tiền rồi! Đi làm thêm đi! 💸", "Nghèo quá! Phải kiếm tiền thôi! 😢", "Ví trống rỗng! Lương đâu? 💰", "Làm việc đi anh em! Hết tiền rồi! 😭" ], afterEating: [ "Ngon lành cành đào! 😋", "Cảm ơn meal! 🙏", "Đã đời! 🤩", "Trời ơi, ngon quá! 😍", "5 sao cho món này! ⭐⭐⭐⭐⭐" ] }; // Initialize Game function init() { updateUI(); startHungerTimer(); } // Update UI function updateUI() { // Update hunger bar hungerBar.style.width = gameState.hunger + '%'; hungerText.textContent = gameState.hunger + '%'; // Update money moneyDisplay.textContent = gameState.money + 'đ'; // Update happiness emoji if (gameState.hunger < 20) { happinessDisplay.textContent = '😭'; mouth.className = 'mouth sad'; } else if (gameState.hunger < 50) { happinessDisplay.textContent = '😟'; mouth.className = 'mouth'; } else if (gameState.hunger < 80) { happinessDisplay.textContent = '😊'; mouth.className = 'mouth'; } else { happinessDisplay.textContent = '🤩'; mouth.className = 'mouth happy'; } // Update speech bubble updateSpeechBubble(); } // Update Speech Bubble function updateSpeechBubble() { let messageArray; if (gameState.hunger < 15) { messageArray = messages.veryHungry; } else if (gameState.hunger < 40) { messageArray = messages.hungry; } else if (gameState.hunger < 80) { messageArray = messages.satisfied; } else { messageArray = messages.full; } const randomMessage = messageArray[Math.floor(Math.random() * messageArray.length)]; speechBubble.textContent = randomMessage; } // Hunger Timer function startHungerTimer() { setInterval(() => { if (gameState.hunger > 0) { gameState.hunger = Math.max(0, gameState.hunger - 1); updateUI(); // Warning notification when very hungry if (gameState.hunger === 10) { showNotification('⚠️ Anh em sắp chết đói rồi!', 'error'); } } else { showNotification('💀 Game Over! Anh em đói chết rồi!', 'error'); } }, 3000); // Decrease hunger every 3 seconds } // Feed Character function feedCharacter(foodName, price, hungerRestore) { if (gameState.money < price) { showNotification('❌ Không đủ tiền! 
Đi làm thêm đi!', 'error'); const randomMessage = messages.noMoney[Math.floor(Math.random() * messages.noMoney.length)]; speechBubble.textContent = randomMessage; return; } // Deduct money and restore hunger gameState.money -= price; gameState.hunger = Math.min(100, gameState.hunger + hungerRestore); // Play eating animation character.classList.add('eating'); setTimeout(() => { character.classList.remove('eating'); }, 1500); // Show notification const randomAfterEating = messages.afterEating[Math.floor(Math.random() * messages.afterEating.length)]; showNotification(`🍴 Đã ăn ${foodName}! ${randomAfterEating}`, 'success'); // Update speech bubble with random message setTimeout(() => { const messageArray = gameState.hunger > 80 ? messages.full : messages.satisfied; const randomMessage = messageArray[Math.floor(Math.random() * messageArray.length)]; speechBubble.textContent = randomMessage; }, 1000); updateUI(); } // Show Notification function showNotification(message, type = 'success') { const notification = document.createElement('div'); notification.className = `notification ${type}`; notification.textContent = message; notifications.appendChild(notification); setTimeout(() => { notification.style.animation = 'slideIn 0.3s ease reverse'; setTimeout(() => { notifications.removeChild(notification); }, 300); }, 3000); } // Earn Money function earnMoney() { const earnings = Math.floor(Math.random() * 100) + 50; gameState.money += earnings; const funnyEarnMessages = [ `💰 Kiếm được ${earnings}đ! Giàu vcl! 💸`, `🤑 +${earnings}đ! Streamer à? 🎮`, `💵 Nhận ${earnings}đ! Đi làm thêm giỏi đấy! 💪`, `💴 Lương ${earnings}đ đây! Mua đồ ăn đi! 🍕`, `💷 Được ${earnings}đ! Sugar daddy đó à? 😏` ]; const randomEarnMessage = funnyEarnMessages[Math.floor(Math.random() * funnyEarnMessages.length)]; showNotification(randomEarnMessage, 'success'); speechBubble.textContent = "Wow! Giàu rồi! Mua đồ ăn cho em đi! 🤩"; updateUI(); } // Reset Game function resetGame() { if (confirm('Chơi lại từ đầu? Tất cả tiến trình sẽ mất! 🔄')) { gameState = { hunger: 100, money: 1000, happiness: 50 }; updateUI(); showNotification('🎮 Đã reset game! Chơi lại nào!', 'success'); speechBubble.textContent = 'Xin chào! Nuôi em đi anh em ơi! 😊'; } } // Event Listeners document.querySelectorAll('.food-item').forEach(button => { button.addEventListener('click', (e) => { const foodName = button.dataset.food; const price = parseInt(button.dataset.price); const hungerRestore = parseInt(button.dataset.hunger); feedCharacter(foodName, price, hungerRestore); }); }); document.getElementById('earnMoney').addEventListener('click', earnMoney); document.getElementById('resetGame').addEventListener('click', resetGame); // Easter Eggs let clickCount = 0; character.addEventListener('click', () => { clickCount++; const easterEggMessages = [ "Đừng bấm vào mặt em! 😤", "Gì zậy trời? 🤨", "Ngứa à? 😏", "Đừng có bấm nữa! 😠", "Anh em thích chọc em à? 🙄", "Đau đó nha! 😭", "Chơi khăm à? 
😤" ]; if (clickCount % 3 === 0) { const randomEgg = easterEggMessages[Math.floor(Math.random() * easterEggMessages.length)]; speechBubble.textContent = randomEgg; } }); // Konami Code Easter Egg (↑↑↓↓←→←→BA) let konamiCode = []; const konamiSequence = ['ArrowUp', 'ArrowUp', 'ArrowDown', 'ArrowDown', 'ArrowLeft', 'ArrowRight', 'ArrowLeft', 'ArrowRight', 'b', 'a']; document.addEventListener('keydown', (e) => { konamiCode.push(e.key); konamiCode = konamiCode.slice(-10); if (konamiCode.join(',') === konamiSequence.join(',')) { gameState.money += 9999; updateUI(); showNotification('🎮 CHEAT CODE ACTIVATED! +9999đ! You are a legend! 🏆', 'success'); speechBubble.textContent = 'Hack à? Được đó, giờ mua hết đồ ăn đi! 😎'; } }); // Random funny messages every 30 seconds setInterval(() => { if (Math.random() > 0.7) { const allMessages = [...messages.hungry, ...messages.satisfied, ...messages.full]; const randomMsg = allMessages[Math.floor(Math.random() * allMessages.length)]; speechBubble.textContent = randomMsg; } }, 30000); // Initialize the game init(); // Welcome message setTimeout(() => { showNotification('🎮 Chào mừng đến với Nuôi Anh Em! Đừng để anh em đói nhé! 😊', 'success'); }, 500);
github_javascript
2025-12-10T02:07:16Z
https://github.com/codetoanbug/nuoianhem/blob/51f1fbfc09a852d8b5d1e28acf28c7862a836940/script.js
{}
// Internationalization (i18n) Module // Manages translations for Spanish (es) and English (en) const translations = { es: { // Header "header.logo": "NmapFormatter", "header.github": "GitHub", "header.documentation": "Documentación", // Mode Switcher "mode.hosts": "Hosts", "mode.ports": "Puertos", "mode.services": "Servicios", "mode.correlation": "Correlación", // Hosts Mode "hosts.title": "Reconocimiento de hosts (Nmap -sn)", "hosts.tooltip.title": "Formato esperado", "hosts.tooltip.description": "Archivo de texto con salida de un escaneo de descubrimiento de hosts (ping scan).", "hosts.tooltip.example": "El archivo debe contener líneas como:", "hosts.upload.title": "Arrastra tu archivo", "hosts.upload.subtitle": "o haz clic para seleccionar", "hosts.upload.hint": "Formato: salida normal de Nmap con escaneo ping (-sn) para descubrimiento de hosts", "hosts.pill.nofile": "Sin archivo", "hosts.kpi.active": "Hosts Activos", "hosts.kpi.scanned": "IPs Escaneadas", "hosts.kpi.duration": "Duración", "hosts.kpi.latency": "Latencia Promedio", "hosts.search.label": "Buscar (IP, Hostname, Vendor)", "hosts.search.placeholder": "Ej. 192.168.0.1, router, Apple...", "hosts.clear": "Limpiar", "hosts.export": "Exportar CSV", "hosts.table.ip": "IP", "hosts.table.hostname": "Hostname", "hosts.table.status": "Estado", "hosts.table.latency": "Latencia", "hosts.table.mac": "MAC", "hosts.table.vendor": "Vendor", "hosts.table.actions": "Acciones", "hosts.table.empty": "Carga un archivo de escaneo de hosts para visualizar los resultados", "hosts.modal.title": "Detalle", "hosts.modal.close": "Cerrar", // Ports Mode "ports.title": "Informe de puertos (Nmap)", "ports.tooltip.title": "Formato esperado", "ports.tooltip.description": "Archivo de texto con salida de un escaneo básico de puertos.", "ports.tooltip.example": "El archivo debe contener una tabla PORT STATE SERVICE con los puertos abiertos detectados.", "ports.tooltip.exampleline": "Ejemplo de línea:", "ports.upload.title": "Sube un .txt de Nmap", "ports.upload.hint": 'Arrastra y suelta aquí, o usa el botón. Formato esperado: salida "normal" de Nmap con bloques "Nmap scan report for …" y tabla "PORT STATE SERVICE …".', "ports.clear": "Limpiar", "ports.pill.nofile": "Sin archivo", "ports.kpi.hosts": "Hosts (con puertos)", "ports.kpi.ports": "Puertos (total)", "ports.kpi.unique": "Puertos únicos", "ports.kpi.top": "Puerto más común", "ports.chart.ports": "Top puertos", "ports.chart.services": "Top servicios", "ports.search.label": "Buscar (IP, MAC, vendor, servicio, puerto)", "ports.search.placeholder": "Ej.: 192.168.8.13 | mysql | 3306 | Intel", "ports.filter.service": "Servicio", "ports.filter.vendor": "Vendor", "ports.filter.minports": "Mín. # puertos", "ports.export": "Exportar CSV", "ports.reset": "Reset", "ports.tip": 'Tip: haz clic en los encabezados para ordenar. 
En "Detalle" puedes ver el listado completo de puertos por host.', "ports.table.ip": "IP", "ports.table.vendor": "Vendor (MAC)", "ports.table.latency": "Latencia", "ports.table.portcount": "# Puertos", "ports.table.ports": "Puertos / servicios", "ports.table.actions": "Acciones", "ports.table.empty": "Carga un archivo para ver resultados.", "ports.modal.title": "Host", "ports.modal.close": "Cerrar", // Services Mode "services.title": "Informe de servicios (Nmap)", "services.tooltip.title": "Formato esperado", "services.tooltip.description": "Archivo de texto con salida de un escaneo de servicios con detección de versiones y scripts NSE.", "services.tooltip.example": "El archivo debe contener información de versión, scripts ejecutados y posibles vulnerabilidades.", "services.tooltip.exampleline": "Ejemplo:", "services.upload.title": "Sube un .txt de Nmap (-sV/-sC/--script vuln, etc.)", "services.upload.hint": 'Arrastra y suelta aquí, o usa el botón. Formato esperado: salida "normal" de Nmap con bloques "Nmap scan report for …" y tabla "PORT STATE SERVICE REASON VERSION". También soporta salidas con scripts (líneas que empiezan por "|").', "services.clear": "Limpiar", "services.pill.nofile": "Sin archivo", "services.kpi.hosts": "Hosts con servicios", "services.kpi.services": "Servicios (instancias)", "services.kpi.findings": "Vulns/Findings (IDs)", "services.kpi.critical": "Críticas (CVSS ≥ 9)", "services.chart.services": "Top servicios", "services.chart.vulns": "Top vulnerabilidades / IDs", "services.search.label": "Buscar (IP, servicio, versión, CVE, script)", "services.search.placeholder": "Ej.: 192.168.8.1 | apache | CVE-2025-38476 | ssl-dh-params", "services.filter.service": "Servicio", "services.filter.mincvss": "Min CVSS", "services.filter.exploitonly": "Solo *EXPLOIT*", "services.filter.exploitonly.no": "No", "services.filter.exploitonly.yes": "Sí", "services.export": "Exportar CSV", "services.reset": "Reset", "services.tip": 'Tip: haz clic en los encabezados para ordenar. 
En "Detalle" se muestran scripts, versión completa y lista de CVEs/IDs detectadas.', "services.table.ip": "IP", "services.table.port": "Puerto", "services.table.service": "Servicio", "services.table.version": "Producto/versión", "services.table.maxcvss": "Max CVSS", "services.table.indicators": "Indicadores", "services.table.actions": "Acciones", "services.table.empty": "Carga un archivo para ver resultados.", "services.modal.title": "Servicio", "services.modal.close": "Cerrar", // Correlation Mode "correlation.title": "Correlación de Datos", "correlation.tooltip.title": "¿Qué es la correlación?", "correlation.tooltip.description": "Combina datos de un escaneo básico de puertos con un escaneo de servicios para obtener información enriquecida.", "correlation.tooltip.example": "Carga ambos archivos para correlacionar la información por IP+Puerto y obtener versiones, scripts y vulnerabilidades asociadas a cada puerto.", "correlation.pill.nodata": "Sin datos", "correlation.ports.title": "Escaneo de Puertos", "correlation.ports.hint": "Formato: salida normal de Nmap con puertos abiertos", "correlation.ports.status": "📄 No cargado", "correlation.services.title": "Escaneo de Servicios", "correlation.services.hint": "Formato: salida con -sV/-sC y detección de versiones", "correlation.services.status": "📄 No cargado", "correlation.button.correlate": "🔗 Correlacionar Datos", "correlation.button.clear": "Limpiar", "correlation.hint": "Los datos se correlacionarán por IP + Puerto + Protocolo", "correlation.kpi.hosts": "Hosts Correlacionados", "correlation.kpi.ports": "Puertos Enriquecidos", "correlation.kpi.services": "Servicios Detectados", "correlation.kpi.vulns": "Vulnerabilidades", "correlation.search.label": "Buscar", "correlation.search.placeholder": "IP, puerto, servicio, vulnerabilidad...", "correlation.filter.service": "Servicio", "correlation.filter.mincvss": "Min CVSS", "correlation.export": "Exportar CSV", "correlation.reset": "Reset", "correlation.table.ip": "IP", "correlation.table.port": "Puerto", "correlation.table.service": "Servicio", "correlation.table.version": "Versión", "correlation.table.vulns": "Vulns", "correlation.table.status": "Estado", "correlation.table.actions": "Acciones", "correlation.table.empty": 'Carga ambos archivos y presiona "Correlacionar Datos"', "correlation.modal.title": "Detalle", "correlation.modal.close": "Cerrar", // Footer "footer.home": "Inicio", "footer.documentation": "Documentación", "footer.github": "GitHub", "footer.contact": "Contacto", "footer.copy": "© 2026 NmapFormatter. 
Herramienta de análisis de seguridad de red.", // Common "common.close": "Cerrar", "common.detail": "Detalle", "common.copy": "Copiar IP", }, en: { // Header "header.logo": "NmapFormatter", "header.github": "GitHub", "header.documentation": "Documentation", // Mode Switcher "mode.hosts": "Hosts", "mode.ports": "Ports", "mode.services": "Services", "mode.correlation": "Correlation", // Hosts Mode "hosts.title": "Host Discovery (Nmap -sn)", "hosts.tooltip.title": "Expected Format", "hosts.tooltip.description": "Text file with output from a host discovery scan (ping scan).", "hosts.tooltip.example": "The file should contain lines like:", "hosts.upload.title": "Drop your file here", "hosts.upload.subtitle": "or click to select", "hosts.upload.hint": "Format: normal Nmap output with ping scan (-sn) for host discovery", "hosts.pill.nofile": "No file", "hosts.kpi.active": "Active Hosts", "hosts.kpi.scanned": "Scanned IPs", "hosts.kpi.duration": "Duration", "hosts.kpi.latency": "Average Latency", "hosts.search.label": "Search (IP, Hostname, Vendor)", "hosts.search.placeholder": "E.g. 192.168.0.1, router, Apple...", "hosts.clear": "Clear", "hosts.export": "Export CSV", "hosts.table.ip": "IP", "hosts.table.hostname": "Hostname", "hosts.table.status": "Status", "hosts.table.latency": "Latency", "hosts.table.mac": "MAC", "hosts.table.vendor": "Vendor", "hosts.table.actions": "Actions", "hosts.table.empty": "Upload a host scan file to view results", "hosts.modal.title": "Details", "hosts.modal.close": "Close", // Ports Mode "ports.title": "Port Scan Report (Nmap)", "ports.tooltip.title": "Expected Format", "ports.tooltip.description": "Text file with output from a basic port scan.", "ports.tooltip.example": "The file should contain a PORT STATE SERVICE table with detected open ports.", "ports.tooltip.exampleline": "Example line:", "ports.upload.title": "Upload a Nmap .txt file", "ports.upload.hint": 'Drag and drop here, or use the button. Expected format: "normal" Nmap output with "Nmap scan report for …" blocks and "PORT STATE SERVICE …" table.', "ports.clear": "Clear", "ports.pill.nofile": "No file", "ports.kpi.hosts": "Hosts (with ports)", "ports.kpi.ports": "Ports (total)", "ports.kpi.unique": "Unique ports", "ports.kpi.top": "Most common port", "ports.chart.ports": "Top ports", "ports.chart.services": "Top services", "ports.search.label": "Search (IP, MAC, vendor, service, port)", "ports.search.placeholder": "E.g.: 192.168.8.13 | mysql | 3306 | Intel", "ports.filter.service": "Service", "ports.filter.vendor": "Vendor", "ports.filter.minports": "Min # ports", "ports.export": "Export CSV", "ports.reset": "Reset", "ports.tip": 'Tip: click on headers to sort. 
In "Details" you can see the complete list of ports per host.', "ports.table.ip": "IP", "ports.table.vendor": "Vendor (MAC)", "ports.table.latency": "Latency", "ports.table.portcount": "# Ports", "ports.table.ports": "Ports / services", "ports.table.actions": "Actions", "ports.table.empty": "Upload a file to view results.", "ports.modal.title": "Host", "ports.modal.close": "Close", // Services Mode "services.title": "Service Scan Report (Nmap)", "services.tooltip.title": "Expected Format", "services.tooltip.description": "Text file with output from a service scan with version detection and NSE scripts.", "services.tooltip.example": "The file should contain version information, executed scripts and possible vulnerabilities.", "services.tooltip.exampleline": "Example:", "services.upload.title": "Upload a Nmap .txt file (-sV/-sC/--script vuln, etc.)", "services.upload.hint": 'Drag and drop here, or use the button. Expected format: "normal" Nmap output with "Nmap scan report for …" blocks and "PORT STATE SERVICE REASON VERSION" table. Also supports outputs with scripts (lines starting with "|").', "services.clear": "Clear", "services.pill.nofile": "No file", "services.kpi.hosts": "Hosts with services", "services.kpi.services": "Services (instances)", "services.kpi.findings": "Vulns/Findings (IDs)", "services.kpi.critical": "Critical (CVSS ≥ 9)", "services.chart.services": "Top services", "services.chart.vulns": "Top vulnerabilities / IDs", "services.search.label": "Search (IP, service, version, CVE, script)", "services.search.placeholder": "E.g.: 192.168.8.1 | apache | CVE-2025-38476 | ssl-dh-params", "services.filter.service": "Service", "services.filter.mincvss": "Min CVSS", "services.filter.exploitonly": "Only *EXPLOIT*", "services.filter.exploitonly.no": "No", "services.filter.exploitonly.yes": "Yes", "services.export": "Export CSV", "services.reset": "Reset", "services.tip": 'Tip: click on headers to sort. 
In "Details" scripts, full version and list of detected CVEs/IDs are shown.', "services.table.ip": "IP", "services.table.port": "Port", "services.table.service": "Service", "services.table.version": "Product/version", "services.table.maxcvss": "Max CVSS", "services.table.indicators": "Indicators", "services.table.actions": "Actions", "services.table.empty": "Upload a file to view results.", "services.modal.title": "Service", "services.modal.close": "Close", // Correlation Mode "correlation.title": "Data Correlation", "correlation.tooltip.title": "What is correlation?", "correlation.tooltip.description": "Combines data from a basic port scan with a service scan to obtain enriched information.", "correlation.tooltip.example": "Upload both files to correlate information by IP+Port and get versions, scripts and vulnerabilities associated with each port.", "correlation.pill.nodata": "No data", "correlation.ports.title": "Port Scan", "correlation.ports.hint": "Format: normal Nmap output with open ports", "correlation.ports.status": "📄 Not loaded", "correlation.services.title": "Service Scan", "correlation.services.hint": "Format: output with -sV/-sC and version detection", "correlation.services.status": "📄 Not loaded", "correlation.button.correlate": "🔗 Correlate Data", "correlation.button.clear": "Clear", "correlation.hint": "Data will be correlated by IP + Port + Protocol", "correlation.kpi.hosts": "Correlated Hosts", "correlation.kpi.ports": "Enriched Ports", "correlation.kpi.services": "Detected Services", "correlation.kpi.vulns": "Vulnerabilities", "correlation.search.label": "Search", "correlation.search.placeholder": "IP, port, service, vulnerability...", "correlation.filter.service": "Service", "correlation.filter.mincvss": "Min CVSS", "correlation.export": "Export CSV", "correlation.reset": "Reset", "correlation.table.ip": "IP", "correlation.table.port": "Port", "correlation.table.service": "Service", "correlation.table.version": "Version", "correlation.table.vulns": "Vulns", "correlation.table.status": "Status", "correlation.table.actions": "Actions", "correlation.table.empty": 'Upload both files and press "Correlate Data"', "correlation.modal.title": "Details", "correlation.modal.close": "Close", // Footer "footer.home": "Home", "footer.documentation": "Documentation", "footer.github": "GitHub", "footer.contact": "Contact", "footer.copy": "© 2026 NmapFormatter. 
Network security analysis tool.", // Common "common.close": "Close", "common.detail": "Details", "common.copy": "Copy IP", }, }; // Default language const DEFAULT_LANG = "es"; // Get current language from URL hash function getCurrentLanguage() { const hash = window.location.hash; if (hash === "#/en" || hash.startsWith("#/en/")) { return "en"; } if (hash === "#/es" || hash.startsWith("#/es/")) { return "es"; } return DEFAULT_LANG; } // Set language and update URL function setLanguage(lang) { if (!translations[lang]) { lang = DEFAULT_LANG; } // Update URL hash window.location.hash = `#/${lang}`; // Update HTML lang attribute document.documentElement.lang = lang; // Apply translations applyTranslations(lang); // Update active state on language switcher updateLanguageSwitcher(lang); } // Get translation for a key function t(key, lang = null) { if (!lang) { lang = getCurrentLanguage(); } return ( translations[lang]?.[key] || translations[DEFAULT_LANG]?.[key] || key ); } // Apply all translations to the page function applyTranslations(lang) { // Translate all elements with data-i18n attribute document.querySelectorAll("[data-i18n]").forEach((element) => { const key = element.getAttribute("data-i18n"); const translation = t(key, lang); // Update text content or placeholder depending on element type if (element.tagName === "INPUT" || element.tagName === "TEXTAREA") { if (element.hasAttribute("placeholder")) { element.placeholder = translation; } } else { element.textContent = translation; } }); // Auto-translate common elements by selector for efficiency // This allows us to translate elements without manually adding data-i18n to every element const autoTranslations = [ // Hosts Mode - using specific selectors { selector: "#hostsContainer .title", key: "hosts.title", asText: true, index: 0, }, { selector: "#hostsContainer .uploadSection .drop strong", key: "hosts.upload.title", asText: true, index: 0, }, { selector: "#hostsContainer #hostsQ", key: "hosts.search.placeholder", asPlaceholder: true, }, { selector: "#hostsContainer #hostsClear", key: "hosts.clear", asText: true, }, { selector: "#hostsContainer #hostsExportCSV", key: "hosts.export", asText: true, }, { selector: "#hostsContainer .kpi .l", key: [ "hosts.kpi.active", "hosts.kpi.scanned", "hosts.kpi.duration", "hosts.kpi.latency", ], asText: true, multiple: true, }, { selector: '#hostsContainer th[data-sort="ip"]', key: "hosts.table.ip", asText: true, }, { selector: '#hostsContainer th[data-sort="hostname"]', key: "hosts.table.hostname", asText: true, }, { selector: '#hostsContainer th[data-sort="status"]', key: "hosts.table.status", asText: true, }, { selector: '#hostsContainer th[data-sort="latency"]', key: "hosts.table.latency", asText: true, }, { selector: '#hostsContainer th[data-sort="mac"]', key: "hosts.table.mac", asText: true, }, { selector: '#hostsContainer th[data-sort="vendor"]', key: "hosts.table.vendor", asText: true, }, { selector: "#hostsContainer #dlg .closeBtn", key: "hosts.modal.close", asText: true, }, // Ports Mode { selector: "#portsContainer .title", key: "ports.title", asText: true, index: 0, }, { selector: "#portsContainer .drop strong", key: "ports.upload.title", asText: true, }, { selector: "#portsContainer #btnDemo", key: "ports.clear", asText: true, }, { selector: "#portsContainer #q", key: "ports.search.placeholder", asPlaceholder: true, }, { selector: '#portsContainer label[for="q"]', key: "ports.search.label", asText: true, }, { selector: '#portsContainer label[for="service"]', key: "ports.filter.service", 
asText: true, }, { selector: '#portsContainer label[for="vendor"]', key: "ports.filter.vendor", asText: true, }, { selector: '#portsContainer label[for="minPorts"]', key: "ports.filter.minports", asText: true, }, { selector: "#portsContainer #btnExport", key: "ports.export", asText: true, }, { selector: "#portsContainer #btnReset", key: "ports.reset", asText: true, }, { selector: "#portsContainer .kpi .l", key: [ "ports.kpi.hosts", "ports.kpi.ports", "ports.kpi.unique", "ports.kpi.top", ], asText: true, multiple: true, }, { selector: "#portsContainer .chartTitle", key: ["ports.chart.ports", "ports.chart.services"], asText: true, multiple: true, }, { selector: '#portsContainer th[data-sort="ip"]', key: "ports.table.ip", asText: true, }, { selector: '#portsContainer th[data-sort="vendor"]', key: "ports.table.vendor", asText: true, }, { selector: '#portsContainer th[data-sort="latency"]', key: "ports.table.latency", asText: true, }, { selector: '#portsContainer th[data-sort="portCount"]', key: "ports.table.portcount", asText: true, }, // Services Mode { selector: "#servicesContainer .title", key: "services.title", asText: true, index: 0, }, { selector: "#servicesContainer .drop strong", key: "services.upload.title", asText: true, }, { selector: "#servicesContainer #btnClear", key: "services.clear", asText: true, }, { selector: "#servicesContainer #q", key: "services.search.placeholder", asPlaceholder: true, }, { selector: '#servicesContainer label[for="q"]', key: "services.search.label", asText: true, }, { selector: '#servicesContainer label[for="service"]', key: "services.filter.service", asText: true, }, { selector: '#servicesContainer label[for="minCvss"]', key: "services.filter.mincvss", asText: true, }, { selector: '#servicesContainer label[for="exploitOnly"]', key: "services.filter.exploitonly", asText: true, }, { selector: "#servicesContainer #btnExport", key: "services.export", asText: true, }, { selector: "#servicesContainer #btnReset", key: "services.reset", asText: true, }, { selector: "#servicesContainer .kpi .l", key: [ "services.kpi.hosts", "services.kpi.services", "services.kpi.findings", "services.kpi.critical", ], asText: true, multiple: true, }, { selector: "#servicesContainer .chartTitle", key: ["services.chart.services", "services.chart.vulns"], asText: true, multiple: true, }, { selector: '#servicesContainer th[data-sort="ip"]', key: "services.table.ip", asText: true, }, { selector: '#servicesContainer th[data-sort="port"]', key: "services.table.port", asText: true, }, { selector: '#servicesContainer th[data-sort="service"]', key: "services.table.service", asText: true, }, { selector: '#servicesContainer th[data-sort="version"]', key: "services.table.version", asText: true, }, { selector: '#servicesContainer th[data-sort="maxCvss"]', key: "services.table.maxcvss", asText: true, }, // Correlation Mode { selector: "#correlationContainer .title", key: "correlation.title", asText: true, index: 0, }, { selector: "#correlationContainer #btnCorrelate", key: "correlation.button.correlate", asText: true, }, { selector: "#correlationContainer #btnClearCorrelation", key: "correlation.button.clear", asText: true, }, { selector: "#correlationContainer #correlationQ", key: "correlation.search.placeholder", asPlaceholder: true, }, { selector: '#correlationContainer label[for="correlationQ"]', key: "correlation.search.label", asText: true, }, { selector: '#correlationContainer label[for="correlationService"]', key: "correlation.filter.service", asText: true, }, { selector: 
'#correlationContainer label[for="correlationMinCvss"]', key: "correlation.filter.mincvss", asText: true, }, { selector: "#correlationContainer #btnExportCorrelation", key: "correlation.export", asText: true, }, { selector: "#correlationContainer #btnResetCorrelationFilters", key: "correlation.reset", asText: true, }, { selector: "#correlationContainer .kpi .l", key: [ "correlation.kpi.hosts", "correlation.kpi.ports", "correlation.kpi.services", "correlation.kpi.vulns", ], asText: true, multiple: true, }, ]; autoTranslations.forEach( ({ selector, key, asText, asPlaceholder, multiple, index }) => { if (multiple && Array.isArray(key)) { // Handle multiple elements with an array of keys const elements = document.querySelectorAll(selector); elements.forEach((el, i) => { if (i < key.length) { const translation = t(key[i], lang); if (asText) el.textContent = translation; if (asPlaceholder) el.placeholder = translation; } }); } else if (typeof index === "number") { // Handle specific index const elements = document.querySelectorAll(selector); if (elements[index]) { const translation = t(key, lang); if (asText) elements[index].textContent = translation; if (asPlaceholder) elements[index].placeholder = translation; } } else { // Handle single element const element = document.querySelector(selector); if (element) { const translation = t(key, lang); if (asText) element.textContent = translation; if (asPlaceholder) element.placeholder = translation; } } } ); // Update HTML lang attribute document.documentElement.lang = lang; } // Update language switcher active state function updateLanguageSwitcher(lang) { document.querySelectorAll(".langBtn").forEach((btn) => { if (btn.dataset.lang === lang) { btn.classList.add("active"); } else { btn.classList.remove("active"); } }); } // Initialize i18n system function initI18n() { // Check if we need to redirect to default language const hash = window.location.hash; if (!hash || hash === "#" || hash === "#/") { window.location.hash = `#/${DEFAULT_LANG}`; } // Apply initial translations const currentLang = getCurrentLanguage(); applyTranslations(currentLang); updateLanguageSwitcher(currentLang); // Listen for hash changes (browser back/forward) window.addEventListener("hashchange", () => { const newLang = getCurrentLanguage(); applyTranslations(newLang); updateLanguageSwitcher(newLang); }); } // Export functions export { initI18n, setLanguage, getCurrentLanguage, t };
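The i18n module above exports initI18n and setLanguage but does not itself bind click handlers to the .langBtn elements that updateLanguageSwitcher toggles. A minimal wiring sketch, assuming buttons carrying data-lang="es" / data-lang="en" exist in the markup; the import path is an assumption:

// Sketch: hook the language switcher buttons up to the i18n module.
// Assumes <button class="langBtn" data-lang="es"> / data-lang="en" in the page;
// "./i18n.js" is a guessed path, adjust to the real module location.
import { initI18n, setLanguage } from "./i18n.js";

document.addEventListener("DOMContentLoaded", () => {
  initI18n(); // applies the language from the URL hash (defaults to "es")

  document.querySelectorAll(".langBtn").forEach((btn) => {
    btn.addEventListener("click", () => setLanguage(btn.dataset.lang));
  });
});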
github_javascript
2025-12-13T19:37:51Z
https://github.com/nuoframework/nmapformatter/blob/f45b3b69cbd5de75935b74a3f9bba693a42c39cd/js/i18n.js
{}
// ------------------------- // INITIAL SETUP // ------------------------- const menu = document.getElementById('menu'); const itemList = document.getElementById('itemList'); const cartContainer = document.getElementById('container'); const cartBtn = document.getElementById('btn1'); // View cart const orderbtn = document.getElementById('btn2'); // Order via WhatsApp let categories = ['clothing', 'footwear', 'accessories', 'fragrances']; let cart = JSON.parse(localStorage.getItem('cart')) || []; // Clear menu first menu.innerHTML = ""; // ------------------------- // CREATE CATEGORY BUTTONS // ------------------------- categories.forEach(category => { const li = document.createElement('li'); li.textContent = category.toUpperCase(); li.addEventListener('click', () => display(category)); menu.appendChild(li); }); // ------------------------- // ITEM DATA // ------------------------- const clothing = [ { image: 'IMG-20251212-WA0073.jpg', price: 13000 }, { image: 'IMG-20251212-WA0074.jpg', price: 13000 }, { image: 'IMG-20251212-WA0075.jpg', price: 14000 }, { image: 'IMG-20251212-WA0076.jpg', price: 13500 } ]; const footwear = [ { image: 'IMG-20251212-WA0082.jpg', price: 51000, size: '42–46' }, { image: 'IMG-20251212-WA0087.jpg', price: 32000, size: '42–46' }, { image: 'IMG-20251212-WA0084.jpg', price: 25000, size: '37–42' }, { image: 'IMG-20251212-WA0083.jpg', price: 18000, size: '38–41' } ]; const accessories = [ { image: 'IMG-20251212-WA0077.jpg', price: 5000 }, { image: 'IMG-20251212-WA0078.jpg', price: 5000 }, { image: 'IMG-20251212-WA0080.jpg', price: 5000 }, { image: 'IMG-20251212-WA0079.jpg', price: 5000 }, { image: 'IMG-20251212-WA0081.jpg', price: 18500 }, { image: 'IMG-20251212-WA0086.jpg', price: 6500 }, { image: 'IMG-20251212-WA0085.jpg', price: 5000 } ]; const fragrances = [ { image: 'https://via.placeholder.com/200?text=Perfume', price: 8000 }, { image: 'https://via.placeholder.com/200?text=Cologne', price: 11000 }, { image: 'https://via.placeholder.com/200?text=Body+Spray', price: 9000 } ]; // ------------------------- // DISPLAY ITEMS // ------------------------- function display(category) { itemList.innerHTML = ''; let items = []; if (category === 'clothing') items = clothing; if (category === 'footwear') items = footwear; if (category === 'accessories') items = accessories; if (category === 'fragrances') items = fragrances; items.forEach(item => { const itemdiv = document.createElement('div'); itemdiv.classList.add('product'); itemdiv.innerHTML = ` <img src="${item.image}" alt="product"> <p>₦${item.price}</p> ${item.size ? 
`<p>Size: ${item.size}</p>` : ''} <button class="addBtn">ADD</button> `; const addBtn = itemdiv.querySelector('.addBtn'); addBtn.addEventListener('click', () => addToCart(item, category)); itemList.appendChild(itemdiv); }); } // ------------------------- // ADD TO CART // ------------------------- function addToCart(item, category) { cart.push({ image: item.image, price: item.price, category: category }); saveItem(); alert("Item added to cart"); } // ------------------------- // SHOW CART // ------------------------- cartBtn.addEventListener('click', displayCart); function displayCart() { cartContainer.innerHTML = ''; if (cart.length === 0) { cartContainer.innerHTML = "<p>Your cart is empty</p>"; return; } cart.forEach((item, index) => { const cartdiv = document.createElement('div'); cartdiv.classList.add('cart-item'); cartdiv.innerHTML = ` <img src="${item.image}"> <span>${item.category}</span> <span>₦${item.price}</span> `; const delBtn = document.createElement('button'); delBtn.textContent = "🗑️"; delBtn.addEventListener('click', () => delItem(index)); cartdiv.appendChild(delBtn); cartContainer.appendChild(cartdiv); }); } // ------------------------- // ORDER VIA WHATSAPP // ------------------------- orderbtn.addEventListener('click', () => { if (cart.length === 0) { alert("Your cart is empty"); return; } let message = "🛍️ *New Order*\n\n"; cart.forEach((item, index) => { message += `${index + 1}. ${item.category.toUpperCase()}\n`; message += `Price: ₦${item.price}\n`; message += `Image: ${location.origin}/${item.image}\n\n`; }); const encodedMessage = encodeURIComponent(message); const whatsappURL = `https://wa.me/2349068366743?text=${encodedMessage}`; window.open(whatsappURL, "_blank"); }); // ------------------------- // DELETE & SAVE // ------------------------- function delItem(index) { cart.splice(index, 1); saveItem(); displayCart(); } function saveItem() { localStorage.setItem('cart', JSON.stringify(cart)); }
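The WhatsApp message built above lists each cart item but never adds an order total. A small sketch of a total, reusing the cart array from the script; cartTotal is a hypothetical helper, not part of the original code:

// Sketch: sum the cart and append a total line to the WhatsApp message.
// cartTotal() is illustrative; cart is the array defined in the script above.
function cartTotal() {
  return cart.reduce((sum, item) => sum + item.price, 0);
}

// Possible use just before encoding the message:
// message += `*Total: ₦${cartTotal().toLocaleString()}*\n`;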
github_javascript
2025-12-13T06:50:42Z
https://github.com/jarviskrost2000-wq/luxeloom-boutique/blob/c3f95c16ff359602b19499e075e8b3ce4f376325/script.js
{}
// server.js const express = require('express'); const mongoose = require('mongoose'); const cors = require('cors'); require('dotenv').config(); const app = express(); const PORT = process.env.PORT || 3001; // Middleware app.use(cors()); app.use(express.json()); // MongoDB Connection const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/community_resources'; mongoose.connect(MONGODB_URI) .then(() => console.log('Connected to MongoDB')) .catch((err) => console.error('MongoDB connection error:', err)); // Resource Schema const resourceSchema = new mongoose.Schema({ organization: { type: String, required: true, trim: true, index: true }, address: { type: String, trim: true }, phone: { type: String, trim: true }, email: { type: String, trim: true, lowercase: true }, website: { type: String, trim: true }, description: { type: String, required: true, index: 'text' }, categories: { type: String, trim: true, index: true }, filename: { type: String, trim: true }, notes: { type: String, trim: true }, dateAdded: { type: Date, default: Date.now }, lastUpdated: { type: Date, default: Date.now } }, { timestamps: true }); // Text index for full-text search resourceSchema.index({ organization: 'text', description: 'text', categories: 'text', address: 'text', notes: 'text' }); const Resource = mongoose.model('Resource', resourceSchema); // Routes // IMPORTANT: Specific routes must come BEFORE parameterized routes like /:id // Health check app.get('/api/health', (req, res) => { res.json({ status: 'ok', timestamp: new Date().toISOString(), database: mongoose.connection.readyState === 1 ? 'connected' : 'disconnected' }); }); // Get statistics app.get('/api/stats', async (req, res) => { try { const totalResources = await Resource.countDocuments(); const categoryCounts = await Resource.aggregate([ { $unwind: { path: '$categories', preserveNullAndEmptyArrays: true } }, { $group: { _id: '$categories', count: { $sum: 1 } } }, { $sort: { count: -1 } } ]); res.json({ totalResources, categoryCounts }); } catch (error) { res.status(500).json({ error: error.message }); } }); // Export all resources (MUST be before /api/resources/:id) app.get('/api/resources/export/all', async (req, res) => { try { const resources = await Resource.find().sort({ dateAdded: -1 }); res.json(resources); } catch (error) { res.status(500).json({ error: error.message }); } }); // Search resources (MUST be before /api/resources/:id) app.get('/api/resources/search', async (req, res) => { try { const { q } = req.query; console.log('Search query received:', q); // Debug log if (!q) { return res.status(400).json({ error: 'Search query is required' }); } // Use text search and also search specific fields const resources = await Resource.find({ $or: [ { organization: { $regex: q, $options: 'i' } }, { categories: { $regex: q, $options: 'i' } }, { description: { $regex: q, $options: 'i' } }, { address: { $regex: q, $options: 'i' } }, { phone: { $regex: q, $options: 'i' } }, { email: { $regex: q, $options: 'i' } }, { notes: { $regex: q, $options: 'i' } } ] }).sort({ dateAdded: -1 }); console.log('Search results:', resources.length); // Debug log res.json(resources); } catch (error) { console.error('Search error:', error); // Debug log res.status(500).json({ error: error.message }); } }); // Bulk import resources (MUST be before /api/resources/:id) app.post('/api/resources/bulk', async (req, res) => { try { const { resources } = req.body; if (!Array.isArray(resources)) { return res.status(400).json({ error: 'Resources must be an 
array' }); } const insertedResources = await Resource.insertMany(resources); res.status(201).json({ message: `${insertedResources.length} resources imported successfully`, resources: insertedResources }); } catch (error) { res.status(400).json({ error: error.message }); } }); // Get all resources app.get('/api/resources', async (req, res) => { try { const resources = await Resource.find().sort({ dateAdded: -1 }); res.json(resources); } catch (error) { res.status(500).json({ error: error.message }); } }); // Get single resource by ID app.get('/api/resources/:id', async (req, res) => { try { const resource = await Resource.findById(req.params.id); if (!resource) { return res.status(404).json({ error: 'Resource not found' }); } res.json(resource); } catch (error) { res.status(500).json({ error: error.message }); } }); // Create new resource app.post('/api/resources', async (req, res) => { try { const resource = new Resource(req.body); await resource.save(); res.status(201).json(resource); } catch (error) { res.status(400).json({ error: error.message }); } }); // Update resource app.put('/api/resources/:id', async (req, res) => { try { const resource = await Resource.findByIdAndUpdate( req.params.id, { ...req.body, lastUpdated: Date.now() }, { new: true, runValidators: true } ); if (!resource) { return res.status(404).json({ error: 'Resource not found' }); } res.json(resource); } catch (error) { res.status(400).json({ error: error.message }); } }); // Delete resource app.delete('/api/resources/:id', async (req, res) => { try { const resource = await Resource.findByIdAndDelete(req.params.id); if (!resource) { return res.status(404).json({ error: 'Resource not found' }); } res.json({ message: 'Resource deleted successfully' }); } catch (error) { res.status(500).json({ error: error.message }); } }); // Start server app.listen(PORT, () => { console.log(`Server running on port ${PORT}`); console.log(`MongoDB URI: ${MONGODB_URI}`); });
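A quick way to exercise the search route defined above from a browser or Node 18+ client, assuming the server is listening locally on port 3001 as configured; the query value "food" is only an example:

// Sketch: call GET /api/resources/search?q=... on the server above.
// Assumes the API is reachable at http://localhost:3001.
async function searchResources(query) {
  const res = await fetch(
    `http://localhost:3001/api/resources/search?q=${encodeURIComponent(query)}`
  );
  if (!res.ok) {
    throw new Error(`Search failed with status ${res.status}`);
  }
  return res.json(); // array of matching resource documents
}

// searchResources('food').then(console.log).catch(console.error);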
github_javascript
2025-12-13T01:08:30Z
https://github.com/KryptykBioz/resource_board/blob/793ca7eb8aae3434c4251aa3c9d2263fe05991a6/backend/server.js
{}
var Vue = (function (exports) { 'use strict'; /** * Make a map and return a function for checking if a key * is in that map. * IMPORTANT: all calls of this function must be prefixed with * \/\*#\_\_PURE\_\_\*\/ * So that rollup can tree-shake them if necessary. */ function makeMap(str, expectsLowerCase) { const map = Object.create(null); const list = str.split(','); for (let i = 0; i < list.length; i++) { map[list[i]] = true; } return expectsLowerCase ? val => !!map[val.toLowerCase()] : val => !!map[val]; } /** * dev only flag -> name mapping */ const PatchFlagNames = { [1 /* TEXT */]: `TEXT`, [2 /* CLASS */]: `CLASS`, [4 /* STYLE */]: `STYLE`, [8 /* PROPS */]: `PROPS`, [16 /* FULL_PROPS */]: `FULL_PROPS`, [32 /* HYDRATE_EVENTS */]: `HYDRATE_EVENTS`, [64 /* STABLE_FRAGMENT */]: `STABLE_FRAGMENT`, [128 /* KEYED_FRAGMENT */]: `KEYED_FRAGMENT`, [256 /* UNKEYED_FRAGMENT */]: `UNKEYED_FRAGMENT`, [512 /* NEED_PATCH */]: `NEED_PATCH`, [1024 /* DYNAMIC_SLOTS */]: `DYNAMIC_SLOTS`, [2048 /* DEV_ROOT_FRAGMENT */]: `DEV_ROOT_FRAGMENT`, [-1 /* HOISTED */]: `HOISTED`, [-2 /* BAIL */]: `BAIL` }; /** * Dev only */ const slotFlagsText = { [1 /* STABLE */]: 'STABLE', [2 /* DYNAMIC */]: 'DYNAMIC', [3 /* FORWARDED */]: 'FORWARDED' }; const GLOBALS_WHITE_LISTED = 'Infinity,undefined,NaN,isFinite,isNaN,parseFloat,parseInt,decodeURI,' + 'decodeURIComponent,encodeURI,encodeURIComponent,Math,Number,Date,Array,' + 'Object,Boolean,String,RegExp,Map,Set,JSON,Intl,BigInt'; const isGloballyWhitelisted = /*#__PURE__*/ makeMap(GLOBALS_WHITE_LISTED); const range = 2; function generateCodeFrame(source, start = 0, end = source.length) { const lines = source.split(/\r?\n/); let count = 0; const res = []; for (let i = 0; i < lines.length; i++) { count += lines[i].length + 1; if (count >= start) { for (let j = i - range; j <= i + range || end > count; j++) { if (j < 0 || j >= lines.length) continue; const line = j + 1; res.push(`${line}${' '.repeat(Math.max(3 - String(line).length, 0))}| ${lines[j]}`); const lineLength = lines[j].length; if (j === i) { // push underline const pad = start - (count - lineLength) + 1; const length = Math.max(1, end > count ? lineLength - pad : end - start); res.push(` | ` + ' '.repeat(pad) + '^'.repeat(length)); } else if (j > i) { if (end > count) { const length = Math.max(Math.min(end - count, lineLength), 1); res.push(` | ` + '^'.repeat(length)); } count += lineLength + 1; } } break; } } return res.join('\n'); } /** * On the client we only need to offer special cases for boolean attributes that * have different names from their corresponding dom properties: * - itemscope -> N/A * - allowfullscreen -> allowFullscreen * - formnovalidate -> formNoValidate * - ismap -> isMap * - nomodule -> noModule * - novalidate -> noValidate * - readonly -> readOnly */ const specialBooleanAttrs = `itemscope,allowfullscreen,formnovalidate,ismap,nomodule,novalidate,readonly`; const isSpecialBooleanAttr = /*#__PURE__*/ makeMap(specialBooleanAttrs); function normalizeStyle(value) { if (isArray(value)) { const res = {}; for (let i = 0; i < value.length; i++) { const item = value[i]; const normalized = normalizeStyle(isString(item) ? 
parseStringStyle(item) : item); if (normalized) { for (const key in normalized) { res[key] = normalized[key]; } } } return res; } else if (isObject(value)) { return value; } } const listDelimiterRE = /;(?![^(]*\))/g; const propertyDelimiterRE = /:(.+)/; function parseStringStyle(cssText) { const ret = {}; cssText.split(listDelimiterRE).forEach(item => { if (item) { const tmp = item.split(propertyDelimiterRE); tmp.length > 1 && (ret[tmp[0].trim()] = tmp[1].trim()); } }); return ret; } function normalizeClass(value) { let res = ''; if (isString(value)) { res = value; } else if (isArray(value)) { for (let i = 0; i < value.length; i++) { const normalized = normalizeClass(value[i]); if (normalized) { res += normalized + ' '; } } } else if (isObject(value)) { for (const name in value) { if (value[name]) { res += name + ' '; } } } return res.trim(); } // These tag configs are shared between compiler-dom and runtime-dom, so they // https://developer.mozilla.org/en-US/docs/Web/HTML/Element const HTML_TAGS = 'html,body,base,head,link,meta,style,title,address,article,aside,footer,' + 'header,h1,h2,h3,h4,h5,h6,hgroup,nav,section,div,dd,dl,dt,figcaption,' + 'figure,picture,hr,img,li,main,ol,p,pre,ul,a,b,abbr,bdi,bdo,br,cite,code,' + 'data,dfn,em,i,kbd,mark,q,rp,rt,rtc,ruby,s,samp,small,span,strong,sub,sup,' + 'time,u,var,wbr,area,audio,map,track,video,embed,object,param,source,' + 'canvas,script,noscript,del,ins,caption,col,colgroup,table,thead,tbody,td,' + 'th,tr,button,datalist,fieldset,form,input,label,legend,meter,optgroup,' + 'option,output,progress,select,textarea,details,dialog,menu,' + 'summary,template,blockquote,iframe,tfoot'; // https://developer.mozilla.org/en-US/docs/Web/SVG/Element const SVG_TAGS = 'svg,animate,animateMotion,animateTransform,circle,clipPath,color-profile,' + 'defs,desc,discard,ellipse,feBlend,feColorMatrix,feComponentTransfer,' + 'feComposite,feConvolveMatrix,feDiffuseLighting,feDisplacementMap,' + 'feDistanceLight,feDropShadow,feFlood,feFuncA,feFuncB,feFuncG,feFuncR,' + 'feGaussianBlur,feImage,feMerge,feMergeNode,feMorphology,feOffset,' + 'fePointLight,feSpecularLighting,feSpotLight,feTile,feTurbulence,filter,' + 'foreignObject,g,hatch,hatchpath,image,line,linearGradient,marker,mask,' + 'mesh,meshgradient,meshpatch,meshrow,metadata,mpath,path,pattern,' + 'polygon,polyline,radialGradient,rect,set,solidcolor,stop,switch,symbol,' + 'text,textPath,title,tspan,unknown,use,view'; const VOID_TAGS = 'area,base,br,col,embed,hr,img,input,link,meta,param,source,track,wbr'; const isHTMLTag = /*#__PURE__*/ makeMap(HTML_TAGS); const isSVGTag = /*#__PURE__*/ makeMap(SVG_TAGS); const isVoidTag = /*#__PURE__*/ makeMap(VOID_TAGS); function looseCompareArrays(a, b) { if (a.length !== b.length) return false; let equal = true; for (let i = 0; equal && i < a.length; i++) { equal = looseEqual(a[i], b[i]); } return equal; } function looseEqual(a, b) { if (a === b) return true; let aValidType = isDate(a); let bValidType = isDate(b); if (aValidType || bValidType) { return aValidType && bValidType ? a.getTime() === b.getTime() : false; } aValidType = isArray(a); bValidType = isArray(b); if (aValidType || bValidType) { return aValidType && bValidType ? 
looseCompareArrays(a, b) : false; } aValidType = isObject(a); bValidType = isObject(b); if (aValidType || bValidType) { /* istanbul ignore if: this if will probably never be called */ if (!aValidType || !bValidType) { return false; } const aKeysCount = Object.keys(a).length; const bKeysCount = Object.keys(b).length; if (aKeysCount !== bKeysCount) { return false; } for (const key in a) { const aHasKey = a.hasOwnProperty(key); const bHasKey = b.hasOwnProperty(key); if ((aHasKey && !bHasKey) || (!aHasKey && bHasKey) || !looseEqual(a[key], b[key])) { return false; } } } return String(a) === String(b); } function looseIndexOf(arr, val) { return arr.findIndex(item => looseEqual(item, val)); } /** * For converting {{ interpolation }} values to displayed strings. * @private */ const toDisplayString = (val) => { return val == null ? '' : isObject(val) ? JSON.stringify(val, replacer, 2) : String(val); }; const replacer = (_key, val) => { if (isMap(val)) { return { [`Map(${val.size})`]: [...val.entries()].reduce((entries, [key, val]) => { entries[`${key} =>`] = val; return entries; }, {}) }; } else if (isSet(val)) { return { [`Set(${val.size})`]: [...val.values()] }; } else if (isObject(val) && !isArray(val) && !isPlainObject(val)) { return String(val); } return val; }; const EMPTY_OBJ = Object.freeze({}) ; const EMPTY_ARR = Object.freeze([]) ; const NOOP = () => { }; /** * Always return false. */ const NO = () => false; const onRE = /^on[^a-z]/; const isOn = (key) => onRE.test(key); const isModelListener = (key) => key.startsWith('onUpdate:'); const extend = Object.assign; const remove = (arr, el) => { const i = arr.indexOf(el); if (i > -1) { arr.splice(i, 1); } }; const hasOwnProperty = Object.prototype.hasOwnProperty; const hasOwn = (val, key) => hasOwnProperty.call(val, key); const isArray = Array.isArray; const isMap = (val) => toTypeString(val) === '[object Map]'; const isSet = (val) => toTypeString(val) === '[object Set]'; const isDate = (val) => val instanceof Date; const isFunction = (val) => typeof val === 'function'; const isString = (val) => typeof val === 'string'; const isSymbol = (val) => typeof val === 'symbol'; const isObject = (val) => val !== null && typeof val === 'object'; const isPromise = (val) => { return isObject(val) && isFunction(val.then) && isFunction(val.catch); }; const objectToString = Object.prototype.toString; const toTypeString = (value) => objectToString.call(value); const toRawType = (value) => { // extract "RawType" from strings like "[object RawType]" return toTypeString(value).slice(8, -1); }; const isPlainObject = (val) => toTypeString(val) === '[object Object]'; const isIntegerKey = (key) => isString(key) && key !== 'NaN' && key[0] !== '-' && '' + parseInt(key, 10) === key; const isReservedProp = /*#__PURE__*/ makeMap( // the leading comma is intentional so empty string "" is also included ',key,ref,' + 'onVnodeBeforeMount,onVnodeMounted,' + 'onVnodeBeforeUpdate,onVnodeUpdated,' + 'onVnodeBeforeUnmount,onVnodeUnmounted'); const cacheStringFunction = (fn) => { const cache = Object.create(null); return ((str) => { const hit = cache[str]; return hit || (cache[str] = fn(str)); }); }; const camelizeRE = /-(\w)/g; /** * @private */ const camelize = cacheStringFunction((str) => { return str.replace(camelizeRE, (_, c) => (c ? 
c.toUpperCase() : '')); }); const hyphenateRE = /\B([A-Z])/g; /** * @private */ const hyphenate = cacheStringFunction((str) => str.replace(hyphenateRE, '-$1').toLowerCase()); /** * @private */ const capitalize = cacheStringFunction((str) => str.charAt(0).toUpperCase() + str.slice(1)); /** * @private */ const toHandlerKey = cacheStringFunction((str) => (str ? `on${capitalize(str)}` : ``)); // compare whether a value has changed, accounting for NaN. const hasChanged = (value, oldValue) => value !== oldValue && (value === value || oldValue === oldValue); const invokeArrayFns = (fns, arg) => { for (let i = 0; i < fns.length; i++) { fns[i](arg); } }; const def = (obj, key, value) => { Object.defineProperty(obj, key, { configurable: true, enumerable: false, value }); }; const toNumber = (val) => { const n = parseFloat(val); return isNaN(n) ? val : n; }; let _globalThis; const getGlobalThis = () => { return (_globalThis || (_globalThis = typeof globalThis !== 'undefined' ? globalThis : typeof self !== 'undefined' ? self : typeof window !== 'undefined' ? window : typeof global !== 'undefined' ? global : {})); }; const targetMap = new WeakMap(); const effectStack = []; let activeEffect; const ITERATE_KEY = Symbol('iterate' ); const MAP_KEY_ITERATE_KEY = Symbol('Map key iterate' ); function isEffect(fn) { return fn && fn._isEffect === true; } function effect(fn, options = EMPTY_OBJ) { if (isEffect(fn)) { fn = fn.raw; } const effect = createReactiveEffect(fn, options); if (!options.lazy) { effect(); } return effect; } function stop(effect) { if (effect.active) { cleanup(effect); if (effect.options.onStop) { effect.options.onStop(); } effect.active = false; } } let uid = 0; function createReactiveEffect(fn, options) { const effect = function reactiveEffect() { if (!effect.active) { return options.scheduler ? undefined : fn(); } if (!effectStack.includes(effect)) { cleanup(effect); try { enableTracking(); effectStack.push(effect); activeEffect = effect; return fn(); } finally { effectStack.pop(); resetTracking(); activeEffect = effectStack[effectStack.length - 1]; } } }; effect.id = uid++; effect.allowRecurse = !!options.allowRecurse; effect._isEffect = true; effect.active = true; effect.raw = fn; effect.deps = []; effect.options = options; return effect; } function cleanup(effect) { const { deps } = effect; if (deps.length) { for (let i = 0; i < deps.length; i++) { deps[i].delete(effect); } deps.length = 0; } } let shouldTrack = true; const trackStack = []; function pauseTracking() { trackStack.push(shouldTrack); shouldTrack = false; } function enableTracking() { trackStack.push(shouldTrack); shouldTrack = true; } function resetTracking() { const last = trackStack.pop(); shouldTrack = last === undefined ? 
true : last; } function track(target, type, key) { if (!shouldTrack || activeEffect === undefined) { return; } let depsMap = targetMap.get(target); if (!depsMap) { targetMap.set(target, (depsMap = new Map())); } let dep = depsMap.get(key); if (!dep) { depsMap.set(key, (dep = new Set())); } if (!dep.has(activeEffect)) { dep.add(activeEffect); activeEffect.deps.push(dep); if (activeEffect.options.onTrack) { activeEffect.options.onTrack({ effect: activeEffect, target, type, key }); } } } function trigger(target, type, key, newValue, oldValue, oldTarget) { const depsMap = targetMap.get(target); if (!depsMap) { // never been tracked return; } const effects = new Set(); const add = (effectsToAdd) => { if (effectsToAdd) { effectsToAdd.forEach(effect => { if (effect !== activeEffect || effect.allowRecurse) { effects.add(effect); } }); } }; if (type === "clear" /* CLEAR */) { // collection being cleared // trigger all effects for target depsMap.forEach(add); } else if (key === 'length' && isArray(target)) { depsMap.forEach((dep, key) => { if (key === 'length' || key >= newValue) { add(dep); } }); } else { // schedule runs for SET | ADD | DELETE if (key !== void 0) { add(depsMap.get(key)); } // also run for iteration key on ADD | DELETE | Map.SET switch (type) { case "add" /* ADD */: if (!isArray(target)) { add(depsMap.get(ITERATE_KEY)); if (isMap(target)) { add(depsMap.get(MAP_KEY_ITERATE_KEY)); } } else if (isIntegerKey(key)) { // new index added to array -> length changes add(depsMap.get('length')); } break; case "delete" /* DELETE */: if (!isArray(target)) { add(depsMap.get(ITERATE_KEY)); if (isMap(target)) { add(depsMap.get(MAP_KEY_ITERATE_KEY)); } } break; case "set" /* SET */: if (isMap(target)) { add(depsMap.get(ITERATE_KEY)); } break; } } const run = (effect) => { if (effect.options.onTrigger) { effect.options.onTrigger({ effect, target, key, type, newValue, oldValue, oldTarget }); } if (effect.options.scheduler) { effect.options.scheduler(effect); } else { effect(); } }; effects.forEach(run); } const isNonTrackableKeys = /*#__PURE__*/ makeMap(`__proto__,__v_isRef,__isVue`); const builtInSymbols = new Set(Object.getOwnPropertyNames(Symbol) .map(key => Symbol[key]) .filter(isSymbol)); const get = /*#__PURE__*/ createGetter(); const shallowGet = /*#__PURE__*/ createGetter(false, true); const readonlyGet = /*#__PURE__*/ createGetter(true); const shallowReadonlyGet = /*#__PURE__*/ createGetter(true, true); const arrayInstrumentations = {}; ['includes', 'indexOf', 'lastIndexOf'].forEach(key => { const method = Array.prototype[key]; arrayInstrumentations[key] = function (...args) { const arr = toRaw(this); for (let i = 0, l = this.length; i < l; i++) { track(arr, "get" /* GET */, i + ''); } // we run the method using the original args first (which may be reactive) const res = method.apply(arr, args); if (res === -1 || res === false) { // if that didn't work, run it again using raw values. 
return method.apply(arr, args.map(toRaw)); } else { return res; } }; }); ['push', 'pop', 'shift', 'unshift', 'splice'].forEach(key => { const method = Array.prototype[key]; arrayInstrumentations[key] = function (...args) { pauseTracking(); const res = method.apply(this, args); resetTracking(); return res; }; }); function createGetter(isReadonly = false, shallow = false) { return function get(target, key, receiver) { if (key === "__v_isReactive" /* IS_REACTIVE */) { return !isReadonly; } else if (key === "__v_isReadonly" /* IS_READONLY */) { return isReadonly; } else if (key === "__v_raw" /* RAW */ && receiver === (isReadonly ? shallow ? shallowReadonlyMap : readonlyMap : shallow ? shallowReactiveMap : reactiveMap).get(target)) { return target; } const targetIsArray = isArray(target); if (!isReadonly && targetIsArray && hasOwn(arrayInstrumentations, key)) { return Reflect.get(arrayInstrumentations, key, receiver); } const res = Reflect.get(target, key, receiver); if (isSymbol(key) ? builtInSymbols.has(key) : isNonTrackableKeys(key)) { return res; } if (!isReadonly) { track(target, "get" /* GET */, key); } if (shallow) { return res; } if (isRef(res)) { // ref unwrapping - does not apply for Array + integer key. const shouldUnwrap = !targetIsArray || !isIntegerKey(key); return shouldUnwrap ? res.value : res; } if (isObject(res)) { // Convert returned value into a proxy as well. we do the isObject check // here to avoid invalid value warning. Also need to lazy access readonly // and reactive here to avoid circular dependency. return isReadonly ? readonly(res) : reactive(res); } return res; }; } const set = /*#__PURE__*/ createSetter(); const shallowSet = /*#__PURE__*/ createSetter(true); function createSetter(shallow = false) { return function set(target, key, value, receiver) { let oldValue = target[key]; if (!shallow) { value = toRaw(value); oldValue = toRaw(oldValue); if (!isArray(target) && isRef(oldValue) && !isRef(value)) { oldValue.value = value; return true; } } const hadKey = isArray(target) && isIntegerKey(key) ? Number(key) < target.length : hasOwn(target, key); const result = Reflect.set(target, key, value, receiver); // don't trigger if target is something up in the prototype chain of original if (target === toRaw(receiver)) { if (!hadKey) { trigger(target, "add" /* ADD */, key, value); } else if (hasChanged(value, oldValue)) { trigger(target, "set" /* SET */, key, value, oldValue); } } return result; }; } function deleteProperty(target, key) { const hadKey = hasOwn(target, key); const oldValue = target[key]; const result = Reflect.deleteProperty(target, key); if (result && hadKey) { trigger(target, "delete" /* DELETE */, key, undefined, oldValue); } return result; } function has(target, key) { const result = Reflect.has(target, key); if (!isSymbol(key) || !builtInSymbols.has(key)) { track(target, "has" /* HAS */, key); } return result; } function ownKeys(target) { track(target, "iterate" /* ITERATE */, isArray(target) ? 
'length' : ITERATE_KEY); return Reflect.ownKeys(target); } const mutableHandlers = { get, set, deleteProperty, has, ownKeys }; const readonlyHandlers = { get: readonlyGet, set(target, key) { { console.warn(`Set operation on key "${String(key)}" failed: target is readonly.`, target); } return true; }, deleteProperty(target, key) { { console.warn(`Delete operation on key "${String(key)}" failed: target is readonly.`, target); } return true; } }; const shallowReactiveHandlers = extend({}, mutableHandlers, { get: shallowGet, set: shallowSet }); // Props handlers are special in the sense that it should not unwrap top-level // refs (in order to allow refs to be explicitly passed down), but should // retain the reactivity of the normal readonly object. const shallowReadonlyHandlers = extend({}, readonlyHandlers, { get: shallowReadonlyGet }); const toReactive = (value) => isObject(value) ? reactive(value) : value; const toReadonly = (value) => isObject(value) ? readonly(value) : value; const toShallow = (value) => value; const getProto = (v) => Reflect.getPrototypeOf(v); function get$1(target, key, isReadonly = false, isShallow = false) { // #1772: readonly(reactive(Map)) should return readonly + reactive version // of the value target = target["__v_raw" /* RAW */]; const rawTarget = toRaw(target); const rawKey = toRaw(key); if (key !== rawKey) { !isReadonly && track(rawTarget, "get" /* GET */, key); } !isReadonly && track(rawTarget, "get" /* GET */, rawKey); const { has } = getProto(rawTarget); const wrap = isShallow ? toShallow : isReadonly ? toReadonly : toReactive; if (has.call(rawTarget, key)) { return wrap(target.get(key)); } else if (has.call(rawTarget, rawKey)) { return wrap(target.get(rawKey)); } } function has$1(key, isReadonly = false) { const target = this["__v_raw" /* RAW */]; const rawTarget = toRaw(target); const rawKey = toRaw(key); if (key !== rawKey) { !isReadonly && track(rawTarget, "has" /* HAS */, key); } !isReadonly && track(rawTarget, "has" /* HAS */, rawKey); return key === rawKey ? target.has(key) : target.has(key) || target.has(rawKey); } function size(target, isReadonly = false) { target = target["__v_raw" /* RAW */]; !isReadonly && track(toRaw(target), "iterate" /* ITERATE */, ITERATE_KEY); return Reflect.get(target, 'size', target); } function add(value) { value = toRaw(value); const target = toRaw(this); const proto = getProto(target); const hadKey = proto.has.call(target, value); if (!hadKey) { target.add(value); trigger(target, "add" /* ADD */, value, value); } return this; } function set$1(key, value) { value = toRaw(value); const target = toRaw(this); const { has, get } = getProto(target); let hadKey = has.call(target, key); if (!hadKey) { key = toRaw(key); hadKey = has.call(target, key); } else { checkIdentityKeys(target, has, key); } const oldValue = get.call(target, key); target.set(key, value); if (!hadKey) { trigger(target, "add" /* ADD */, key, value); } else if (hasChanged(value, oldValue)) { trigger(target, "set" /* SET */, key, value, oldValue); } return this; } function deleteEntry(key) { const target = toRaw(this); const { has, get } = getProto(target); let hadKey = has.call(target, key); if (!hadKey) { key = toRaw(key); hadKey = has.call(target, key); } else { checkIdentityKeys(target, has, key); } const oldValue = get ? 
get.call(target, key) : undefined; // forward the operation before queueing reactions const result = target.delete(key); if (hadKey) { trigger(target, "delete" /* DELETE */, key, undefined, oldValue); } return result; } function clear() { const target = toRaw(this); const hadItems = target.size !== 0; const oldTarget = isMap(target) ? new Map(target) : new Set(target) ; // forward the operation before queueing reactions const result = target.clear(); if (hadItems) { trigger(target, "clear" /* CLEAR */, undefined, undefined, oldTarget); } return result; } function createForEach(isReadonly, isShallow) { return function forEach(callback, thisArg) { const observed = this; const target = observed["__v_raw" /* RAW */]; const rawTarget = toRaw(target); const wrap = isShallow ? toShallow : isReadonly ? toReadonly : toReactive; !isReadonly && track(rawTarget, "iterate" /* ITERATE */, ITERATE_KEY); return target.forEach((value, key) => { // important: make sure the callback is // 1. invoked with the reactive map as `this` and 3rd arg // 2. the value received should be a corresponding reactive/readonly. return callback.call(thisArg, wrap(value), wrap(key), observed); }); }; } function createIterableMethod(method, isReadonly, isShallow) { return function (...args) { const target = this["__v_raw" /* RAW */]; const rawTarget = toRaw(target); const targetIsMap = isMap(rawTarget); const isPair = method === 'entries' || (method === Symbol.iterator && targetIsMap); const isKeyOnly = method === 'keys' && targetIsMap; const innerIterator = target[method](...args); const wrap = isShallow ? toShallow : isReadonly ? toReadonly : toReactive; !isReadonly && track(rawTarget, "iterate" /* ITERATE */, isKeyOnly ? MAP_KEY_ITERATE_KEY : ITERATE_KEY); // return a wrapped iterator which returns observed versions of the // values emitted from the real iterator return { // iterator protocol next() { const { value, done } = innerIterator.next(); return done ? { value, done } : { value: isPair ? [wrap(value[0]), wrap(value[1])] : wrap(value), done }; }, // iterable protocol [Symbol.iterator]() { return this; } }; }; } function createReadonlyMethod(type) { return function (...args) { { const key = args[0] ? `on key "${args[0]}" ` : ``; console.warn(`${capitalize(type)} operation ${key}failed: target is readonly.`, toRaw(this)); } return type === "delete" /* DELETE */ ? 
false : this; }; } const mutableInstrumentations = { get(key) { return get$1(this, key); }, get size() { return size(this); }, has: has$1, add, set: set$1, delete: deleteEntry, clear, forEach: createForEach(false, false) }; const shallowInstrumentations = { get(key) { return get$1(this, key, false, true); }, get size() { return size(this); }, has: has$1, add, set: set$1, delete: deleteEntry, clear, forEach: createForEach(false, true) }; const readonlyInstrumentations = { get(key) { return get$1(this, key, true); }, get size() { return size(this, true); }, has(key) { return has$1.call(this, key, true); }, add: createReadonlyMethod("add" /* ADD */), set: createReadonlyMethod("set" /* SET */), delete: createReadonlyMethod("delete" /* DELETE */), clear: createReadonlyMethod("clear" /* CLEAR */), forEach: createForEach(true, false) }; const shallowReadonlyInstrumentations = { get(key) { return get$1(this, key, true, true); }, get size() { return size(this, true); }, has(key) { return has$1.call(this, key, true); }, add: createReadonlyMethod("add" /* ADD */), set: createReadonlyMethod("set" /* SET */), delete: createReadonlyMethod("delete" /* DELETE */), clear: createReadonlyMethod("clear" /* CLEAR */), forEach: createForEach(true, true) }; const iteratorMethods = ['keys', 'values', 'entries', Symbol.iterator]; iteratorMethods.forEach(method => { mutableInstrumentations[method] = createIterableMethod(method, false, false); readonlyInstrumentations[method] = createIterableMethod(method, true, false); shallowInstrumentations[method] = createIterableMethod(method, false, true); shallowReadonlyInstrumentations[method] = createIterableMethod(method, true, true); }); function createInstrumentationGetter(isReadonly, shallow) { const instrumentations = shallow ? isReadonly ? shallowReadonlyInstrumentations : shallowInstrumentations : isReadonly ? readonlyInstrumentations : mutableInstrumentations; return (target, key, receiver) => { if (key === "__v_isReactive" /* IS_REACTIVE */) { return !isReadonly; } else if (key === "__v_isReadonly" /* IS_READONLY */) { return isReadonly; } else if (key === "__v_raw" /* RAW */) { return target; } return Reflect.get(hasOwn(instrumentations, key) && key in target ? instrumentations : target, key, receiver); }; } const mutableCollectionHandlers = { get: createInstrumentationGetter(false, false) }; const shallowCollectionHandlers = { get: createInstrumentationGetter(false, true) }; const readonlyCollectionHandlers = { get: createInstrumentationGetter(true, false) }; const shallowReadonlyCollectionHandlers = { get: createInstrumentationGetter(true, true) }; function checkIdentityKeys(target, has, key) { const rawKey = toRaw(key); if (rawKey !== key && has.call(target, rawKey)) { const type = toRawType(target); console.warn(`Reactive ${type} contains both the raw and reactive ` + `versions of the same object${type === `Map` ? ` as keys` : ``}, ` + `which can lead to inconsistencies. 
` + `Avoid differentiating between the raw and reactive versions ` + `of an object and only use the reactive version if possible.`); } } const reactiveMap = new WeakMap(); const shallowReactiveMap = new WeakMap(); const readonlyMap = new WeakMap(); const shallowReadonlyMap = new WeakMap(); function targetTypeMap(rawType) { switch (rawType) { case 'Object': case 'Array': return 1 /* COMMON */; case 'Map': case 'Set': case 'WeakMap': case 'WeakSet': return 2 /* COLLECTION */; default: return 0 /* INVALID */; } } function getTargetType(value) { return value["__v_skip" /* SKIP */] || !Object.isExtensible(value) ? 0 /* INVALID */ : targetTypeMap(toRawType(value)); } function reactive(target) { // if trying to observe a readonly proxy, return the readonly version. if (target && target["__v_isReadonly" /* IS_READONLY */]) { return target; } return createReactiveObject(target, false, mutableHandlers, mutableCollectionHandlers, reactiveMap); } /** * Return a shallowly-reactive copy of the original object, where only the root * level properties are reactive. It also does not auto-unwrap refs (even at the * root level). */ function shallowReactive(target) { return createReactiveObject(target, false, shallowReactiveHandlers, shallowCollectionHandlers, shallowReactiveMap); } /** * Creates a readonly copy of the original object. Note the returned copy is not * made reactive, but `readonly` can be called on an already reactive object. */ function readonly(target) { return createReactiveObject(target, true, readonlyHandlers, readonlyCollectionHandlers, readonlyMap); } /** * Returns a reactive-copy of the original object, where only the root level * properties are readonly, and does NOT unwrap refs nor recursively convert * returned properties. * This is used for creating the props proxy object for stateful components. */ function shallowReadonly(target) { return createReactiveObject(target, true, shallowReadonlyHandlers, shallowReadonlyCollectionHandlers, shallowReadonlyMap); } function createReactiveObject(target, isReadonly, baseHandlers, collectionHandlers, proxyMap) { if (!isObject(target)) { { console.warn(`value cannot be made reactive: ${String(target)}`); } return target; } // target is already a Proxy, return it. // exception: calling readonly() on a reactive object if (target["__v_raw" /* RAW */] && !(isReadonly && target["__v_isReactive" /* IS_REACTIVE */])) { return target; } // target already has corresponding Proxy const existingProxy = proxyMap.get(target); if (existingProxy) { return existingProxy; } // only a whitelist of value types can be observed. const targetType = getTargetType(target); if (targetType === 0 /* INVALID */) { return target; } const proxy = new Proxy(target, targetType === 2 /* COLLECTION */ ? collectionHandlers : baseHandlers); proxyMap.set(target, proxy); return proxy; } function isReactive(value) { if (isReadonly(value)) { return isReactive(value["__v_raw" /* RAW */]); } return !!(value && value["__v_isReactive" /* IS_REACTIVE */]); } function isReadonly(value) { return !!(value && value["__v_isReadonly" /* IS_READONLY */]); } function isProxy(value) { return isReactive(value) || isReadonly(value); } function toRaw(observed) { return ((observed && toRaw(observed["__v_raw" /* RAW */])) || observed); } function markRaw(value) { def(value, "__v_skip" /* SKIP */, true); return value; } const convert = (val) => isObject(val) ? 
reactive(val) : val; function isRef(r) { return Boolean(r && r.__v_isRef === true); } function ref(value) { return createRef(value); } function shallowRef(value) { return createRef(value, true); } class RefImpl { constructor(_rawValue, _shallow = false) { this._rawValue = _rawValue; this._shallow = _shallow; this.__v_isRef = true; this._value = _shallow ? _rawValue : convert(_rawValue); } get value() { track(toRaw(this), "get" /* GET */, 'value'); return this._value; } set value(newVal) { if (hasChanged(toRaw(newVal), this._rawValue)) { this._rawValue = newVal; this._value = this._shallow ? newVal : convert(newVal); trigger(toRaw(this), "set" /* SET */, 'value', newVal); } } } function createRef(rawValue, shallow = false) { if (isRef(rawValue)) { return rawValue; } return new RefImpl(rawValue, shallow); } function triggerRef(ref) { trigger(toRaw(ref), "set" /* SET */, 'value', ref.value ); } function unref(ref) { return isRef(ref) ? ref.value : ref; } const shallowUnwrapHandlers = { get: (target, key, receiver) => unref(Reflect.get(target, key, receiver)), set: (target, key, value, receiver) => { const oldValue = target[key]; if (isRef(oldValue) && !isRef(value)) { oldValue.value = value; return true; } else { return Reflect.set(target, key, value, receiver); } } }; function proxyRefs(objectWithRefs) { return isReactive(objectWithRefs) ? objectWithRefs : new Proxy(objectWithRefs, shallowUnwrapHandlers); } class CustomRefImpl { constructor(factory) { this.__v_isRef = true; const { get, set } = factory(() => track(this, "get" /* GET */, 'value'), () => trigger(this, "set" /* SET */, 'value')); this._get = get; this._set = set; } get value() { return this._get(); } set value(newVal) { this._set(newVal); } } function customRef(factory) { return new CustomRefImpl(factory); } function toRefs(object) { if (!isProxy(object)) { console.warn(`toRefs() expects a reactive object but received a plain one.`); } const ret = isArray(object) ? new Array(object.length) : {}; for (const key in object) { ret[key] = toRef(object, key); } return ret; } class ObjectRefImpl { constructor(_object, _key) { this._object = _object; this._key = _key; this.__v_isRef = true; } get value() { return this._object[this._key]; } set value(newVal) { this._object[this._key] = newVal; } } function toRef(object, key) { return isRef(object[key]) ? object[key] : new ObjectRefImpl(object, key); } class ComputedRefImpl { constructor(getter, _setter, isReadonly) { this._setter = _setter; this._dirty = true; this.__v_isRef = true; this.effect = effect(getter, { lazy: true, scheduler: () => { if (!this._dirty) { this._dirty = true; trigger(toRaw(this), "set" /* SET */, 'value'); } } }); this["__v_isReadonly" /* IS_READONLY */] = isReadonly; } get value() { // the computed ref may get wrapped by other proxies e.g. 
readonly() #3376 const self = toRaw(this); if (self._dirty) { self._value = this.effect(); self._dirty = false; } track(self, "get" /* GET */, 'value'); return self._value; } set value(newValue) { this._setter(newValue); } } function computed(getterOrOptions) { let getter; let setter; if (isFunction(getterOrOptions)) { getter = getterOrOptions; setter = () => { console.warn('Write operation failed: computed value is readonly'); } ; } else { getter = getterOrOptions.get; setter = getterOrOptions.set; } return new ComputedRefImpl(getter, setter, isFunction(getterOrOptions) || !getterOrOptions.set); } const stack = []; function pushWarningContext(vnode) { stack.push(vnode); } function popWarningContext() { stack.pop(); } function warn(msg, ...args) { // avoid props formatting or warn handler tracking deps that might be mutated // during patch, leading to infinite recursion. pauseTracking(); const instance = stack.length ? stack[stack.length - 1].component : null; const appWarnHandler = instance && instance.appContext.config.warnHandler; const trace = getComponentTrace(); if (appWarnHandler) { callWithErrorHandling(appWarnHandler, instance, 11 /* APP_WARN_HANDLER */, [ msg + args.join(''), instance && instance.proxy, trace .map(({ vnode }) => `at <${formatComponentName(instance, vnode.type)}>`) .join('\n'), trace ]); } else { const warnArgs = [`[Vue warn]: ${msg}`, ...args]; /* istanbul ignore if */ if (trace.length && // avoid spamming console during tests !false) { warnArgs.push(`\n`, ...formatTrace(trace)); } console.warn(...warnArgs); } resetTracking(); } function getComponentTrace() { let currentVNode = stack[stack.length - 1]; if (!currentVNode) { return []; } // we can't just use the stack because it will be incomplete during updates // that did not start from the root. Re-construct the parent chain using // instance parent pointers. const normalizedStack = []; while (currentVNode) { const last = normalizedStack[0]; if (last && last.vnode === currentVNode) { last.recurseCount++; } else { normalizedStack.push({ vnode: currentVNode, recurseCount: 0 }); } const parentInstance = currentVNode.component && currentVNode.component.parent; currentVNode = parentInstance && parentInstance.vnode; } return normalizedStack; } /* istanbul ignore next */ function formatTrace(trace) { const logs = []; trace.forEach((entry, i) => { logs.push(...(i === 0 ? [] : [`\n`]), ...formatTraceEntry(entry)); }); return logs; } function formatTraceEntry({ vnode, recurseCount }) { const postfix = recurseCount > 0 ? `... (${recurseCount} recursive calls)` : ``; const isRoot = vnode.component ? vnode.component.parent == null : false; const open = ` at <${formatComponentName(vnode.component, vnode.type, isRoot)}`; const close = `>` + postfix; return vnode.props ? [open, ...formatProps(vnode.props), close] : [open + close]; } /* istanbul ignore next */ function formatProps(props) { const res = []; const keys = Object.keys(props); keys.slice(0, 3).forEach(key => { res.push(...formatProp(key, props[key])); }); if (keys.length > 3) { res.push(` ...`); } return res; } /* istanbul ignore next */ function formatProp(key, value, raw) { if (isString(value)) { value = JSON.stringify(value); return raw ? value : [`${key}=${value}`]; } else if (typeof value === 'number' || typeof value === 'boolean' || value == null) { return raw ? value : [`${key}=${value}`]; } else if (isRef(value)) { value = formatProp(key, toRaw(value.value), true); return raw ? 
value : [`${key}=Ref<`, value, `>`]; } else if (isFunction(value)) { return [`${key}=fn${value.name ? `<${value.name}>` : ``}`]; } else { value = toRaw(value); return raw ? value : [`${key}=`, value]; } } const ErrorTypeStrings = { ["bc" /* BEFORE_CREATE */]: 'beforeCreate hook', ["c" /* CREATED */]: 'created hook', ["bm" /* BEFORE_MOUNT */]: 'beforeMount hook', ["m" /* MOUNTED */]: 'mounted hook', ["bu" /* BEFORE_UPDATE */]: 'beforeUpdate hook', ["u" /* UPDATED */]: 'updated', ["bum" /* BEFORE_UNMOUNT */]: 'beforeUnmount hook', ["um" /* UNMOUNTED */]: 'unmounted hook', ["a" /* ACTIVATED */]: 'activated hook', ["da" /* DEACTIVATED */]: 'deactivated hook', ["ec" /* ERROR_CAPTURED */]: 'errorCaptured hook', ["rtc" /* RENDER_TRACKED */]: 'renderTracked hook', ["rtg" /* RENDER_TRIGGERED */]: 'renderTriggered hook', [0 /* SETUP_FUNCTION */]: 'setup function', [1 /* RENDER_FUNCTION */]: 'render function', [2 /* WATCH_GETTER */]: 'watcher getter', [3 /* WATCH_CALLBACK */]: 'watcher callback', [4 /* WATCH_CLEANUP */]: 'watcher cleanup function', [5 /* NATIVE_EVENT_HANDLER */]: 'native event handler', [6 /* COMPONENT_EVENT_HANDLER */]: 'component event handler', [7 /* VNODE_HOOK */]: 'vnode hook', [8 /* DIRECTIVE_HOOK */]: 'directive hook', [9 /* TRANSITION_HOOK */]: 'transition hook', [10 /* APP_ERROR_HANDLER */]: 'app errorHandler', [11 /* APP_WARN_HANDLER */]: 'app warnHandler', [12 /* FUNCTION_REF */]: 'ref function', [13 /* ASYNC_COMPONENT_LOADER */]: 'async component loader', [14 /* SCHEDULER */]: 'scheduler flush. This is likely a Vue internals bug. ' + 'Please open an issue at https://new-issue.vuejs.org/?repo=vuejs/vue-next' }; function callWithErrorHandling(fn, instance, type, args) { let res; try { res = args ? fn(...args) : fn(); } catch (err) { handleError(err, instance, type); } return res; } function callWithAsyncErrorHandling(fn, instance, type, args) { if (isFunction(fn)) { const res = callWithErrorHandling(fn, instance, type, args); if (res && isPromise(res)) { res.catch(err => { handleError(err, instance, type); }); } return res; } const values = []; for (let i = 0; i < fn.length; i++) { values.push(callWithAsyncErrorHandling(fn[i], instance, type, args)); } return values; } function handleError(err, instance, type, throwInDev = true) { const contextVNode = instance ? instance.vnode : null; if (instance) { let cur = instance.parent; // the exposed instance is the render proxy to keep it consistent with 2.x const exposedInstance = instance.proxy; // in production the hook receives only the error code const errorInfo = ErrorTypeStrings[type] ; while (cur) { const errorCapturedHooks = cur.ec; if (errorCapturedHooks) { for (let i = 0; i < errorCapturedHooks.length; i++) { if (errorCapturedHooks[i](err, exposedInstance, errorInfo) === false) { return; } } } cur = cur.parent; } // app-level handling const appErrorHandler = instance.appContext.config.errorHandler; if (appErrorHandler) { callWithErrorHandling(appErrorHandler, null, 10 /* APP_ERROR_HANDLER */, [err, exposedInstance, errorInfo]); return; } } logError(err, type, contextVNode, throwInDev); } function logError(err, type, contextVNode, throwInDev = true) { { const info = ErrorTypeStrings[type]; if (contextVNode) { pushWarningContext(contextVNode); } warn(`Unhandled error${info ? 
` during execution of ${info}` : ``}`); if (contextVNode) { popWarningContext(); } // crash in dev by default so it's more noticeable if (throwInDev) { throw err; } else { console.error(err); } } } let isFlushing = false; let isFlushPending = false; const queue = []; let flushIndex = 0; const pendingPreFlushCbs = []; let activePreFlushCbs = null; let preFlushIndex = 0; const pendingPostFlushCbs = []; let activePostFlushCbs = null; let postFlushIndex = 0; const resolvedPromise = Promise.resolve(); let currentFlushPromise = null; let currentPreFlushParentJob = null; const RECURSION_LIMIT = 100; function nextTick(fn) { const p = currentFlushPromise || resolvedPromise; return fn ? p.then(this ? fn.bind(this) : fn) : p; } // #2768 // Use binary-search to find a suitable position in the queue, // so that the queue maintains the increasing order of job's id, // which can prevent the job from being skipped and also can avoid repeated patching. function findInsertionIndex(job) { // the start index should be `flushIndex + 1` let start = flushIndex + 1; let end = queue.length; const jobId = getId(job); while (start < end) { const middle = (start + end) >>> 1; const middleJobId = getId(queue[middle]); middleJobId < jobId ? (start = middle + 1) : (end = middle); } return start; } function queueJob(job) { // the dedupe search uses the startIndex argument of Array.includes() // by default the search index includes the current job that is being run // so it cannot recursively trigger itself again. // if the job is a watch() callback, the search will start with a +1 index to // allow it recursively trigger itself - it is the user's responsibility to // ensure it doesn't end up in an infinite loop. if ((!queue.length || !queue.includes(job, isFlushing && job.allowRecurse ? flushIndex + 1 : flushIndex)) && job !== currentPreFlushParentJob) { const pos = findInsertionIndex(job); if (pos > -1) { queue.splice(pos, 0, job); } else { queue.push(job); } queueFlush(); } } function queueFlush() { if (!isFlushing && !isFlushPending) { isFlushPending = true; currentFlushPromise = resolvedPromise.then(flushJobs); } } function invalidateJob(job) { const i = queue.indexOf(job); if (i > flushIndex) { queue.splice(i, 1); } } function queueCb(cb, activeQueue, pendingQueue, index) { if (!isArray(cb)) { if (!activeQueue || !activeQueue.includes(cb, cb.allowRecurse ? 
index + 1 : index)) { pendingQueue.push(cb); } } else { // if cb is an array, it is a component lifecycle hook which can only be // triggered by a job, which is already deduped in the main queue, so // we can skip duplicate check here to improve perf pendingQueue.push(...cb); } queueFlush(); } function queuePreFlushCb(cb) { queueCb(cb, activePreFlushCbs, pendingPreFlushCbs, preFlushIndex); } function queuePostFlushCb(cb) { queueCb(cb, activePostFlushCbs, pendingPostFlushCbs, postFlushIndex); } function flushPreFlushCbs(seen, parentJob = null) { if (pendingPreFlushCbs.length) { currentPreFlushParentJob = parentJob; activePreFlushCbs = [...new Set(pendingPreFlushCbs)]; pendingPreFlushCbs.length = 0; { seen = seen || new Map(); } for (preFlushIndex = 0; preFlushIndex < activePreFlushCbs.length; preFlushIndex++) { { checkRecursiveUpdates(seen, activePreFlushCbs[preFlushIndex]); } activePreFlushCbs[preFlushIndex](); } activePreFlushCbs = null; preFlushIndex = 0; currentPreFlushParentJob = null; // recursively flush until it drains flushPreFlushCbs(seen, parentJob); } } function flushPostFlushCbs(seen) { if (pendingPostFlushCbs.length) { const deduped = [...new Set(pendingPostFlushCbs)]; pendingPostFlushCbs.length = 0; // #1947 already has active queue, nested flushPostFlushCbs call if (activePostFlushCbs) { activePostFlushCbs.push(...deduped); return; } activePostFlushCbs = deduped; { seen = seen || new Map(); } activePostFlushCbs.sort((a, b) => getId(a) - getId(b)); for (postFlushIndex = 0; postFlushIndex < activePostFlushCbs.length; postFlushIndex++) { { checkRecursiveUpdates(seen, activePostFlushCbs[postFlushIndex]); } activePostFlushCbs[postFlushIndex](); } activePostFlushCbs = null; postFlushIndex = 0; } } const getId = (job) => job.id == null ? Infinity : job.id; function flushJobs(seen) { isFlushPending = false; isFlushing = true; { seen = seen || new Map(); } flushPreFlushCbs(seen); // Sort queue before flush. // This ensures that: // 1. Components are updated from parent to child. (because parent is always // created before the child so its render effect will have smaller // priority number) // 2. If a component is unmounted during a parent component's update, // its update can be skipped. queue.sort((a, b) => getId(a) - getId(b)); try { for (flushIndex = 0; flushIndex < queue.length; flushIndex++) { const job = queue[flushIndex]; if (job) { if (true) { checkRecursiveUpdates(seen, job); } callWithErrorHandling(job, null, 14 /* SCHEDULER */); } } } finally { flushIndex = 0; queue.length = 0; flushPostFlushCbs(seen); isFlushing = false; currentFlushPromise = null; // some postFlushCb queued jobs! // keep flushing until it drains. if (queue.length || pendingPostFlushCbs.length) { flushJobs(seen); } } } function checkRecursiveUpdates(seen, fn) { if (!seen.has(fn)) { seen.set(fn, 1); } else { const count = seen.get(fn); if (count > RECURSION_LIMIT) { throw new Error(`Maximum recursive updates exceeded. ` + `This means you have a reactive effect that is mutating its own ` + `dependencies and thus recursively triggering itself. 
Possible sources ` + `include component template, render function, updated hook or ` + `watcher source function.`); } else { seen.set(fn, count + 1); } } } /* eslint-disable no-restricted-globals */ let isHmrUpdating = false; const hmrDirtyComponents = new Set(); // Expose the HMR runtime on the global object // This makes it entirely tree-shakable without polluting the exports and makes // it easier to be used in toolings like vue-loader // Note: for a component to be eligible for HMR it also needs the __hmrId option // to be set so that its instances can be registered / removed. { const globalObject = typeof global !== 'undefined' ? global : typeof self !== 'undefined' ? self : typeof window !== 'undefined' ? window : {}; globalObject.__VUE_HMR_RUNTIME__ = { createRecord: tryWrap(createRecord), rerender: tryWrap(rerender), reload: tryWrap(reload) }; } const map = new Map(); function registerHMR(instance) { const id = instance.type.__hmrId; let record = map.get(id); if (!record) { createRecord(id, instance.type); record = map.get(id); } record.instances.add(instance); } function unregisterHMR(instance) { map.get(instance.type.__hmrId).instances.delete(instance); } function createRecord(id, component) { if (!component) { warn(`HMR API usage is out of date.\n` + `Please upgrade vue-loader/vite/rollup-plugin-vue or other relevant ` + `dependency that handles Vue SFC compilation.`); component = {}; } if (map.has(id)) { return false; } map.set(id, { component: isClassComponent(component) ? component.__vccOpts : component, instances: new Set() }); return true; } function rerender(id, newRender) { const record = map.get(id); if (!record) return; if (newRender) record.component.render = newRender; // Array.from creates a snapshot which avoids the set being mutated during // updates Array.from(record.instances).forEach(instance => { if (newRender) { instance.render = newRender; } instance.renderCache = []; // this flag forces child components with slot content to update isHmrUpdating = true; instance.update(); isHmrUpdating = false; }); } function reload(id, newComp) { const record = map.get(id); if (!record) return; // Array.from creates a snapshot which avoids the set being mutated during // updates const { component, instances } = record; if (!hmrDirtyComponents.has(component)) { // 1. Update existing comp definition to match new one newComp = isClassComponent(newComp) ? newComp.__vccOpts : newComp; extend(component, newComp); for (const key in component) { if (!(key in newComp)) { delete component[key]; } } // 2. Mark component dirty. This forces the renderer to replace the component // on patch. hmrDirtyComponents.add(component); // 3. Make sure to unmark the component after the reload. queuePostFlushCb(() => { hmrDirtyComponents.delete(component); }); } Array.from(instances).forEach(instance => { if (instance.parent) { // 4. Force the parent instance to re-render. This will cause all updated // components to be unmounted and re-mounted. Queue the update so that we // don't end up forcing the same parent to re-render multiple times. queueJob(instance.parent.update); } else if (instance.appContext.reload) { // root instance mounted via createApp() has a reload method instance.appContext.reload(); } else if (typeof window !== 'undefined') { // root instance inside tree created via raw render(). Force reload. window.location.reload(); } else { console.warn('[HMR] Root or manually mounted instance modified. 
Full reload required.'); } }); } function tryWrap(fn) { return (id, arg) => { try { return fn(id, arg); } catch (e) { console.error(e); console.warn(`[HMR] Something went wrong during Vue component hot-reload. ` + `Full reload required.`); } }; } function setDevtoolsHook(hook) { exports.devtools = hook; } function devtoolsInitApp(app, version) { // TODO queue if devtools is undefined if (!exports.devtools) return; exports.devtools.emit("app:init" /* APP_INIT */, app, version, { Fragment, Text, Comment, Static }); } function devtoolsUnmountApp(app) { if (!exports.devtools) return; exports.devtools.emit("app:unmount" /* APP_UNMOUNT */, app); } const devtoolsComponentAdded = /*#__PURE__*/ createDevtoolsComponentHook("component:added" /* COMPONENT_ADDED */); const devtoolsComponentUpdated = /*#__PURE__*/ createDevtoolsComponentHook("component:updated" /* COMPONENT_UPDATED */); const devtoolsComponentRemoved = /*#__PURE__*/ createDevtoolsComponentHook("component:removed" /* COMPONENT_REMOVED */); function createDevtoolsComponentHook(hook) { return (component) => { if (!exports.devtools) return; exports.devtools.emit(hook, component.appContext.app, component.uid, component.parent ? component.parent.uid : undefined, component); }; } function devtoolsComponentEmit(component, event, params) { if (!exports.devtools) return; exports.devtools.emit("component:emit" /* COMPONENT_EMIT */, component.appContext.app, component, event, params); } function emit(instance, event, ...rawArgs) { const props = instance.vnode.props || EMPTY_OBJ; { const { emitsOptions, propsOptions: [propsOptions] } = instance; if (emitsOptions) { if (!(event in emitsOptions)) { if (!propsOptions || !(toHandlerKey(event) in propsOptions)) { warn(`Component emitted event "${event}" but it is neither declared in ` + `the emits option nor as an "${toHandlerKey(event)}" prop.`); } } else { const validator = emitsOptions[event]; if (isFunction(validator)) { const isValid = validator(...rawArgs); if (!isValid) { warn(`Invalid event arguments: event validation failed for event "${event}".`); } } } } } let args = rawArgs; const isModelListener = event.startsWith('update:'); // for v-model update:xxx events, apply modifiers on args const modelArg = isModelListener && event.slice(7); if (modelArg && modelArg in props) { const modifiersKey = `${modelArg === 'modelValue' ? 'model' : modelArg}Modifiers`; const { number, trim } = props[modifiersKey] || EMPTY_OBJ; if (trim) { args = rawArgs.map(a => a.trim()); } else if (number) { args = rawArgs.map(toNumber); } } { devtoolsComponentEmit(instance, event, args); } { const lowerCaseEvent = event.toLowerCase(); if (lowerCaseEvent !== event && props[toHandlerKey(lowerCaseEvent)]) { warn(`Event "${lowerCaseEvent}" is emitted in component ` + `${formatComponentName(instance, instance.type)} but the handler is registered for "${event}". ` + `Note that HTML attributes are case-insensitive and you cannot use ` + `v-on to listen to camelCase events when using in-DOM templates. 
` + `You should probably use "${hyphenate(event)}" instead of "${event}".`); } } let handlerName; let handler = props[(handlerName = toHandlerKey(event))] || // also try camelCase event handler (#2249) props[(handlerName = toHandlerKey(camelize(event)))]; // for v-model update:xxx events, also trigger kebab-case equivalent // for props passed via kebab-case if (!handler && isModelListener) { handler = props[(handlerName = toHandlerKey(hyphenate(event)))]; } if (handler) { callWithAsyncErrorHandling(handler, instance, 6 /* COMPONENT_EVENT_HANDLER */, args); } const onceHandler = props[handlerName + `Once`]; if (onceHandler) { if (!instance.emitted) { (instance.emitted = {})[handlerName] = true; } else if (instance.emitted[handlerName]) { return; } callWithAsyncErrorHandling(onceHandler, instance, 6 /* COMPONENT_EVENT_HANDLER */, args); } } function normalizeEmitsOptions(comp, appContext, asMixin = false) { if (!appContext.deopt && comp.__emits !== undefined) { return comp.__emits; } const raw = comp.emits; let normalized = {}; // apply mixin/extends props let hasExtends = false; if (!isFunction(comp)) { const extendEmits = (raw) => { const normalizedFromExtend = normalizeEmitsOptions(raw, appContext, true); if (normalizedFromExtend) { hasExtends = true; extend(normalized, normalizedFromExtend); } }; if (!asMixin && appContext.mixins.length) { appContext.mixins.forEach(extendEmits); } if (comp.extends) { extendEmits(comp.extends); } if (comp.mixins) { comp.mixins.forEach(extendEmits); } } if (!raw && !hasExtends) { return (comp.__emits = null); } if (isArray(raw)) { raw.forEach(key => (normalized[key] = null)); } else { extend(normalized, raw); } return (comp.__emits = normalized); } // Check if an incoming prop key is a declared emit event listener. // e.g. With `emits: { click: null }`, props named `onClick` and `onclick` are // both considered matched listeners. function isEmitListener(options, key) { if (!options || !isOn(key)) { return false; } key = key.slice(2).replace(/Once$/, ''); return (hasOwn(options, key[0].toLowerCase() + key.slice(1)) || hasOwn(options, hyphenate(key)) || hasOwn(options, key)); } let isRenderingCompiledSlot = 0; const setCompiledSlotRendering = (n) => (isRenderingCompiledSlot += n); /** * Compiler runtime helper for rendering `<slot/>` * @private */ function renderSlot(slots, name, props = {}, // this is not a user-facing function, so the fallback is always generated by // the compiler and guaranteed to be a function returning an array fallback, noSlotted) { let slot = slots[name]; if (slot && slot.length > 1) { warn(`SSR-optimized slot function detected in a non-SSR-optimized render ` + `function. You need to mark this component with $dynamic-slots in the ` + `parent template.`); slot = () => []; } // a compiled slot disables block tracking by default to avoid manual // invocation interfering with template-based block tracking, but in // `renderSlot` we can be sure that it's template-based so we can force // enable it. isRenderingCompiledSlot++; openBlock(); const validSlotContent = slot && ensureValidVNode(slot(props)); const rendered = createBlock(Fragment, { key: props.key || `_${name}` }, validSlotContent || (fallback ? fallback() : []), validSlotContent && slots._ === 1 /* STABLE */ ? 
64 /* STABLE_FRAGMENT */ : -2 /* BAIL */); if (!noSlotted && rendered.scopeId) { rendered.slotScopeIds = [rendered.scopeId + '-s']; } isRenderingCompiledSlot--; return rendered; } function ensureValidVNode(vnodes) { return vnodes.some(child => { if (!isVNode(child)) return true; if (child.type === Comment) return false; if (child.type === Fragment && !ensureValidVNode(child.children)) return false; return true; }) ? vnodes : null; } /** * mark the current rendering instance for asset resolution (e.g. * resolveComponent, resolveDirective) during render */ let currentRenderingInstance = null; let currentScopeId = null; /** * Note: rendering calls maybe nested. The function returns the parent rendering * instance if present, which should be restored after the render is done: * * ```js * const prev = setCurrentRenderingInstance(i) * // ...render * setCurrentRenderingInstance(prev) * ``` */ function setCurrentRenderingInstance(instance) { const prev = currentRenderingInstance; currentRenderingInstance = instance; currentScopeId = (instance && instance.type.__scopeId) || null; return prev; } /** * Set scope id when creating hoisted vnodes. * @private compiler helper */ function pushScopeId(id) { currentScopeId = id; } /** * Technically we no longer need this after 3.0.8 but we need to keep the same * API for backwards compat w/ code generated by compilers. * @private */ function popScopeId() { currentScopeId = null; } /** * Only for backwards compat * @private */ const withScopeId = (_id) => withCtx; /** * Wrap a slot function to memoize current rendering instance * @private compiler helper */ function withCtx(fn, ctx = currentRenderingInstance) { if (!ctx) return fn; const renderFnWithContext = (...args) => { // If a user calls a compiled slot inside a template expression (#1745), it // can mess up block tracking, so by default we need to push a null block to // avoid that. This isn't necessary if rendering a compiled `<slot>`. if (!isRenderingCompiledSlot) { openBlock(true /* null block that disables tracking */); } const prevInstance = setCurrentRenderingInstance(ctx); const res = fn(...args); setCurrentRenderingInstance(prevInstance); if (!isRenderingCompiledSlot) { closeBlock(); } return res; }; // mark this as a compiled slot function. // this is used in vnode.ts -> normalizeChildren() to set the slot // rendering flag. renderFnWithContext._c = true; return renderFnWithContext; } /** * dev only flag to track whether $attrs was used during render. * If $attrs was used during render then the warning for failed attrs * fallthrough can be suppressed. */ let accessedAttrs = false; function markAttrsAccessed() { accessedAttrs = true; } function renderComponentRoot(instance) { const { type: Component, vnode, proxy, withProxy, props, propsOptions: [propsOptions], slots, attrs, emit, render, renderCache, data, setupState, ctx } = instance; let result; const prev = setCurrentRenderingInstance(instance); { accessedAttrs = false; } try { let fallthroughAttrs; if (vnode.shapeFlag & 4 /* STATEFUL_COMPONENT */) { // withProxy is a proxy with a different `has` trap only for // runtime-compiled render functions using `with` block. 
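// --- Illustrative aside: a minimal sketch of the userland `emits` declaration
// that the emit()/normalizeEmitsOptions logic above validates. The component
// and event names here are hypothetical, not identifiers from this file.
function exampleDeclareEmits() {
    const { h } = require('vue');
    return {
        name: 'ExampleCounterButton', // hypothetical component
        props: { count: { type: Number, default: 0 } },
        emits: {
            // object syntax: the value is a runtime validator; emit() warns
            // when it returns false or when an undeclared event is emitted
            'update:count': (value) => typeof value === 'number'
        },
        setup(props, { emit }) {
            return () => h('button', {
                onClick: () => emit('update:count', props.count + 1)
            }, String(props.count));
        }
    };
}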
const proxyToUse = withProxy || proxy; result = normalizeVNode(render.call(proxyToUse, proxyToUse, renderCache, props, setupState, data, ctx)); fallthroughAttrs = attrs; } else { // functional const render = Component; // in dev, mark attrs accessed if optional props (attrs === props) if (true && attrs === props) { markAttrsAccessed(); } result = normalizeVNode(render.length > 1 ? render(props, true ? { get attrs() { markAttrsAccessed(); return attrs; }, slots, emit } : { attrs, slots, emit }) : render(props, null /* we know it doesn't need it */)); fallthroughAttrs = Component.props ? attrs : getFunctionalFallthrough(attrs); } // attr merging // in dev mode, comments are preserved, and it's possible for a template // to have comments along side the root element which makes it a fragment let root = result; let setRoot = undefined; if (true && result.patchFlag > 0 && result.patchFlag & 2048 /* DEV_ROOT_FRAGMENT */) { ; [root, setRoot] = getChildRoot(result); } if (Component.inheritAttrs !== false && fallthroughAttrs) { const keys = Object.keys(fallthroughAttrs); const { shapeFlag } = root; if (keys.length) { if (shapeFlag & 1 /* ELEMENT */ || shapeFlag & 6 /* COMPONENT */) { if (propsOptions && keys.some(isModelListener)) { // If a v-model listener (onUpdate:xxx) has a corresponding declared // prop, it indicates this component expects to handle v-model and // it should not fallthrough. // related: #1543, #1643, #1989 fallthroughAttrs = filterModelListeners(fallthroughAttrs, propsOptions); } root = cloneVNode(root, fallthroughAttrs); } else if (true && !accessedAttrs && root.type !== Comment) { const allAttrs = Object.keys(attrs); const eventAttrs = []; const extraAttrs = []; for (let i = 0, l = allAttrs.length; i < l; i++) { const key = allAttrs[i]; if (isOn(key)) { // ignore v-model handlers when they fail to fallthrough if (!isModelListener(key)) { // remove `on`, lowercase first letter to reflect event casing // accurately eventAttrs.push(key[2].toLowerCase() + key.slice(3)); } } else { extraAttrs.push(key); } } if (extraAttrs.length) { warn(`Extraneous non-props attributes (` + `${extraAttrs.join(', ')}) ` + `were passed to component but could not be automatically inherited ` + `because component renders fragment or text root nodes.`); } if (eventAttrs.length) { warn(`Extraneous non-emits event listeners (` + `${eventAttrs.join(', ')}) ` + `were passed to component but could not be automatically inherited ` + `because component renders fragment or text root nodes. ` + `If the listener is intended to be a component custom event listener only, ` + `declare it using the "emits" option.`); } } } } // inherit directives if (vnode.dirs) { if (true && !isElementRoot(root)) { warn(`Runtime directive used on component with non-element root node. ` + `The directives will not function as intended.`); } root.dirs = root.dirs ? 
root.dirs.concat(vnode.dirs) : vnode.dirs; } // inherit transition data if (vnode.transition) { if (true && !isElementRoot(root)) { warn(`Component inside <Transition> renders non-element root node ` + `that cannot be animated.`); } root.transition = vnode.transition; } if (true && setRoot) { setRoot(root); } else { result = root; } } catch (err) { blockStack.length = 0; handleError(err, instance, 1 /* RENDER_FUNCTION */); result = createVNode(Comment); } setCurrentRenderingInstance(prev); return result; } /** * dev only * In dev mode, template root level comments are rendered, which turns the * template into a fragment root, but we need to locate the single element * root for attrs and scope id processing. */ const getChildRoot = (vnode) => { const rawChildren = vnode.children; const dynamicChildren = vnode.dynamicChildren; const childRoot = filterSingleRoot(rawChildren); if (!childRoot) { return [vnode, undefined]; } const index = rawChildren.indexOf(childRoot); const dynamicIndex = dynamicChildren ? dynamicChildren.indexOf(childRoot) : -1; const setRoot = (updatedRoot) => { rawChildren[index] = updatedRoot; if (dynamicChildren) { if (dynamicIndex > -1) { dynamicChildren[dynamicIndex] = updatedRoot; } else if (updatedRoot.patchFlag > 0) { vnode.dynamicChildren = [...dynamicChildren, updatedRoot]; } } }; return [normalizeVNode(childRoot), setRoot]; }; function filterSingleRoot(children) { let singleRoot; for (let i = 0; i < children.length; i++) { const child = children[i]; if (isVNode(child)) { // ignore user comment if (child.type !== Comment || child.children === 'v-if') { if (singleRoot) { // has more than 1 non-comment child, return now return; } else { singleRoot = child; } } } else { return; } } return singleRoot; } const getFunctionalFallthrough = (attrs) => { let res; for (const key in attrs) { if (key === 'class' || key === 'style' || isOn(key)) { (res || (res = {}))[key] = attrs[key]; } } return res; }; const filterModelListeners = (attrs, props) => { const res = {}; for (const key in attrs) { if (!isModelListener(key) || !(key.slice(9) in props)) { res[key] = attrs[key]; } } return res; }; const isElementRoot = (vnode) => { return (vnode.shapeFlag & 6 /* COMPONENT */ || vnode.shapeFlag & 1 /* ELEMENT */ || vnode.type === Comment // potential v-if branch switch ); }; function shouldUpdateComponent(prevVNode, nextVNode, optimized) { const { props: prevProps, children: prevChildren, component } = prevVNode; const { props: nextProps, children: nextChildren, patchFlag } = nextVNode; const emits = component.emitsOptions; // Parent component's render function was hot-updated. Since this may have // caused the child component's slots content to have changed, we need to // force the child to update as well. if ((prevChildren || nextChildren) && isHmrUpdating) { return true; } // force child update for runtime directive or transition on component vnode. if (nextVNode.dirs || nextVNode.transition) { return true; } if (optimized && patchFlag >= 0) { if (patchFlag & 1024 /* DYNAMIC_SLOTS */) { // slot content that references values that might have changed, // e.g. 
in a v-for return true; } if (patchFlag & 16 /* FULL_PROPS */) { if (!prevProps) { return !!nextProps; } // presence of this flag indicates props are always non-null return hasPropsChanged(prevProps, nextProps, emits); } else if (patchFlag & 8 /* PROPS */) { const dynamicProps = nextVNode.dynamicProps; for (let i = 0; i < dynamicProps.length; i++) { const key = dynamicProps[i]; if (nextProps[key] !== prevProps[key] && !isEmitListener(emits, key)) { return true; } } } } else { // this path is only taken by manually written render functions // so presence of any children leads to a forced update if (prevChildren || nextChildren) { if (!nextChildren || !nextChildren.$stable) { return true; } } if (prevProps === nextProps) { return false; } if (!prevProps) { return !!nextProps; } if (!nextProps) { return true; } return hasPropsChanged(prevProps, nextProps, emits); } return false; } function hasPropsChanged(prevProps, nextProps, emitsOptions) { const nextKeys = Object.keys(nextProps); if (nextKeys.length !== Object.keys(prevProps).length) { return true; } for (let i = 0; i < nextKeys.length; i++) { const key = nextKeys[i]; if (nextProps[key] !== prevProps[key] && !isEmitListener(emitsOptions, key)) { return true; } } return false; } function updateHOCHostEl({ vnode, parent }, el // HostNode ) { while (parent && parent.subTree === vnode) { (vnode = parent.vnode).el = el; parent = parent.parent; } } const isSuspense = (type) => type.__isSuspense; // Suspense exposes a component-like API, and is treated like a component // in the compiler, but internally it's a special built-in type that hooks // directly into the renderer. const SuspenseImpl = { name: 'Suspense', // In order to make Suspense tree-shakable, we need to avoid importing it // directly in the renderer. The renderer checks for the __isSuspense flag // on a vnode's type and calls the `process` method, passing in renderer // internals. __isSuspense: true, process(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, // platform-specific impl passed from renderer rendererInternals) { if (n1 == null) { mountSuspense(n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, rendererInternals); } else { patchSuspense(n1, n2, container, anchor, parentComponent, isSVG, slotScopeIds, optimized, rendererInternals); } }, hydrate: hydrateSuspense, create: createSuspenseBoundary }; // Force-casted public typing for h and TSX props inference const Suspense = (SuspenseImpl ); function mountSuspense(vnode, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, rendererInternals) { const { p: patch, o: { createElement } } = rendererInternals; const hiddenContainer = createElement('div'); const suspense = (vnode.suspense = createSuspenseBoundary(vnode, parentSuspense, parentComponent, container, hiddenContainer, anchor, isSVG, slotScopeIds, optimized, rendererInternals)); // start mounting the content subtree in an off-dom container patch(null, (suspense.pendingBranch = vnode.ssContent), hiddenContainer, null, parentComponent, suspense, isSVG, slotScopeIds); // now check if we have encountered any async deps if (suspense.deps > 0) { // has async // mount the fallback tree patch(null, vnode.ssFallback, container, anchor, parentComponent, null, // fallback tree will not have suspense context isSVG, slotScopeIds); setActiveBranch(suspense, vnode.ssFallback); } else { // Suspense has no async deps. Just resolve. 
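// --- Illustrative aside: a minimal sketch of the kind of tree mountSuspense
// handles, i.e. a <Suspense> whose default slot contains an async setup()
// component and whose fallback slot is shown until the async deps resolve.
// The component names are hypothetical.
function exampleSuspenseUsage() {
    const { h, Suspense } = require('vue');
    const AsyncProfile = {
        // an async setup() registers this component as an async dep on the
        // nearest suspense boundary (see registerDep further below)
        async setup() {
            const user = await Promise.resolve({ name: 'demo' });
            return () => h('p', user.name);
        }
    };
    return () => h(Suspense, null, {
        default: () => h(AsyncProfile),          // pending branch (ssContent)
        fallback: () => h('p', 'Loading...')     // fallback branch (ssFallback)
    });
}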
suspense.resolve(); } } function patchSuspense(n1, n2, container, anchor, parentComponent, isSVG, slotScopeIds, optimized, { p: patch, um: unmount, o: { createElement } }) { const suspense = (n2.suspense = n1.suspense); suspense.vnode = n2; n2.el = n1.el; const newBranch = n2.ssContent; const newFallback = n2.ssFallback; const { activeBranch, pendingBranch, isInFallback, isHydrating } = suspense; if (pendingBranch) { suspense.pendingBranch = newBranch; if (isSameVNodeType(newBranch, pendingBranch)) { // same root type but content may have changed. patch(pendingBranch, newBranch, suspense.hiddenContainer, null, parentComponent, suspense, isSVG, slotScopeIds, optimized); if (suspense.deps <= 0) { suspense.resolve(); } else if (isInFallback) { patch(activeBranch, newFallback, container, anchor, parentComponent, null, // fallback tree will not have suspense context isSVG, slotScopeIds, optimized); setActiveBranch(suspense, newFallback); } } else { // toggled before pending tree is resolved suspense.pendingId++; if (isHydrating) { // if toggled before hydration is finished, the current DOM tree is // no longer valid. set it as the active branch so it will be unmounted // when resolved suspense.isHydrating = false; suspense.activeBranch = pendingBranch; } else { unmount(pendingBranch, parentComponent, suspense); } // increment pending ID. this is used to invalidate async callbacks // reset suspense state suspense.deps = 0; // discard effects from pending branch suspense.effects.length = 0; // discard previous container suspense.hiddenContainer = createElement('div'); if (isInFallback) { // already in fallback state patch(null, newBranch, suspense.hiddenContainer, null, parentComponent, suspense, isSVG, slotScopeIds, optimized); if (suspense.deps <= 0) { suspense.resolve(); } else { patch(activeBranch, newFallback, container, anchor, parentComponent, null, // fallback tree will not have suspense context isSVG, slotScopeIds, optimized); setActiveBranch(suspense, newFallback); } } else if (activeBranch && isSameVNodeType(newBranch, activeBranch)) { // toggled "back" to current active branch patch(activeBranch, newBranch, container, anchor, parentComponent, suspense, isSVG, slotScopeIds, optimized); // force resolve suspense.resolve(true); } else { // switched to a 3rd branch patch(null, newBranch, suspense.hiddenContainer, null, parentComponent, suspense, isSVG, slotScopeIds, optimized); if (suspense.deps <= 0) { suspense.resolve(); } } } } else { if (activeBranch && isSameVNodeType(newBranch, activeBranch)) { // root did not change, just normal patch patch(activeBranch, newBranch, container, anchor, parentComponent, suspense, isSVG, slotScopeIds, optimized); setActiveBranch(suspense, newBranch); } else { // root node toggled // invoke @pending event const onPending = n2.props && n2.props.onPending; if (isFunction(onPending)) { onPending(); } // mount pending branch in off-dom container suspense.pendingBranch = newBranch; suspense.pendingId++; patch(null, newBranch, suspense.hiddenContainer, null, parentComponent, suspense, isSVG, slotScopeIds, optimized); if (suspense.deps <= 0) { // incoming branch has no async deps, resolve now. 
suspense.resolve(); } else { const { timeout, pendingId } = suspense; if (timeout > 0) { setTimeout(() => { if (suspense.pendingId === pendingId) { suspense.fallback(newFallback); } }, timeout); } else if (timeout === 0) { suspense.fallback(newFallback); } } } } } let hasWarned = false; function createSuspenseBoundary(vnode, parent, parentComponent, container, hiddenContainer, anchor, isSVG, slotScopeIds, optimized, rendererInternals, isHydrating = false) { /* istanbul ignore if */ if (!hasWarned) { hasWarned = true; // @ts-ignore `console.info` cannot be null error console[console.info ? 'info' : 'log'](`<Suspense> is an experimental feature and its API will likely change.`); } const { p: patch, m: move, um: unmount, n: next, o: { parentNode, remove } } = rendererInternals; const timeout = toNumber(vnode.props && vnode.props.timeout); const suspense = { vnode, parent, parentComponent, isSVG, container, hiddenContainer, anchor, deps: 0, pendingId: 0, timeout: typeof timeout === 'number' ? timeout : -1, activeBranch: null, pendingBranch: null, isInFallback: true, isHydrating, isUnmounted: false, effects: [], resolve(resume = false) { { if (!resume && !suspense.pendingBranch) { throw new Error(`suspense.resolve() is called without a pending branch.`); } if (suspense.isUnmounted) { throw new Error(`suspense.resolve() is called on an already unmounted suspense boundary.`); } } const { vnode, activeBranch, pendingBranch, pendingId, effects, parentComponent, container } = suspense; if (suspense.isHydrating) { suspense.isHydrating = false; } else if (!resume) { const delayEnter = activeBranch && pendingBranch.transition && pendingBranch.transition.mode === 'out-in'; if (delayEnter) { activeBranch.transition.afterLeave = () => { if (pendingId === suspense.pendingId) { move(pendingBranch, container, anchor, 0 /* ENTER */); } }; } // this is initial anchor on mount let { anchor } = suspense; // unmount current active tree if (activeBranch) { // if the fallback tree was mounted, it may have been moved // as part of a parent suspense. 
get the latest anchor for insertion anchor = next(activeBranch); unmount(activeBranch, parentComponent, suspense, true); } if (!delayEnter) { // move content from off-dom container to actual container move(pendingBranch, container, anchor, 0 /* ENTER */); } } setActiveBranch(suspense, pendingBranch); suspense.pendingBranch = null; suspense.isInFallback = false; // flush buffered effects // check if there is a pending parent suspense let parent = suspense.parent; let hasUnresolvedAncestor = false; while (parent) { if (parent.pendingBranch) { // found a pending parent suspense, merge buffered post jobs // into that parent parent.effects.push(...effects); hasUnresolvedAncestor = true; break; } parent = parent.parent; } // no pending parent suspense, flush all jobs if (!hasUnresolvedAncestor) { queuePostFlushCb(effects); } suspense.effects = []; // invoke @resolve event const onResolve = vnode.props && vnode.props.onResolve; if (isFunction(onResolve)) { onResolve(); } }, fallback(fallbackVNode) { if (!suspense.pendingBranch) { return; } const { vnode, activeBranch, parentComponent, container, isSVG } = suspense; // invoke @fallback event const onFallback = vnode.props && vnode.props.onFallback; if (isFunction(onFallback)) { onFallback(); } const anchor = next(activeBranch); const mountFallback = () => { if (!suspense.isInFallback) { return; } // mount the fallback tree patch(null, fallbackVNode, container, anchor, parentComponent, null, // fallback tree will not have suspense context isSVG, slotScopeIds, optimized); setActiveBranch(suspense, fallbackVNode); }; const delayEnter = fallbackVNode.transition && fallbackVNode.transition.mode === 'out-in'; if (delayEnter) { activeBranch.transition.afterLeave = mountFallback; } // unmount current active branch unmount(activeBranch, parentComponent, null, // no suspense so unmount hooks fire now true // shouldRemove ); suspense.isInFallback = true; if (!delayEnter) { mountFallback(); } }, move(container, anchor, type) { suspense.activeBranch && move(suspense.activeBranch, container, anchor, type); suspense.container = container; }, next() { return suspense.activeBranch && next(suspense.activeBranch); }, registerDep(instance, setupRenderEffect) { const isInPendingSuspense = !!suspense.pendingBranch; if (isInPendingSuspense) { suspense.deps++; } const hydratedEl = instance.vnode.el; instance .asyncDep.catch(err => { handleError(err, instance, 0 /* SETUP_FUNCTION */); }) .then(asyncSetupResult => { // retry when the setup() promise resolves. // component may have been unmounted before resolve. if (instance.isUnmounted || suspense.isUnmounted || suspense.pendingId !== instance.suspenseId) { return; } // retry from this component instance.asyncResolved = true; const { vnode } = instance; { pushWarningContext(vnode); } handleSetupResult(instance, asyncSetupResult, false); if (hydratedEl) { // vnode may have been replaced if an update happened before the // async dep is resolved. vnode.el = hydratedEl; } const placeholder = !hydratedEl && instance.subTree.el; setupRenderEffect(instance, vnode, // component may have been moved before resolve. // if this is not a hydration, instance.subTree will be the comment // placeholder. parentNode(hydratedEl || instance.subTree.el), // anchor will not be used if this is hydration, so only need to // consider the comment placeholder case. hydratedEl ? 
null : next(instance.subTree), suspense, isSVG, optimized); if (placeholder) { remove(placeholder); } updateHOCHostEl(instance, vnode.el); { popWarningContext(); } // only decrease deps count if suspense is not already resolved if (isInPendingSuspense && --suspense.deps === 0) { suspense.resolve(); } }); }, unmount(parentSuspense, doRemove) { suspense.isUnmounted = true; if (suspense.activeBranch) { unmount(suspense.activeBranch, parentComponent, parentSuspense, doRemove); } if (suspense.pendingBranch) { unmount(suspense.pendingBranch, parentComponent, parentSuspense, doRemove); } } }; return suspense; } function hydrateSuspense(node, vnode, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, rendererInternals, hydrateNode) { /* eslint-disable no-restricted-globals */ const suspense = (vnode.suspense = createSuspenseBoundary(vnode, parentSuspense, parentComponent, node.parentNode, document.createElement('div'), null, isSVG, slotScopeIds, optimized, rendererInternals, true /* hydrating */)); // there are two possible scenarios for server-rendered suspense: // - success: ssr content should be fully resolved // - failure: ssr content should be the fallback branch. // however, on the client we don't really know if it has failed or not // attempt to hydrate the DOM assuming it has succeeded, but we still // need to construct a suspense boundary first const result = hydrateNode(node, (suspense.pendingBranch = vnode.ssContent), parentComponent, suspense, slotScopeIds, optimized); if (suspense.deps === 0) { suspense.resolve(); } return result; /* eslint-enable no-restricted-globals */ } function normalizeSuspenseChildren(vnode) { const { shapeFlag, children } = vnode; let content; let fallback; if (shapeFlag & 32 /* SLOTS_CHILDREN */) { content = normalizeSuspenseSlot(children.default); fallback = normalizeSuspenseSlot(children.fallback); } else { content = normalizeSuspenseSlot(children); fallback = normalizeVNode(null); } return { content, fallback }; } function normalizeSuspenseSlot(s) { if (isFunction(s)) { s = s(); } if (isArray(s)) { const singleChild = filterSingleRoot(s); if (!singleChild) { warn(`<Suspense> slots expect a single root node.`); } s = singleChild; } return normalizeVNode(s); } function queueEffectWithSuspense(fn, suspense) { if (suspense && suspense.pendingBranch) { if (isArray(fn)) { suspense.effects.push(...fn); } else { suspense.effects.push(fn); } } else { queuePostFlushCb(fn); } } function setActiveBranch(suspense, branch) { suspense.activeBranch = branch; const { vnode, parentComponent } = suspense; const el = (vnode.el = branch.el); // in case suspense is the root node of a component, // recursively update the HOC el if (parentComponent && parentComponent.subTree === vnode) { parentComponent.vnode.el = el; updateHOCHostEl(parentComponent, el); } } function initProps(instance, rawProps, isStateful, // result of bitwise flag comparison isSSR = false) { const props = {}; const attrs = {}; def(attrs, InternalObjectKey, 1); instance.propsDefaults = Object.create(null); setFullProps(instance, rawProps, props, attrs); // validation { validateProps(rawProps || {}, props, instance); } if (isStateful) { // stateful instance.props = isSSR ? 
props : shallowReactive(props); } else { if (!instance.type.props) { // functional w/ optional props, props === attrs instance.props = attrs; } else { // functional w/ declared props instance.props = props; } } instance.attrs = attrs; } function updateProps(instance, rawProps, rawPrevProps, optimized) { const { props, attrs, vnode: { patchFlag } } = instance; const rawCurrentProps = toRaw(props); const [options] = instance.propsOptions; if ( // always force full diff in dev // - #1942 if hmr is enabled with sfc component // - vite#872 non-sfc component used by sfc component !((instance.type.__hmrId || (instance.parent && instance.parent.type.__hmrId))) && (optimized || patchFlag > 0) && !(patchFlag & 16 /* FULL_PROPS */)) { if (patchFlag & 8 /* PROPS */) { // Compiler-generated props & no keys change, just set the updated // the props. const propsToUpdate = instance.vnode.dynamicProps; for (let i = 0; i < propsToUpdate.length; i++) { const key = propsToUpdate[i]; // PROPS flag guarantees rawProps to be non-null const value = rawProps[key]; if (options) { // attr / props separation was done on init and will be consistent // in this code path, so just check if attrs have it. if (hasOwn(attrs, key)) { attrs[key] = value; } else { const camelizedKey = camelize(key); props[camelizedKey] = resolvePropValue(options, rawCurrentProps, camelizedKey, value, instance); } } else { attrs[key] = value; } } } } else { // full props update. setFullProps(instance, rawProps, props, attrs); // in case of dynamic props, check if we need to delete keys from // the props object let kebabKey; for (const key in rawCurrentProps) { if (!rawProps || // for camelCase (!hasOwn(rawProps, key) && // it's possible the original props was passed in as kebab-case // and converted to camelCase (#955) ((kebabKey = hyphenate(key)) === key || !hasOwn(rawProps, kebabKey)))) { if (options) { if (rawPrevProps && // for camelCase (rawPrevProps[key] !== undefined || // for kebab-case rawPrevProps[kebabKey] !== undefined)) { props[key] = resolvePropValue(options, rawProps || EMPTY_OBJ, key, undefined, instance); } } else { delete props[key]; } } } // in the case of functional component w/o props declaration, props and // attrs point to the same object so it should already have been updated. if (attrs !== rawCurrentProps) { for (const key in attrs) { if (!rawProps || !hasOwn(rawProps, key)) { delete attrs[key]; } } } } // trigger updates for $attrs in case it's used in component slots trigger(instance, "set" /* SET */, '$attrs'); { validateProps(rawProps || {}, props, instance); } } function setFullProps(instance, rawProps, props, attrs) { const [options, needCastKeys] = instance.propsOptions; if (rawProps) { for (const key in rawProps) { const value = rawProps[key]; // key, ref are reserved and never passed down if (isReservedProp(key)) { continue; } // prop option names are camelized during normalization, so to support // kebab -> camel conversion here we need to camelize the key. let camelKey; if (options && hasOwn(options, (camelKey = camelize(key)))) { props[camelKey] = value; } else if (!isEmitListener(instance.emitsOptions, key)) { // Any non-declared (either as a prop or an emitted event) props are put // into a separate `attrs` object for spreading. 
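// --- Illustrative aside: a minimal sketch of the props/attrs split performed
// here: declared props land in `props`, while undeclared keys such as `class`
// and `data-test` fall through into `attrs`. All names are hypothetical.
function examplePropsVsAttrs() {
    const { h } = require('vue');
    const ExampleLabel = {
        props: { text: { type: String, required: true } },
        // inheritAttrs is left at its default (true), so the fallthrough attrs
        // are merged onto the single root element by renderComponentRoot above
        setup(props) {
            return () => h('span', null, props.text);
        }
    };
    // `text` is a declared prop; `class` and `data-test` end up in attrs
    return () => h(ExampleLabel, { text: 'hi', class: 'label', 'data-test': 'x' });
}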
Make sure to preserve // original key casing attrs[key] = value; } } } if (needCastKeys) { const rawCurrentProps = toRaw(props); for (let i = 0; i < needCastKeys.length; i++) { const key = needCastKeys[i]; props[key] = resolvePropValue(options, rawCurrentProps, key, rawCurrentProps[key], instance); } } } function resolvePropValue(options, props, key, value, instance) { const opt = options[key]; if (opt != null) { const hasDefault = hasOwn(opt, 'default'); // default values if (hasDefault && value === undefined) { const defaultValue = opt.default; if (opt.type !== Function && isFunction(defaultValue)) { const { propsDefaults } = instance; if (key in propsDefaults) { value = propsDefaults[key]; } else { setCurrentInstance(instance); value = propsDefaults[key] = defaultValue(props); setCurrentInstance(null); } } else { value = defaultValue; } } // boolean casting if (opt[0 /* shouldCast */]) { if (!hasOwn(props, key) && !hasDefault) { value = false; } else if (opt[1 /* shouldCastTrue */] && (value === '' || value === hyphenate(key))) { value = true; } } } return value; } function normalizePropsOptions(comp, appContext, asMixin = false) { if (!appContext.deopt && comp.__props) { return comp.__props; } const raw = comp.props; const normalized = {}; const needCastKeys = []; // apply mixin/extends props let hasExtends = false; if (!isFunction(comp)) { const extendProps = (raw) => { hasExtends = true; const [props, keys] = normalizePropsOptions(raw, appContext, true); extend(normalized, props); if (keys) needCastKeys.push(...keys); }; if (!asMixin && appContext.mixins.length) { appContext.mixins.forEach(extendProps); } if (comp.extends) { extendProps(comp.extends); } if (comp.mixins) { comp.mixins.forEach(extendProps); } } if (!raw && !hasExtends) { return (comp.__props = EMPTY_ARR); } if (isArray(raw)) { for (let i = 0; i < raw.length; i++) { if (!isString(raw[i])) { warn(`props must be strings when using array syntax.`, raw[i]); } const normalizedKey = camelize(raw[i]); if (validatePropName(normalizedKey)) { normalized[normalizedKey] = EMPTY_OBJ; } } } else if (raw) { if (!isObject(raw)) { warn(`invalid props options`, raw); } for (const key in raw) { const normalizedKey = camelize(key); if (validatePropName(normalizedKey)) { const opt = raw[key]; const prop = (normalized[normalizedKey] = isArray(opt) || isFunction(opt) ? { type: opt } : opt); if (prop) { const booleanIndex = getTypeIndex(Boolean, prop.type); const stringIndex = getTypeIndex(String, prop.type); prop[0 /* shouldCast */] = booleanIndex > -1; prop[1 /* shouldCastTrue */] = stringIndex < 0 || booleanIndex < stringIndex; // if the prop needs boolean casting or default value if (booleanIndex > -1 || hasOwn(prop, 'default')) { needCastKeys.push(normalizedKey); } } } } } return (comp.__props = [normalized, needCastKeys]); } function validatePropName(key) { if (key[0] !== '$') { return true; } else { warn(`Invalid prop name: "${key}" is a reserved property.`); } return false; } // use function string name to check type constructors // so that it works across vms / iframes. function getType(ctor) { const match = ctor && ctor.toString().match(/^\s*function (\w+)/); return match ? match[1] : ''; } function isSameType(a, b) { return getType(a) === getType(b); } function getTypeIndex(type, expectedTypes) { if (isArray(expectedTypes)) { return expectedTypes.findIndex(t => isSameType(t, type)); } else if (isFunction(expectedTypes)) { return isSameType(expectedTypes, type) ? 
0 : -1; } return -1; } /** * dev only */ function validateProps(rawProps, props, instance) { const resolvedValues = toRaw(props); const options = instance.propsOptions[0]; for (const key in options) { let opt = options[key]; if (opt == null) continue; validateProp(key, resolvedValues[key], opt, !hasOwn(rawProps, key) && !hasOwn(rawProps, hyphenate(key))); } } /** * dev only */ function validateProp(name, value, prop, isAbsent) { const { type, required, validator } = prop; // required! if (required && isAbsent) { warn('Missing required prop: "' + name + '"'); return; } // missing but optional if (value == null && !prop.required) { return; } // type check if (type != null && type !== true) { let isValid = false; const types = isArray(type) ? type : [type]; const expectedTypes = []; // value is valid as long as one of the specified types match for (let i = 0; i < types.length && !isValid; i++) { const { valid, expectedType } = assertType(value, types[i]); expectedTypes.push(expectedType || ''); isValid = valid; } if (!isValid) { warn(getInvalidTypeMessage(name, value, expectedTypes)); return; } } // custom validator if (validator && !validator(value)) { warn('Invalid prop: custom validator check failed for prop "' + name + '".'); } } const isSimpleType = /*#__PURE__*/ makeMap('String,Number,Boolean,Function,Symbol,BigInt'); /** * dev only */ function assertType(value, type) { let valid; const expectedType = getType(type); if (isSimpleType(expectedType)) { const t = typeof value; valid = t === expectedType.toLowerCase(); // for primitive wrapper objects if (!valid && t === 'object') { valid = value instanceof type; } } else if (expectedType === 'Object') { valid = isObject(value); } else if (expectedType === 'Array') { valid = isArray(value); } else { valid = value instanceof type; } return { valid, expectedType }; } /** * dev only */ function getInvalidTypeMessage(name, value, expectedTypes) { let message = `Invalid prop: type check failed for prop "${name}".` + ` Expected ${expectedTypes.map(capitalize).join(', ')}`; const expectedType = expectedTypes[0]; const receivedType = toRawType(value); const expectedValue = styleValue(value, expectedType); const receivedValue = styleValue(value, receivedType); // check if we need to specify expected value if (expectedTypes.length === 1 && isExplicable(expectedType) && !isBoolean(expectedType, receivedType)) { message += ` with value ${expectedValue}`; } message += `, got ${receivedType} `; // check if we need to specify received value if (isExplicable(receivedType)) { message += `with value ${receivedValue}.`; } return message; } /** * dev only */ function styleValue(value, type) { if (type === 'String') { return `"${value}"`; } else if (type === 'Number') { return `${Number(value)}`; } else { return `${value}`; } } /** * dev only */ function isExplicable(type) { const explicitTypes = ['string', 'number', 'boolean']; return explicitTypes.some(elem => type.toLowerCase() === elem); } /** * dev only */ function isBoolean(...args) { return args.some(elem => elem.toLowerCase() === 'boolean'); } function injectHook(type, hook, target = currentInstance, prepend = false) { if (target) { const hooks = target[type] || (target[type] = []); // cache the error handling wrapper for injected hooks so the same hook // can be properly deduped by the scheduler. "__weh" stands for "with error // handling". 
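// --- Illustrative aside: a minimal sketch of prop declarations that exercise
// the normalization and validation paths above (type check, default factory,
// boolean casting, custom validator). The component and prop names are
// hypothetical.
function examplePropDeclarations() {
    return {
        name: 'ExampleCard',
        props: {
            // required + type: validateProp warns when the prop is absent
            title: { type: String, required: true },
            // object defaults must be factories; resolvePropValue invokes them
            meta: { type: Object, default: () => ({ pinned: false }) },
            // Boolean props are cast: absent -> false, empty string -> true
            compact: Boolean,
            // custom validator surfaces the "custom validator check failed" warning
            size: { type: String, default: 'md', validator: (v) => ['sm', 'md', 'lg'].includes(v) }
        }
    };
}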
const wrappedHook = hook.__weh || (hook.__weh = (...args) => { if (target.isUnmounted) { return; } // disable tracking inside all lifecycle hooks // since they can potentially be called inside effects. pauseTracking(); // Set currentInstance during hook invocation. // This assumes the hook does not synchronously trigger other hooks, which // can only be false when the user does something really funky. setCurrentInstance(target); const res = callWithAsyncErrorHandling(hook, target, type, args); setCurrentInstance(null); resetTracking(); return res; }); if (prepend) { hooks.unshift(wrappedHook); } else { hooks.push(wrappedHook); } return wrappedHook; } else { const apiName = toHandlerKey(ErrorTypeStrings[type].replace(/ hook$/, '')); warn(`${apiName} is called when there is no active component instance to be ` + `associated with. ` + `Lifecycle injection APIs can only be used during execution of setup().` + (` If you are using async setup(), make sure to register lifecycle ` + `hooks before the first await statement.` )); } } const createHook = (lifecycle) => (hook, target = currentInstance) => // post-create lifecycle registrations are noops during SSR !isInSSRComponentSetup && injectHook(lifecycle, hook, target); const onBeforeMount = createHook("bm" /* BEFORE_MOUNT */); const onMounted = createHook("m" /* MOUNTED */); const onBeforeUpdate = createHook("bu" /* BEFORE_UPDATE */); const onUpdated = createHook("u" /* UPDATED */); const onBeforeUnmount = createHook("bum" /* BEFORE_UNMOUNT */); const onUnmounted = createHook("um" /* UNMOUNTED */); const onRenderTriggered = createHook("rtg" /* RENDER_TRIGGERED */); const onRenderTracked = createHook("rtc" /* RENDER_TRACKED */); const onErrorCaptured = (hook, target = currentInstance) => { injectHook("ec" /* ERROR_CAPTURED */, hook, target); }; // Simple effect. function watchEffect(effect, options) { return doWatch(effect, null, options); } // initial value for watchers to trigger on undefined initial values const INITIAL_WATCHER_VALUE = {}; // implementation function watch(source, cb, options) { if (!isFunction(cb)) { warn(`\`watch(fn, options?)\` signature has been moved to a separate API. ` + `Use \`watchEffect(fn, options?)\` instead. \`watch\` now only ` + `supports \`watch(source, cb, options?) signature.`); } return doWatch(source, cb, options); } function doWatch(source, cb, { immediate, deep, flush, onTrack, onTrigger } = EMPTY_OBJ, instance = currentInstance) { if (!cb) { if (immediate !== undefined) { warn(`watch() "immediate" option is only respected when using the ` + `watch(source, callback, options?) signature.`); } if (deep !== undefined) { warn(`watch() "deep" option is only respected when using the ` + `watch(source, callback, options?) 
signature.`); } } const warnInvalidSource = (s) => { warn(`Invalid watch source: `, s, `A watch source can only be a getter/effect function, a ref, ` + `a reactive object, or an array of these types.`); }; let getter; let forceTrigger = false; if (isRef(source)) { getter = () => source.value; forceTrigger = !!source._shallow; } else if (isReactive(source)) { getter = () => source; deep = true; } else if (isArray(source)) { getter = () => source.map(s => { if (isRef(s)) { return s.value; } else if (isReactive(s)) { return traverse(s); } else if (isFunction(s)) { return callWithErrorHandling(s, instance, 2 /* WATCH_GETTER */, [ instance && instance.proxy ]); } else { warnInvalidSource(s); } }); } else if (isFunction(source)) { if (cb) { // getter with cb getter = () => callWithErrorHandling(source, instance, 2 /* WATCH_GETTER */, [ instance && instance.proxy ]); } else { // no cb -> simple effect getter = () => { if (instance && instance.isUnmounted) { return; } if (cleanup) { cleanup(); } return callWithAsyncErrorHandling(source, instance, 3 /* WATCH_CALLBACK */, [onInvalidate]); }; } } else { getter = NOOP; warnInvalidSource(source); } if (cb && deep) { const baseGetter = getter; getter = () => traverse(baseGetter()); } let cleanup; let onInvalidate = (fn) => { cleanup = runner.options.onStop = () => { callWithErrorHandling(fn, instance, 4 /* WATCH_CLEANUP */); }; }; let oldValue = isArray(source) ? [] : INITIAL_WATCHER_VALUE; const job = () => { if (!runner.active) { return; } if (cb) { // watch(source, cb) const newValue = runner(); if (deep || forceTrigger || hasChanged(newValue, oldValue)) { // cleanup before running cb again if (cleanup) { cleanup(); } callWithAsyncErrorHandling(cb, instance, 3 /* WATCH_CALLBACK */, [ newValue, // pass undefined as the old value when it's changed for the first time oldValue === INITIAL_WATCHER_VALUE ? undefined : oldValue, onInvalidate ]); oldValue = newValue; } } else { // watchEffect runner(); } }; // important: mark the job as a watcher callback so that scheduler knows // it is allowed to self-trigger (#1727) job.allowRecurse = !!cb; let scheduler; if (flush === 'sync') { scheduler = job; } else if (flush === 'post') { scheduler = () => queuePostRenderEffect(job, instance && instance.suspense); } else { // default: 'pre' scheduler = () => { if (!instance || instance.isMounted) { queuePreFlushCb(job); } else { // with 'pre' option, the first call must happen before // the component is mounted so it is called synchronously. job(); } }; } const runner = effect(getter, { lazy: true, onTrack, onTrigger, scheduler }); recordInstanceBoundEffect(runner, instance); // initial run if (cb) { if (immediate) { job(); } else { oldValue = runner(); } } else if (flush === 'post') { queuePostRenderEffect(runner, instance && instance.suspense); } else { runner(); } return () => { stop(runner); if (instance) { remove(instance.effects, runner); } }; } // this.$watch function instanceWatch(source, cb, options) { const publicThis = this.proxy; const getter = isString(source) ? 
() => publicThis[source] : source.bind(publicThis); return doWatch(getter, cb.bind(publicThis), options, this); } function traverse(value, seen = new Set()) { if (!isObject(value) || seen.has(value)) { return value; } seen.add(value); if (isRef(value)) { traverse(value.value, seen); } else if (isArray(value)) { for (let i = 0; i < value.length; i++) { traverse(value[i], seen); } } else if (isSet(value) || isMap(value)) { value.forEach((v) => { traverse(v, seen); }); } else { for (const key in value) { traverse(value[key], seen); } } return value; } function useTransitionState() { const state = { isMounted: false, isLeaving: false, isUnmounting: false, leavingVNodes: new Map() }; onMounted(() => { state.isMounted = true; }); onBeforeUnmount(() => { state.isUnmounting = true; }); return state; } const TransitionHookValidator = [Function, Array]; const BaseTransitionImpl = { name: `BaseTransition`, props: { mode: String, appear: Boolean, persisted: Boolean, // enter onBeforeEnter: TransitionHookValidator, onEnter: TransitionHookValidator, onAfterEnter: TransitionHookValidator, onEnterCancelled: TransitionHookValidator, // leave onBeforeLeave: TransitionHookValidator, onLeave: TransitionHookValidator, onAfterLeave: TransitionHookValidator, onLeaveCancelled: TransitionHookValidator, // appear onBeforeAppear: TransitionHookValidator, onAppear: TransitionHookValidator, onAfterAppear: TransitionHookValidator, onAppearCancelled: TransitionHookValidator }, setup(props, { slots }) { const instance = getCurrentInstance(); const state = useTransitionState(); let prevTransitionKey; return () => { const children = slots.default && getTransitionRawChildren(slots.default(), true); if (!children || !children.length) { return; } // warn multiple elements if (children.length > 1) { warn('<transition> can only be used on a single element or component. Use ' + '<transition-group> for lists.'); } // there's no need to track reactivity for these props so use the raw // props for a bit better perf const rawProps = toRaw(props); const { mode } = rawProps; // check mode if (mode && !['in-out', 'out-in', 'default'].includes(mode)) { warn(`invalid <transition> mode: ${mode}`); } // at this point children has a guaranteed length of 1. const child = children[0]; if (state.isLeaving) { return emptyPlaceholder(child); } // in the case of <transition><keep-alive/></transition>, we need to // compare the type of the kept-alive children. 
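// --- Illustrative aside: a minimal sketch of the two watcher signatures that
// doWatch above services: watch(source, cb, options) and watchEffect(fn).
// The variable names are hypothetical.
function exampleWatchUsage() {
    const { ref, watch, watchEffect } = require('vue');
    const count = ref(0);
    // watch(source, cb, options?): cb receives (newValue, oldValue, onInvalidate)
    const stopWatch = watch(count, (next, prev, onInvalidate) => {
        onInvalidate(() => { /* cleanup runs before the next call or on stop */ });
        console.log(`count: ${prev} -> ${next}`);
    }, { immediate: true, flush: 'post' });
    // watchEffect(fn): dependencies are tracked automatically, runs eagerly
    const stopEffect = watchEffect(() => console.log('doubled:', count.value * 2));
    count.value++;
    // both calls return a stop handle (the closure returned by doWatch)
    return () => { stopWatch(); stopEffect(); };
}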
const innerChild = getKeepAliveChild(child); if (!innerChild) { return emptyPlaceholder(child); } const enterHooks = resolveTransitionHooks(innerChild, rawProps, state, instance); setTransitionHooks(innerChild, enterHooks); const oldChild = instance.subTree; const oldInnerChild = oldChild && getKeepAliveChild(oldChild); let transitionKeyChanged = false; const { getTransitionKey } = innerChild.type; if (getTransitionKey) { const key = getTransitionKey(); if (prevTransitionKey === undefined) { prevTransitionKey = key; } else if (key !== prevTransitionKey) { prevTransitionKey = key; transitionKeyChanged = true; } } // handle mode if (oldInnerChild && oldInnerChild.type !== Comment && (!isSameVNodeType(innerChild, oldInnerChild) || transitionKeyChanged)) { const leavingHooks = resolveTransitionHooks(oldInnerChild, rawProps, state, instance); // update old tree's hooks in case of dynamic transition setTransitionHooks(oldInnerChild, leavingHooks); // switching between different views if (mode === 'out-in') { state.isLeaving = true; // return placeholder node and queue update when leave finishes leavingHooks.afterLeave = () => { state.isLeaving = false; instance.update(); }; return emptyPlaceholder(child); } else if (mode === 'in-out' && innerChild.type !== Comment) { leavingHooks.delayLeave = (el, earlyRemove, delayedLeave) => { const leavingVNodesCache = getLeavingNodesForType(state, oldInnerChild); leavingVNodesCache[String(oldInnerChild.key)] = oldInnerChild; // early removal callback el._leaveCb = () => { earlyRemove(); el._leaveCb = undefined; delete enterHooks.delayedLeave; }; enterHooks.delayedLeave = delayedLeave; }; } } return child; }; } }; // export the public type for h/tsx inference // also to avoid inline import() in generated d.ts files const BaseTransition = BaseTransitionImpl; function getLeavingNodesForType(state, vnode) { const { leavingVNodes } = state; let leavingVNodesCache = leavingVNodes.get(vnode.type); if (!leavingVNodesCache) { leavingVNodesCache = Object.create(null); leavingVNodes.set(vnode.type, leavingVNodesCache); } return leavingVNodesCache; } // The transition hooks are attached to the vnode as vnode.transition // and will be called at appropriate timing in the renderer. 
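// --- Illustrative aside: a minimal sketch of the JavaScript transition hooks
// that resolveTransitionHooks below attaches to vnode.transition, expressed
// through the <Transition> component props. The element content is hypothetical.
function exampleTransitionHooks() {
    const { h, Transition, ref } = require('vue');
    const show = ref(true);
    return () => h(Transition, {
        mode: 'out-in', // handled by BaseTransitionImpl above
        onBeforeEnter(el) { el.style.opacity = '0'; },
        onEnter(el, done) { el.style.opacity = '1'; done(); },
        onLeave(el, done) { el.style.opacity = '0'; done(); }
    }, {
        // exactly one element or component child per render
        default: () => (show.value ? h('p', { key: 'on' }, 'on') : h('p', { key: 'off' }, 'off'))
    });
}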
function resolveTransitionHooks(vnode, props, state, instance) { const { appear, mode, persisted = false, onBeforeEnter, onEnter, onAfterEnter, onEnterCancelled, onBeforeLeave, onLeave, onAfterLeave, onLeaveCancelled, onBeforeAppear, onAppear, onAfterAppear, onAppearCancelled } = props; const key = String(vnode.key); const leavingVNodesCache = getLeavingNodesForType(state, vnode); const callHook = (hook, args) => { hook && callWithAsyncErrorHandling(hook, instance, 9 /* TRANSITION_HOOK */, args); }; const hooks = { mode, persisted, beforeEnter(el) { let hook = onBeforeEnter; if (!state.isMounted) { if (appear) { hook = onBeforeAppear || onBeforeEnter; } else { return; } } // for same element (v-show) if (el._leaveCb) { el._leaveCb(true /* cancelled */); } // for toggled element with same key (v-if) const leavingVNode = leavingVNodesCache[key]; if (leavingVNode && isSameVNodeType(vnode, leavingVNode) && leavingVNode.el._leaveCb) { // force early removal (not cancelled) leavingVNode.el._leaveCb(); } callHook(hook, [el]); }, enter(el) { let hook = onEnter; let afterHook = onAfterEnter; let cancelHook = onEnterCancelled; if (!state.isMounted) { if (appear) { hook = onAppear || onEnter; afterHook = onAfterAppear || onAfterEnter; cancelHook = onAppearCancelled || onEnterCancelled; } else { return; } } let called = false; const done = (el._enterCb = (cancelled) => { if (called) return; called = true; if (cancelled) { callHook(cancelHook, [el]); } else { callHook(afterHook, [el]); } if (hooks.delayedLeave) { hooks.delayedLeave(); } el._enterCb = undefined; }); if (hook) { hook(el, done); if (hook.length <= 1) { done(); } } else { done(); } }, leave(el, remove) { const key = String(vnode.key); if (el._enterCb) { el._enterCb(true /* cancelled */); } if (state.isUnmounting) { return remove(); } callHook(onBeforeLeave, [el]); let called = false; const done = (el._leaveCb = (cancelled) => { if (called) return; called = true; remove(); if (cancelled) { callHook(onLeaveCancelled, [el]); } else { callHook(onAfterLeave, [el]); } el._leaveCb = undefined; if (leavingVNodesCache[key] === vnode) { delete leavingVNodesCache[key]; } }); leavingVNodesCache[key] = vnode; if (onLeave) { onLeave(el, done); if (onLeave.length <= 1) { done(); } } else { done(); } }, clone(vnode) { return resolveTransitionHooks(vnode, props, state, instance); } }; return hooks; } // the placeholder really only handles one special case: KeepAlive // in the case of a KeepAlive in a leave phase we need to return a KeepAlive // placeholder with empty content to avoid the KeepAlive instance from being // unmounted. function emptyPlaceholder(vnode) { if (isKeepAlive(vnode)) { vnode = cloneVNode(vnode); vnode.children = null; return vnode; } } function getKeepAliveChild(vnode) { return isKeepAlive(vnode) ? vnode.children ? vnode.children[0] : undefined : vnode; } function setTransitionHooks(vnode, hooks) { if (vnode.shapeFlag & 6 /* COMPONENT */ && vnode.component) { setTransitionHooks(vnode.component.subTree, hooks); } else if (vnode.shapeFlag & 128 /* SUSPENSE */) { vnode.ssContent.transition = hooks.clone(vnode.ssContent); vnode.ssFallback.transition = hooks.clone(vnode.ssFallback); } else { vnode.transition = hooks; } } function getTransitionRawChildren(children, keepComment = false) { let ret = []; let keyedFragmentCount = 0; for (let i = 0; i < children.length; i++) { const child = children[i]; // handle fragment children case, e.g. 
v-for if (child.type === Fragment) { if (child.patchFlag & 128 /* KEYED_FRAGMENT */) keyedFragmentCount++; ret = ret.concat(getTransitionRawChildren(child.children, keepComment)); } // comment placeholders should be skipped, e.g. v-if else if (keepComment || child.type !== Comment) { ret.push(child); } } // #1126 if a transition children list contains multiple sub fragments, these // fragments will be merged into a flat children array. Since each v-for // fragment may contain different static bindings inside, we need to de-op // these children to force full diffs to ensure correct behavior. if (keyedFragmentCount > 1) { for (let i = 0; i < ret.length; i++) { ret[i].patchFlag = -2 /* BAIL */; } } return ret; } const isKeepAlive = (vnode) => vnode.type.__isKeepAlive; const KeepAliveImpl = { name: `KeepAlive`, // Marker for special handling inside the renderer. We are not using a === // check directly on KeepAlive in the renderer, because importing it directly // would prevent it from being tree-shaken. __isKeepAlive: true, props: { include: [String, RegExp, Array], exclude: [String, RegExp, Array], max: [String, Number] }, setup(props, { slots }) { const instance = getCurrentInstance(); // KeepAlive communicates with the instantiated renderer via the // ctx where the renderer passes in its internals, // and the KeepAlive instance exposes activate/deactivate implementations. // The whole point of this is to avoid importing KeepAlive directly in the // renderer to facilitate tree-shaking. const sharedContext = instance.ctx; // if the internal renderer is not registered, it indicates that this is server-side rendering, // for KeepAlive, we just need to render its children if (!sharedContext.renderer) { return slots.default; } const cache = new Map(); const keys = new Set(); let current = null; const parentSuspense = instance.suspense; const { renderer: { p: patch, m: move, um: _unmount, o: { createElement } } } = sharedContext; const storageContainer = createElement('div'); sharedContext.activate = (vnode, container, anchor, isSVG, optimized) => { const instance = vnode.component; move(vnode, container, anchor, 0 /* ENTER */, parentSuspense); // in case props have changed patch(instance.vnode, vnode, container, anchor, instance, parentSuspense, isSVG, vnode.slotScopeIds, optimized); queuePostRenderEffect(() => { instance.isDeactivated = false; if (instance.a) { invokeArrayFns(instance.a); } const vnodeHook = vnode.props && vnode.props.onVnodeMounted; if (vnodeHook) { invokeVNodeHook(vnodeHook, instance.parent, vnode); } }, parentSuspense); }; sharedContext.deactivate = (vnode) => { const instance = vnode.component; move(vnode, storageContainer, null, 1 /* LEAVE */, parentSuspense); queuePostRenderEffect(() => { if (instance.da) { invokeArrayFns(instance.da); } const vnodeHook = vnode.props && vnode.props.onVnodeUnmounted; if (vnodeHook) { invokeVNodeHook(vnodeHook, instance.parent, vnode); } instance.isDeactivated = true; }, parentSuspense); }; function unmount(vnode) { // reset the shapeFlag so it can be properly unmounted resetShapeFlag(vnode); _unmount(vnode, instance, parentSuspense); } function pruneCache(filter) { cache.forEach((vnode, key) => { const name = getComponentName(vnode.type); if (name && (!filter || !filter(name))) { pruneCacheEntry(key); } }); } function pruneCacheEntry(key) { const cached = cache.get(key); if (!current || cached.type !== current.type) { unmount(cached); } else if (current) { // current active instance should no longer be kept-alive. 
// we can't unmount it now but it might be later, so reset its flag now. resetShapeFlag(current); } cache.delete(key); keys.delete(key); } // prune cache on include/exclude prop change watch(() => [props.include, props.exclude], ([include, exclude]) => { include && pruneCache(name => matches(include, name)); exclude && pruneCache(name => !matches(exclude, name)); }, // prune post-render after `current` has been updated { flush: 'post', deep: true }); // cache sub tree after render let pendingCacheKey = null; const cacheSubtree = () => { // fix #1621, the pendingCacheKey could be 0 if (pendingCacheKey != null) { cache.set(pendingCacheKey, getInnerChild(instance.subTree)); } }; onMounted(cacheSubtree); onUpdated(cacheSubtree); onBeforeUnmount(() => { cache.forEach(cached => { const { subTree, suspense } = instance; const vnode = getInnerChild(subTree); if (cached.type === vnode.type) { // current instance will be unmounted as part of keep-alive's unmount resetShapeFlag(vnode); // but invoke its deactivated hook here const da = vnode.component.da; da && queuePostRenderEffect(da, suspense); return; } unmount(cached); }); }); return () => { pendingCacheKey = null; if (!slots.default) { return null; } const children = slots.default(); const rawVNode = children[0]; if (children.length > 1) { { warn(`KeepAlive should contain exactly one component child.`); } current = null; return children; } else if (!isVNode(rawVNode) || (!(rawVNode.shapeFlag & 4 /* STATEFUL_COMPONENT */) && !(rawVNode.shapeFlag & 128 /* SUSPENSE */))) { current = null; return rawVNode; } let vnode = getInnerChild(rawVNode); const comp = vnode.type; const name = getComponentName(comp); const { include, exclude, max } = props; if ((include && (!name || !matches(include, name))) || (exclude && name && matches(exclude, name))) { current = vnode; return rawVNode; } const key = vnode.key == null ? comp : vnode.key; const cachedVNode = cache.get(key); // clone vnode if it's reused because we are going to mutate it if (vnode.el) { vnode = cloneVNode(vnode); if (rawVNode.shapeFlag & 128 /* SUSPENSE */) { rawVNode.ssContent = vnode; } } // #1513 it's possible for the returned vnode to be cloned due to attr // fallthrough or scopeId, so the vnode here may not be the final vnode // that is mounted. Instead of caching it directly, we store the pending // key and cache `instance.subTree` (the normalized vnode) in // beforeMount/beforeUpdate hooks. 
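// --- Illustrative aside: a minimal sketch of the <KeepAlive> props consumed by
// this render function: `include`/`exclude` drive pruneCache above, and `max`
// caps the cache via pruneCacheEntry (oldest key evicted just below).
// The tab components are hypothetical.
function exampleKeepAliveUsage() {
    const { h, KeepAlive } = require('vue');
    const TabA = { name: 'TabA', render: () => h('p', 'A') };
    const TabB = { name: 'TabB', render: () => h('p', 'B') };
    return (activeTab) => h(KeepAlive, { include: ['TabA', 'TabB'], max: 10 }, {
        // exactly one component child; its state is cached when toggled away
        default: () => h(activeTab === 'a' ? TabA : TabB)
    });
}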
pendingCacheKey = key; if (cachedVNode) { // copy over mounted state vnode.el = cachedVNode.el; vnode.component = cachedVNode.component; if (vnode.transition) { // recursively update transition hooks on subTree setTransitionHooks(vnode, vnode.transition); } // avoid vnode being mounted as fresh vnode.shapeFlag |= 512 /* COMPONENT_KEPT_ALIVE */; // make this key the freshest keys.delete(key); keys.add(key); } else { keys.add(key); // prune oldest entry if (max && keys.size > parseInt(max, 10)) { pruneCacheEntry(keys.values().next().value); } } // avoid vnode being unmounted vnode.shapeFlag |= 256 /* COMPONENT_SHOULD_KEEP_ALIVE */; current = vnode; return rawVNode; }; } }; // export the public type for h/tsx inference // also to avoid inline import() in generated d.ts files const KeepAlive = KeepAliveImpl; function matches(pattern, name) { if (isArray(pattern)) { return pattern.some((p) => matches(p, name)); } else if (isString(pattern)) { return pattern.split(',').indexOf(name) > -1; } else if (pattern.test) { return pattern.test(name); } /* istanbul ignore next */ return false; } function onActivated(hook, target) { registerKeepAliveHook(hook, "a" /* ACTIVATED */, target); } function onDeactivated(hook, target) { registerKeepAliveHook(hook, "da" /* DEACTIVATED */, target); } function registerKeepAliveHook(hook, type, target = currentInstance) { // cache the deactivate branch check wrapper for injected hooks so the same // hook can be properly deduped by the scheduler. "__wdc" stands for "with // deactivation check". const wrappedHook = hook.__wdc || (hook.__wdc = () => { // only fire the hook if the target instance is NOT in a deactivated branch. let current = target; while (current) { if (current.isDeactivated) { return; } current = current.parent; } hook(); }); injectHook(type, wrappedHook, target); // In addition to registering it on the target instance, we walk up the parent // chain and register it on all ancestor instances that are keep-alive roots. // This avoids the need to walk the entire component tree when invoking these // hooks, and more importantly, avoids the need to track child components in // arrays. if (target) { let current = target.parent; while (current && current.parent) { if (isKeepAlive(current.parent.vnode)) { injectToKeepAliveRoot(wrappedHook, type, target, current); } current = current.parent; } } } function injectToKeepAliveRoot(hook, type, target, keepAliveRoot) { // injectHook wraps the original for error handling, so make sure to remove // the wrapped version. const injected = injectHook(type, hook, keepAliveRoot, true /* prepend */); onUnmounted(() => { remove(keepAliveRoot[type], injected); }, target); } function resetShapeFlag(vnode) { let shapeFlag = vnode.shapeFlag; if (shapeFlag & 256 /* COMPONENT_SHOULD_KEEP_ALIVE */) { shapeFlag -= 256 /* COMPONENT_SHOULD_KEEP_ALIVE */; } if (shapeFlag & 512 /* COMPONENT_KEPT_ALIVE */) { shapeFlag -= 512 /* COMPONENT_KEPT_ALIVE */; } vnode.shapeFlag = shapeFlag; } function getInnerChild(vnode) { return vnode.shapeFlag & 128 /* SUSPENSE */ ? vnode.ssContent : vnode; } const isInternalKey = (key) => key[0] === '_' || key === '$stable'; const normalizeSlotValue = (value) => isArray(value) ? value.map(normalizeVNode) : [normalizeVNode(value)]; const normalizeSlot = (key, rawSlot, ctx) => withCtx((props) => { if (currentInstance) { warn(`Slot "${key}" invoked outside of the render function: ` + `this will not track dependencies used in the slot. 
` + `Invoke the slot function inside the render function instead.`); } return normalizeSlotValue(rawSlot(props)); }, ctx); const normalizeObjectSlots = (rawSlots, slots) => { const ctx = rawSlots._ctx; for (const key in rawSlots) { if (isInternalKey(key)) continue; const value = rawSlots[key]; if (isFunction(value)) { slots[key] = normalizeSlot(key, value, ctx); } else if (value != null) { { warn(`Non-function value encountered for slot "${key}". ` + `Prefer function slots for better performance.`); } const normalized = normalizeSlotValue(value); slots[key] = () => normalized; } } }; const normalizeVNodeSlots = (instance, children) => { if (!isKeepAlive(instance.vnode)) { warn(`Non-function value encountered for default slot. ` + `Prefer function slots for better performance.`); } const normalized = normalizeSlotValue(children); instance.slots.default = () => normalized; }; const initSlots = (instance, children) => { if (instance.vnode.shapeFlag & 32 /* SLOTS_CHILDREN */) { const type = children._; if (type) { instance.slots = children; // make compiler marker non-enumerable def(children, '_', type); } else { normalizeObjectSlots(children, (instance.slots = {})); } } else { instance.slots = {}; if (children) { normalizeVNodeSlots(instance, children); } } def(instance.slots, InternalObjectKey, 1); }; const updateSlots = (instance, children, optimized) => { const { vnode, slots } = instance; let needDeletionCheck = true; let deletionComparisonTarget = EMPTY_OBJ; if (vnode.shapeFlag & 32 /* SLOTS_CHILDREN */) { const type = children._; if (type) { // compiled slots. if (isHmrUpdating) { // Parent was HMR updated so slot content may have changed. // force update slots and mark instance for hmr as well extend(slots, children); } else if (optimized && type === 1 /* STABLE */) { // compiled AND stable. // no need to update, and skip stale slots removal. needDeletionCheck = false; } else { // compiled but dynamic (v-if/v-for on slots) - update slots, but skip // normalization. extend(slots, children); // #2893 // when rendering the optimized slots by manually written render function, // we need to delete the `slots._` flag if necessary to make subsequent updates reliable, // i.e. let the `renderSlot` create the bailed Fragment if (!optimized && type === 1 /* STABLE */) { delete slots._; } } } else { needDeletionCheck = !children.$stable; normalizeObjectSlots(children, slots); } deletionComparisonTarget = children; } else if (children) { // non slot object children (direct value) passed to a component normalizeVNodeSlots(instance, children); deletionComparisonTarget = { default: 1 }; } // delete stale slots if (needDeletionCheck) { for (const key in slots) { if (!isInternalKey(key) && !(key in deletionComparisonTarget)) { delete slots[key]; } } } }; /** Runtime helper for applying directives to a vnode. Example usage: const comp = resolveComponent('comp') const foo = resolveDirective('foo') const bar = resolveDirective('bar') return withDirectives(h(comp), [ [foo, this.x], [bar, this.y] ]) */ const isBuiltInDirective = /*#__PURE__*/ makeMap('bind,cloak,else-if,else,for,html,if,model,on,once,pre,show,slot,text'); function validateDirectiveName(name) { if (isBuiltInDirective(name)) { warn('Do not use built-in directive ids as custom directive id: ' + name); } } /** * Adds directives to a VNode. 
*/ function withDirectives(vnode, directives) { const internalInstance = currentRenderingInstance; if (internalInstance === null) { warn(`withDirectives can only be used inside render functions.`); return vnode; } const instance = internalInstance.proxy; const bindings = vnode.dirs || (vnode.dirs = []); for (let i = 0; i < directives.length; i++) { let [dir, value, arg, modifiers = EMPTY_OBJ] = directives[i]; if (isFunction(dir)) { dir = { mounted: dir, updated: dir }; } bindings.push({ dir, instance, value, oldValue: void 0, arg, modifiers }); } return vnode; } function invokeDirectiveHook(vnode, prevVNode, instance, name) { const bindings = vnode.dirs; const oldBindings = prevVNode && prevVNode.dirs; for (let i = 0; i < bindings.length; i++) { const binding = bindings[i]; if (oldBindings) { binding.oldValue = oldBindings[i].value; } const hook = binding.dir[name]; if (hook) { callWithAsyncErrorHandling(hook, instance, 8 /* DIRECTIVE_HOOK */, [ vnode.el, binding, vnode, prevVNode ]); } } } function createAppContext() { return { app: null, config: { isNativeTag: NO, performance: false, globalProperties: {}, optionMergeStrategies: {}, isCustomElement: NO, errorHandler: undefined, warnHandler: undefined }, mixins: [], components: {}, directives: {}, provides: Object.create(null) }; } let uid$1 = 0; function createAppAPI(render, hydrate) { return function createApp(rootComponent, rootProps = null) { if (rootProps != null && !isObject(rootProps)) { warn(`root props passed to app.mount() must be an object.`); rootProps = null; } const context = createAppContext(); const installedPlugins = new Set(); let isMounted = false; const app = (context.app = { _uid: uid$1++, _component: rootComponent, _props: rootProps, _container: null, _context: context, version, get config() { return context.config; }, set config(v) { { warn(`app.config cannot be replaced. Modify individual options instead.`); } }, use(plugin, ...options) { if (installedPlugins.has(plugin)) { warn(`Plugin has already been applied to target app.`); } else if (plugin && isFunction(plugin.install)) { installedPlugins.add(plugin); plugin.install(app, ...options); } else if (isFunction(plugin)) { installedPlugins.add(plugin); plugin(app, ...options); } else { warn(`A plugin must either be a function or an object with an "install" ` + `function.`); } return app; }, mixin(mixin) { { if (!context.mixins.includes(mixin)) { context.mixins.push(mixin); // global mixin with props/emits de-optimizes props/emits // normalization caching. if (mixin.props || mixin.emits) { context.deopt = true; } } else { warn('Mixin has already been applied to target app' + (mixin.name ? `: ${mixin.name}` : '')); } } return app; }, component(name, component) { { validateComponentName(name, context.config); } if (!component) { return context.components[name]; } if (context.components[name]) { warn(`Component "${name}" has already been registered in target app.`); } context.components[name] = component; return app; }, directive(name, directive) { { validateDirectiveName(name); } if (!directive) { return context.directives[name]; } if (context.directives[name]) { warn(`Directive "${name}" has already been registered in target app.`); } context.directives[name] = directive; return app; }, mount(rootContainer, isHydrate, isSVG) { if (!isMounted) { const vnode = createVNode(rootComponent, rootProps); // store app context on the root VNode. // this will be set on the root instance on initial mount. 
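/*
 * Illustrative sketch (not part of the original source): typical use of the
 * chainable app API assembled above. The plugin, component and selector names
 * are assumptions for the example.
 *
 *   const app = createApp(RootComponent, { msg: 'hello' });
 *   app.use(SomePlugin, { option: true })   // plugin.install(app, options) or plugin(app, options)
 *      .component('MyButton', MyButton)     // registering a duplicate name only warns
 *      .directive('focus', { mounted: el => el.focus() })
 *      .mount('#app');                      // returns the root component's proxy
 */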
vnode.appContext = context; // HMR root reload { context.reload = () => { render(cloneVNode(vnode), rootContainer, isSVG); }; } if (isHydrate && hydrate) { hydrate(vnode, rootContainer); } else { render(vnode, rootContainer, isSVG); } isMounted = true; app._container = rootContainer; rootContainer.__vue_app__ = app; { devtoolsInitApp(app, version); } return vnode.component.proxy; } else { warn(`App has already been mounted.\n` + `If you want to remount the same app, move your app creation logic ` + `into a factory function and create fresh app instances for each ` + `mount - e.g. \`const createMyApp = () => createApp(App)\``); } }, unmount() { if (isMounted) { render(null, app._container); { devtoolsUnmountApp(app); } delete app._container.__vue_app__; } else { warn(`Cannot unmount an app that is not mounted.`); } }, provide(key, value) { if (key in context.provides) { warn(`App already provides property with key "${String(key)}". ` + `It will be overwritten with the new value.`); } // TypeScript doesn't allow symbols as index type // https://github.com/Microsoft/TypeScript/issues/24587 context.provides[key] = value; return app; } }); return app; }; } let hasMismatch = false; const isSVGContainer = (container) => /svg/.test(container.namespaceURI) && container.tagName !== 'foreignObject'; const isComment = (node) => node.nodeType === 8 /* COMMENT */; // Note: hydration is DOM-specific // But we have to place it in core due to tight coupling with core - splitting // it out creates a ton of unnecessary complexity. // Hydration also depends on some renderer internal logic which needs to be // passed in via arguments. function createHydrationFunctions(rendererInternals) { const { mt: mountComponent, p: patch, o: { patchProp, nextSibling, parentNode, remove, insert, createComment } } = rendererInternals; const hydrate = (vnode, container) => { if (!container.hasChildNodes()) { warn(`Attempting to hydrate existing markup but container is empty. ` + `Performing full mount instead.`); patch(null, vnode, container); return; } hasMismatch = false; hydrateNode(container.firstChild, vnode, null, null, null); flushPostFlushCbs(); if (hasMismatch && !false) { // this error should show up in production console.error(`Hydration completed but contains mismatches.`); } }; const hydrateNode = (node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized = false) => { const isFragmentStart = isComment(node) && node.data === '['; const onMismatch = () => handleMismatch(node, vnode, parentComponent, parentSuspense, slotScopeIds, isFragmentStart); const { type, ref, shapeFlag } = vnode; const domType = node.nodeType; vnode.el = node; let nextNode = null; switch (type) { case Text: if (domType !== 3 /* TEXT */) { nextNode = onMismatch(); } else { if (node.data !== vnode.children) { hasMismatch = true; warn(`Hydration text mismatch:` + `\n- Client: ${JSON.stringify(node.data)}` + `\n- Server: ${JSON.stringify(vnode.children)}`); node.data = vnode.children; } nextNode = nextSibling(node); } break; case Comment: if (domType !== 8 /* COMMENT */ || isFragmentStart) { nextNode = onMismatch(); } else { nextNode = nextSibling(node); } break; case Static: if (domType !== 1 /* ELEMENT */) { nextNode = onMismatch(); } else { // determine anchor, adopt content nextNode = node; // if the static vnode has its content stripped during build, // adopt it from the server-rendered HTML. 
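/*
 * Illustrative sketch (not part of the original source): the hydrate() produced
 * here is what a hydration-enabled app entry (e.g. runtime-dom's createSSRApp,
 * assumed here) ends up calling on mount, given server-rendered markup already
 * present in the container:
 *
 *   // server sent: <div id="app"><button>count: 0</button></div>
 *   const { createSSRApp, ref, h } = Vue;
 *   const count = ref(0);
 *   createSSRApp({
 *     render: () => h('button', { onClick: () => count.value++ }, `count: ${count.value}`)
 *   }).mount('#app');
 *   // Matching markup is adopted in place; a text difference would log the
 *   // "Hydration text mismatch" warning above and patch node.data to the client value.
 */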
const needToAdoptContent = !vnode.children.length; for (let i = 0; i < vnode.staticCount; i++) { if (needToAdoptContent) vnode.children += nextNode.outerHTML; if (i === vnode.staticCount - 1) { vnode.anchor = nextNode; } nextNode = nextSibling(nextNode); } return nextNode; } break; case Fragment: if (!isFragmentStart) { nextNode = onMismatch(); } else { nextNode = hydrateFragment(node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized); } break; default: if (shapeFlag & 1 /* ELEMENT */) { if (domType !== 1 /* ELEMENT */ || vnode.type.toLowerCase() !== node.tagName.toLowerCase()) { nextNode = onMismatch(); } else { nextNode = hydrateElement(node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized); } } else if (shapeFlag & 6 /* COMPONENT */) { // when setting up the render effect, if the initial vnode already // has .el set, the component will perform hydration instead of mount // on its sub-tree. vnode.slotScopeIds = slotScopeIds; const container = parentNode(node); const hydrateComponent = () => { mountComponent(vnode, container, null, parentComponent, parentSuspense, isSVGContainer(container), optimized); }; // async component const loadAsync = vnode.type.__asyncLoader; if (loadAsync) { loadAsync().then(hydrateComponent); } else { hydrateComponent(); } // component may be async, so in the case of fragments we cannot rely // on component's rendered output to determine the end of the fragment // instead, we do a lookahead to find the end anchor node. nextNode = isFragmentStart ? locateClosingAsyncAnchor(node) : nextSibling(node); } else if (shapeFlag & 64 /* TELEPORT */) { if (domType !== 8 /* COMMENT */) { nextNode = onMismatch(); } else { nextNode = vnode.type.hydrate(node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized, rendererInternals, hydrateChildren); } } else if (shapeFlag & 128 /* SUSPENSE */) { nextNode = vnode.type.hydrate(node, vnode, parentComponent, parentSuspense, isSVGContainer(parentNode(node)), slotScopeIds, optimized, rendererInternals, hydrateNode); } else { warn('Invalid HostVNode type:', type, `(${typeof type})`); } } if (ref != null) { setRef(ref, null, parentSuspense, vnode); } return nextNode; }; const hydrateElement = (el, vnode, parentComponent, parentSuspense, slotScopeIds, optimized) => { optimized = optimized || !!vnode.dynamicChildren; const { props, patchFlag, shapeFlag, dirs } = vnode; // skip props & children if this is hoisted static nodes if (patchFlag !== -1 /* HOISTED */) { if (dirs) { invokeDirectiveHook(vnode, null, parentComponent, 'created'); } // props if (props) { if (!optimized || (patchFlag & 16 /* FULL_PROPS */ || patchFlag & 32 /* HYDRATE_EVENTS */)) { for (const key in props) { if (!isReservedProp(key) && isOn(key)) { patchProp(el, key, null, props[key]); } } } else if (props.onClick) { // Fast path for click listeners (which is most often) to avoid // iterating through props. 
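/*
 * Aside (not part of the original source): during hydration the server HTML is
 * trusted for attributes, so the prop loop above only re-applies event listeners
 * (isOn keys), and the branch right below handles the common case of a lone
 * onClick without iterating all props. For example, assuming a compiled template
 * `<button @click="inc" :title="tip">...</button>`, the title attribute is left
 * exactly as the server rendered it, while the click listener is attached on the
 * client during this pass.
 */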
patchProp(el, 'onClick', null, props.onClick); } } // vnode / directive hooks let vnodeHooks; if ((vnodeHooks = props && props.onVnodeBeforeMount)) { invokeVNodeHook(vnodeHooks, parentComponent, vnode); } if (dirs) { invokeDirectiveHook(vnode, null, parentComponent, 'beforeMount'); } if ((vnodeHooks = props && props.onVnodeMounted) || dirs) { queueEffectWithSuspense(() => { vnodeHooks && invokeVNodeHook(vnodeHooks, parentComponent, vnode); dirs && invokeDirectiveHook(vnode, null, parentComponent, 'mounted'); }, parentSuspense); } // children if (shapeFlag & 16 /* ARRAY_CHILDREN */ && // skip if element has innerHTML / textContent !(props && (props.innerHTML || props.textContent))) { let next = hydrateChildren(el.firstChild, vnode, el, parentComponent, parentSuspense, slotScopeIds, optimized); let hasWarned = false; while (next) { hasMismatch = true; if (!hasWarned) { warn(`Hydration children mismatch in <${vnode.type}>: ` + `server rendered element contains more child nodes than client vdom.`); hasWarned = true; } // The SSRed DOM contains more nodes than it should. Remove them. const cur = next; next = next.nextSibling; remove(cur); } } else if (shapeFlag & 8 /* TEXT_CHILDREN */) { if (el.textContent !== vnode.children) { hasMismatch = true; warn(`Hydration text content mismatch in <${vnode.type}>:\n` + `- Client: ${el.textContent}\n` + `- Server: ${vnode.children}`); el.textContent = vnode.children; } } } return el.nextSibling; }; const hydrateChildren = (node, parentVNode, container, parentComponent, parentSuspense, slotScopeIds, optimized) => { optimized = optimized || !!parentVNode.dynamicChildren; const children = parentVNode.children; const l = children.length; let hasWarned = false; for (let i = 0; i < l; i++) { const vnode = optimized ? children[i] : (children[i] = normalizeVNode(children[i])); if (node) { node = hydrateNode(node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized); } else if (vnode.type === Text && !vnode.children) { continue; } else { hasMismatch = true; if (!hasWarned) { warn(`Hydration children mismatch in <${container.tagName.toLowerCase()}>: ` + `server rendered element contains fewer child nodes than client vdom.`); hasWarned = true; } // the SSRed DOM didn't contain enough nodes. Mount the missing ones. patch(null, vnode, container, null, parentComponent, parentSuspense, isSVGContainer(container), slotScopeIds); } } return node; }; const hydrateFragment = (node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized) => { const { slotScopeIds: fragmentSlotScopeIds } = vnode; if (fragmentSlotScopeIds) { slotScopeIds = slotScopeIds ? slotScopeIds.concat(fragmentSlotScopeIds) : fragmentSlotScopeIds; } const container = parentNode(node); const next = hydrateChildren(nextSibling(node), vnode, container, parentComponent, parentSuspense, slotScopeIds, optimized); if (next && isComment(next) && next.data === ']') { return nextSibling((vnode.anchor = next)); } else { // fragment didn't hydrate successfully, since we didn't get a end anchor // back. This should have led to node/children mismatch warnings. hasMismatch = true; // since the anchor is missing, we need to create one and insert it insert((vnode.anchor = createComment(`]`)), container, next); return next; } }; const handleMismatch = (node, vnode, parentComponent, parentSuspense, slotScopeIds, isFragment) => { hasMismatch = true; warn(`Hydration node mismatch:\n- Client vnode:`, vnode.type, `\n- Server rendered DOM:`, node, node.nodeType === 3 /* TEXT */ ? 
`(text)` : isComment(node) && node.data === '[' ? `(start of fragment)` : ``); vnode.el = null; if (isFragment) { // remove excessive fragment nodes const end = locateClosingAsyncAnchor(node); while (true) { const next = nextSibling(node); if (next && next !== end) { remove(next); } else { break; } } } const next = nextSibling(node); const container = parentNode(node); remove(node); patch(null, vnode, container, next, parentComponent, parentSuspense, isSVGContainer(container), slotScopeIds); return next; }; const locateClosingAsyncAnchor = (node) => { let match = 0; while (node) { node = nextSibling(node); if (node && isComment(node)) { if (node.data === '[') match++; if (node.data === ']') { if (match === 0) { return nextSibling(node); } else { match--; } } } } return node; }; return [hydrate, hydrateNode]; } let supported; let perf; function startMeasure(instance, type) { if (instance.appContext.config.performance && isSupported()) { perf.mark(`vue-${type}-${instance.uid}`); } } function endMeasure(instance, type) { if (instance.appContext.config.performance && isSupported()) { const startTag = `vue-${type}-${instance.uid}`; const endTag = startTag + `:end`; perf.mark(endTag); perf.measure(`<${formatComponentName(instance, instance.type)}> ${type}`, startTag, endTag); perf.clearMarks(startTag); perf.clearMarks(endTag); } } function isSupported() { if (supported !== undefined) { return supported; } /* eslint-disable no-restricted-globals */ if (typeof window !== 'undefined' && window.performance) { supported = true; perf = window.performance; } else { supported = false; } /* eslint-enable no-restricted-globals */ return supported; } // implementation, close to no-op function defineComponent(options) { return isFunction(options) ? { setup: options, name: options.name } : options; } const isAsyncWrapper = (i) => !!i.type.__asyncLoader; function defineAsyncComponent(source) { if (isFunction(source)) { source = { loader: source }; } const { loader, loadingComponent, errorComponent, delay = 200, timeout, // undefined = never times out suspensible = true, onError: userOnError } = source; let pendingRequest = null; let resolvedComp; let retries = 0; const retry = () => { retries++; pendingRequest = null; return load(); }; const load = () => { let thisRequest; return (pendingRequest || (thisRequest = pendingRequest = loader() .catch(err => { err = err instanceof Error ? err : new Error(String(err)); if (userOnError) { return new Promise((resolve, reject) => { const userRetry = () => resolve(retry()); const userFail = () => reject(err); userOnError(err, userRetry, userFail, retries + 1); }); } else { throw err; } }) .then((comp) => { if (thisRequest !== pendingRequest && pendingRequest) { return pendingRequest; } if (!comp) { warn(`Async component loader resolved to undefined. 
` + `If you are using retry(), make sure to return its return value.`); } // interop module default if (comp && (comp.__esModule || comp[Symbol.toStringTag] === 'Module')) { comp = comp.default; } if (comp && !isObject(comp) && !isFunction(comp)) { throw new Error(`Invalid async component load result: ${comp}`); } resolvedComp = comp; return comp; }))); }; return defineComponent({ __asyncLoader: load, name: 'AsyncComponentWrapper', setup() { const instance = currentInstance; // already resolved if (resolvedComp) { return () => createInnerComp(resolvedComp, instance); } const onError = (err) => { pendingRequest = null; handleError(err, instance, 13 /* ASYNC_COMPONENT_LOADER */, !errorComponent /* do not throw in dev if user provided error component */); }; // suspense-controlled or SSR. if ((suspensible && instance.suspense) || (false )) { return load() .then(comp => { return () => createInnerComp(comp, instance); }) .catch(err => { onError(err); return () => errorComponent ? createVNode(errorComponent, { error: err }) : null; }); } const loaded = ref(false); const error = ref(); const delayed = ref(!!delay); if (delay) { setTimeout(() => { delayed.value = false; }, delay); } if (timeout != null) { setTimeout(() => { if (!loaded.value && !error.value) { const err = new Error(`Async component timed out after ${timeout}ms.`); onError(err); error.value = err; } }, timeout); } load() .then(() => { loaded.value = true; }) .catch(err => { onError(err); error.value = err; }); return () => { if (loaded.value && resolvedComp) { return createInnerComp(resolvedComp, instance); } else if (error.value && errorComponent) { return createVNode(errorComponent, { error: error.value }); } else if (loadingComponent && !delayed.value) { return createVNode(loadingComponent); } }; } }); } function createInnerComp(comp, { vnode: { ref, props, children } }) { const vnode = createVNode(comp, props, children); // ensure inner component inherits the async wrapper's ref owner vnode.ref = ref; return vnode; } function createDevEffectOptions(instance) { return { scheduler: queueJob, allowRecurse: true, onTrack: instance.rtc ? e => invokeArrayFns(instance.rtc, e) : void 0, onTrigger: instance.rtg ? e => invokeArrayFns(instance.rtg, e) : void 0 }; } const queuePostRenderEffect = queueEffectWithSuspense ; const setRef = (rawRef, oldRawRef, parentSuspense, vnode) => { if (isArray(rawRef)) { rawRef.forEach((r, i) => setRef(r, oldRawRef && (isArray(oldRawRef) ? oldRawRef[i] : oldRawRef), parentSuspense, vnode)); return; } let value; if (!vnode) { // means unmount value = null; } else if (isAsyncWrapper(vnode)) { // when mounting async components, nothing needs to be done, // because the template ref is forwarded to inner component return; } else if (vnode.shapeFlag & 4 /* STATEFUL_COMPONENT */) { value = vnode.component.exposed || vnode.component.proxy; } else { value = vnode.el; } const { i: owner, r: ref } = rawRef; if (!owner) { warn(`Missing ref owner context. ref cannot be used on hoisted vnodes. ` + `A vnode with ref must be created inside the render function.`); return; } const oldRef = oldRawRef && oldRawRef.r; const refs = owner.refs === EMPTY_OBJ ? 
(owner.refs = {}) : owner.refs; const setupState = owner.setupState; // unset old ref if (oldRef != null && oldRef !== ref) { if (isString(oldRef)) { refs[oldRef] = null; if (hasOwn(setupState, oldRef)) { setupState[oldRef] = null; } } else if (isRef(oldRef)) { oldRef.value = null; } } if (isString(ref)) { const doSet = () => { refs[ref] = value; if (hasOwn(setupState, ref)) { setupState[ref] = value; } }; // #1789: for non-null values, set them after render // null values means this is unmount and it should not overwrite another // ref with the same key if (value) { doSet.id = -1; queuePostRenderEffect(doSet, parentSuspense); } else { doSet(); } } else if (isRef(ref)) { const doSet = () => { ref.value = value; }; if (value) { doSet.id = -1; queuePostRenderEffect(doSet, parentSuspense); } else { doSet(); } } else if (isFunction(ref)) { callWithErrorHandling(ref, owner, 12 /* FUNCTION_REF */, [value, refs]); } else { warn('Invalid template ref type:', value, `(${typeof value})`); } }; /** * The createRenderer function accepts two generic arguments: * HostNode and HostElement, corresponding to Node and Element types in the * host environment. For example, for runtime-dom, HostNode would be the DOM * `Node` interface and HostElement would be the DOM `Element` interface. * * Custom renderers can pass in the platform specific types like this: * * ``` js * const { render, createApp } = createRenderer<Node, Element>({ * patchProp, * ...nodeOps * }) * ``` */ function createRenderer(options) { return baseCreateRenderer(options); } // Separate API for creating hydration-enabled renderer. // Hydration logic is only used when calling this function, making it // tree-shakable. function createHydrationRenderer(options) { return baseCreateRenderer(options, createHydrationFunctions); } // implementation function baseCreateRenderer(options, createHydrationFns) { { const target = getGlobalThis(); target.__VUE__ = true; setDevtoolsHook(target.__VUE_DEVTOOLS_GLOBAL_HOOK__); } const { insert: hostInsert, remove: hostRemove, patchProp: hostPatchProp, forcePatchProp: hostForcePatchProp, createElement: hostCreateElement, createText: hostCreateText, createComment: hostCreateComment, setText: hostSetText, setElementText: hostSetElementText, parentNode: hostParentNode, nextSibling: hostNextSibling, setScopeId: hostSetScopeId = NOOP, cloneNode: hostCloneNode, insertStaticContent: hostInsertStaticContent } = options; // Note: functions inside this closure should use `const xxx = () => {}` // style in order to prevent being inlined by minifiers. 
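/*
 * Illustrative sketch (not part of the original source): a minimal custom
 * renderer built on createRenderer() above. Only a few host options are shown
 * and the host "document" is an in-memory object tree; a real host must supply
 * the rest of the nodeOps destructured just above (createText, createComment,
 * setText, parentNode, nextSibling, ...).
 *
 *   const { render, createApp } = createRenderer({
 *     createElement: tag => ({ tag, children: [] }),
 *     insert: (child, parent) => { parent.children.push(child); child.parent = parent; },
 *     remove: child => { const c = child.parent.children; c.splice(c.indexOf(child), 1); },
 *     patchProp: (el, key, prev, next) => { el[key] = next; },
 *     setElementText: (el, text) => { el.children = [{ text }]; },
 *     // ...remaining nodeOps omitted in this sketch
 *   });
 */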
const patch = (n1, n2, container, anchor = null, parentComponent = null, parentSuspense = null, isSVG = false, slotScopeIds = null, optimized = false) => { // patching & not same type, unmount old tree if (n1 && !isSameVNodeType(n1, n2)) { anchor = getNextHostNode(n1); unmount(n1, parentComponent, parentSuspense, true); n1 = null; } if (n2.patchFlag === -2 /* BAIL */) { optimized = false; n2.dynamicChildren = null; } const { type, ref, shapeFlag } = n2; switch (type) { case Text: processText(n1, n2, container, anchor); break; case Comment: processCommentNode(n1, n2, container, anchor); break; case Static: if (n1 == null) { mountStaticNode(n2, container, anchor, isSVG); } else { patchStaticNode(n1, n2, container, isSVG); } break; case Fragment: processFragment(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); break; default: if (shapeFlag & 1 /* ELEMENT */) { processElement(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else if (shapeFlag & 6 /* COMPONENT */) { processComponent(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else if (shapeFlag & 64 /* TELEPORT */) { type.process(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, internals); } else if (shapeFlag & 128 /* SUSPENSE */) { type.process(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, internals); } else { warn('Invalid VNode type:', type, `(${typeof type})`); } } // set ref if (ref != null && parentComponent) { setRef(ref, n1 && n1.ref, parentSuspense, n2); } }; const processText = (n1, n2, container, anchor) => { if (n1 == null) { hostInsert((n2.el = hostCreateText(n2.children)), container, anchor); } else { const el = (n2.el = n1.el); if (n2.children !== n1.children) { hostSetText(el, n2.children); } } }; const processCommentNode = (n1, n2, container, anchor) => { if (n1 == null) { hostInsert((n2.el = hostCreateComment(n2.children || '')), container, anchor); } else { // there's no support for dynamic comments n2.el = n1.el; } }; const mountStaticNode = (n2, container, anchor, isSVG) => { [n2.el, n2.anchor] = hostInsertStaticContent(n2.children, container, anchor, isSVG); }; /** * Dev / HMR only */ const patchStaticNode = (n1, n2, container, isSVG) => { // static nodes are only patched during dev for HMR if (n2.children !== n1.children) { const anchor = hostNextSibling(n1.anchor); // remove existing removeStaticNode(n1); [n2.el, n2.anchor] = hostInsertStaticContent(n2.children, container, anchor, isSVG); } else { n2.el = n1.el; n2.anchor = n1.anchor; } }; const moveStaticNode = ({ el, anchor }, container, nextSibling) => { let next; while (el && el !== anchor) { next = hostNextSibling(el); hostInsert(el, container, nextSibling); el = next; } hostInsert(anchor, container, nextSibling); }; const removeStaticNode = ({ el, anchor }) => { let next; while (el && el !== anchor) { next = hostNextSibling(el); hostRemove(el); el = next; } hostRemove(anchor); }; const processElement = (n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { isSVG = isSVG || n2.type === 'svg'; if (n1 == null) { mountElement(n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else { patchElement(n1, n2, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } }; const mountElement = (vnode, container, anchor, parentComponent, parentSuspense, isSVG, 
slotScopeIds, optimized) => { let el; let vnodeHook; const { type, props, shapeFlag, transition, patchFlag, dirs } = vnode; { el = vnode.el = hostCreateElement(vnode.type, isSVG, props && props.is, props); // mount children first, since some props may rely on child content // being already rendered, e.g. `<select value>` if (shapeFlag & 8 /* TEXT_CHILDREN */) { hostSetElementText(el, vnode.children); } else if (shapeFlag & 16 /* ARRAY_CHILDREN */) { mountChildren(vnode.children, el, null, parentComponent, parentSuspense, isSVG && type !== 'foreignObject', slotScopeIds, optimized || !!vnode.dynamicChildren); } if (dirs) { invokeDirectiveHook(vnode, null, parentComponent, 'created'); } // props if (props) { for (const key in props) { if (!isReservedProp(key)) { hostPatchProp(el, key, null, props[key], isSVG, vnode.children, parentComponent, parentSuspense, unmountChildren); } } if ((vnodeHook = props.onVnodeBeforeMount)) { invokeVNodeHook(vnodeHook, parentComponent, vnode); } } // scopeId setScopeId(el, vnode, vnode.scopeId, slotScopeIds, parentComponent); } { Object.defineProperty(el, '__vnode', { value: vnode, enumerable: false }); Object.defineProperty(el, '__vueParentComponent', { value: parentComponent, enumerable: false }); } if (dirs) { invokeDirectiveHook(vnode, null, parentComponent, 'beforeMount'); } // #1583 For inside suspense + suspense not resolved case, enter hook should call when suspense resolved // #1689 For inside suspense + suspense resolved case, just call it const needCallTransitionHooks = (!parentSuspense || (parentSuspense && !parentSuspense.pendingBranch)) && transition && !transition.persisted; if (needCallTransitionHooks) { transition.beforeEnter(el); } hostInsert(el, container, anchor); if ((vnodeHook = props && props.onVnodeMounted) || needCallTransitionHooks || dirs) { queuePostRenderEffect(() => { vnodeHook && invokeVNodeHook(vnodeHook, parentComponent, vnode); needCallTransitionHooks && transition.enter(el); dirs && invokeDirectiveHook(vnode, null, parentComponent, 'mounted'); }, parentSuspense); } }; const setScopeId = (el, vnode, scopeId, slotScopeIds, parentComponent) => { if (scopeId) { hostSetScopeId(el, scopeId); } if (slotScopeIds) { for (let i = 0; i < slotScopeIds.length; i++) { hostSetScopeId(el, slotScopeIds[i]); } } if (parentComponent) { let subTree = parentComponent.subTree; if (subTree.patchFlag > 0 && subTree.patchFlag & 2048 /* DEV_ROOT_FRAGMENT */) { subTree = filterSingleRoot(subTree.children) || subTree; } if (vnode === subTree) { const parentVNode = parentComponent.vnode; setScopeId(el, parentVNode, parentVNode.scopeId, parentVNode.slotScopeIds, parentComponent.parent); } } }; const mountChildren = (children, container, anchor, parentComponent, parentSuspense, isSVG, optimized, slotScopeIds, start = 0) => { for (let i = start; i < children.length; i++) { const child = (children[i] = optimized ? 
cloneIfMounted(children[i]) : normalizeVNode(children[i])); patch(null, child, container, anchor, parentComponent, parentSuspense, isSVG, optimized, slotScopeIds); } }; const patchElement = (n1, n2, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { const el = (n2.el = n1.el); let { patchFlag, dynamicChildren, dirs } = n2; // #1426 take the old vnode's patch flag into account since user may clone a // compiler-generated vnode, which de-opts to FULL_PROPS patchFlag |= n1.patchFlag & 16 /* FULL_PROPS */; const oldProps = n1.props || EMPTY_OBJ; const newProps = n2.props || EMPTY_OBJ; let vnodeHook; if ((vnodeHook = newProps.onVnodeBeforeUpdate)) { invokeVNodeHook(vnodeHook, parentComponent, n2, n1); } if (dirs) { invokeDirectiveHook(n2, n1, parentComponent, 'beforeUpdate'); } if (isHmrUpdating) { // HMR updated, force full diff patchFlag = 0; optimized = false; dynamicChildren = null; } if (patchFlag > 0) { // the presence of a patchFlag means this element's render code was // generated by the compiler and can take the fast path. // in this path old node and new node are guaranteed to have the same shape // (i.e. at the exact same position in the source template) if (patchFlag & 16 /* FULL_PROPS */) { // element props contain dynamic keys, full diff needed patchProps(el, n2, oldProps, newProps, parentComponent, parentSuspense, isSVG); } else { // class // this flag is matched when the element has dynamic class bindings. if (patchFlag & 2 /* CLASS */) { if (oldProps.class !== newProps.class) { hostPatchProp(el, 'class', null, newProps.class, isSVG); } } // style // this flag is matched when the element has dynamic style bindings if (patchFlag & 4 /* STYLE */) { hostPatchProp(el, 'style', oldProps.style, newProps.style, isSVG); } // props // This flag is matched when the element has dynamic prop/attr bindings // other than class and style. The keys of dynamic prop/attrs are saved for // faster iteration. // Note dynamic keys like :[foo]="bar" will cause this optimization to // bail out and go through a full diff because we need to unset the old key if (patchFlag & 8 /* PROPS */) { // if the flag is present then dynamicProps must be non-null const propsToUpdate = n2.dynamicProps; for (let i = 0; i < propsToUpdate.length; i++) { const key = propsToUpdate[i]; const prev = oldProps[key]; const next = newProps[key]; if (next !== prev || (hostForcePatchProp && hostForcePatchProp(el, key))) { hostPatchProp(el, key, prev, next, isSVG, n1.children, parentComponent, parentSuspense, unmountChildren); } } } } // text // This flag is matched when the element has only dynamic text children. 
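/*
 * Aside (not part of the original source): the flags tested above and in the
 * TEXT branch below come from the template compiler. As a rough sketch, a
 * template like `<div :class="cls" :id="theId">{{ msg }}</div>` compiles to a
 * vnode with patchFlag 11 (TEXT | CLASS | PROPS) and dynamicProps ["id"], so
 * this fast path only diffs the class binding, the listed props and the text
 * children instead of doing a full props diff. The exact compiled output shown
 * is an assumption for illustration.
 */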
if (patchFlag & 1 /* TEXT */) { if (n1.children !== n2.children) { hostSetElementText(el, n2.children); } } } else if (!optimized && dynamicChildren == null) { // unoptimized, full diff patchProps(el, n2, oldProps, newProps, parentComponent, parentSuspense, isSVG); } const areChildrenSVG = isSVG && n2.type !== 'foreignObject'; if (dynamicChildren) { patchBlockChildren(n1.dynamicChildren, dynamicChildren, el, parentComponent, parentSuspense, areChildrenSVG, slotScopeIds); if (parentComponent && parentComponent.type.__hmrId) { traverseStaticChildren(n1, n2); } } else if (!optimized) { // full diff patchChildren(n1, n2, el, null, parentComponent, parentSuspense, areChildrenSVG, slotScopeIds, false); } if ((vnodeHook = newProps.onVnodeUpdated) || dirs) { queuePostRenderEffect(() => { vnodeHook && invokeVNodeHook(vnodeHook, parentComponent, n2, n1); dirs && invokeDirectiveHook(n2, n1, parentComponent, 'updated'); }, parentSuspense); } }; // The fast path for blocks. const patchBlockChildren = (oldChildren, newChildren, fallbackContainer, parentComponent, parentSuspense, isSVG, slotScopeIds) => { for (let i = 0; i < newChildren.length; i++) { const oldVNode = oldChildren[i]; const newVNode = newChildren[i]; // Determine the container (parent element) for the patch. const container = // - In the case of a Fragment, we need to provide the actual parent // of the Fragment itself so it can move its children. oldVNode.type === Fragment || // - In the case of different nodes, there is going to be a replacement // which also requires the correct parent container !isSameVNodeType(oldVNode, newVNode) || // - In the case of a component, it could contain anything. oldVNode.shapeFlag & 6 /* COMPONENT */ || oldVNode.shapeFlag & 64 /* TELEPORT */ ? hostParentNode(oldVNode.el) : // In other cases, the parent container is not actually used so we // just pass the block element here to avoid a DOM parentNode call. fallbackContainer; patch(oldVNode, newVNode, container, null, parentComponent, parentSuspense, isSVG, slotScopeIds, true); } }; const patchProps = (el, vnode, oldProps, newProps, parentComponent, parentSuspense, isSVG) => { if (oldProps !== newProps) { for (const key in newProps) { // empty string is not valid prop if (isReservedProp(key)) continue; const next = newProps[key]; const prev = oldProps[key]; if (next !== prev || (hostForcePatchProp && hostForcePatchProp(el, key))) { hostPatchProp(el, key, prev, next, isSVG, vnode.children, parentComponent, parentSuspense, unmountChildren); } } if (oldProps !== EMPTY_OBJ) { for (const key in oldProps) { if (!isReservedProp(key) && !(key in newProps)) { hostPatchProp(el, key, oldProps[key], null, isSVG, vnode.children, parentComponent, parentSuspense, unmountChildren); } } } } }; const processFragment = (n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { const fragmentStartAnchor = (n2.el = n1 ? n1.el : hostCreateText('')); const fragmentEndAnchor = (n2.anchor = n1 ? n1.anchor : hostCreateText('')); let { patchFlag, dynamicChildren, slotScopeIds: fragmentSlotScopeIds } = n2; if (patchFlag > 0) { optimized = true; } // check if this is a slot fragment with :slotted scope ids if (fragmentSlotScopeIds) { slotScopeIds = slotScopeIds ? 
slotScopeIds.concat(fragmentSlotScopeIds) : fragmentSlotScopeIds; } if (isHmrUpdating) { // HMR updated, force full diff patchFlag = 0; optimized = false; dynamicChildren = null; } if (n1 == null) { hostInsert(fragmentStartAnchor, container, anchor); hostInsert(fragmentEndAnchor, container, anchor); // a fragment can only have array children // since they are either generated by the compiler, or implicitly created // from arrays. mountChildren(n2.children, container, fragmentEndAnchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else { if (patchFlag > 0 && patchFlag & 64 /* STABLE_FRAGMENT */ && dynamicChildren && // #2715 the previous fragment could've been a BAILed one as a result // of renderSlot() with no valid children n1.dynamicChildren) { // a stable fragment (template root or <template v-for>) doesn't need to // patch children order, but it may contain dynamicChildren. patchBlockChildren(n1.dynamicChildren, dynamicChildren, container, parentComponent, parentSuspense, isSVG, slotScopeIds); if (parentComponent && parentComponent.type.__hmrId) { traverseStaticChildren(n1, n2); } else if ( // #2080 if the stable fragment has a key, it's a <template v-for> that may // get moved around. Make sure all root level vnodes inherit el. // #2134 or if it's a component root, it may also get moved around // as the component is being moved. n2.key != null || (parentComponent && n2 === parentComponent.subTree)) { traverseStaticChildren(n1, n2, true /* shallow */); } } else { // keyed / unkeyed, or manual fragments. // for keyed & unkeyed, since they are compiler generated from v-for, // each child is guaranteed to be a block so the fragment will never // have dynamicChildren. patchChildren(n1, n2, container, fragmentEndAnchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } } }; const processComponent = (n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { n2.slotScopeIds = slotScopeIds; if (n1 == null) { if (n2.shapeFlag & 512 /* COMPONENT_KEPT_ALIVE */) { parentComponent.ctx.activate(n2, container, anchor, isSVG, optimized); } else { mountComponent(n2, container, anchor, parentComponent, parentSuspense, isSVG, optimized); } } else { updateComponent(n1, n2, optimized); } }; const mountComponent = (initialVNode, container, anchor, parentComponent, parentSuspense, isSVG, optimized) => { const instance = (initialVNode.component = createComponentInstance(initialVNode, parentComponent, parentSuspense)); if (instance.type.__hmrId) { registerHMR(instance); } { pushWarningContext(initialVNode); startMeasure(instance, `mount`); } // inject renderer internals for keepAlive if (isKeepAlive(initialVNode)) { instance.ctx.renderer = internals; } // resolve props and slots for setup context { startMeasure(instance, `init`); } setupComponent(instance); { endMeasure(instance, `init`); } // setup() is async. 
This component relies on async logic to be resolved // before proceeding if (instance.asyncDep) { parentSuspense && parentSuspense.registerDep(instance, setupRenderEffect); // Give it a placeholder if this is not hydration // TODO handle self-defined fallback if (!initialVNode.el) { const placeholder = (instance.subTree = createVNode(Comment)); processCommentNode(null, placeholder, container, anchor); } return; } setupRenderEffect(instance, initialVNode, container, anchor, parentSuspense, isSVG, optimized); { popWarningContext(); endMeasure(instance, `mount`); } }; const updateComponent = (n1, n2, optimized) => { const instance = (n2.component = n1.component); if (shouldUpdateComponent(n1, n2, optimized)) { if (instance.asyncDep && !instance.asyncResolved) { // async & still pending - just update props and slots // since the component's reactive effect for render isn't set-up yet { pushWarningContext(n2); } updateComponentPreRender(instance, n2, optimized); { popWarningContext(); } return; } else { // normal update instance.next = n2; // in case the child component is also queued, remove it to avoid // double updating the same child component in the same flush. invalidateJob(instance.update); // instance.update is the reactive effect runner. instance.update(); } } else { // no update needed. just copy over properties n2.component = n1.component; n2.el = n1.el; instance.vnode = n2; } }; const setupRenderEffect = (instance, initialVNode, container, anchor, parentSuspense, isSVG, optimized) => { // create reactive effect for rendering instance.update = effect(function componentEffect() { if (!instance.isMounted) { let vnodeHook; const { el, props } = initialVNode; const { bm, m, parent } = instance; // beforeMount hook if (bm) { invokeArrayFns(bm); } // onVnodeBeforeMount if ((vnodeHook = props && props.onVnodeBeforeMount)) { invokeVNodeHook(vnodeHook, parent, initialVNode); } // render { startMeasure(instance, `render`); } const subTree = (instance.subTree = renderComponentRoot(instance)); { endMeasure(instance, `render`); } if (el && hydrateNode) { { startMeasure(instance, `hydrate`); } // vnode has adopted host node - perform hydration instead of mount. hydrateNode(initialVNode.el, subTree, instance, parentSuspense, null); { endMeasure(instance, `hydrate`); } } else { { startMeasure(instance, `patch`); } patch(null, subTree, container, anchor, instance, parentSuspense, isSVG); { endMeasure(instance, `patch`); } initialVNode.el = subTree.el; } // mounted hook if (m) { queuePostRenderEffect(m, parentSuspense); } // onVnodeMounted if ((vnodeHook = props && props.onVnodeMounted)) { const scopedInitialVNode = initialVNode; queuePostRenderEffect(() => { invokeVNodeHook(vnodeHook, parent, scopedInitialVNode); }, parentSuspense); } // activated hook for keep-alive roots. 
// #1742 activated hook must be accessed after first render // since the hook may be injected by a child keep-alive const { a } = instance; if (a && initialVNode.shapeFlag & 256 /* COMPONENT_SHOULD_KEEP_ALIVE */) { queuePostRenderEffect(a, parentSuspense); } instance.isMounted = true; { devtoolsComponentAdded(instance); } // #2458: deference mount-only object parameters to prevent memleaks initialVNode = container = anchor = null; } else { // updateComponent // This is triggered by mutation of component's own state (next: null) // OR parent calling processComponent (next: VNode) let { next, bu, u, parent, vnode } = instance; let originNext = next; let vnodeHook; { pushWarningContext(next || instance.vnode); } if (next) { next.el = vnode.el; updateComponentPreRender(instance, next, optimized); } else { next = vnode; } // beforeUpdate hook if (bu) { invokeArrayFns(bu); } // onVnodeBeforeUpdate if ((vnodeHook = next.props && next.props.onVnodeBeforeUpdate)) { invokeVNodeHook(vnodeHook, parent, next, vnode); } // render { startMeasure(instance, `render`); } const nextTree = renderComponentRoot(instance); { endMeasure(instance, `render`); } const prevTree = instance.subTree; instance.subTree = nextTree; { startMeasure(instance, `patch`); } patch(prevTree, nextTree, // parent may have changed if it's in a teleport hostParentNode(prevTree.el), // anchor may have changed if it's in a fragment getNextHostNode(prevTree), instance, parentSuspense, isSVG); { endMeasure(instance, `patch`); } next.el = nextTree.el; if (originNext === null) { // self-triggered update. In case of HOC, update parent component // vnode el. HOC is indicated by parent instance's subTree pointing // to child component's vnode updateHOCHostEl(instance, nextTree.el); } // updated hook if (u) { queuePostRenderEffect(u, parentSuspense); } // onVnodeUpdated if ((vnodeHook = next.props && next.props.onVnodeUpdated)) { queuePostRenderEffect(() => { invokeVNodeHook(vnodeHook, parent, next, vnode); }, parentSuspense); } { devtoolsComponentUpdated(instance); } { popWarningContext(); } } }, createDevEffectOptions(instance) ); }; const updateComponentPreRender = (instance, nextVNode, optimized) => { nextVNode.component = instance; const prevProps = instance.vnode.props; instance.vnode = nextVNode; instance.next = null; updateProps(instance, nextVNode.props, prevProps, optimized); updateSlots(instance, nextVNode.children, optimized); pauseTracking(); // props update may have triggered pre-flush watchers. // flush them before the render update. flushPreFlushCbs(undefined, instance.update); resetTracking(); }; const patchChildren = (n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized = false) => { const c1 = n1 && n1.children; const prevShapeFlag = n1 ? n1.shapeFlag : 0; const c2 = n2.children; const { patchFlag, shapeFlag } = n2; // fast path if (patchFlag > 0) { if (patchFlag & 128 /* KEYED_FRAGMENT */) { // this could be either fully-keyed or mixed (some keyed some not) // presence of patchFlag means children are guaranteed to be arrays patchKeyedChildren(c1, c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); return; } else if (patchFlag & 256 /* UNKEYED_FRAGMENT */) { // unkeyed patchUnkeyedChildren(c1, c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); return; } } // children has 3 possibilities: text, array or no children. 
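/*
 * Aside (not part of the original source): the KEYED_FRAGMENT / UNKEYED_FRAGMENT
 * fast paths taken above are produced by the compiler for v-for blocks. As a
 * sketch, `<li v-for="item in items" :key="item.id">{{ item.text }}</li>` yields
 * a keyed fragment (patchFlag 128) handled by patchKeyedChildren, while the same
 * v-for without :key yields an unkeyed fragment (patchFlag 256) that is patched
 * by index. The example templates are assumptions for illustration.
 */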
if (shapeFlag & 8 /* TEXT_CHILDREN */) { // text children fast path if (prevShapeFlag & 16 /* ARRAY_CHILDREN */) { unmountChildren(c1, parentComponent, parentSuspense); } if (c2 !== c1) { hostSetElementText(container, c2); } } else { if (prevShapeFlag & 16 /* ARRAY_CHILDREN */) { // prev children was array if (shapeFlag & 16 /* ARRAY_CHILDREN */) { // two arrays, cannot assume anything, do full diff patchKeyedChildren(c1, c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else { // no new children, just unmount old unmountChildren(c1, parentComponent, parentSuspense, true); } } else { // prev children was text OR null // new children is array OR null if (prevShapeFlag & 8 /* TEXT_CHILDREN */) { hostSetElementText(container, ''); } // mount new if array if (shapeFlag & 16 /* ARRAY_CHILDREN */) { mountChildren(c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } } } }; const patchUnkeyedChildren = (c1, c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { c1 = c1 || EMPTY_ARR; c2 = c2 || EMPTY_ARR; const oldLength = c1.length; const newLength = c2.length; const commonLength = Math.min(oldLength, newLength); let i; for (i = 0; i < commonLength; i++) { const nextChild = (c2[i] = optimized ? cloneIfMounted(c2[i]) : normalizeVNode(c2[i])); patch(c1[i], nextChild, container, null, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } if (oldLength > newLength) { // remove old unmountChildren(c1, parentComponent, parentSuspense, true, false, commonLength); } else { // mount new mountChildren(c2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, commonLength); } }; // can be all-keyed or mixed const patchKeyedChildren = (c1, c2, container, parentAnchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized) => { let i = 0; const l2 = c2.length; let e1 = c1.length - 1; // prev ending index let e2 = l2 - 1; // next ending index // 1. sync from start // (a b) c // (a b) d e while (i <= e1 && i <= e2) { const n1 = c1[i]; const n2 = (c2[i] = optimized ? cloneIfMounted(c2[i]) : normalizeVNode(c2[i])); if (isSameVNodeType(n1, n2)) { patch(n1, n2, container, null, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else { break; } i++; } // 2. sync from end // a (b c) // d e (b c) while (i <= e1 && i <= e2) { const n1 = c1[e1]; const n2 = (c2[e2] = optimized ? cloneIfMounted(c2[e2]) : normalizeVNode(c2[e2])); if (isSameVNodeType(n1, n2)) { patch(n1, n2, container, null, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } else { break; } e1--; e2--; } // 3. common sequence + mount // (a b) // (a b) c // i = 2, e1 = 1, e2 = 2 // (a b) // c (a b) // i = 0, e1 = -1, e2 = 0 if (i > e1) { if (i <= e2) { const nextPos = e2 + 1; const anchor = nextPos < l2 ? c2[nextPos].el : parentAnchor; while (i <= e2) { patch(null, (c2[i] = optimized ? cloneIfMounted(c2[i]) : normalizeVNode(c2[i])), container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); i++; } } } // 4. common sequence + unmount // (a b) c // (a b) // i = 2, e1 = 2, e2 = 1 // a (b c) // (b c) // i = 0, e1 = 0, e2 = -1 else if (i > e2) { while (i <= e1) { unmount(c1[i], parentComponent, parentSuspense, true); i++; } } // 5. unknown sequence // [i ... e1 + 1]: a b [c d e] f g // [i ... 
// e2 + 1]: a b [e d c h] f g
// i = 2, e1 = 4, e2 = 5
else {
    const s1 = i; // prev starting index
    const s2 = i; // next starting index
    // 5.1 build key:index map for newChildren
    const keyToNewIndexMap = new Map();
    for (i = s2; i <= e2; i++) {
        const nextChild = (c2[i] = optimized ?
            cloneIfMounted(c2[i]) :
            normalizeVNode(c2[i]));
        if (nextChild.key != null) {
            if (keyToNewIndexMap.has(nextChild.key)) {
                warn(`Duplicate keys found during update:`, JSON.stringify(nextChild.key), `Make sure keys are unique.`);
            }
            keyToNewIndexMap.set(nextChild.key, i);
        }
    }
    // 5.2 loop through old children left to be patched and try to patch
    // matching nodes & remove nodes that are no longer present
    let j;
    let patched = 0;
    const toBePatched = e2 - s2 + 1;
    let moved = false;
    // used to track whether any node has moved
    let maxNewIndexSoFar = 0;
    // works as Map<newIndex, oldIndex>
    // Note that oldIndex is offset by +1
    // and oldIndex = 0 is a special value indicating the new node has
    // no corresponding old node.
    // used for determining longest stable subsequence
    const newIndexToOldIndexMap = new Array(toBePatched);
    for (i = 0; i < toBePatched; i++)
        newIndexToOldIndexMap[i] = 0;
    for (i = s1; i <= e1; i++) {
        const prevChild = c1[i];
        if (patched >= toBePatched) {
            // all new children have been patched so this can only be a removal
            unmount(prevChild, parentComponent, parentSuspense, true);
            continue;
        }
        let newIndex;
        if (prevChild.key != null) {
            newIndex = keyToNewIndexMap.get(prevChild.key);
        }
        else {
            // key-less node, try to locate a key-less node of the same type
            for (j = s2; j <= e2; j++) {
                if (newIndexToOldIndexMap[j - s2] === 0 &&
                    isSameVNodeType(prevChild, c2[j])) {
                    newIndex = j;
                    break;
                }
            }
        }
        if (newIndex === undefined) {
            unmount(prevChild, parentComponent, parentSuspense, true);
        }
        else {
            newIndexToOldIndexMap[newIndex - s2] = i + 1;
            if (newIndex >= maxNewIndexSoFar) {
                maxNewIndexSoFar = newIndex;
            }
            else {
                moved = true;
            }
            patch(prevChild, c2[newIndex], container, null, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized);
            patched++;
        }
    }
    // 5.3 move and mount
    // generate longest stable subsequence only when nodes have moved
    const increasingNewIndexSequence = moved ?
        getSequence(newIndexToOldIndexMap) :
        EMPTY_ARR;
    j = increasingNewIndexSequence.length - 1;
    // looping backwards so that we can use last patched node as anchor
    for (i = toBePatched - 1; i >= 0; i--) {
        const nextIndex = s2 + i;
        const nextChild = c2[nextIndex];
        const anchor = nextIndex + 1 < l2 ?
            c2[nextIndex + 1].el :
            parentAnchor;
        if (newIndexToOldIndexMap[i] === 0) {
            // mount new
            patch(null, nextChild, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized);
        }
        else if (moved) {
            // move if:
            // There is no stable subsequence (e.g. a reverse)
            // OR current node is not among the stable sequence
            if (j < 0 || i !== increasingNewIndexSequence[j]) {
                move(nextChild, container, anchor, 2 /* REORDER */);
            }
            else {
                j--;
            }
        }
    }
}
};
const move = (vnode, container, anchor, moveType, parentSuspense = null) => {
    const { el, type, transition, children, shapeFlag } = vnode;
    if (shapeFlag & 6 /* COMPONENT */) {
        move(vnode.component.subTree, container, anchor, moveType);
        return;
    }
    if (shapeFlag & 128 /* SUSPENSE */) {
        vnode.suspense.move(container, anchor, moveType);
        return;
    }
    if (shapeFlag & 64 /* TELEPORT */) {
        type.move(vnode, container, anchor, internals);
        return;
    }
    if (type === Fragment) {
        hostInsert(el, container, anchor);
        for (let i = 0; i < children.length; i++) {
            move(children[i], container, anchor, moveType);
        }
        hostInsert(vnode.anchor, container, anchor);
        return;
    }
    if (type === Static) {
        moveStaticNode(vnode, container, anchor);
        return;
    }
    // single nodes
    const needTransition = moveType !== 2 /* REORDER */ &&
        shapeFlag & 1 /* ELEMENT */ &&
        transition;
    if (needTransition) {
        if (moveType === 0 /* ENTER */) {
            transition.beforeEnter(el);
            hostInsert(el, container, anchor);
            queuePostRenderEffect(() => transition.enter(el), parentSuspense);
        }
        else {
            const { leave, delayLeave, afterLeave } = transition;
            const remove = () => hostInsert(el, container, anchor);
            const performLeave = () => {
                leave(el, () => {
                    remove();
                    afterLeave && afterLeave();
                });
            };
            if (delayLeave) {
                delayLeave(el, remove, performLeave);
            }
            else {
                performLeave();
            }
        }
    }
    else {
        hostInsert(el, container, anchor);
    }
};
const unmount = (vnode, parentComponent, parentSuspense, doRemove = false, optimized = false) => {
    const { type, props, ref, children, dynamicChildren, shapeFlag, patchFlag, dirs } = vnode;
    // unset ref
    if (ref != null) {
        setRef(ref, null, parentSuspense, null);
    }
    if (shapeFlag & 256 /* COMPONENT_SHOULD_KEEP_ALIVE */) {
        parentComponent.ctx.deactivate(vnode);
        return;
    }
    const shouldInvokeDirs = shapeFlag & 1 /* ELEMENT */ && dirs;
    let vnodeHook;
    if ((vnodeHook = props && props.onVnodeBeforeUnmount)) {
        invokeVNodeHook(vnodeHook, parentComponent, vnode);
    }
    if (shapeFlag & 6 /* COMPONENT */) {
        unmountComponent(vnode.component, parentSuspense, doRemove);
    }
    else {
        if (shapeFlag & 128 /* SUSPENSE */) {
            vnode.suspense.unmount(parentSuspense, doRemove);
            return;
        }
        if (shouldInvokeDirs) {
            invokeDirectiveHook(vnode, null, parentComponent, 'beforeUnmount');
        }
        if (shapeFlag & 64 /* TELEPORT */) {
            vnode.type.remove(vnode, parentComponent, parentSuspense, optimized, internals, doRemove);
        }
        else if (dynamicChildren &&
            // #1153: fast path should not be taken for non-stable (v-for) fragments
            (type !== Fragment ||
                (patchFlag > 0 && patchFlag & 64 /* STABLE_FRAGMENT */))) {
            // fast path for block nodes: only need to unmount dynamic children.
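/*
 * Aside (not part of the original source): dynamicChildren is the flattened
 * "block tree" collected during render. As a sketch, a stable template root
 * such as `<div><p>static</p><p>{{ msg }}</p></div>` tracks only the second
 * <p> in its block's dynamicChildren, so the call below can unmount just that
 * dynamic child and skip the purely static nodes, whose DOM is removed along
 * with the block element itself. The template is an assumption for illustration.
 */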
unmountChildren(dynamicChildren, parentComponent, parentSuspense, false, true); } else if ((type === Fragment && (patchFlag & 128 /* KEYED_FRAGMENT */ || patchFlag & 256 /* UNKEYED_FRAGMENT */)) || (!optimized && shapeFlag & 16 /* ARRAY_CHILDREN */)) { unmountChildren(children, parentComponent, parentSuspense); } if (doRemove) { remove(vnode); } } if ((vnodeHook = props && props.onVnodeUnmounted) || shouldInvokeDirs) { queuePostRenderEffect(() => { vnodeHook && invokeVNodeHook(vnodeHook, parentComponent, vnode); shouldInvokeDirs && invokeDirectiveHook(vnode, null, parentComponent, 'unmounted'); }, parentSuspense); } }; const remove = vnode => { const { type, el, anchor, transition } = vnode; if (type === Fragment) { removeFragment(el, anchor); return; } if (type === Static) { removeStaticNode(vnode); return; } const performRemove = () => { hostRemove(el); if (transition && !transition.persisted && transition.afterLeave) { transition.afterLeave(); } }; if (vnode.shapeFlag & 1 /* ELEMENT */ && transition && !transition.persisted) { const { leave, delayLeave } = transition; const performLeave = () => leave(el, performRemove); if (delayLeave) { delayLeave(vnode.el, performRemove, performLeave); } else { performLeave(); } } else { performRemove(); } }; const removeFragment = (cur, end) => { // For fragments, directly remove all contained DOM nodes. // (fragment child nodes cannot have transition) let next; while (cur !== end) { next = hostNextSibling(cur); hostRemove(cur); cur = next; } hostRemove(end); }; const unmountComponent = (instance, parentSuspense, doRemove) => { if (instance.type.__hmrId) { unregisterHMR(instance); } const { bum, effects, update, subTree, um } = instance; // beforeUnmount hook if (bum) { invokeArrayFns(bum); } if (effects) { for (let i = 0; i < effects.length; i++) { stop(effects[i]); } } // update may be null if a component is unmounted before its async // setup has resolved. if (update) { stop(update); unmount(subTree, instance, parentSuspense, doRemove); } // unmounted hook if (um) { queuePostRenderEffect(um, parentSuspense); } queuePostRenderEffect(() => { instance.isUnmounted = true; }, parentSuspense); // A component with async dep inside a pending suspense is unmounted before // its async dep resolves. This should remove the dep from the suspense, and // cause the suspense to resolve immediately if that was the last dep. 
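/*
 * Aside (not part of the original source): the check right below covers a
 * component with an async setup() that is unmounted while its parent
 * <Suspense> is still pending. A sketch of how that situation arises
 * (component and flag names are assumptions):
 *
 *   h(Suspense, null, {
 *     default: () => (show.value ? h(AsyncProfile) : h(EmptyState)),
 *     fallback: () => h('div', 'loading...')
 *   })
 *   // If show flips to false before AsyncProfile's setup() promise settles,
 *   // its dep is subtracted here and the pending suspense can resolve at once.
 */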
if (parentSuspense && parentSuspense.pendingBranch && !parentSuspense.isUnmounted && instance.asyncDep && !instance.asyncResolved && instance.suspenseId === parentSuspense.pendingId) { parentSuspense.deps--; if (parentSuspense.deps === 0) { parentSuspense.resolve(); } } { devtoolsComponentRemoved(instance); } }; const unmountChildren = (children, parentComponent, parentSuspense, doRemove = false, optimized = false, start = 0) => { for (let i = start; i < children.length; i++) { unmount(children[i], parentComponent, parentSuspense, doRemove, optimized); } }; const getNextHostNode = vnode => { if (vnode.shapeFlag & 6 /* COMPONENT */) { return getNextHostNode(vnode.component.subTree); } if (vnode.shapeFlag & 128 /* SUSPENSE */) { return vnode.suspense.next(); } return hostNextSibling((vnode.anchor || vnode.el)); }; const render = (vnode, container, isSVG) => { if (vnode == null) { if (container._vnode) { unmount(container._vnode, null, null, true); } } else { patch(container._vnode || null, vnode, container, null, null, null, isSVG); } flushPostFlushCbs(); container._vnode = vnode; }; const internals = { p: patch, um: unmount, m: move, r: remove, mt: mountComponent, mc: mountChildren, pc: patchChildren, pbc: patchBlockChildren, n: getNextHostNode, o: options }; let hydrate; let hydrateNode; if (createHydrationFns) { [hydrate, hydrateNode] = createHydrationFns(internals); } return { render, hydrate, createApp: createAppAPI(render, hydrate) }; } function invokeVNodeHook(hook, instance, vnode, prevVNode = null) { callWithAsyncErrorHandling(hook, instance, 7 /* VNODE_HOOK */, [ vnode, prevVNode ]); } /** * #1156 * When a component is HMR-enabled, we need to make sure that all static nodes * inside a block also inherit the DOM element from the previous tree so that * HMR updates (which are full updates) can retrieve the element for patching. * * #2080 * Inside keyed `template` fragment static children, if a fragment is moved, * the children will always moved so that need inherit el form previous nodes * to ensure correct moved position. */ function traverseStaticChildren(n1, n2, shallow = false) { const ch1 = n1.children; const ch2 = n2.children; if (isArray(ch1) && isArray(ch2)) { for (let i = 0; i < ch1.length; i++) { // this is only called in the optimized path so array children are // guaranteed to be vnodes const c1 = ch1[i]; let c2 = ch2[i]; if (c2.shapeFlag & 1 /* ELEMENT */ && !c2.dynamicChildren) { if (c2.patchFlag <= 0 || c2.patchFlag === 32 /* HYDRATE_EVENTS */) { c2 = ch2[i] = cloneIfMounted(ch2[i]); c2.el = c1.el; } if (!shallow) traverseStaticChildren(c1, c2); } // also inherit for comment nodes, but not placeholders (e.g. 
v-if which // would have received .el during block patch) if (c2.type === Comment && !c2.el) { c2.el = c1.el; } } } } // https://en.wikipedia.org/wiki/Longest_increasing_subsequence function getSequence(arr) { const p = arr.slice(); const result = [0]; let i, j, u, v, c; const len = arr.length; for (i = 0; i < len; i++) { const arrI = arr[i]; if (arrI !== 0) { j = result[result.length - 1]; if (arr[j] < arrI) { p[i] = j; result.push(i); continue; } u = 0; v = result.length - 1; while (u < v) { c = ((u + v) / 2) | 0; if (arr[result[c]] < arrI) { u = c + 1; } else { v = c; } } if (arrI < arr[result[u]]) { if (u > 0) { p[i] = result[u - 1]; } result[u] = i; } } } u = result.length; v = result[u - 1]; while (u-- > 0) { result[u] = v; v = p[v]; } return result; } const isTeleport = (type) => type.__isTeleport; const isTeleportDisabled = (props) => props && (props.disabled || props.disabled === ''); const isTargetSVG = (target) => typeof SVGElement !== 'undefined' && target instanceof SVGElement; const resolveTarget = (props, select) => { const targetSelector = props && props.to; if (isString(targetSelector)) { if (!select) { warn(`Current renderer does not support string target for Teleports. ` + `(missing querySelector renderer option)`); return null; } else { const target = select(targetSelector); if (!target) { warn(`Failed to locate Teleport target with selector "${targetSelector}". ` + `Note the target element must exist before the component is mounted - ` + `i.e. the target cannot be rendered by the component itself, and ` + `ideally should be outside of the entire Vue component tree.`); } return target; } } else { if (!targetSelector && !isTeleportDisabled(props)) { warn(`Invalid Teleport target: ${targetSelector}`); } return targetSelector; } }; const TeleportImpl = { __isTeleport: true, process(n1, n2, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized, internals) { const { mc: mountChildren, pc: patchChildren, pbc: patchBlockChildren, o: { insert, querySelector, createText, createComment } } = internals; const disabled = isTeleportDisabled(n2.props); const { shapeFlag, children } = n2; // #3302 // HMR updated, force full diff if (isHmrUpdating) { optimized = false; n2.dynamicChildren = null; } if (n1 == null) { // insert anchors in the main view const placeholder = (n2.el = createComment('teleport start') ); const mainAnchor = (n2.anchor = createComment('teleport end') ); insert(placeholder, container, anchor); insert(mainAnchor, container, anchor); const target = (n2.target = resolveTarget(n2.props, querySelector)); const targetAnchor = (n2.targetAnchor = createText('')); if (target) { insert(targetAnchor, target); // #2652 we could be teleporting from a non-SVG tree into an SVG tree isSVG = isSVG || isTargetSVG(target); } else if (!disabled) { warn('Invalid Teleport target on mount:', target, `(${typeof target})`); } const mount = (container, anchor) => { // Teleport *always* has Array children. This is enforced in both the // compiler and vnode children normalization. 
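// (normalizeChildren further down in this file forces Teleport children into an array;
// even a lone text child becomes [createTextVNode(text)], so in practice this
// ARRAY_CHILDREN check holds for any non-empty content)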
if (shapeFlag & 16 /* ARRAY_CHILDREN */) { mountChildren(children, container, anchor, parentComponent, parentSuspense, isSVG, slotScopeIds, optimized); } }; if (disabled) { mount(container, mainAnchor); } else if (target) { mount(target, targetAnchor); } } else { // update content n2.el = n1.el; const mainAnchor = (n2.anchor = n1.anchor); const target = (n2.target = n1.target); const targetAnchor = (n2.targetAnchor = n1.targetAnchor); const wasDisabled = isTeleportDisabled(n1.props); const currentContainer = wasDisabled ? container : target; const currentAnchor = wasDisabled ? mainAnchor : targetAnchor; isSVG = isSVG || isTargetSVG(target); if (n2.dynamicChildren) { // fast path when the teleport happens to be a block root patchBlockChildren(n1.dynamicChildren, n2.dynamicChildren, currentContainer, parentComponent, parentSuspense, isSVG, slotScopeIds); // even in block tree mode we need to make sure all root-level nodes // in the teleport inherit previous DOM references so that they can // be moved in future patches. traverseStaticChildren(n1, n2, true); } else if (!optimized) { patchChildren(n1, n2, currentContainer, currentAnchor, parentComponent, parentSuspense, isSVG, slotScopeIds, false); } if (disabled) { if (!wasDisabled) { // enabled -> disabled // move into main container moveTeleport(n2, container, mainAnchor, internals, 1 /* TOGGLE */); } } else { // target changed if ((n2.props && n2.props.to) !== (n1.props && n1.props.to)) { const nextTarget = (n2.target = resolveTarget(n2.props, querySelector)); if (nextTarget) { moveTeleport(n2, nextTarget, null, internals, 0 /* TARGET_CHANGE */); } else { warn('Invalid Teleport target on update:', target, `(${typeof target})`); } } else if (wasDisabled) { // disabled -> enabled // move into teleport target moveTeleport(n2, target, targetAnchor, internals, 1 /* TOGGLE */); } } } }, remove(vnode, parentComponent, parentSuspense, optimized, { um: unmount, o: { remove: hostRemove } }, doRemove) { const { shapeFlag, children, anchor, targetAnchor, target, props } = vnode; if (target) { hostRemove(targetAnchor); } // an unmounted teleport should always remove its children if not disabled if (doRemove || !isTeleportDisabled(props)) { hostRemove(anchor); if (shapeFlag & 16 /* ARRAY_CHILDREN */) { for (let i = 0; i < children.length; i++) { unmount(children[i], parentComponent, parentSuspense, true, optimized); } } } }, move: moveTeleport, hydrate: hydrateTeleport }; function moveTeleport(vnode, container, parentAnchor, { o: { insert }, m: move }, moveType = 2 /* REORDER */) { // move target anchor if this is a target change. if (moveType === 0 /* TARGET_CHANGE */) { insert(vnode.targetAnchor, container, parentAnchor); } const { el, anchor, shapeFlag, children, props } = vnode; const isReorder = moveType === 2 /* REORDER */; // move main view anchor if this is a re-order. if (isReorder) { insert(el, container, parentAnchor); } // if this is a re-order and teleport is enabled (content is in target) // do not move children. So the opposite is: only move children if this // is not a reorder, or the teleport is disabled if (!isReorder || isTeleportDisabled(props)) { // Teleport has either Array children or no children. if (shapeFlag & 16 /* ARRAY_CHILDREN */) { for (let i = 0; i < children.length; i++) { move(children[i], container, parentAnchor, 2 /* REORDER */); } } } // move main view anchor if this is a re-order. 
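// (the end anchor goes in last, after the start anchor and any moved children,
// so `el`...`anchor` always delimit a contiguous placeholder range in the main view)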
if (isReorder) { insert(anchor, container, parentAnchor); } } function hydrateTeleport(node, vnode, parentComponent, parentSuspense, slotScopeIds, optimized, { o: { nextSibling, parentNode, querySelector } }, hydrateChildren) { const target = (vnode.target = resolveTarget(vnode.props, querySelector)); if (target) { // if multiple teleports rendered to the same target element, we need to // pick up from where the last teleport finished instead of the first node const targetNode = target._lpa || target.firstChild; if (vnode.shapeFlag & 16 /* ARRAY_CHILDREN */) { if (isTeleportDisabled(vnode.props)) { vnode.anchor = hydrateChildren(nextSibling(node), vnode, parentNode(node), parentComponent, parentSuspense, slotScopeIds, optimized); vnode.targetAnchor = targetNode; } else { vnode.anchor = nextSibling(node); vnode.targetAnchor = hydrateChildren(targetNode, vnode, target, parentComponent, parentSuspense, slotScopeIds, optimized); } target._lpa = vnode.targetAnchor && nextSibling(vnode.targetAnchor); } } return vnode.anchor && nextSibling(vnode.anchor); } // Force-casted public typing for h and TSX props inference const Teleport = TeleportImpl; const COMPONENTS = 'components'; const DIRECTIVES = 'directives'; /** * @private */ function resolveComponent(name, maybeSelfReference) { return resolveAsset(COMPONENTS, name, true, maybeSelfReference) || name; } const NULL_DYNAMIC_COMPONENT = Symbol(); /** * @private */ function resolveDynamicComponent(component) { if (isString(component)) { return resolveAsset(COMPONENTS, component, false) || component; } else { // invalid types will fallthrough to createVNode and raise warning return (component || NULL_DYNAMIC_COMPONENT); } } /** * @private */ function resolveDirective(name) { return resolveAsset(DIRECTIVES, name); } // implementation function resolveAsset(type, name, warnMissing = true, maybeSelfReference = false) { const instance = currentRenderingInstance || currentInstance; if (instance) { const Component = instance.type; // explicit self name has highest priority if (type === COMPONENTS) { const selfName = getComponentName(Component); if (selfName && (selfName === name || selfName === camelize(name) || selfName === capitalize(camelize(name)))) { return Component; } } const res = // local registration // check instance[type] first for components with mixin or extends. resolve(instance[type] || Component[type], name) || // global registration resolve(instance.appContext[type], name); if (!res && maybeSelfReference) { // fallback to implicit self-reference return Component; } if (warnMissing && !res) { warn(`Failed to resolve ${type.slice(0, -1)}: ${name}`); } return res; } else { warn(`resolve${capitalize(type.slice(0, -1))} ` + `can only be used in render() or setup().`); } } function resolve(registry, name) { return (registry && (registry[name] || registry[camelize(name)] || registry[capitalize(camelize(name))])); } const Fragment = Symbol('Fragment' ); const Text = Symbol('Text' ); const Comment = Symbol('Comment' ); const Static = Symbol('Static' ); // Since v-if and v-for are the two possible ways node structure can dynamically // change, once we consider v-if branches and each v-for fragment a block, we // can divide a template into nested blocks, and within each block the node // structure would be stable. This allows us to skip most children diffing // and only worry about the dynamic nodes (indicated by patch flags). const blockStack = []; let currentBlock = null; /** * Open a block. * This must be called before `createBlock`. 
It cannot be part of `createBlock` * because the children of the block are evaluated before `createBlock` itself * is called. The generated code typically looks like this: * * ```js * function render() { * return (openBlock(),createBlock('div', null, [...])) * } * ``` * disableTracking is true when creating a v-for fragment block, since a v-for * fragment always diffs its children. * * @private */ function openBlock(disableTracking = false) { blockStack.push((currentBlock = disableTracking ? null : [])); } function closeBlock() { blockStack.pop(); currentBlock = blockStack[blockStack.length - 1] || null; } // Whether we should be tracking dynamic child nodes inside a block. // Only tracks when this value is > 0 // We are not using a simple boolean because this value may need to be // incremented/decremented by nested usage of v-once (see below) let shouldTrack$1 = 1; /** * Block tracking sometimes needs to be disabled, for example during the * creation of a tree that needs to be cached by v-once. The compiler generates * code like this: * * ``` js * _cache[1] || ( * setBlockTracking(-1), * _cache[1] = createVNode(...), * setBlockTracking(1), * _cache[1] * ) * ``` * * @private */ function setBlockTracking(value) { shouldTrack$1 += value; } /** * Create a block root vnode. Takes the same exact arguments as `createVNode`. * A block root keeps track of dynamic nodes within the block in the * `dynamicChildren` array. * * @private */ function createBlock(type, props, children, patchFlag, dynamicProps) { const vnode = createVNode(type, props, children, patchFlag, dynamicProps, true /* isBlock: prevent a block from tracking itself */); // save current block children on the block vnode vnode.dynamicChildren = currentBlock || EMPTY_ARR; // close block closeBlock(); // a block is always going to be patched, so track it as a child of its // parent block if (shouldTrack$1 > 0 && currentBlock) { currentBlock.push(vnode); } return vnode; } function isVNode(value) { return value ? value.__v_isVNode === true : false; } function isSameVNodeType(n1, n2) { if (n2.shapeFlag & 6 /* COMPONENT */ && hmrDirtyComponents.has(n2.type)) { // HMR only: if the component has been hot-updated, force a reload. return false; } return n1.type === n2.type && n1.key === n2.key; } let vnodeArgsTransformer; /** * Internal API for registering an arguments transform for createVNode * used for creating stubs in the test-utils * It is *internal* but needs to be exposed for test-utils to pick up proper * typings */ function transformVNodeArgs(transformer) { vnodeArgsTransformer = transformer; } const createVNodeWithArgsTransform = (...args) => { return _createVNode(...(vnodeArgsTransformer ? vnodeArgsTransformer(args, currentRenderingInstance) : args)); }; const InternalObjectKey = `__vInternal`; const normalizeKey = ({ key }) => key != null ? key : null; const normalizeRef = ({ ref }) => { return (ref != null ? isString(ref) || isRef(ref) || isFunction(ref) ? { i: currentRenderingInstance, r: ref } : ref : null); }; const createVNode = (createVNodeWithArgsTransform ); function _createVNode(type, props = null, children = null, patchFlag = 0, dynamicProps = null, isBlockNode = false) { if (!type || type === NULL_DYNAMIC_COMPONENT) { if (!type) { warn(`Invalid vnode type when creating vnode: ${type}.`); } type = Comment; } if (isVNode(type)) { // createVNode receiving an existing vnode. 
This happens in cases like // <component :is="vnode"/> // #2078 make sure to merge refs during the clone instead of overwriting it const cloned = cloneVNode(type, props, true /* mergeRef: true */); if (children) { normalizeChildren(cloned, children); } return cloned; } // class component normalization. if (isClassComponent(type)) { type = type.__vccOpts; } // class & style normalization. if (props) { // for reactive or proxy objects, we need to clone it to enable mutation. if (isProxy(props) || InternalObjectKey in props) { props = extend({}, props); } let { class: klass, style } = props; if (klass && !isString(klass)) { props.class = normalizeClass(klass); } if (isObject(style)) { // reactive state objects need to be cloned since they are likely to be // mutated if (isProxy(style) && !isArray(style)) { style = extend({}, style); } props.style = normalizeStyle(style); } } // encode the vnode type information into a bitmap const shapeFlag = isString(type) ? 1 /* ELEMENT */ : isSuspense(type) ? 128 /* SUSPENSE */ : isTeleport(type) ? 64 /* TELEPORT */ : isObject(type) ? 4 /* STATEFUL_COMPONENT */ : isFunction(type) ? 2 /* FUNCTIONAL_COMPONENT */ : 0; if (shapeFlag & 4 /* STATEFUL_COMPONENT */ && isProxy(type)) { type = toRaw(type); warn(`Vue received a Component which was made a reactive object. This can ` + `lead to unnecessary performance overhead, and should be avoided by ` + `marking the component with \`markRaw\` or using \`shallowRef\` ` + `instead of \`ref\`.`, `\nComponent that was made reactive: `, type); } const vnode = { __v_isVNode: true, ["__v_skip" /* SKIP */]: true, type, props, key: props && normalizeKey(props), ref: props && normalizeRef(props), scopeId: currentScopeId, slotScopeIds: null, children: null, component: null, suspense: null, ssContent: null, ssFallback: null, dirs: null, transition: null, el: null, anchor: null, target: null, targetAnchor: null, staticCount: 0, shapeFlag, patchFlag, dynamicProps, dynamicChildren: null, appContext: null }; // validate key if (vnode.key !== vnode.key) { warn(`VNode created with invalid key (NaN). VNode type:`, vnode.type); } normalizeChildren(vnode, children); // normalize suspense children if (shapeFlag & 128 /* SUSPENSE */) { const { content, fallback } = normalizeSuspenseChildren(vnode); vnode.ssContent = content; vnode.ssFallback = fallback; } if (shouldTrack$1 > 0 && // avoid a block node from tracking itself !isBlockNode && // has current parent block currentBlock && // presence of a patch flag indicates this node needs patching on updates. // component nodes also should always be patched, because even if the // component doesn't need to update, it needs to persist the instance on to // the next vnode so that it can be properly unmounted later. (patchFlag > 0 || shapeFlag & 6 /* COMPONENT */) && // the EVENTS flag is only for hydration and if it is the only flag, the // vnode should not be considered dynamic due to handler caching. patchFlag !== 32 /* HYDRATE_EVENTS */) { currentBlock.push(vnode); } return vnode; } function cloneVNode(vnode, extraProps, mergeRef = false) { // This is intentionally NOT using spread or extend to avoid the runtime // key enumeration cost. const { props, ref, patchFlag, children } = vnode; const mergedProps = extraProps ? mergeProps(props || {}, extraProps) : props; return { __v_isVNode: true, ["__v_skip" /* SKIP */]: true, type: vnode.type, props: mergedProps, key: mergedProps && normalizeKey(mergedProps), ref: extraProps && extraProps.ref ? 
// #2078 in the case of <component :is="vnode" ref="extra"/> // if the vnode itself already has a ref, cloneVNode will need to merge // the refs so the single vnode can be set on multiple refs mergeRef && ref ? isArray(ref) ? ref.concat(normalizeRef(extraProps)) : [ref, normalizeRef(extraProps)] : normalizeRef(extraProps) : ref, scopeId: vnode.scopeId, slotScopeIds: vnode.slotScopeIds, children: patchFlag === -1 /* HOISTED */ && isArray(children) ? children.map(deepCloneVNode) : children, target: vnode.target, targetAnchor: vnode.targetAnchor, staticCount: vnode.staticCount, shapeFlag: vnode.shapeFlag, // if the vnode is cloned with extra props, we can no longer assume its // existing patch flag to be reliable and need to add the FULL_PROPS flag. // note: perserve flag for fragments since they use the flag for children // fast paths only. patchFlag: extraProps && vnode.type !== Fragment ? patchFlag === -1 // hoisted node ? 16 /* FULL_PROPS */ : patchFlag | 16 /* FULL_PROPS */ : patchFlag, dynamicProps: vnode.dynamicProps, dynamicChildren: vnode.dynamicChildren, appContext: vnode.appContext, dirs: vnode.dirs, transition: vnode.transition, // These should technically only be non-null on mounted VNodes. However, // they *should* be copied for kept-alive vnodes. So we just always copy // them since them being non-null during a mount doesn't affect the logic as // they will simply be overwritten. component: vnode.component, suspense: vnode.suspense, ssContent: vnode.ssContent && cloneVNode(vnode.ssContent), ssFallback: vnode.ssFallback && cloneVNode(vnode.ssFallback), el: vnode.el, anchor: vnode.anchor }; } /** * Dev only, for HMR of hoisted vnodes reused in v-for * https://github.com/vitejs/vite/issues/2022 */ function deepCloneVNode(vnode) { const cloned = cloneVNode(vnode); if (isArray(vnode.children)) { cloned.children = vnode.children.map(deepCloneVNode); } return cloned; } /** * @private */ function createTextVNode(text = ' ', flag = 0) { return createVNode(Text, null, text, flag); } /** * @private */ function createStaticVNode(content, numberOfNodes) { // A static vnode can contain multiple stringified elements, and the number // of elements is necessary for hydration. const vnode = createVNode(Static, null, content); vnode.staticCount = numberOfNodes; return vnode; } /** * @private */ function createCommentVNode(text = '', // when used as the v-else branch, the comment node must be created as a // block to ensure correct updates. asBlock = false) { return asBlock ? (openBlock(), createBlock(Comment, null, text)) : createVNode(Comment, null, text); } function normalizeVNode(child) { if (child == null || typeof child === 'boolean') { // empty placeholder return createVNode(Comment); } else if (isArray(child)) { // fragment return createVNode(Fragment, null, child); } else if (typeof child === 'object') { // already vnode, this should be the most common since compiled templates // always produce all-vnode children arrays return child.el === null ? child : cloneVNode(child); } else { // strings and numbers return createVNode(Text, null, String(child)); } } // optimized normalization for template-compiled render fns function cloneIfMounted(child) { return child.el === null ? 
child : cloneVNode(child); } function normalizeChildren(vnode, children) { let type = 0; const { shapeFlag } = vnode; if (children == null) { children = null; } else if (isArray(children)) { type = 16 /* ARRAY_CHILDREN */; } else if (typeof children === 'object') { if (shapeFlag & 1 /* ELEMENT */ || shapeFlag & 64 /* TELEPORT */) { // Normalize slot to plain children for plain element and Teleport const slot = children.default; if (slot) { // _c marker is added by withCtx() indicating this is a compiled slot slot._c && setCompiledSlotRendering(1); normalizeChildren(vnode, slot()); slot._c && setCompiledSlotRendering(-1); } return; } else { type = 32 /* SLOTS_CHILDREN */; const slotFlag = children._; if (!slotFlag && !(InternalObjectKey in children)) { children._ctx = currentRenderingInstance; } else if (slotFlag === 3 /* FORWARDED */ && currentRenderingInstance) { // a child component receives forwarded slots from the parent. // its slot type is determined by its parent's slot type. if (currentRenderingInstance.vnode.patchFlag & 1024 /* DYNAMIC_SLOTS */) { children._ = 2 /* DYNAMIC */; vnode.patchFlag |= 1024 /* DYNAMIC_SLOTS */; } else { children._ = 1 /* STABLE */; } } } } else if (isFunction(children)) { children = { default: children, _ctx: currentRenderingInstance }; type = 32 /* SLOTS_CHILDREN */; } else { children = String(children); // force teleport children to array so it can be moved around if (shapeFlag & 64 /* TELEPORT */) { type = 16 /* ARRAY_CHILDREN */; children = [createTextVNode(children)]; } else { type = 8 /* TEXT_CHILDREN */; } } vnode.children = children; vnode.shapeFlag |= type; } function mergeProps(...args) { const ret = extend({}, args[0]); for (let i = 1; i < args.length; i++) { const toMerge = args[i]; for (const key in toMerge) { if (key === 'class') { if (ret.class !== toMerge.class) { ret.class = normalizeClass([ret.class, toMerge.class]); } } else if (key === 'style') { ret.style = normalizeStyle([ret.style, toMerge.style]); } else if (isOn(key)) { const existing = ret[key]; const incoming = toMerge[key]; if (existing !== incoming) { ret[key] = existing ? [].concat(existing, toMerge[key]) : incoming; } } else if (key !== '') { ret[key] = toMerge[key]; } } } return ret; } function provide(key, value) { if (!currentInstance) { { warn(`provide() can only be used inside setup().`); } } else { let provides = currentInstance.provides; // by default an instance inherits its parent's provides object // but when it needs to provide values of its own, it creates its // own provides object using parent provides object as prototype. // this way in `inject` we can simply look up injections from direct // parent and let the prototype chain do the work. const parentProvides = currentInstance.parent && currentInstance.parent.provides; if (parentProvides === provides) { provides = currentInstance.provides = Object.create(parentProvides); } // TS doesn't allow symbol as index type provides[key] = value; } } function inject(key, defaultValue, treatDefaultAsFactory = false) { // fallback to `currentRenderingInstance` so that this can be called in // a functional component const instance = currentInstance || currentRenderingInstance; if (instance) { // #2400 // to support `app.use` plugins, // fallback to appContext's `provides` if the intance is at root const provides = instance.parent == null ? 
instance.vnode.appContext && instance.vnode.appContext.provides : instance.parent.provides; if (provides && key in provides) { // TS doesn't allow symbol as index type return provides[key]; } else if (arguments.length > 1) { return treatDefaultAsFactory && isFunction(defaultValue) ? defaultValue() : defaultValue; } else { warn(`injection "${String(key)}" not found.`); } } else { warn(`inject() can only be used inside setup() or functional components.`); } } function createDuplicateChecker() { const cache = Object.create(null); return (type, key) => { if (cache[key]) { warn(`${type} property "${key}" is already defined in ${cache[key]}.`); } else { cache[key] = type; } }; } let shouldCacheAccess = true; function applyOptions(instance, options, deferredData = [], deferredWatch = [], deferredProvide = [], asMixin = false) { const { // composition mixins, extends: extendsOptions, // state data: dataOptions, computed: computedOptions, methods, watch: watchOptions, provide: provideOptions, inject: injectOptions, // assets components, directives, // lifecycle beforeMount, mounted, beforeUpdate, updated, activated, deactivated, beforeDestroy, beforeUnmount, destroyed, unmounted, render, renderTracked, renderTriggered, errorCaptured, // public API expose } = options; const publicThis = instance.proxy; const ctx = instance.ctx; const globalMixins = instance.appContext.mixins; if (asMixin && render && instance.render === NOOP) { instance.render = render; } // applyOptions is called non-as-mixin once per instance if (!asMixin) { shouldCacheAccess = false; callSyncHook('beforeCreate', "bc" /* BEFORE_CREATE */, options, instance, globalMixins); shouldCacheAccess = true; // global mixins are applied first applyMixins(instance, globalMixins, deferredData, deferredWatch, deferredProvide); } // extending a base component... 
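// (extends is applied before local mixins and the component's own options, so the
// usual precedence (own options over mixins over extends over global mixins)
// falls out of the application order, mirroring Vue 2)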
if (extendsOptions) { applyOptions(instance, extendsOptions, deferredData, deferredWatch, deferredProvide, true); } // local mixins if (mixins) { applyMixins(instance, mixins, deferredData, deferredWatch, deferredProvide); } const checkDuplicateProperties = createDuplicateChecker() ; { const [propsOptions] = instance.propsOptions; if (propsOptions) { for (const key in propsOptions) { checkDuplicateProperties("Props" /* PROPS */, key); } } } // options initialization order (to be consistent with Vue 2): // - props (already done outside of this function) // - inject // - methods // - data (deferred since it relies on `this` access) // - computed // - watch (deferred since it relies on `this` access) if (injectOptions) { if (isArray(injectOptions)) { for (let i = 0; i < injectOptions.length; i++) { const key = injectOptions[i]; ctx[key] = inject(key); { checkDuplicateProperties("Inject" /* INJECT */, key); } } } else { for (const key in injectOptions) { const opt = injectOptions[key]; if (isObject(opt)) { ctx[key] = inject(opt.from || key, opt.default, true /* treat default function as factory */); } else { ctx[key] = inject(opt); } { checkDuplicateProperties("Inject" /* INJECT */, key); } } } } if (methods) { for (const key in methods) { const methodHandler = methods[key]; if (isFunction(methodHandler)) { // In dev mode, we use the `createRenderContext` function to define methods to the proxy target, // and those are read-only but reconfigurable, so it needs to be redefined here { Object.defineProperty(ctx, key, { value: methodHandler.bind(publicThis), configurable: true, enumerable: true, writable: true }); } { checkDuplicateProperties("Methods" /* METHODS */, key); } } else { warn(`Method "${key}" has type "${typeof methodHandler}" in the component definition. ` + `Did you reference the function correctly?`); } } } if (!asMixin) { if (deferredData.length) { deferredData.forEach(dataFn => resolveData(instance, dataFn, publicThis)); } if (dataOptions) { // @ts-ignore dataOptions is not fully type safe resolveData(instance, dataOptions, publicThis); } { const rawData = toRaw(instance.data); for (const key in rawData) { checkDuplicateProperties("Data" /* DATA */, key); // expose data on ctx during dev if (key[0] !== '$' && key[0] !== '_') { Object.defineProperty(ctx, key, { configurable: true, enumerable: true, get: () => rawData[key], set: NOOP }); } } } } else if (dataOptions) { deferredData.push(dataOptions); } if (computedOptions) { for (const key in computedOptions) { const opt = computedOptions[key]; const get = isFunction(opt) ? opt.bind(publicThis, publicThis) : isFunction(opt.get) ? opt.get.bind(publicThis, publicThis) : NOOP; if (get === NOOP) { warn(`Computed property "${key}" has no getter.`); } const set = !isFunction(opt) && isFunction(opt.set) ? 
opt.set.bind(publicThis) : () => { warn(`Write operation failed: computed property "${key}" is readonly.`); } ; const c = computed$1({ get, set }); Object.defineProperty(ctx, key, { enumerable: true, configurable: true, get: () => c.value, set: v => (c.value = v) }); { checkDuplicateProperties("Computed" /* COMPUTED */, key); } } } if (watchOptions) { deferredWatch.push(watchOptions); } if (!asMixin && deferredWatch.length) { deferredWatch.forEach(watchOptions => { for (const key in watchOptions) { createWatcher(watchOptions[key], ctx, publicThis, key); } }); } if (provideOptions) { deferredProvide.push(provideOptions); } if (!asMixin && deferredProvide.length) { deferredProvide.forEach(provideOptions => { const provides = isFunction(provideOptions) ? provideOptions.call(publicThis) : provideOptions; Reflect.ownKeys(provides).forEach(key => { provide(key, provides[key]); }); }); } // asset options. // To reduce memory usage, only components with mixins or extends will have // resolved asset registry attached to instance. if (asMixin) { if (components) { extend(instance.components || (instance.components = extend({}, instance.type.components)), components); } if (directives) { extend(instance.directives || (instance.directives = extend({}, instance.type.directives)), directives); } } // lifecycle options if (!asMixin) { callSyncHook('created', "c" /* CREATED */, options, instance, globalMixins); } if (beforeMount) { onBeforeMount(beforeMount.bind(publicThis)); } if (mounted) { onMounted(mounted.bind(publicThis)); } if (beforeUpdate) { onBeforeUpdate(beforeUpdate.bind(publicThis)); } if (updated) { onUpdated(updated.bind(publicThis)); } if (activated) { onActivated(activated.bind(publicThis)); } if (deactivated) { onDeactivated(deactivated.bind(publicThis)); } if (errorCaptured) { onErrorCaptured(errorCaptured.bind(publicThis)); } if (renderTracked) { onRenderTracked(renderTracked.bind(publicThis)); } if (renderTriggered) { onRenderTriggered(renderTriggered.bind(publicThis)); } if (beforeDestroy) { warn(`\`beforeDestroy\` has been renamed to \`beforeUnmount\`.`); } if (beforeUnmount) { onBeforeUnmount(beforeUnmount.bind(publicThis)); } if (destroyed) { warn(`\`destroyed\` has been renamed to \`unmounted\`.`); } if (unmounted) { onUnmounted(unmounted.bind(publicThis)); } if (isArray(expose)) { if (!asMixin) { if (expose.length) { const exposed = instance.exposed || (instance.exposed = proxyRefs({})); expose.forEach(key => { exposed[key] = toRef(publicThis, key); }); } else if (!instance.exposed) { instance.exposed = EMPTY_OBJ; } } else { warn(`The \`expose\` option is ignored when used in mixins.`); } } } function callSyncHook(name, type, options, instance, globalMixins) { for (let i = 0; i < globalMixins.length; i++) { callHookWithMixinAndExtends(name, type, globalMixins[i], instance); } callHookWithMixinAndExtends(name, type, options, instance); } function callHookWithMixinAndExtends(name, type, options, instance) { const { extends: base, mixins } = options; const selfHook = options[name]; if (base) { callHookWithMixinAndExtends(name, type, base, instance); } if (mixins) { for (let i = 0; i < mixins.length; i++) { callHookWithMixinAndExtends(name, type, mixins[i], instance); } } if (selfHook) { callWithAsyncErrorHandling(selfHook.bind(instance.proxy), instance, type); } } function applyMixins(instance, mixins, deferredData, deferredWatch, deferredProvide) { for (let i = 0; i < mixins.length; i++) { applyOptions(instance, mixins[i], deferredData, deferredWatch, deferredProvide, true); } } 
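/**
 * Illustration only (not part of the runtime): a 2.x-style options component such as
 * the made-up `Counter` below is what applyOptions() above digests: its `data`
 * function is resolved by resolveData() right below, `computed` entries by
 * computed$1(), and `watch` entries by createWatcher().
 *
 * ```js
 * const Counter = {
 *   data() {
 *     return { count: 0 };
 *   },
 *   computed: {
 *     double() {
 *       return this.count * 2;
 *     }
 *   },
 *   watch: {
 *     count(next, prev) {
 *       console.log(`count went from ${prev} to ${next}`);
 *     }
 *   },
 *   template: `<button @click="count++">{{ count }} / {{ double }}</button>`
 * };
 * ```
 */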
function resolveData(instance, dataFn, publicThis) { if (!isFunction(dataFn)) { warn(`The data option must be a function. ` + `Plain object usage is no longer supported.`); } shouldCacheAccess = false; const data = dataFn.call(publicThis, publicThis); shouldCacheAccess = true; if (isPromise(data)) { warn(`data() returned a Promise - note data() cannot be async; If you ` + `intend to perform data fetching before component renders, use ` + `async setup() + <Suspense>.`); } if (!isObject(data)) { warn(`data() should return an object.`); } else if (instance.data === EMPTY_OBJ) { instance.data = reactive(data); } else { // existing data: this is a mixin or extends. extend(instance.data, data); } } function createWatcher(raw, ctx, publicThis, key) { const getter = key.includes('.') ? createPathGetter(publicThis, key) : () => publicThis[key]; if (isString(raw)) { const handler = ctx[raw]; if (isFunction(handler)) { watch(getter, handler); } else { warn(`Invalid watch handler specified by key "${raw}"`, handler); } } else if (isFunction(raw)) { watch(getter, raw.bind(publicThis)); } else if (isObject(raw)) { if (isArray(raw)) { raw.forEach(r => createWatcher(r, ctx, publicThis, key)); } else { const handler = isFunction(raw.handler) ? raw.handler.bind(publicThis) : ctx[raw.handler]; if (isFunction(handler)) { watch(getter, handler, raw); } else { warn(`Invalid watch handler specified by key "${raw.handler}"`, handler); } } } else { warn(`Invalid watch option: "${key}"`, raw); } } function createPathGetter(ctx, path) { const segments = path.split('.'); return () => { let cur = ctx; for (let i = 0; i < segments.length && cur; i++) { cur = cur[segments[i]]; } return cur; }; } function resolveMergedOptions(instance) { const raw = instance.type; const { __merged, mixins, extends: extendsOptions } = raw; if (__merged) return __merged; const globalMixins = instance.appContext.mixins; if (!globalMixins.length && !mixins && !extendsOptions) return raw; const options = {}; globalMixins.forEach(m => mergeOptions(options, m, instance)); mergeOptions(options, raw, instance); return (raw.__merged = options); } function mergeOptions(to, from, instance) { const strats = instance.appContext.config.optionMergeStrategies; const { mixins, extends: extendsOptions } = from; extendsOptions && mergeOptions(to, extendsOptions, instance); mixins && mixins.forEach((m) => mergeOptions(to, m, instance)); for (const key in from) { if (strats && hasOwn(strats, key)) { to[key] = strats[key](to[key], from[key], instance.proxy, key); } else { to[key] = from[key]; } } } /** * #2437 In Vue 3, functional components do not have a public instance proxy but * they exist in the internal parent chain. For code that relies on traversing * public $parent chains, skip functional ones and go to the parent instead. */ const getPublicInstance = (i) => { if (!i) return null; if (isStatefulComponent(i)) return i.exposed ? 
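// prefer the component's explicit expose() proxy when present, otherwise fall back to the full public proxy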
i.exposed : i.proxy; return getPublicInstance(i.parent); }; const publicPropertiesMap = extend(Object.create(null), { $: i => i, $el: i => i.vnode.el, $data: i => i.data, $props: i => (shallowReadonly(i.props) ), $attrs: i => (shallowReadonly(i.attrs) ), $slots: i => (shallowReadonly(i.slots) ), $refs: i => (shallowReadonly(i.refs) ), $parent: i => getPublicInstance(i.parent), $root: i => getPublicInstance(i.root), $emit: i => i.emit, $options: i => (resolveMergedOptions(i) ), $forceUpdate: i => () => queueJob(i.update), $nextTick: i => nextTick.bind(i.proxy), $watch: i => (instanceWatch.bind(i) ) }); const PublicInstanceProxyHandlers = { get({ _: instance }, key) { const { ctx, setupState, data, props, accessCache, type, appContext } = instance; // let @vue/reactivity know it should never observe Vue public instances. if (key === "__v_skip" /* SKIP */) { return true; } // for internal formatters to know that this is a Vue instance if (key === '__isVue') { return true; } // data / props / ctx // This getter gets called for every property access on the render context // during render and is a major hotspot. The most expensive part of this // is the multiple hasOwn() calls. It's much faster to do a simple property // access on a plain object, so we use an accessCache object (with null // prototype) to memoize what access type a key corresponds to. let normalizedProps; if (key[0] !== '$') { const n = accessCache[key]; if (n !== undefined) { switch (n) { case 0 /* SETUP */: return setupState[key]; case 1 /* DATA */: return data[key]; case 3 /* CONTEXT */: return ctx[key]; case 2 /* PROPS */: return props[key]; // default: just fallthrough } } else if (setupState !== EMPTY_OBJ && hasOwn(setupState, key)) { accessCache[key] = 0 /* SETUP */; return setupState[key]; } else if (data !== EMPTY_OBJ && hasOwn(data, key)) { accessCache[key] = 1 /* DATA */; return data[key]; } else if ( // only cache other properties when instance has declared (thus stable) // props (normalizedProps = instance.propsOptions[0]) && hasOwn(normalizedProps, key)) { accessCache[key] = 2 /* PROPS */; return props[key]; } else if (ctx !== EMPTY_OBJ && hasOwn(ctx, key)) { accessCache[key] = 3 /* CONTEXT */; return ctx[key]; } else if (shouldCacheAccess) { accessCache[key] = 4 /* OTHER */; } } const publicGetter = publicPropertiesMap[key]; let cssModule, globalProperties; // public $xxx properties if (publicGetter) { if (key === '$attrs') { track(instance, "get" /* GET */, key); markAttrsAccessed(); } return publicGetter(instance); } else if ( // css module (injected by vue-loader) (cssModule = type.__cssModules) && (cssModule = cssModule[key])) { return cssModule; } else if (ctx !== EMPTY_OBJ && hasOwn(ctx, key)) { // user may set custom properties to `this` that start with `$` accessCache[key] = 3 /* CONTEXT */; return ctx[key]; } else if ( // global properties ((globalProperties = appContext.config.globalProperties), hasOwn(globalProperties, key))) { return globalProperties[key]; } else if (currentRenderingInstance && (!isString(key) || // #1091 avoid internal isRef/isVNode checks on component instance leading // to infinite warning loop key.indexOf('__v') !== 0)) { if (data !== EMPTY_OBJ && (key[0] === '$' || key[0] === '_') && hasOwn(data, key)) { warn(`Property ${JSON.stringify(key)} must be accessed via $data because it starts with a reserved ` + `character ("$" or "_") and is not proxied on the render context.`); } else if (instance === currentRenderingInstance) { warn(`Property ${JSON.stringify(key)} was accessed during 
render ` + `but is not defined on instance.`); } } }, set({ _: instance }, key, value) { const { data, setupState, ctx } = instance; if (setupState !== EMPTY_OBJ && hasOwn(setupState, key)) { setupState[key] = value; } else if (data !== EMPTY_OBJ && hasOwn(data, key)) { data[key] = value; } else if (hasOwn(instance.props, key)) { warn(`Attempting to mutate prop "${key}". Props are readonly.`, instance); return false; } if (key[0] === '$' && key.slice(1) in instance) { warn(`Attempting to mutate public property "${key}". ` + `Properties starting with $ are reserved and readonly.`, instance); return false; } else { if (key in instance.appContext.config.globalProperties) { Object.defineProperty(ctx, key, { enumerable: true, configurable: true, value }); } else { ctx[key] = value; } } return true; }, has({ _: { data, setupState, accessCache, ctx, appContext, propsOptions } }, key) { let normalizedProps; return (accessCache[key] !== undefined || (data !== EMPTY_OBJ && hasOwn(data, key)) || (setupState !== EMPTY_OBJ && hasOwn(setupState, key)) || ((normalizedProps = propsOptions[0]) && hasOwn(normalizedProps, key)) || hasOwn(ctx, key) || hasOwn(publicPropertiesMap, key) || hasOwn(appContext.config.globalProperties, key)); } }; { PublicInstanceProxyHandlers.ownKeys = (target) => { warn(`Avoid app logic that relies on enumerating keys on a component instance. ` + `The keys will be empty in production mode to avoid performance overhead.`); return Reflect.ownKeys(target); }; } const RuntimeCompiledPublicInstanceProxyHandlers = extend({}, PublicInstanceProxyHandlers, { get(target, key) { // fast path for unscopables when using `with` block if (key === Symbol.unscopables) { return; } return PublicInstanceProxyHandlers.get(target, key, target); }, has(_, key) { const has = key[0] !== '_' && !isGloballyWhitelisted(key); if (!has && PublicInstanceProxyHandlers.has(_, key)) { warn(`Property ${JSON.stringify(key)} should not start with _ which is a reserved prefix for Vue internals.`); } return has; } }); // In dev mode, the proxy target exposes the same properties as seen on `this` // for easier console inspection. In prod mode it will be an empty object so // these properties definitions can be skipped. 
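// (each key is defined below as a lazy getter delegating to publicPropertiesMap or
// appContext.config.globalProperties, so console inspection always sees live values)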
function createRenderContext(instance) { const target = {}; // expose internal instance for proxy handlers Object.defineProperty(target, `_`, { configurable: true, enumerable: false, get: () => instance }); // expose public properties Object.keys(publicPropertiesMap).forEach(key => { Object.defineProperty(target, key, { configurable: true, enumerable: false, get: () => publicPropertiesMap[key](instance), // intercepted by the proxy so no need for implementation, // but needed to prevent set errors set: NOOP }); }); // expose global properties const { globalProperties } = instance.appContext.config; Object.keys(globalProperties).forEach(key => { Object.defineProperty(target, key, { configurable: true, enumerable: false, get: () => globalProperties[key], set: NOOP }); }); return target; } // dev only function exposePropsOnRenderContext(instance) { const { ctx, propsOptions: [propsOptions] } = instance; if (propsOptions) { Object.keys(propsOptions).forEach(key => { Object.defineProperty(ctx, key, { enumerable: true, configurable: true, get: () => instance.props[key], set: NOOP }); }); } } // dev only function exposeSetupStateOnRenderContext(instance) { const { ctx, setupState } = instance; Object.keys(toRaw(setupState)).forEach(key => { if (key[0] === '$' || key[0] === '_') { warn(`setup() return property ${JSON.stringify(key)} should not start with "$" or "_" ` + `which are reserved prefixes for Vue internals.`); return; } Object.defineProperty(ctx, key, { enumerable: true, configurable: true, get: () => setupState[key], set: NOOP }); }); } const emptyAppContext = createAppContext(); let uid$2 = 0; function createComponentInstance(vnode, parent, suspense) { const type = vnode.type; // inherit parent app context - or - if root, adopt from root vnode const appContext = (parent ? parent.appContext : vnode.appContext) || emptyAppContext; const instance = { uid: uid$2++, vnode, type, parent, appContext, root: null, next: null, subTree: null, update: null, render: null, proxy: null, exposed: null, withProxy: null, effects: null, provides: parent ? parent.provides : Object.create(appContext.provides), accessCache: null, renderCache: [], // local resovled assets components: null, directives: null, // resolved props and emits options propsOptions: normalizePropsOptions(type, appContext), emitsOptions: normalizeEmitsOptions(type, appContext), // emit emit: null, emitted: null, // props default value propsDefaults: EMPTY_OBJ, // state ctx: EMPTY_OBJ, data: EMPTY_OBJ, props: EMPTY_OBJ, attrs: EMPTY_OBJ, slots: EMPTY_OBJ, refs: EMPTY_OBJ, setupState: EMPTY_OBJ, setupContext: null, // suspense related suspense, suspenseId: suspense ? suspense.pendingId : 0, asyncDep: null, asyncResolved: false, // lifecycle hooks // not using enums here because it results in computed properties isMounted: false, isUnmounted: false, isDeactivated: false, bc: null, c: null, bm: null, m: null, bu: null, u: null, um: null, bum: null, da: null, a: null, rtg: null, rtc: null, ec: null }; { instance.ctx = createRenderContext(instance); } instance.root = parent ? 
parent.root : instance; instance.emit = emit.bind(null, instance); return instance; } let currentInstance = null; const getCurrentInstance = () => currentInstance || currentRenderingInstance; const setCurrentInstance = (instance) => { currentInstance = instance; }; const isBuiltInTag = /*#__PURE__*/ makeMap('slot,component'); function validateComponentName(name, config) { const appIsNativeTag = config.isNativeTag || NO; if (isBuiltInTag(name) || appIsNativeTag(name)) { warn('Do not use built-in or reserved HTML elements as component id: ' + name); } } function isStatefulComponent(instance) { return instance.vnode.shapeFlag & 4 /* STATEFUL_COMPONENT */; } let isInSSRComponentSetup = false; function setupComponent(instance, isSSR = false) { isInSSRComponentSetup = isSSR; const { props, children } = instance.vnode; const isStateful = isStatefulComponent(instance); initProps(instance, props, isStateful, isSSR); initSlots(instance, children); const setupResult = isStateful ? setupStatefulComponent(instance, isSSR) : undefined; isInSSRComponentSetup = false; return setupResult; } function setupStatefulComponent(instance, isSSR) { const Component = instance.type; { if (Component.name) { validateComponentName(Component.name, instance.appContext.config); } if (Component.components) { const names = Object.keys(Component.components); for (let i = 0; i < names.length; i++) { validateComponentName(names[i], instance.appContext.config); } } if (Component.directives) { const names = Object.keys(Component.directives); for (let i = 0; i < names.length; i++) { validateDirectiveName(names[i]); } } } // 0. create render proxy property access cache instance.accessCache = Object.create(null); // 1. create public instance / render proxy // also mark it raw so it's never observed instance.proxy = new Proxy(instance.ctx, PublicInstanceProxyHandlers); { exposePropsOnRenderContext(instance); } // 2. call setup() const { setup } = Component; if (setup) { const setupContext = (instance.setupContext = setup.length > 1 ? createSetupContext(instance) : null); currentInstance = instance; pauseTracking(); const setupResult = callWithErrorHandling(setup, instance, 0 /* SETUP_FUNCTION */, [shallowReadonly(instance.props) , setupContext]); resetTracking(); currentInstance = null; if (isPromise(setupResult)) { if (isSSR) { // return the promise so server-renderer can wait on it return setupResult .then((resolvedResult) => { handleSetupResult(instance, resolvedResult, isSSR); }) .catch(e => { handleError(e, instance, 0 /* SETUP_FUNCTION */); }); } else { // async setup returned Promise. // bail here and wait for re-entry. instance.asyncDep = setupResult; } } else { handleSetupResult(instance, setupResult, isSSR); } } else { finishComponentSetup(instance, isSSR); } } function handleSetupResult(instance, setupResult, isSSR) { if (isFunction(setupResult)) { // setup returned an inline render function { instance.render = setupResult; } } else if (isObject(setupResult)) { if (isVNode(setupResult)) { warn(`setup() should not return VNodes directly - ` + `return a render function instead.`); } // setup returned bindings. // assuming a render function compiled from template is present. { instance.devtoolsRawSetupState = setupResult; } instance.setupState = proxyRefs(setupResult); { exposeSetupStateOnRenderContext(instance); } } else if (setupResult !== undefined) { warn(`setup() should return an object. Received: ${setupResult === null ? 
'null' : typeof setupResult}`); } finishComponentSetup(instance, isSSR); } let compile; // dev only const isRuntimeOnly = () => !compile; /** * For runtime-dom to register the compiler. * Note the exported method uses any to avoid d.ts relying on the compiler types. */ function registerRuntimeCompiler(_compile) { compile = _compile; } function finishComponentSetup(instance, isSSR) { const Component = instance.type; // template / render function normalization if (!instance.render) { // could be set from setup() if (compile && Component.template && !Component.render) { { startMeasure(instance, `compile`); } Component.render = compile(Component.template, { isCustomElement: instance.appContext.config.isCustomElement, delimiters: Component.delimiters }); { endMeasure(instance, `compile`); } } instance.render = (Component.render || NOOP); // for runtime-compiled render functions using `with` blocks, the render // proxy used needs a different `has` handler which is more performant and // also only allows a whitelist of globals to fallthrough. if (instance.render._rc) { instance.withProxy = new Proxy(instance.ctx, RuntimeCompiledPublicInstanceProxyHandlers); } } // support for 2.x options { currentInstance = instance; pauseTracking(); applyOptions(instance, Component); resetTracking(); currentInstance = null; } // warn missing template/render // the runtime compilation of template in SSR is done by server-render if (!Component.render && instance.render === NOOP && !isSSR) { /* istanbul ignore if */ if (!compile && Component.template) { warn(`Component provided template option but ` + `runtime compilation is not supported in this build of Vue.` + (` Use "vue.global.js" instead.` ) /* should not happen */); } else { warn(`Component is missing template or render function.`); } } } const attrHandlers = { get: (target, key) => { { markAttrsAccessed(); } return target[key]; }, set: () => { warn(`setupContext.attrs is readonly.`); return false; }, deleteProperty: () => { warn(`setupContext.attrs is readonly.`); return false; } }; function createSetupContext(instance) { const expose = exposed => { if (instance.exposed) { warn(`expose() should be called only once per setup().`); } instance.exposed = proxyRefs(exposed); }; { // We use getters in dev in case libs like test-utils overwrite instance // properties (overwrites should not be done in prod) return Object.freeze({ get attrs() { return new Proxy(instance.attrs, attrHandlers); }, get slots() { return shallowReadonly(instance.slots); }, get emit() { return (event, ...args) => instance.emit(event, ...args); }, expose }); } } // record effects created during a component's setup() so that they can be // stopped when the component unmounts function recordInstanceBoundEffect(effect, instance = currentInstance) { if (instance) { (instance.effects || (instance.effects = [])).push(effect); } } const classifyRE = /(?:^|[-_])(\w)/g; const classify = (str) => str.replace(classifyRE, c => c.toUpperCase()).replace(/[-_]/g, ''); function getComponentName(Component) { return isFunction(Component) ? 
Component.displayName || Component.name : Component.name; } /* istanbul ignore next */ function formatComponentName(instance, Component, isRoot = false) { let name = getComponentName(Component); if (!name && Component.__file) { const match = Component.__file.match(/([^/\\]+)\.\w+$/); if (match) { name = match[1]; } } if (!name && instance && instance.parent) { // try to infer the name based on reverse resolution const inferFromRegistry = (registry) => { for (const key in registry) { if (registry[key] === Component) { return key; } } }; name = inferFromRegistry(instance.components || instance.parent.type.components) || inferFromRegistry(instance.appContext.components); } return name ? classify(name) : isRoot ? `App` : `Anonymous`; } function isClassComponent(value) { return isFunction(value) && '__vccOpts' in value; } function computed$1(getterOrOptions) { const c = computed(getterOrOptions); recordInstanceBoundEffect(c.effect); return c; } // implementation function defineProps() { { warn(`defineProps() is a compiler-hint helper that is only usable inside ` + `<script setup> of a single file component. Its arguments should be ` + `compiled away and passing it at runtime has no effect.`); } return null; } // implementation function defineEmit() { { warn(`defineEmit() is a compiler-hint helper that is only usable inside ` + `<script setup> of a single file component. Its arguments should be ` + `compiled away and passing it at runtime has no effect.`); } return null; } function useContext() { const i = getCurrentInstance(); if (!i) { warn(`useContext() called without active instance.`); } return i.setupContext || (i.setupContext = createSetupContext(i)); } // Actual implementation function h(type, propsOrChildren, children) { const l = arguments.length; if (l === 2) { if (isObject(propsOrChildren) && !isArray(propsOrChildren)) { // single vnode without props if (isVNode(propsOrChildren)) { return createVNode(type, null, [propsOrChildren]); } // props without children return createVNode(type, propsOrChildren); } else { // omit props return createVNode(type, null, propsOrChildren); } } else { if (l > 3) { children = Array.prototype.slice.call(arguments, 2); } else if (l === 3 && isVNode(children)) { children = [children]; } return createVNode(type, propsOrChildren, children); } } const ssrContextKey = Symbol(`ssrContext` ); const useSSRContext = () => { { warn(`useSSRContext() is not supported in the global build.`); } }; function initCustomFormatter() { /* eslint-disable no-restricted-globals */ if (typeof window === 'undefined') { return; } const vueStyle = { style: 'color:#3ba776' }; const numberStyle = { style: 'color:#0b1bc9' }; const stringStyle = { style: 'color:#b62e24' }; const keywordStyle = { style: 'color:#9d288c' }; // custom formatter for Chrome // https://www.mattzeunert.com/2016/02/19/custom-chrome-devtools-object-formatters.html const formatter = { header(obj) { // TODO also format ComponentPublicInstance & ctx.slots/attrs in setup if (!isObject(obj)) { return null; } if (obj.__isVue) { return ['div', vueStyle, `VueInstance`]; } else if (isRef(obj)) { return [ 'div', {}, ['span', vueStyle, genRefFlag(obj)], '<', formatValue(obj.value), `>` ]; } else if (isReactive(obj)) { return [ 'div', {}, ['span', vueStyle, 'Reactive'], '<', formatValue(obj), `>${isReadonly(obj) ? 
` (readonly)` : ``}` ]; } else if (isReadonly(obj)) { return [ 'div', {}, ['span', vueStyle, 'Readonly'], '<', formatValue(obj), '>' ]; } return null; }, hasBody(obj) { return obj && obj.__isVue; }, body(obj) { if (obj && obj.__isVue) { return [ 'div', {}, ...formatInstance(obj.$) ]; } } }; function formatInstance(instance) { const blocks = []; if (instance.type.props && instance.props) { blocks.push(createInstanceBlock('props', toRaw(instance.props))); } if (instance.setupState !== EMPTY_OBJ) { blocks.push(createInstanceBlock('setup', instance.setupState)); } if (instance.data !== EMPTY_OBJ) { blocks.push(createInstanceBlock('data', toRaw(instance.data))); } const computed = extractKeys(instance, 'computed'); if (computed) { blocks.push(createInstanceBlock('computed', computed)); } const injected = extractKeys(instance, 'inject'); if (injected) { blocks.push(createInstanceBlock('injected', injected)); } blocks.push([ 'div', {}, [ 'span', { style: keywordStyle.style + ';opacity:0.66' }, '$ (internal): ' ], ['object', { object: instance }] ]); return blocks; } function createInstanceBlock(type, target) { target = extend({}, target); if (!Object.keys(target).length) { return ['span', {}]; } return [ 'div', { style: 'line-height:1.25em;margin-bottom:0.6em' }, [ 'div', { style: 'color:#476582' }, type ], [ 'div', { style: 'padding-left:1.25em' }, ...Object.keys(target).map(key => { return [ 'div', {}, ['span', keywordStyle, key + ': '], formatValue(target[key], false) ]; }) ] ]; } function formatValue(v, asRaw = true) { if (typeof v === 'number') { return ['span', numberStyle, v]; } else if (typeof v === 'string') { return ['span', stringStyle, JSON.stringify(v)]; } else if (typeof v === 'boolean') { return ['span', keywordStyle, v]; } else if (isObject(v)) { return ['object', { object: asRaw ? 
toRaw(v) : v }]; } else { return ['span', stringStyle, String(v)]; } } function extractKeys(instance, type) { const Comp = instance.type; if (isFunction(Comp)) { return; } const extracted = {}; for (const key in instance.ctx) { if (isKeyOfType(Comp, key, type)) { extracted[key] = instance.ctx[key]; } } return extracted; } function isKeyOfType(Comp, key, type) { const opts = Comp[type]; if ((isArray(opts) && opts.includes(key)) || (isObject(opts) && key in opts)) { return true; } if (Comp.extends && isKeyOfType(Comp.extends, key, type)) { return true; } if (Comp.mixins && Comp.mixins.some(m => isKeyOfType(m, key, type))) { return true; } } function genRefFlag(v) { if (v._shallow) { return `ShallowRef`; } if (v.effect) { return `ComputedRef`; } return `Ref`; } if (window.devtoolsFormatters) { window.devtoolsFormatters.push(formatter); } else { window.devtoolsFormatters = [formatter]; } } /** * Actual implementation */ function renderList(source, renderItem) { let ret; if (isArray(source) || isString(source)) { ret = new Array(source.length); for (let i = 0, l = source.length; i < l; i++) { ret[i] = renderItem(source[i], i); } } else if (typeof source === 'number') { if (!Number.isInteger(source)) { warn(`The v-for range expect an integer value but got ${source}.`); return []; } ret = new Array(source); for (let i = 0; i < source; i++) { ret[i] = renderItem(i + 1, i); } } else if (isObject(source)) { if (source[Symbol.iterator]) { ret = Array.from(source, renderItem); } else { const keys = Object.keys(source); ret = new Array(keys.length); for (let i = 0, l = keys.length; i < l; i++) { const key = keys[i]; ret[i] = renderItem(source[key], key, i); } } } else { ret = []; } return ret; } /** * For prefixing keys in v-on="obj" with "on" * @private */ function toHandlers(obj) { const ret = {}; if (!isObject(obj)) { warn(`v-on with no argument expects an object value.`); return ret; } for (const key in obj) { ret[toHandlerKey(key)] = obj[key]; } return ret; } /** * Compiler runtime helper for creating dynamic slots object * @private */ function createSlots(slots, dynamicSlots) { for (let i = 0; i < dynamicSlots.length; i++) { const slot = dynamicSlots[i]; // array of dynamic slot generated by <template v-for="..." #[...]> if (isArray(slot)) { for (let j = 0; j < slot.length; j++) { slots[slot[j].name] = slot[j].fn; } } else if (slot) { // conditional single slot generated by <template v-if="..." #foo> slots[slot.name] = slot.fn; } } return slots; } // Core API ------------------------------------------------------------------ const version = "3.0.11"; /** * SSR utils for \@vue/server-renderer. Only exposed in cjs builds. * @internal */ const ssrUtils = (null); const svgNS = 'http://www.w3.org/2000/svg'; const doc = (typeof document !== 'undefined' ? document : null); let tempContainer; let tempSVGContainer; const nodeOps = { insert: (child, parent, anchor) => { parent.insertBefore(child, anchor || null); }, remove: child => { const parent = child.parentNode; if (parent) { parent.removeChild(child); } }, createElement: (tag, isSVG, is, props) => { const el = isSVG ? doc.createElementNS(svgNS, tag) : doc.createElement(tag, is ? 
{ is } : undefined); if (tag === 'select' && props && props.multiple != null) { el.setAttribute('multiple', props.multiple); } return el; }, createText: text => doc.createTextNode(text), createComment: text => doc.createComment(text), setText: (node, text) => { node.nodeValue = text; }, setElementText: (el, text) => { el.textContent = text; }, parentNode: node => node.parentNode, nextSibling: node => node.nextSibling, querySelector: selector => doc.querySelector(selector), setScopeId(el, id) { el.setAttribute(id, ''); }, cloneNode(el) { const cloned = el.cloneNode(true); // #3072 // - in `patchDOMProp`, we store the actual value in the `el._value` property. // - normally, elements using `:value` bindings will not be hoisted, but if // the bound value is a constant, e.g. `:value="true"` - they do get // hoisted. // - in production, hoisted nodes are cloned when subsequent inserts, but // cloneNode() does not copy the custom property we attached. // - This may need to account for other custom DOM properties we attach to // elements in addition to `_value` in the future. if (`_value` in el) { cloned._value = el._value; } return cloned; }, // __UNSAFE__ // Reason: innerHTML. // Static content here can only come from compiled templates. // As long as the user only uses trusted templates, this is safe. insertStaticContent(content, parent, anchor, isSVG) { const temp = isSVG ? tempSVGContainer || (tempSVGContainer = doc.createElementNS(svgNS, 'svg')) : tempContainer || (tempContainer = doc.createElement('div')); temp.innerHTML = content; const first = temp.firstChild; let node = first; let last = node; while (node) { last = node; nodeOps.insert(node, parent, anchor); node = temp.firstChild; } return [first, last]; } }; // compiler should normalize class + :class bindings on the same element // into a single binding ['staticClass', dynamic] function patchClass(el, value, isSVG) { if (value == null) { value = ''; } if (isSVG) { el.setAttribute('class', value); } else { // directly setting className should be faster than setAttribute in theory // if this is an element during a transition, take the temporary transition // classes into account. const transitionClasses = el._vtc; if (transitionClasses) { value = (value ? [value, ...transitionClasses] : [...transitionClasses]).join(' '); } el.className = value; } } function patchStyle(el, prev, next) { const style = el.style; if (!next) { el.removeAttribute('style'); } else if (isString(next)) { if (prev !== next) { const current = style.display; style.cssText = next; // indicates that the `display` of the element is controlled by `v-show`, // so we always keep the current `display` value regardless of the `style` value, // thus handing over control to `v-show`. 
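// Editor's note (illustrative sketch, not part of the original source): the `_vod`
// marker checked below is set by the `vShow` directive later in this file, so a string
// `:style` binding cannot clobber the display value that `v-show` owns. Roughly, for a
// hypothetical template like
//   <div v-show="visible" :style="'color: red; display: block'">
// patchStyle writes the cssText and then restores the `display` that v-show last set,
// keeping visibility under v-show's control rather than the bound style string's.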
if ('_vod' in el) { style.display = current; } } } else { for (const key in next) { setStyle(style, key, next[key]); } if (prev && !isString(prev)) { for (const key in prev) { if (next[key] == null) { setStyle(style, key, ''); } } } } } const importantRE = /\s*!important$/; function setStyle(style, name, val) { if (isArray(val)) { val.forEach(v => setStyle(style, name, v)); } else { if (name.startsWith('--')) { // custom property definition style.setProperty(name, val); } else { const prefixed = autoPrefix(style, name); if (importantRE.test(val)) { // !important style.setProperty(hyphenate(prefixed), val.replace(importantRE, ''), 'important'); } else { style[prefixed] = val; } } } } const prefixes = ['Webkit', 'Moz', 'ms']; const prefixCache = {}; function autoPrefix(style, rawName) { const cached = prefixCache[rawName]; if (cached) { return cached; } let name = camelize(rawName); if (name !== 'filter' && name in style) { return (prefixCache[rawName] = name); } name = capitalize(name); for (let i = 0; i < prefixes.length; i++) { const prefixed = prefixes[i] + name; if (prefixed in style) { return (prefixCache[rawName] = prefixed); } } return rawName; } const xlinkNS = 'http://www.w3.org/1999/xlink'; function patchAttr(el, key, value, isSVG) { if (isSVG && key.startsWith('xlink:')) { if (value == null) { el.removeAttributeNS(xlinkNS, key.slice(6, key.length)); } else { el.setAttributeNS(xlinkNS, key, value); } } else { // note we are only checking boolean attributes that don't have a // corresponding dom prop of the same name here. const isBoolean = isSpecialBooleanAttr(key); if (value == null || (isBoolean && value === false)) { el.removeAttribute(key); } else { el.setAttribute(key, isBoolean ? '' : value); } } } // __UNSAFE__ // functions. The user is responsible for using them with only trusted content. function patchDOMProp(el, key, value, // the following args are passed only due to potential innerHTML/textContent // overriding existing VNodes, in which case the old tree must be properly // unmounted. prevChildren, parentComponent, parentSuspense, unmountChildren) { if (key === 'innerHTML' || key === 'textContent') { if (prevChildren) { unmountChildren(prevChildren, parentComponent, parentSuspense); } el[key] = value == null ? '' : value; return; } if (key === 'value' && el.tagName !== 'PROGRESS') { // store value as _value as well since // non-string values will be stringified. el._value = value; const newValue = value == null ? '' : value; if (el.value !== newValue) { el.value = newValue; } return; } if (value === '' || value == null) { const type = typeof el[key]; if (value === '' && type === 'boolean') { // e.g. <select multiple> compiles to { multiple: '' } el[key] = true; return; } else if (value == null && type === 'string') { // e.g. <div :id="null"> el[key] = ''; el.removeAttribute(key); return; } else if (type === 'number') { // e.g. <img :width="null"> el[key] = 0; el.removeAttribute(key); return; } } // some properties perform value validation and throw try { el[key] = value; } catch (e) { { warn(`Failed setting prop "${key}" on <${el.tagName.toLowerCase()}>: ` + `value ${value} is invalid.`, e); } } } // Async edge case fix requires storing an event listener's attach timestamp. let _getNow = Date.now; let skipTimestampCheck = false; if (typeof window !== 'undefined') { // Determine what event timestamp the browser is using. 
Annoyingly, the // timestamp can either be hi-res (relative to page load) or low-res // (relative to UNIX epoch), so in order to compare time we have to use the // same timestamp type when saving the flush timestamp. if (_getNow() > document.createEvent('Event').timeStamp) { // if the low-res timestamp which is bigger than the event timestamp // (which is evaluated AFTER) it means the event is using a hi-res timestamp, // and we need to use the hi-res version for event listeners as well. _getNow = () => performance.now(); } // #3485: Firefox <= 53 has incorrect Event.timeStamp implementation // and does not fire microtasks in between event propagation, so safe to exclude. const ffMatch = navigator.userAgent.match(/firefox\/(\d+)/i); skipTimestampCheck = !!(ffMatch && Number(ffMatch[1]) <= 53); } // To avoid the overhead of repeatedly calling performance.now(), we cache // and use the same timestamp for all event listeners attached in the same tick. let cachedNow = 0; const p = Promise.resolve(); const reset = () => { cachedNow = 0; }; const getNow = () => cachedNow || (p.then(reset), (cachedNow = _getNow())); function addEventListener(el, event, handler, options) { el.addEventListener(event, handler, options); } function removeEventListener(el, event, handler, options) { el.removeEventListener(event, handler, options); } function patchEvent(el, rawName, prevValue, nextValue, instance = null) { // vei = vue event invokers const invokers = el._vei || (el._vei = {}); const existingInvoker = invokers[rawName]; if (nextValue && existingInvoker) { // patch existingInvoker.value = nextValue; } else { const [name, options] = parseName(rawName); if (nextValue) { // add const invoker = (invokers[rawName] = createInvoker(nextValue, instance)); addEventListener(el, name, invoker, options); } else if (existingInvoker) { // remove removeEventListener(el, name, existingInvoker, options); invokers[rawName] = undefined; } } } const optionsModifierRE = /(?:Once|Passive|Capture)$/; function parseName(name) { let options; if (optionsModifierRE.test(name)) { options = {}; let m; while ((m = name.match(optionsModifierRE))) { name = name.slice(0, name.length - m[0].length); options[m[0].toLowerCase()] = true; } } return [hyphenate(name.slice(2)), options]; } function createInvoker(initialValue, instance) { const invoker = (e) => { // async edge case #6566: inner click event triggers patch, event handler // attached to outer element during patch, and triggered again. This // happens because browsers fire microtask ticks between event propagation. // the solution is simple: we save the timestamp when a handler is attached, // and the handler would only fire if the event passed to it was fired // AFTER it was attached. 
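// Editor's note (illustrative sketch, not part of the original source): the guard below
// covers the case where an inner element's click handler mutates state, the resulting
// patch attaches a *new* click listener to an ancestor, and the original click is still
// bubbling. Since `invoker.attached` is recorded after that event was fired, the check
// `timeStamp >= invoker.attached - 1` is expected to fail for the in-flight event, so
// the freshly attached handler does not run for the very click that created it
// (modulo the 1 ms tolerance).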
const timeStamp = e.timeStamp || _getNow(); if (skipTimestampCheck || timeStamp >= invoker.attached - 1) { callWithAsyncErrorHandling(patchStopImmediatePropagation(e, invoker.value), instance, 5 /* NATIVE_EVENT_HANDLER */, [e]); } }; invoker.value = initialValue; invoker.attached = getNow(); return invoker; } function patchStopImmediatePropagation(e, value) { if (isArray(value)) { const originalStop = e.stopImmediatePropagation; e.stopImmediatePropagation = () => { originalStop.call(e); e._stopped = true; }; return value.map(fn => (e) => !e._stopped && fn(e)); } else { return value; } } const nativeOnRE = /^on[a-z]/; const forcePatchProp = (_, key) => key === 'value'; const patchProp = (el, key, prevValue, nextValue, isSVG = false, prevChildren, parentComponent, parentSuspense, unmountChildren) => { switch (key) { // special case 'class': patchClass(el, nextValue, isSVG); break; case 'style': patchStyle(el, prevValue, nextValue); break; default: if (isOn(key)) { // ignore v-model listeners if (!isModelListener(key)) { patchEvent(el, key, prevValue, nextValue, parentComponent); } } else if (shouldSetAsProp(el, key, nextValue, isSVG)) { patchDOMProp(el, key, nextValue, prevChildren, parentComponent, parentSuspense, unmountChildren); } else { // special case for <input v-model type="checkbox"> with // :true-value & :false-value // store value as dom properties since non-string values will be // stringified. if (key === 'true-value') { el._trueValue = nextValue; } else if (key === 'false-value') { el._falseValue = nextValue; } patchAttr(el, key, nextValue, isSVG); } break; } }; function shouldSetAsProp(el, key, value, isSVG) { if (isSVG) { // most keys must be set as attribute on svg elements to work // ...except innerHTML if (key === 'innerHTML') { return true; } // or native onclick with function values if (key in el && nativeOnRE.test(key) && isFunction(value)) { return true; } return false; } // spellcheck and draggable are numerated attrs, however their // corresponding DOM properties are actually booleans - this leads to // setting it with a string "false" value leading it to be coerced to // `true`, so we need to always treat them as attributes. // Note that `contentEditable` doesn't have this problem: its DOM // property is also enumerated string values. if (key === 'spellcheck' || key === 'draggable') { return false; } // #1787, #2840 form property on form elements is readonly and must be set as // attribute. if (key === 'form') { return false; } // #1526 <input list> must be set as attribute if (key === 'list' && el.tagName === 'INPUT') { return false; } // #2766 <textarea type> must be set as attribute if (key === 'type' && el.tagName === 'TEXTAREA') { return false; } // native onclick with string value, must be set as attribute if (nativeOnRE.test(key) && isString(value)) { return false; } return key in el; } function useCssModule(name = '$style') { /* istanbul ignore else */ { { warn(`useCssModule() is not supported in the global build.`); } return EMPTY_OBJ; } } /** * Runtime helper for SFC's CSS variable injection feature. 
* @private */ function useCssVars(getter) { const instance = getCurrentInstance(); /* istanbul ignore next */ if (!instance) { warn(`useCssVars is called without current active component instance.`); return; } const setVars = () => setVarsOnVNode(instance.subTree, getter(instance.proxy)); onMounted(() => watchEffect(setVars, { flush: 'post' })); onUpdated(setVars); } function setVarsOnVNode(vnode, vars) { if (vnode.shapeFlag & 128 /* SUSPENSE */) { const suspense = vnode.suspense; vnode = suspense.activeBranch; if (suspense.pendingBranch && !suspense.isHydrating) { suspense.effects.push(() => { setVarsOnVNode(suspense.activeBranch, vars); }); } } // drill down HOCs until it's a non-component vnode while (vnode.component) { vnode = vnode.component.subTree; } if (vnode.shapeFlag & 1 /* ELEMENT */ && vnode.el) { const style = vnode.el.style; for (const key in vars) { style.setProperty(`--${key}`, vars[key]); } } else if (vnode.type === Fragment) { vnode.children.forEach(c => setVarsOnVNode(c, vars)); } } const TRANSITION = 'transition'; const ANIMATION = 'animation'; // DOM Transition is a higher-order-component based on the platform-agnostic // base Transition component, with DOM-specific logic. const Transition = (props, { slots }) => h(BaseTransition, resolveTransitionProps(props), slots); Transition.displayName = 'Transition'; const DOMTransitionPropsValidators = { name: String, type: String, css: { type: Boolean, default: true }, duration: [String, Number, Object], enterFromClass: String, enterActiveClass: String, enterToClass: String, appearFromClass: String, appearActiveClass: String, appearToClass: String, leaveFromClass: String, leaveActiveClass: String, leaveToClass: String }; const TransitionPropsValidators = (Transition.props = /*#__PURE__*/ extend({}, BaseTransition.props, DOMTransitionPropsValidators)); function resolveTransitionProps(rawProps) { let { name = 'v', type, css = true, duration, enterFromClass = `${name}-enter-from`, enterActiveClass = `${name}-enter-active`, enterToClass = `${name}-enter-to`, appearFromClass = enterFromClass, appearActiveClass = enterActiveClass, appearToClass = enterToClass, leaveFromClass = `${name}-leave-from`, leaveActiveClass = `${name}-leave-active`, leaveToClass = `${name}-leave-to` } = rawProps; const baseProps = {}; for (const key in rawProps) { if (!(key in DOMTransitionPropsValidators)) { baseProps[key] = rawProps[key]; } } if (!css) { return baseProps; } const durations = normalizeDuration(duration); const enterDuration = durations && durations[0]; const leaveDuration = durations && durations[1]; const { onBeforeEnter, onEnter, onEnterCancelled, onLeave, onLeaveCancelled, onBeforeAppear = onBeforeEnter, onAppear = onEnter, onAppearCancelled = onEnterCancelled } = baseProps; const finishEnter = (el, isAppear, done) => { removeTransitionClass(el, isAppear ? appearToClass : enterToClass); removeTransitionClass(el, isAppear ? appearActiveClass : enterActiveClass); done && done(); }; const finishLeave = (el, done) => { removeTransitionClass(el, leaveToClass); removeTransitionClass(el, leaveActiveClass); done && done(); }; const makeEnterHook = (isAppear) => { return (el, done) => { const hook = isAppear ? onAppear : onEnter; const resolve = () => finishEnter(el, isAppear, done); hook && hook(el, resolve); nextFrame(() => { removeTransitionClass(el, isAppear ? appearFromClass : enterFromClass); addTransitionClass(el, isAppear ? 
appearToClass : enterToClass); if (!(hook && hook.length > 1)) { whenTransitionEnds(el, type, enterDuration, resolve); } }); }; }; return extend(baseProps, { onBeforeEnter(el) { onBeforeEnter && onBeforeEnter(el); addTransitionClass(el, enterFromClass); addTransitionClass(el, enterActiveClass); }, onBeforeAppear(el) { onBeforeAppear && onBeforeAppear(el); addTransitionClass(el, appearFromClass); addTransitionClass(el, appearActiveClass); }, onEnter: makeEnterHook(false), onAppear: makeEnterHook(true), onLeave(el, done) { const resolve = () => finishLeave(el, done); addTransitionClass(el, leaveFromClass); // force reflow so *-leave-from classes immediately take effect (#2593) forceReflow(); addTransitionClass(el, leaveActiveClass); nextFrame(() => { removeTransitionClass(el, leaveFromClass); addTransitionClass(el, leaveToClass); if (!(onLeave && onLeave.length > 1)) { whenTransitionEnds(el, type, leaveDuration, resolve); } }); onLeave && onLeave(el, resolve); }, onEnterCancelled(el) { finishEnter(el, false); onEnterCancelled && onEnterCancelled(el); }, onAppearCancelled(el) { finishEnter(el, true); onAppearCancelled && onAppearCancelled(el); }, onLeaveCancelled(el) { finishLeave(el); onLeaveCancelled && onLeaveCancelled(el); } }); } function normalizeDuration(duration) { if (duration == null) { return null; } else if (isObject(duration)) { return [NumberOf(duration.enter), NumberOf(duration.leave)]; } else { const n = NumberOf(duration); return [n, n]; } } function NumberOf(val) { const res = toNumber(val); validateDuration(res); return res; } function validateDuration(val) { if (typeof val !== 'number') { warn(`<transition> explicit duration is not a valid number - ` + `got ${JSON.stringify(val)}.`); } else if (isNaN(val)) { warn(`<transition> explicit duration is NaN - ` + 'the duration expression might be incorrect.'); } } function addTransitionClass(el, cls) { cls.split(/\s+/).forEach(c => c && el.classList.add(c)); (el._vtc || (el._vtc = new Set())).add(cls); } function removeTransitionClass(el, cls) { cls.split(/\s+/).forEach(c => c && el.classList.remove(c)); const { _vtc } = el; if (_vtc) { _vtc.delete(cls); if (!_vtc.size) { el._vtc = undefined; } } } function nextFrame(cb) { requestAnimationFrame(() => { requestAnimationFrame(cb); }); } let endId = 0; function whenTransitionEnds(el, expectedType, explicitTimeout, resolve) { const id = (el._endId = ++endId); const resolveIfNotStale = () => { if (id === el._endId) { resolve(); } }; if (explicitTimeout) { return setTimeout(resolveIfNotStale, explicitTimeout); } const { type, timeout, propCount } = getTransitionInfo(el, expectedType); if (!type) { return resolve(); } const endEvent = type + 'end'; let ended = 0; const end = () => { el.removeEventListener(endEvent, onEnd); resolveIfNotStale(); }; const onEnd = (e) => { if (e.target === el && ++ended >= propCount) { end(); } }; setTimeout(() => { if (ended < propCount) { end(); } }, timeout + 1); el.addEventListener(endEvent, onEnd); } function getTransitionInfo(el, expectedType) { const styles = window.getComputedStyle(el); // JSDOM may return undefined for transition properties const getStyleProperties = (key) => (styles[key] || '').split(', '); const transitionDelays = getStyleProperties(TRANSITION + 'Delay'); const transitionDurations = getStyleProperties(TRANSITION + 'Duration'); const transitionTimeout = getTimeout(transitionDelays, transitionDurations); const animationDelays = getStyleProperties(ANIMATION + 'Delay'); const animationDurations = getStyleProperties(ANIMATION + 
'Duration'); const animationTimeout = getTimeout(animationDelays, animationDurations); let type = null; let timeout = 0; let propCount = 0; /* istanbul ignore if */ if (expectedType === TRANSITION) { if (transitionTimeout > 0) { type = TRANSITION; timeout = transitionTimeout; propCount = transitionDurations.length; } } else if (expectedType === ANIMATION) { if (animationTimeout > 0) { type = ANIMATION; timeout = animationTimeout; propCount = animationDurations.length; } } else { timeout = Math.max(transitionTimeout, animationTimeout); type = timeout > 0 ? transitionTimeout > animationTimeout ? TRANSITION : ANIMATION : null; propCount = type ? type === TRANSITION ? transitionDurations.length : animationDurations.length : 0; } const hasTransform = type === TRANSITION && /\b(transform|all)(,|$)/.test(styles[TRANSITION + 'Property']); return { type, timeout, propCount, hasTransform }; } function getTimeout(delays, durations) { while (delays.length < durations.length) { delays = delays.concat(delays); } return Math.max(...durations.map((d, i) => toMs(d) + toMs(delays[i]))); } // Old versions of Chromium (below 61.0.3163.100) formats floating pointer // numbers in a locale-dependent way, using a comma instead of a dot. // If comma is not replaced with a dot, the input will be rounded down // (i.e. acting as a floor function) causing unexpected behaviors function toMs(s) { return Number(s.slice(0, -1).replace(',', '.')) * 1000; } // synchronously force layout to put elements into a certain state function forceReflow() { return document.body.offsetHeight; } const positionMap = new WeakMap(); const newPositionMap = new WeakMap(); const TransitionGroupImpl = { name: 'TransitionGroup', props: /*#__PURE__*/ extend({}, TransitionPropsValidators, { tag: String, moveClass: String }), setup(props, { slots }) { const instance = getCurrentInstance(); const state = useTransitionState(); let prevChildren; let children; onUpdated(() => { // children is guaranteed to exist after initial render if (!prevChildren.length) { return; } const moveClass = props.moveClass || `${props.name || 'v'}-move`; if (!hasCSSTransform(prevChildren[0].el, instance.vnode.el, moveClass)) { return; } // we divide the work into three loops to avoid mixing DOM reads and writes // in each iteration - which helps prevent layout thrashing. prevChildren.forEach(callPendingCbs); prevChildren.forEach(recordPosition); const movedChildren = prevChildren.filter(applyTranslation); // force reflow to put everything in position forceReflow(); movedChildren.forEach(c => { const el = c.el; const style = el.style; addTransitionClass(el, moveClass); style.transform = style.webkitTransform = style.transitionDuration = ''; const cb = (el._moveCb = (e) => { if (e && e.target !== el) { return; } if (!e || /transform$/.test(e.propertyName)) { el.removeEventListener('transitionend', cb); el._moveCb = null; removeTransitionClass(el, moveClass); } }); el.addEventListener('transitionend', cb); }); }); return () => { const rawProps = toRaw(props); const cssTransitionProps = resolveTransitionProps(rawProps); const tag = rawProps.tag || Fragment; prevChildren = children; children = slots.default ? 
getTransitionRawChildren(slots.default()) : []; for (let i = 0; i < children.length; i++) { const child = children[i]; if (child.key != null) { setTransitionHooks(child, resolveTransitionHooks(child, cssTransitionProps, state, instance)); } else { warn(`<TransitionGroup> children must be keyed.`); } } if (prevChildren) { for (let i = 0; i < prevChildren.length; i++) { const child = prevChildren[i]; setTransitionHooks(child, resolveTransitionHooks(child, cssTransitionProps, state, instance)); positionMap.set(child, child.el.getBoundingClientRect()); } } return createVNode(tag, null, children); }; } }; const TransitionGroup = TransitionGroupImpl; function callPendingCbs(c) { const el = c.el; if (el._moveCb) { el._moveCb(); } if (el._enterCb) { el._enterCb(); } } function recordPosition(c) { newPositionMap.set(c, c.el.getBoundingClientRect()); } function applyTranslation(c) { const oldPos = positionMap.get(c); const newPos = newPositionMap.get(c); const dx = oldPos.left - newPos.left; const dy = oldPos.top - newPos.top; if (dx || dy) { const s = c.el.style; s.transform = s.webkitTransform = `translate(${dx}px,${dy}px)`; s.transitionDuration = '0s'; return c; } } function hasCSSTransform(el, root, moveClass) { // Detect whether an element with the move class applied has // CSS transitions. Since the element may be inside an entering // transition at this very moment, we make a clone of it and remove // all other transition classes applied to ensure only the move class // is applied. const clone = el.cloneNode(); if (el._vtc) { el._vtc.forEach(cls => { cls.split(/\s+/).forEach(c => c && clone.classList.remove(c)); }); } moveClass.split(/\s+/).forEach(c => c && clone.classList.add(c)); clone.style.display = 'none'; const container = (root.nodeType === 1 ? root : root.parentNode); container.appendChild(clone); const { hasTransform } = getTransitionInfo(clone); container.removeChild(clone); return hasTransform; } const getModelAssigner = (vnode) => { const fn = vnode.props['onUpdate:modelValue']; return isArray(fn) ? value => invokeArrayFns(fn, value) : fn; }; function onCompositionStart(e) { e.target.composing = true; } function onCompositionEnd(e) { const target = e.target; if (target.composing) { target.composing = false; trigger$1(target, 'input'); } } function trigger$1(el, type) { const e = document.createEvent('HTMLEvents'); e.initEvent(type, true, true); el.dispatchEvent(e); } // We are exporting the v-model runtime directly as vnode hooks so that it can // be tree-shaken in case v-model is never used. const vModelText = { created(el, { modifiers: { lazy, trim, number } }, vnode) { el._assign = getModelAssigner(vnode); const castToNumber = number || el.type === 'number'; addEventListener(el, lazy ? 'change' : 'input', e => { if (e.target.composing) return; let domValue = el.value; if (trim) { domValue = domValue.trim(); } else if (castToNumber) { domValue = toNumber(domValue); } el._assign(domValue); }); if (trim) { addEventListener(el, 'change', () => { el.value = el.value.trim(); }); } if (!lazy) { addEventListener(el, 'compositionstart', onCompositionStart); addEventListener(el, 'compositionend', onCompositionEnd); // Safari < 10.2 & UIWebView doesn't fire compositionend when // switching focus before confirming composition choice // this also fixes the issue where some browsers e.g. iOS Chrome // fires "change" instead of "input" on autocomplete. 
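// Editor's note (illustrative sketch, not part of the original source): together with
// the compositionstart/compositionend listeners registered above, this means that while
// an IME is composing (e.g. typing Japanese or Chinese), `e.target.composing` is true
// and the `input` listener returns early, so `v-model` only assigns the confirmed text
// once `compositionend` re-triggers an `input` event. The extra `change` listener added
// next is the fallback, per the comment above, for browsers that skip `compositionend`
// in some flows.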
addEventListener(el, 'change', onCompositionEnd); } }, // set value on mounted so it's after min/max for type="range" mounted(el, { value }) { el.value = value == null ? '' : value; }, beforeUpdate(el, { value, modifiers: { trim, number } }, vnode) { el._assign = getModelAssigner(vnode); // avoid clearing unresolved text. #2302 if (el.composing) return; if (document.activeElement === el) { if (trim && el.value.trim() === value) { return; } if ((number || el.type === 'number') && toNumber(el.value) === value) { return; } } const newValue = value == null ? '' : value; if (el.value !== newValue) { el.value = newValue; } } }; const vModelCheckbox = { created(el, _, vnode) { el._assign = getModelAssigner(vnode); addEventListener(el, 'change', () => { const modelValue = el._modelValue; const elementValue = getValue(el); const checked = el.checked; const assign = el._assign; if (isArray(modelValue)) { const index = looseIndexOf(modelValue, elementValue); const found = index !== -1; if (checked && !found) { assign(modelValue.concat(elementValue)); } else if (!checked && found) { const filtered = [...modelValue]; filtered.splice(index, 1); assign(filtered); } } else if (isSet(modelValue)) { const cloned = new Set(modelValue); if (checked) { cloned.add(elementValue); } else { cloned.delete(elementValue); } assign(cloned); } else { assign(getCheckboxValue(el, checked)); } }); }, // set initial checked on mount to wait for true-value/false-value mounted: setChecked, beforeUpdate(el, binding, vnode) { el._assign = getModelAssigner(vnode); setChecked(el, binding, vnode); } }; function setChecked(el, { value, oldValue }, vnode) { el._modelValue = value; if (isArray(value)) { el.checked = looseIndexOf(value, vnode.props.value) > -1; } else if (isSet(value)) { el.checked = value.has(vnode.props.value); } else if (value !== oldValue) { el.checked = looseEqual(value, getCheckboxValue(el, true)); } } const vModelRadio = { created(el, { value }, vnode) { el.checked = looseEqual(value, vnode.props.value); el._assign = getModelAssigner(vnode); addEventListener(el, 'change', () => { el._assign(getValue(el)); }); }, beforeUpdate(el, { value, oldValue }, vnode) { el._assign = getModelAssigner(vnode); if (value !== oldValue) { el.checked = looseEqual(value, vnode.props.value); } } }; const vModelSelect = { created(el, { value, modifiers: { number } }, vnode) { const isSetModel = isSet(value); addEventListener(el, 'change', () => { const selectedVal = Array.prototype.filter .call(el.options, (o) => o.selected) .map((o) => number ? toNumber(getValue(o)) : getValue(o)); el._assign(el.multiple ? isSetModel ? new Set(selectedVal) : selectedVal : selectedVal[0]); }); el._assign = getModelAssigner(vnode); }, // set value in mounted & updated because <select> relies on its children // <option>s. 
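// Editor's note (illustrative sketch, not part of the original source): for a
// hypothetical template such as
//   <select v-model="picked">
//     <option v-for="o in options" :key="o" :value="o">{{ o }}</option>
//   </select>
// the <option> children may not exist yet when `created` runs, so `setSelected` is
// applied in `mounted` and again in `updated`, once the options have been rendered.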
mounted(el, { value }) { setSelected(el, value); }, beforeUpdate(el, _binding, vnode) { el._assign = getModelAssigner(vnode); }, updated(el, { value }) { setSelected(el, value); } }; function setSelected(el, value) { const isMultiple = el.multiple; if (isMultiple && !isArray(value) && !isSet(value)) { warn(`<select multiple v-model> expects an Array or Set value for its binding, ` + `but got ${Object.prototype.toString.call(value).slice(8, -1)}.`); return; } for (let i = 0, l = el.options.length; i < l; i++) { const option = el.options[i]; const optionValue = getValue(option); if (isMultiple) { if (isArray(value)) { option.selected = looseIndexOf(value, optionValue) > -1; } else { option.selected = value.has(optionValue); } } else { if (looseEqual(getValue(option), value)) { el.selectedIndex = i; return; } } } if (!isMultiple) { el.selectedIndex = -1; } } // retrieve raw value set via :value bindings function getValue(el) { return '_value' in el ? el._value : el.value; } // retrieve raw value for true-value and false-value set via :true-value or :false-value bindings function getCheckboxValue(el, checked) { const key = checked ? '_trueValue' : '_falseValue'; return key in el ? el[key] : checked; } const vModelDynamic = { created(el, binding, vnode) { callModelHook(el, binding, vnode, null, 'created'); }, mounted(el, binding, vnode) { callModelHook(el, binding, vnode, null, 'mounted'); }, beforeUpdate(el, binding, vnode, prevVNode) { callModelHook(el, binding, vnode, prevVNode, 'beforeUpdate'); }, updated(el, binding, vnode, prevVNode) { callModelHook(el, binding, vnode, prevVNode, 'updated'); } }; function callModelHook(el, binding, vnode, prevVNode, hook) { let modelToUse; switch (el.tagName) { case 'SELECT': modelToUse = vModelSelect; break; case 'TEXTAREA': modelToUse = vModelText; break; default: switch (vnode.props && vnode.props.type) { case 'checkbox': modelToUse = vModelCheckbox; break; case 'radio': modelToUse = vModelRadio; break; default: modelToUse = vModelText; } } const fn = modelToUse[hook]; fn && fn(el, binding, vnode, prevVNode); } const systemModifiers = ['ctrl', 'shift', 'alt', 'meta']; const modifierGuards = { stop: e => e.stopPropagation(), prevent: e => e.preventDefault(), self: e => e.target !== e.currentTarget, ctrl: e => !e.ctrlKey, shift: e => !e.shiftKey, alt: e => !e.altKey, meta: e => !e.metaKey, left: e => 'button' in e && e.button !== 0, middle: e => 'button' in e && e.button !== 1, right: e => 'button' in e && e.button !== 2, exact: (e, modifiers) => systemModifiers.some(m => e[`${m}Key`] && !modifiers.includes(m)) }; /** * @private */ const withModifiers = (fn, modifiers) => { return (event, ...args) => { for (let i = 0; i < modifiers.length; i++) { const guard = modifierGuards[modifiers[i]]; if (guard && guard(event, modifiers)) return; } return fn(event, ...args); }; }; // Kept for 2.x compat. // Note: IE11 compat for `spacebar` and `del` is removed for now. const keyNames = { esc: 'escape', space: ' ', up: 'arrow-up', left: 'arrow-left', right: 'arrow-right', down: 'arrow-down', delete: 'backspace' }; /** * @private */ const withKeys = (fn, modifiers) => { return (event) => { if (!('key' in event)) return; const eventKey = hyphenate(event.key); if ( // None of the provided key modifiers match the current event key !modifiers.some(k => k === eventKey || keyNames[k] === eventKey)) { return; } return fn(event); }; }; const vShow = { beforeMount(el, { value }, { transition }) { el._vod = el.style.display === 'none' ? 
'' : el.style.display; if (transition && value) { transition.beforeEnter(el); } else { setDisplay(el, value); } }, mounted(el, { value }, { transition }) { if (transition && value) { transition.enter(el); } }, updated(el, { value, oldValue }, { transition }) { if (!value === !oldValue) return; if (transition) { if (value) { transition.beforeEnter(el); setDisplay(el, true); transition.enter(el); } else { transition.leave(el, () => { setDisplay(el, false); }); } } else { setDisplay(el, value); } }, beforeUnmount(el, { value }) { setDisplay(el, value); } }; function setDisplay(el, value) { el.style.display = value ? el._vod : 'none'; } const rendererOptions = extend({ patchProp, forcePatchProp }, nodeOps); // lazy create the renderer - this makes core renderer logic tree-shakable // in case the user only imports reactivity utilities from Vue. let renderer; let enabledHydration = false; function ensureRenderer() { return renderer || (renderer = createRenderer(rendererOptions)); } function ensureHydrationRenderer() { renderer = enabledHydration ? renderer : createHydrationRenderer(rendererOptions); enabledHydration = true; return renderer; } // use explicit type casts here to avoid import() calls in rolled-up d.ts const render = ((...args) => { ensureRenderer().render(...args); }); const hydrate = ((...args) => { ensureHydrationRenderer().hydrate(...args); }); const createApp = ((...args) => { const app = ensureRenderer().createApp(...args); { injectNativeTagCheck(app); injectCustomElementCheck(app); } const { mount } = app; app.mount = (containerOrSelector) => { const container = normalizeContainer(containerOrSelector); if (!container) return; const component = app._component; if (!isFunction(component) && !component.render && !component.template) { component.template = container.innerHTML; } // clear content before mounting container.innerHTML = ''; const proxy = mount(container, false, container instanceof SVGElement); if (container instanceof Element) { container.removeAttribute('v-cloak'); container.setAttribute('data-v-app', ''); } return proxy; }; return app; }); const createSSRApp = ((...args) => { const app = ensureHydrationRenderer().createApp(...args); { injectNativeTagCheck(app); injectCustomElementCheck(app); } const { mount } = app; app.mount = (containerOrSelector) => { const container = normalizeContainer(containerOrSelector); if (container) { return mount(container, true, container instanceof SVGElement); } }; return app; }); function injectNativeTagCheck(app) { // Inject `isNativeTag` // this is used for component name validation (dev only) Object.defineProperty(app.config, 'isNativeTag', { value: (tag) => isHTMLTag(tag) || isSVGTag(tag), writable: false }); } // dev only function injectCustomElementCheck(app) { if (isRuntimeOnly()) { const value = app.config.isCustomElement; Object.defineProperty(app.config, 'isCustomElement', { get() { return value; }, set() { warn(`The \`isCustomElement\` config option is only respected when using the runtime compiler.` + `If you are using the runtime-only build, \`isCustomElement\` must be passed to \`@vue/compiler-dom\` in the build setup instead` + `- for example, via the \`compilerOptions\` option in vue-loader: https://vue-loader.vuejs.org/options.html#compileroptions.`); } }); } } function normalizeContainer(container) { if (isString(container)) { const res = document.querySelector(container); if (!res) { warn(`Failed to mount app: mount target selector "${container}" returned null.`); } return res; } if (container instanceof 
window.ShadowRoot && container.mode === 'closed') { warn(`mounting on a ShadowRoot with \`{mode: "closed"}\` may lead to unpredictable bugs`); } return container; } function initDev() { { { console.info(`You are running a development build of Vue.\n` + `Make sure to use the production build (*.prod.js) when deploying for production.`); } initCustomFormatter(); } } function defaultOnError(error) { throw error; } function createCompilerError(code, loc, messages, additionalMessage) { const msg = (messages || errorMessages)[code] + (additionalMessage || ``) ; const error = new SyntaxError(String(msg)); error.code = code; error.loc = loc; return error; } const errorMessages = { // parse errors [0 /* ABRUPT_CLOSING_OF_EMPTY_COMMENT */]: 'Illegal comment.', [1 /* CDATA_IN_HTML_CONTENT */]: 'CDATA section is allowed only in XML context.', [2 /* DUPLICATE_ATTRIBUTE */]: 'Duplicate attribute.', [3 /* END_TAG_WITH_ATTRIBUTES */]: 'End tag cannot have attributes.', [4 /* END_TAG_WITH_TRAILING_SOLIDUS */]: "Illegal '/' in tags.", [5 /* EOF_BEFORE_TAG_NAME */]: 'Unexpected EOF in tag.', [6 /* EOF_IN_CDATA */]: 'Unexpected EOF in CDATA section.', [7 /* EOF_IN_COMMENT */]: 'Unexpected EOF in comment.', [8 /* EOF_IN_SCRIPT_HTML_COMMENT_LIKE_TEXT */]: 'Unexpected EOF in script.', [9 /* EOF_IN_TAG */]: 'Unexpected EOF in tag.', [10 /* INCORRECTLY_CLOSED_COMMENT */]: 'Incorrectly closed comment.', [11 /* INCORRECTLY_OPENED_COMMENT */]: 'Incorrectly opened comment.', [12 /* INVALID_FIRST_CHARACTER_OF_TAG_NAME */]: "Illegal tag name. Use '&lt;' to print '<'.", [13 /* MISSING_ATTRIBUTE_VALUE */]: 'Attribute value was expected.', [14 /* MISSING_END_TAG_NAME */]: 'End tag name was expected.', [15 /* MISSING_WHITESPACE_BETWEEN_ATTRIBUTES */]: 'Whitespace was expected.', [16 /* NESTED_COMMENT */]: "Unexpected '<!--' in comment.", [17 /* UNEXPECTED_CHARACTER_IN_ATTRIBUTE_NAME */]: 'Attribute name cannot contain U+0022 ("), U+0027 (\'), and U+003C (<).', [18 /* UNEXPECTED_CHARACTER_IN_UNQUOTED_ATTRIBUTE_VALUE */]: 'Unquoted attribute value cannot contain U+0022 ("), U+0027 (\'), U+003C (<), U+003D (=), and U+0060 (`).', [19 /* UNEXPECTED_EQUALS_SIGN_BEFORE_ATTRIBUTE_NAME */]: "Attribute name cannot start with '='.", [21 /* UNEXPECTED_QUESTION_MARK_INSTEAD_OF_TAG_NAME */]: "'<?' is allowed only in XML context.", [22 /* UNEXPECTED_SOLIDUS_IN_TAG */]: "Illegal '/' in tags.", // Vue-specific parse errors [23 /* X_INVALID_END_TAG */]: 'Invalid end tag.', [24 /* X_MISSING_END_TAG */]: 'Element is missing end tag.', [25 /* X_MISSING_INTERPOLATION_END */]: 'Interpolation end sign was not found.', [26 /* X_MISSING_DYNAMIC_DIRECTIVE_ARGUMENT_END */]: 'End bracket for dynamic directive argument was not found. 
' + 'Note that dynamic directive argument cannot contain spaces.', // transform errors [27 /* X_V_IF_NO_EXPRESSION */]: `v-if/v-else-if is missing expression.`, [28 /* X_V_IF_SAME_KEY */]: `v-if/else branches must use unique keys.`, [29 /* X_V_ELSE_NO_ADJACENT_IF */]: `v-else/v-else-if has no adjacent v-if.`, [30 /* X_V_FOR_NO_EXPRESSION */]: `v-for is missing expression.`, [31 /* X_V_FOR_MALFORMED_EXPRESSION */]: `v-for has invalid expression.`, [32 /* X_V_FOR_TEMPLATE_KEY_PLACEMENT */]: `<template v-for> key should be placed on the <template> tag.`, [33 /* X_V_BIND_NO_EXPRESSION */]: `v-bind is missing expression.`, [34 /* X_V_ON_NO_EXPRESSION */]: `v-on is missing expression.`, [35 /* X_V_SLOT_UNEXPECTED_DIRECTIVE_ON_SLOT_OUTLET */]: `Unexpected custom directive on <slot> outlet.`, [36 /* X_V_SLOT_MIXED_SLOT_USAGE */]: `Mixed v-slot usage on both the component and nested <template>.` + `When there are multiple named slots, all slots should use <template> ` + `syntax to avoid scope ambiguity.`, [37 /* X_V_SLOT_DUPLICATE_SLOT_NAMES */]: `Duplicate slot names found. `, [38 /* X_V_SLOT_EXTRANEOUS_DEFAULT_SLOT_CHILDREN */]: `Extraneous children found when component already has explicitly named ` + `default slot. These children will be ignored.`, [39 /* X_V_SLOT_MISPLACED */]: `v-slot can only be used on components or <template> tags.`, [40 /* X_V_MODEL_NO_EXPRESSION */]: `v-model is missing expression.`, [41 /* X_V_MODEL_MALFORMED_EXPRESSION */]: `v-model value must be a valid JavaScript member expression.`, [42 /* X_V_MODEL_ON_SCOPE_VARIABLE */]: `v-model cannot be used on v-for or v-slot scope variables because they are not writable.`, [43 /* X_INVALID_EXPRESSION */]: `Error parsing JavaScript expression: `, [44 /* X_KEEP_ALIVE_INVALID_CHILDREN */]: `<KeepAlive> expects exactly one child component.`, // generic errors [45 /* X_PREFIX_ID_NOT_SUPPORTED */]: `"prefixIdentifiers" option is not supported in this build of compiler.`, [46 /* X_MODULE_MODE_NOT_SUPPORTED */]: `ES module mode is not supported in this build of compiler.`, [47 /* X_CACHE_HANDLER_NOT_SUPPORTED */]: `"cacheHandlers" option is only supported when the "prefixIdentifiers" option is enabled.`, [48 /* X_SCOPE_ID_NOT_SUPPORTED */]: `"scopeId" option is only supported in module mode.` }; const FRAGMENT = Symbol(`Fragment` ); const TELEPORT = Symbol(`Teleport` ); const SUSPENSE = Symbol(`Suspense` ); const KEEP_ALIVE = Symbol(`KeepAlive` ); const BASE_TRANSITION = Symbol(`BaseTransition` ); const OPEN_BLOCK = Symbol(`openBlock` ); const CREATE_BLOCK = Symbol(`createBlock` ); const CREATE_VNODE = Symbol(`createVNode` ); const CREATE_COMMENT = Symbol(`createCommentVNode` ); const CREATE_TEXT = Symbol(`createTextVNode` ); const CREATE_STATIC = Symbol(`createStaticVNode` ); const RESOLVE_COMPONENT = Symbol(`resolveComponent` ); const RESOLVE_DYNAMIC_COMPONENT = Symbol(`resolveDynamicComponent` ); const RESOLVE_DIRECTIVE = Symbol(`resolveDirective` ); const WITH_DIRECTIVES = Symbol(`withDirectives` ); const RENDER_LIST = Symbol(`renderList` ); const RENDER_SLOT = Symbol(`renderSlot` ); const CREATE_SLOTS = Symbol(`createSlots` ); const TO_DISPLAY_STRING = Symbol(`toDisplayString` ); const MERGE_PROPS = Symbol(`mergeProps` ); const TO_HANDLERS = Symbol(`toHandlers` ); const CAMELIZE = Symbol(`camelize` ); const CAPITALIZE = Symbol(`capitalize` ); const TO_HANDLER_KEY = Symbol(`toHandlerKey` ); const SET_BLOCK_TRACKING = Symbol(`setBlockTracking` ); const PUSH_SCOPE_ID = Symbol(`pushScopeId` ); const POP_SCOPE_ID = Symbol(`popScopeId` 
); const WITH_SCOPE_ID = Symbol(`withScopeId` ); const WITH_CTX = Symbol(`withCtx` ); const UNREF = Symbol(`unref` ); const IS_REF = Symbol(`isRef` ); // Name mapping for runtime helpers that need to be imported from 'vue' in // generated code. Make sure these are correctly exported in the runtime! // Using `any` here because TS doesn't allow symbols as index type. const helperNameMap = { [FRAGMENT]: `Fragment`, [TELEPORT]: `Teleport`, [SUSPENSE]: `Suspense`, [KEEP_ALIVE]: `KeepAlive`, [BASE_TRANSITION]: `BaseTransition`, [OPEN_BLOCK]: `openBlock`, [CREATE_BLOCK]: `createBlock`, [CREATE_VNODE]: `createVNode`, [CREATE_COMMENT]: `createCommentVNode`, [CREATE_TEXT]: `createTextVNode`, [CREATE_STATIC]: `createStaticVNode`, [RESOLVE_COMPONENT]: `resolveComponent`, [RESOLVE_DYNAMIC_COMPONENT]: `resolveDynamicComponent`, [RESOLVE_DIRECTIVE]: `resolveDirective`, [WITH_DIRECTIVES]: `withDirectives`, [RENDER_LIST]: `renderList`, [RENDER_SLOT]: `renderSlot`, [CREATE_SLOTS]: `createSlots`, [TO_DISPLAY_STRING]: `toDisplayString`, [MERGE_PROPS]: `mergeProps`, [TO_HANDLERS]: `toHandlers`, [CAMELIZE]: `camelize`, [CAPITALIZE]: `capitalize`, [TO_HANDLER_KEY]: `toHandlerKey`, [SET_BLOCK_TRACKING]: `setBlockTracking`, [PUSH_SCOPE_ID]: `pushScopeId`, [POP_SCOPE_ID]: `popScopeId`, [WITH_SCOPE_ID]: `withScopeId`, [WITH_CTX]: `withCtx`, [UNREF]: `unref`, [IS_REF]: `isRef` }; function registerRuntimeHelpers(helpers) { Object.getOwnPropertySymbols(helpers).forEach(s => { helperNameMap[s] = helpers[s]; }); } // AST Utilities --------------------------------------------------------------- // Some expressions, e.g. sequence and conditional expressions, are never // associated with template nodes, so their source locations are just a stub. // Container types like CompoundExpression also don't need a real location. const locStub = { source: '', start: { line: 1, column: 1, offset: 0 }, end: { line: 1, column: 1, offset: 0 } }; function createRoot(children, loc = locStub) { return { type: 0 /* ROOT */, children, helpers: [], components: [], directives: [], hoists: [], imports: [], cached: 0, temps: 0, codegenNode: undefined, loc }; } function createVNodeCall(context, tag, props, children, patchFlag, dynamicProps, directives, isBlock = false, disableTracking = false, loc = locStub) { if (context) { if (isBlock) { context.helper(OPEN_BLOCK); context.helper(CREATE_BLOCK); } else { context.helper(CREATE_VNODE); } if (directives) { context.helper(WITH_DIRECTIVES); } } return { type: 13 /* VNODE_CALL */, tag, props, children, patchFlag, dynamicProps, directives, isBlock, disableTracking, loc }; } function createArrayExpression(elements, loc = locStub) { return { type: 17 /* JS_ARRAY_EXPRESSION */, loc, elements }; } function createObjectExpression(properties, loc = locStub) { return { type: 15 /* JS_OBJECT_EXPRESSION */, loc, properties }; } function createObjectProperty(key, value) { return { type: 16 /* JS_PROPERTY */, loc: locStub, key: isString(key) ? createSimpleExpression(key, true) : key, value }; } function createSimpleExpression(content, isStatic, loc = locStub, constType = 0 /* NOT_CONSTANT */) { return { type: 4 /* SIMPLE_EXPRESSION */, loc, content, isStatic, constType: isStatic ? 
3 /* CAN_STRINGIFY */ : constType }; } function createCompoundExpression(children, loc = locStub) { return { type: 8 /* COMPOUND_EXPRESSION */, loc, children }; } function createCallExpression(callee, args = [], loc = locStub) { return { type: 14 /* JS_CALL_EXPRESSION */, loc, callee, arguments: args }; } function createFunctionExpression(params, returns = undefined, newline = false, isSlot = false, loc = locStub) { return { type: 18 /* JS_FUNCTION_EXPRESSION */, params, returns, newline, isSlot, loc }; } function createConditionalExpression(test, consequent, alternate, newline = true) { return { type: 19 /* JS_CONDITIONAL_EXPRESSION */, test, consequent, alternate, newline, loc: locStub }; } function createCacheExpression(index, value, isVNode = false) { return { type: 20 /* JS_CACHE_EXPRESSION */, index, value, isVNode, loc: locStub }; } const isStaticExp = (p) => p.type === 4 /* SIMPLE_EXPRESSION */ && p.isStatic; const isBuiltInType = (tag, expected) => tag === expected || tag === hyphenate(expected); function isCoreComponent(tag) { if (isBuiltInType(tag, 'Teleport')) { return TELEPORT; } else if (isBuiltInType(tag, 'Suspense')) { return SUSPENSE; } else if (isBuiltInType(tag, 'KeepAlive')) { return KEEP_ALIVE; } else if (isBuiltInType(tag, 'BaseTransition')) { return BASE_TRANSITION; } } const nonIdentifierRE = /^\d|[^\$\w]/; const isSimpleIdentifier = (name) => !nonIdentifierRE.test(name); const memberExpRE = /^[A-Za-z_$\xA0-\uFFFF][\w$\xA0-\uFFFF]*(?:\s*\.\s*[A-Za-z_$\xA0-\uFFFF][\w$\xA0-\uFFFF]*|\[[^\]]+\])*$/; const isMemberExpression = (path) => { if (!path) return false; return memberExpRE.test(path.trim()); }; function getInnerRange(loc, offset, length) { const source = loc.source.substr(offset, length); const newLoc = { source, start: advancePositionWithClone(loc.start, loc.source, offset), end: loc.end }; if (length != null) { newLoc.end = advancePositionWithClone(loc.start, loc.source, offset + length); } return newLoc; } function advancePositionWithClone(pos, source, numberOfCharacters = source.length) { return advancePositionWithMutation(extend({}, pos), source, numberOfCharacters); } // advance by mutation without cloning (for performance reasons), since this // gets called a lot in the parser function advancePositionWithMutation(pos, source, numberOfCharacters = source.length) { let linesCount = 0; let lastNewLinePos = -1; for (let i = 0; i < numberOfCharacters; i++) { if (source.charCodeAt(i) === 10 /* newline char code */) { linesCount++; lastNewLinePos = i; } } pos.offset += numberOfCharacters; pos.line += linesCount; pos.column = lastNewLinePos === -1 ? pos.column + numberOfCharacters : numberOfCharacters - lastNewLinePos; return pos; } function assert(condition, msg) { /* istanbul ignore if */ if (!condition) { throw new Error(msg || `unexpected compiler condition`); } } function findDir(node, name, allowEmpty = false) { for (let i = 0; i < node.props.length; i++) { const p = node.props[i]; if (p.type === 7 /* DIRECTIVE */ && (allowEmpty || p.exp) && (isString(name) ? 
p.name === name : name.test(p.name))) { return p; } } } function findProp(node, name, dynamicOnly = false, allowEmpty = false) { for (let i = 0; i < node.props.length; i++) { const p = node.props[i]; if (p.type === 6 /* ATTRIBUTE */) { if (dynamicOnly) continue; if (p.name === name && (p.value || allowEmpty)) { return p; } } else if (p.name === 'bind' && (p.exp || allowEmpty) && isBindKey(p.arg, name)) { return p; } } } function isBindKey(arg, name) { return !!(arg && isStaticExp(arg) && arg.content === name); } function hasDynamicKeyVBind(node) { return node.props.some(p => p.type === 7 /* DIRECTIVE */ && p.name === 'bind' && (!p.arg || // v-bind="obj" p.arg.type !== 4 /* SIMPLE_EXPRESSION */ || // v-bind:[_ctx.foo] !p.arg.isStatic) // v-bind:[foo] ); } function isText(node) { return node.type === 5 /* INTERPOLATION */ || node.type === 2 /* TEXT */; } function isVSlot(p) { return p.type === 7 /* DIRECTIVE */ && p.name === 'slot'; } function isTemplateNode(node) { return (node.type === 1 /* ELEMENT */ && node.tagType === 3 /* TEMPLATE */); } function isSlotOutlet(node) { return node.type === 1 /* ELEMENT */ && node.tagType === 2 /* SLOT */; } function injectProp(node, prop, context) { let propsWithInjection; const props = node.type === 13 /* VNODE_CALL */ ? node.props : node.arguments[2]; if (props == null || isString(props)) { propsWithInjection = createObjectExpression([prop]); } else if (props.type === 14 /* JS_CALL_EXPRESSION */) { // merged props... add ours // only inject key to object literal if it's the first argument so that // if doesn't override user provided keys const first = props.arguments[0]; if (!isString(first) && first.type === 15 /* JS_OBJECT_EXPRESSION */) { first.properties.unshift(prop); } else { if (props.callee === TO_HANDLERS) { // #2366 propsWithInjection = createCallExpression(context.helper(MERGE_PROPS), [ createObjectExpression([prop]), props ]); } else { props.arguments.unshift(createObjectExpression([prop])); } } !propsWithInjection && (propsWithInjection = props); } else if (props.type === 15 /* JS_OBJECT_EXPRESSION */) { let alreadyExists = false; // check existing key to avoid overriding user provided keys if (prop.key.type === 4 /* SIMPLE_EXPRESSION */) { const propKeyName = prop.key.content; alreadyExists = props.properties.some(p => p.key.type === 4 /* SIMPLE_EXPRESSION */ && p.key.content === propKeyName); } if (!alreadyExists) { props.properties.unshift(prop); } propsWithInjection = props; } else { // single v-bind with expression, return a merged replacement propsWithInjection = createCallExpression(context.helper(MERGE_PROPS), [ createObjectExpression([prop]), props ]); } if (node.type === 13 /* VNODE_CALL */) { node.props = propsWithInjection; } else { node.arguments[2] = propsWithInjection; } } function toValidAssetId(name, type) { return `_${type}_${name.replace(/[^\w]/g, '_')}`; } // The default decoder only provides escapes for characters reserved as part of // the template syntax, and is only used if the custom renderer did not provide // a platform-specific decoder. 
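// Editor's note (illustrative sketch, not part of the original source): this fallback
// decoder only knows the five named entities in `decodeMap`, e.g.
//   defaultParserOptions.decodeEntities('1 &lt; 2 &amp;&amp; 3 &gt; 2')
//   // -> '1 < 2 && 3 > 2'
// Numeric references such as '&#60;' are left untouched; a platform compiler (such as
// the DOM compiler) is expected to supply a more complete `decodeEntities` option.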
const decodeRE = /&(gt|lt|amp|apos|quot);/g; const decodeMap = { gt: '>', lt: '<', amp: '&', apos: "'", quot: '"' }; const defaultParserOptions = { delimiters: [`{{`, `}}`], getNamespace: () => 0 /* HTML */, getTextMode: () => 0 /* DATA */, isVoidTag: NO, isPreTag: NO, isCustomElement: NO, decodeEntities: (rawText) => rawText.replace(decodeRE, (_, p1) => decodeMap[p1]), onError: defaultOnError, comments: false }; function baseParse(content, options = {}) { const context = createParserContext(content, options); const start = getCursor(context); return createRoot(parseChildren(context, 0 /* DATA */, []), getSelection(context, start)); } function createParserContext(content, rawOptions) { const options = extend({}, defaultParserOptions); for (const key in rawOptions) { // @ts-ignore options[key] = rawOptions[key] || defaultParserOptions[key]; } return { options, column: 1, line: 1, offset: 0, originalSource: content, source: content, inPre: false, inVPre: false }; } function parseChildren(context, mode, ancestors) { const parent = last(ancestors); const ns = parent ? parent.ns : 0 /* HTML */; const nodes = []; while (!isEnd(context, mode, ancestors)) { const s = context.source; let node = undefined; if (mode === 0 /* DATA */ || mode === 1 /* RCDATA */) { if (!context.inVPre && startsWith(s, context.options.delimiters[0])) { // '{{' node = parseInterpolation(context, mode); } else if (mode === 0 /* DATA */ && s[0] === '<') { // https://html.spec.whatwg.org/multipage/parsing.html#tag-open-state if (s.length === 1) { emitError(context, 5 /* EOF_BEFORE_TAG_NAME */, 1); } else if (s[1] === '!') { // https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state if (startsWith(s, '<!--')) { node = parseComment(context); } else if (startsWith(s, '<!DOCTYPE')) { // Ignore DOCTYPE by a limitation. 
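// Editor's note (illustrative sketch, not part of the original source): a doctype in
// template source, e.g. '<!DOCTYPE html>', is therefore swallowed as a bogus comment
// node (roughly { type: 3 /* COMMENT */, content: 'DOCTYPE html' }) instead of being
// treated as a real doctype declaration.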
node = parseBogusComment(context); } else if (startsWith(s, '<![CDATA[')) { if (ns !== 0 /* HTML */) { node = parseCDATA(context, ancestors); } else { emitError(context, 1 /* CDATA_IN_HTML_CONTENT */); node = parseBogusComment(context); } } else { emitError(context, 11 /* INCORRECTLY_OPENED_COMMENT */); node = parseBogusComment(context); } } else if (s[1] === '/') { // https://html.spec.whatwg.org/multipage/parsing.html#end-tag-open-state if (s.length === 2) { emitError(context, 5 /* EOF_BEFORE_TAG_NAME */, 2); } else if (s[2] === '>') { emitError(context, 14 /* MISSING_END_TAG_NAME */, 2); advanceBy(context, 3); continue; } else if (/[a-z]/i.test(s[2])) { emitError(context, 23 /* X_INVALID_END_TAG */); parseTag(context, 1 /* End */, parent); continue; } else { emitError(context, 12 /* INVALID_FIRST_CHARACTER_OF_TAG_NAME */, 2); node = parseBogusComment(context); } } else if (/[a-z]/i.test(s[1])) { node = parseElement(context, ancestors); } else if (s[1] === '?') { emitError(context, 21 /* UNEXPECTED_QUESTION_MARK_INSTEAD_OF_TAG_NAME */, 1); node = parseBogusComment(context); } else { emitError(context, 12 /* INVALID_FIRST_CHARACTER_OF_TAG_NAME */, 1); } } } if (!node) { node = parseText(context, mode); } if (isArray(node)) { for (let i = 0; i < node.length; i++) { pushNode(nodes, node[i]); } } else { pushNode(nodes, node); } } // Whitespace management for more efficient output // (same as v2 whitespace: 'condense') let removedWhitespace = false; if (mode !== 2 /* RAWTEXT */ && mode !== 1 /* RCDATA */) { for (let i = 0; i < nodes.length; i++) { const node = nodes[i]; if (!context.inPre && node.type === 2 /* TEXT */) { if (!/[^\t\r\n\f ]/.test(node.content)) { const prev = nodes[i - 1]; const next = nodes[i + 1]; // If: // - the whitespace is the first or last node, or: // - the whitespace is adjacent to a comment, or: // - the whitespace is between two elements AND contains newline // Then the whitespace is ignored. if (!prev || !next || prev.type === 3 /* COMMENT */ || next.type === 3 /* COMMENT */ || (prev.type === 1 /* ELEMENT */ && next.type === 1 /* ELEMENT */ && /[\r\n]/.test(node.content))) { removedWhitespace = true; nodes[i] = null; } else { // Otherwise, condensed consecutive whitespace inside the text // down to a single space node.content = ' '; } } else { node.content = node.content.replace(/[\t\r\n\f ]+/g, ' '); } } } if (context.inPre && parent && context.options.isPreTag(parent.tag)) { // remove leading newline per html spec // https://html.spec.whatwg.org/multipage/grouping-content.html#the-pre-element const first = nodes[0]; if (first && first.type === 2 /* TEXT */) { first.content = first.content.replace(/^\r?\n/, ''); } } } return removedWhitespace ? nodes.filter(Boolean) : nodes; } function pushNode(nodes, node) { if (node.type === 2 /* TEXT */) { const prev = last(nodes); // Merge if both this and the previous node are text and those are // consecutive. This happens for cases like "a < b". if (prev && prev.type === 2 /* TEXT */ && prev.loc.end.offset === node.loc.start.offset) { prev.content += node.content; prev.loc.end = node.loc.end; prev.loc.source += node.loc.source; return; } } nodes.push(node); } function parseCDATA(context, ancestors) { advanceBy(context, 9); const nodes = parseChildren(context, 3 /* CDATA */, ancestors); if (context.source.length === 0) { emitError(context, 6 /* EOF_IN_CDATA */); } else { advanceBy(context, 3); } return nodes; } function parseComment(context) { const start = getCursor(context); let content; // Regular comment. 
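// Editor's note (illustrative sketch, not part of the original source): for a
// well-formed comment the regexp below matches the closing '-->', e.g.
//   '<!-- hi -->'  ->  { type: 3 /* COMMENT */, content: ' hi ' }
// while an abrupt '<!-->' (match.index <= 3) reports ABRUPT_CLOSING_OF_EMPTY_COMMENT
// and a '--!>' terminator (the captured '!') reports INCORRECTLY_CLOSED_COMMENT.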
const match = /--(\!)?>/.exec(context.source); if (!match) { content = context.source.slice(4); advanceBy(context, context.source.length); emitError(context, 7 /* EOF_IN_COMMENT */); } else { if (match.index <= 3) { emitError(context, 0 /* ABRUPT_CLOSING_OF_EMPTY_COMMENT */); } if (match[1]) { emitError(context, 10 /* INCORRECTLY_CLOSED_COMMENT */); } content = context.source.slice(4, match.index); // Advancing with reporting nested comments. const s = context.source.slice(0, match.index); let prevIndex = 1, nestedIndex = 0; while ((nestedIndex = s.indexOf('<!--', prevIndex)) !== -1) { advanceBy(context, nestedIndex - prevIndex + 1); if (nestedIndex + 4 < s.length) { emitError(context, 16 /* NESTED_COMMENT */); } prevIndex = nestedIndex + 1; } advanceBy(context, match.index + match[0].length - prevIndex + 1); } return { type: 3 /* COMMENT */, content, loc: getSelection(context, start) }; } function parseBogusComment(context) { const start = getCursor(context); const contentStart = context.source[1] === '?' ? 1 : 2; let content; const closeIndex = context.source.indexOf('>'); if (closeIndex === -1) { content = context.source.slice(contentStart); advanceBy(context, context.source.length); } else { content = context.source.slice(contentStart, closeIndex); advanceBy(context, closeIndex + 1); } return { type: 3 /* COMMENT */, content, loc: getSelection(context, start) }; } function parseElement(context, ancestors) { // Start tag. const wasInPre = context.inPre; const wasInVPre = context.inVPre; const parent = last(ancestors); const element = parseTag(context, 0 /* Start */, parent); const isPreBoundary = context.inPre && !wasInPre; const isVPreBoundary = context.inVPre && !wasInVPre; if (element.isSelfClosing || context.options.isVoidTag(element.tag)) { return element; } // Children. ancestors.push(element); const mode = context.options.getTextMode(element, parent); const children = parseChildren(context, mode, ancestors); ancestors.pop(); element.children = children; // End tag. if (startsWithEndTagOpen(context.source, element.tag)) { parseTag(context, 1 /* End */, parent); } else { emitError(context, 24 /* X_MISSING_END_TAG */, 0, element.loc.start); if (context.source.length === 0 && element.tag.toLowerCase() === 'script') { const first = children[0]; if (first && startsWith(first.loc.source, '<!--')) { emitError(context, 8 /* EOF_IN_SCRIPT_HTML_COMMENT_LIKE_TEXT */); } } } element.loc = getSelection(context, element.loc.start); if (isPreBoundary) { context.inPre = false; } if (isVPreBoundary) { context.inVPre = false; } return element; } const isSpecialTemplateDirective = /*#__PURE__*/ makeMap(`if,else,else-if,for,slot`); /** * Parse a tag (E.g. `<div id=a>`) with that type (start tag or end tag). */ function parseTag(context, type, parent) { // Tag open. const start = getCursor(context); const match = /^<\/?([a-z][^\t\r\n\f />]*)/i.exec(context.source); const tag = match[1]; const ns = context.options.getNamespace(tag, parent); advanceBy(context, match[0].length); advanceSpaces(context); // save current state in case we need to re-parse attributes with v-pre const cursor = getCursor(context); const currentSource = context.source; // Attributes. 
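// Attributes are parsed eagerly once; if a `v-pre` directive turns up, the
// cursor/source saved above are restored and the attributes are re-parsed so that
// directive-style names are kept as plain attributes (v-pre itself is filtered out).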
let props = parseAttributes(context, type); // check <pre> tag if (context.options.isPreTag(tag)) { context.inPre = true; } // check v-pre if (!context.inVPre && props.some(p => p.type === 7 /* DIRECTIVE */ && p.name === 'pre')) { context.inVPre = true; // reset context extend(context, cursor); context.source = currentSource; // re-parse attrs and filter out v-pre itself props = parseAttributes(context, type).filter(p => p.name !== 'v-pre'); } // Tag close. let isSelfClosing = false; if (context.source.length === 0) { emitError(context, 9 /* EOF_IN_TAG */); } else { isSelfClosing = startsWith(context.source, '/>'); if (type === 1 /* End */ && isSelfClosing) { emitError(context, 4 /* END_TAG_WITH_TRAILING_SOLIDUS */); } advanceBy(context, isSelfClosing ? 2 : 1); } let tagType = 0 /* ELEMENT */; const options = context.options; if (!context.inVPre && !options.isCustomElement(tag)) { const hasVIs = props.some(p => p.type === 7 /* DIRECTIVE */ && p.name === 'is'); if (options.isNativeTag && !hasVIs) { if (!options.isNativeTag(tag)) tagType = 1 /* COMPONENT */; } else if (hasVIs || isCoreComponent(tag) || (options.isBuiltInComponent && options.isBuiltInComponent(tag)) || /^[A-Z]/.test(tag) || tag === 'component') { tagType = 1 /* COMPONENT */; } if (tag === 'slot') { tagType = 2 /* SLOT */; } else if (tag === 'template' && props.some(p => { return (p.type === 7 /* DIRECTIVE */ && isSpecialTemplateDirective(p.name)); })) { tagType = 3 /* TEMPLATE */; } } return { type: 1 /* ELEMENT */, ns, tag, tagType, props, isSelfClosing, children: [], loc: getSelection(context, start), codegenNode: undefined // to be created during transform phase }; } function parseAttributes(context, type) { const props = []; const attributeNames = new Set(); while (context.source.length > 0 && !startsWith(context.source, '>') && !startsWith(context.source, '/>')) { if (startsWith(context.source, '/')) { emitError(context, 22 /* UNEXPECTED_SOLIDUS_IN_TAG */); advanceBy(context, 1); advanceSpaces(context); continue; } if (type === 1 /* End */) { emitError(context, 3 /* END_TAG_WITH_ATTRIBUTES */); } const attr = parseAttribute(context, attributeNames); if (type === 0 /* Start */) { props.push(attr); } if (/^[^\t\r\n\f />]/.test(context.source)) { emitError(context, 15 /* MISSING_WHITESPACE_BETWEEN_ATTRIBUTES */); } advanceSpaces(context); } return props; } function parseAttribute(context, nameSet) { // Name. const start = getCursor(context); const match = /^[^\t\r\n\f />][^\t\r\n\f />=]*/.exec(context.source); const name = match[0]; if (nameSet.has(name)) { emitError(context, 2 /* DUPLICATE_ATTRIBUTE */); } nameSet.add(name); if (name[0] === '=') { emitError(context, 19 /* UNEXPECTED_EQUALS_SIGN_BEFORE_ATTRIBUTE_NAME */); } { const pattern = /["'<]/g; let m; while ((m = pattern.exec(name))) { emitError(context, 17 /* UNEXPECTED_CHARACTER_IN_ATTRIBUTE_NAME */, m.index); } } advanceBy(context, name.length); // Value let value = undefined; if (/^[\t\r\n\f ]*=/.test(context.source)) { advanceSpaces(context); advanceBy(context, 1); advanceSpaces(context); value = parseAttributeValue(context); if (!value) { emitError(context, 13 /* MISSING_ATTRIBUTE_VALUE */); } } const loc = getSelection(context, start); if (!context.inVPre && /^(v-|:|@|#)/.test(name)) { const match = /(?:^v-([a-z0-9-]+))?(?:(?::|^@|^#)(\[[^\]]+\]|[^\.]+))?(.+)?$/i.exec(name); const dirName = match[1] || (startsWith(name, ':') ? 'bind' : startsWith(name, '@') ? 
'on' : 'slot'); let arg; if (match[2]) { const isSlot = dirName === 'slot'; const startOffset = name.lastIndexOf(match[2]); const loc = getSelection(context, getNewPosition(context, start, startOffset), getNewPosition(context, start, startOffset + match[2].length + ((isSlot && match[3]) || '').length)); let content = match[2]; let isStatic = true; if (content.startsWith('[')) { isStatic = false; if (!content.endsWith(']')) { emitError(context, 26 /* X_MISSING_DYNAMIC_DIRECTIVE_ARGUMENT_END */); } content = content.substr(1, content.length - 2); } else if (isSlot) { // #1241 special case for v-slot: vuetify relies extensively on slot // names containing dots. v-slot doesn't have any modifiers and Vue 2.x // supports such usage so we are keeping it consistent with 2.x. content += match[3] || ''; } arg = { type: 4 /* SIMPLE_EXPRESSION */, content, isStatic, constType: isStatic ? 3 /* CAN_STRINGIFY */ : 0 /* NOT_CONSTANT */, loc }; } if (value && value.isQuoted) { const valueLoc = value.loc; valueLoc.start.offset++; valueLoc.start.column++; valueLoc.end = advancePositionWithClone(valueLoc.start, value.content); valueLoc.source = valueLoc.source.slice(1, -1); } return { type: 7 /* DIRECTIVE */, name: dirName, exp: value && { type: 4 /* SIMPLE_EXPRESSION */, content: value.content, isStatic: false, // Treat as non-constant by default. This can be potentially set to // other values by `transformExpression` to make it eligible for hoisting. constType: 0 /* NOT_CONSTANT */, loc: value.loc }, arg, modifiers: match[3] ? match[3].substr(1).split('.') : [], loc }; } return { type: 6 /* ATTRIBUTE */, name, value: value && { type: 2 /* TEXT */, content: value.content, loc: value.loc }, loc }; } function parseAttributeValue(context) { const start = getCursor(context); let content; const quote = context.source[0]; const isQuoted = quote === `"` || quote === `'`; if (isQuoted) { // Quoted value. 
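// Consume the opening quote, then read up to the matching closing quote;
// an unterminated quoted value simply runs to the end of the input.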
advanceBy(context, 1); const endIndex = context.source.indexOf(quote); if (endIndex === -1) { content = parseTextData(context, context.source.length, 4 /* ATTRIBUTE_VALUE */); } else { content = parseTextData(context, endIndex, 4 /* ATTRIBUTE_VALUE */); advanceBy(context, 1); } } else { // Unquoted const match = /^[^\t\r\n\f >]+/.exec(context.source); if (!match) { return undefined; } const unexpectedChars = /["'<=`]/g; let m; while ((m = unexpectedChars.exec(match[0]))) { emitError(context, 18 /* UNEXPECTED_CHARACTER_IN_UNQUOTED_ATTRIBUTE_VALUE */, m.index); } content = parseTextData(context, match[0].length, 4 /* ATTRIBUTE_VALUE */); } return { content, isQuoted, loc: getSelection(context, start) }; } function parseInterpolation(context, mode) { const [open, close] = context.options.delimiters; const closeIndex = context.source.indexOf(close, open.length); if (closeIndex === -1) { emitError(context, 25 /* X_MISSING_INTERPOLATION_END */); return undefined; } const start = getCursor(context); advanceBy(context, open.length); const innerStart = getCursor(context); const innerEnd = getCursor(context); const rawContentLength = closeIndex - open.length; const rawContent = context.source.slice(0, rawContentLength); const preTrimContent = parseTextData(context, rawContentLength, mode); const content = preTrimContent.trim(); const startOffset = preTrimContent.indexOf(content); if (startOffset > 0) { advancePositionWithMutation(innerStart, rawContent, startOffset); } const endOffset = rawContentLength - (preTrimContent.length - content.length - startOffset); advancePositionWithMutation(innerEnd, rawContent, endOffset); advanceBy(context, close.length); return { type: 5 /* INTERPOLATION */, content: { type: 4 /* SIMPLE_EXPRESSION */, isStatic: false, // Set `isConstant` to false by default and will decide in transformExpression constType: 0 /* NOT_CONSTANT */, content, loc: getSelection(context, innerStart, innerEnd) }, loc: getSelection(context, start) }; } function parseText(context, mode) { const endTokens = ['<', context.options.delimiters[0]]; if (mode === 3 /* CDATA */) { endTokens.push(']]>'); } let endIndex = context.source.length; for (let i = 0; i < endTokens.length; i++) { const index = context.source.indexOf(endTokens[i], 1); if (index !== -1 && endIndex > index) { endIndex = index; } } const start = getCursor(context); const content = parseTextData(context, endIndex, mode); return { type: 2 /* TEXT */, content, loc: getSelection(context, start) }; } /** * Get text data with a given length from the current location. * This translates HTML entities in the text data. */ function parseTextData(context, length, mode) { const rawText = context.source.slice(0, length); advanceBy(context, length); if (mode === 2 /* RAWTEXT */ || mode === 3 /* CDATA */ || rawText.indexOf('&') === -1) { return rawText; } else { // DATA or RCDATA containing "&"". Entity decoding required. 
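// The default decodeEntities option (see decodeMap above) only handles the five
// basic named entities; hosts can supply a fuller decoder through parser options.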
return context.options.decodeEntities(rawText, mode === 4 /* ATTRIBUTE_VALUE */); } } function getCursor(context) { const { column, line, offset } = context; return { column, line, offset }; } function getSelection(context, start, end) { end = end || getCursor(context); return { start, end, source: context.originalSource.slice(start.offset, end.offset) }; } function last(xs) { return xs[xs.length - 1]; } function startsWith(source, searchString) { return source.startsWith(searchString); } function advanceBy(context, numberOfCharacters) { const { source } = context; advancePositionWithMutation(context, source, numberOfCharacters); context.source = source.slice(numberOfCharacters); } function advanceSpaces(context) { const match = /^[\t\r\n\f ]+/.exec(context.source); if (match) { advanceBy(context, match[0].length); } } function getNewPosition(context, start, numberOfCharacters) { return advancePositionWithClone(start, context.originalSource.slice(start.offset, numberOfCharacters), numberOfCharacters); } function emitError(context, code, offset, loc = getCursor(context)) { if (offset) { loc.offset += offset; loc.column += offset; } context.options.onError(createCompilerError(code, { start: loc, end: loc, source: '' })); } function isEnd(context, mode, ancestors) { const s = context.source; switch (mode) { case 0 /* DATA */: if (startsWith(s, '</')) { // TODO: probably bad performance for (let i = ancestors.length - 1; i >= 0; --i) { if (startsWithEndTagOpen(s, ancestors[i].tag)) { return true; } } } break; case 1 /* RCDATA */: case 2 /* RAWTEXT */: { const parent = last(ancestors); if (parent && startsWithEndTagOpen(s, parent.tag)) { return true; } break; } case 3 /* CDATA */: if (startsWith(s, ']]>')) { return true; } break; } return !s; } function startsWithEndTagOpen(source, tag) { return (startsWith(source, '</') && source.substr(2, tag.length).toLowerCase() === tag.toLowerCase() && /[\t\r\n\f />]/.test(source[2 + tag.length] || '>')); } function hoistStatic(root, context) { walk(root, context, // Root node is unfortunately non-hoistable due to potential parent // fallthrough attributes. isSingleElementRoot(root, root.children[0])); } function isSingleElementRoot(root, child) { const { children } = root; return (children.length === 1 && child.type === 1 /* ELEMENT */ && !isSlotOutlet(child)); } function walk(node, context, doNotHoistNode = false) { let hasHoistedNode = false; // Some transforms, e.g. transformAssetUrls from @vue/compiler-sfc, replaces // static bindings with expressions. These expressions are guaranteed to be // constant so they are still eligible for hoisting, but they are only // available at runtime and therefore cannot be evaluated ahead of time. // This is only a concern for pre-stringification (via transformHoist by // @vue/compiler-dom), but doing it here allows us to perform only one full // walk of the AST and allow `stringifyStatic` to stop walking as soon as its // stringficiation threshold is met. let canStringify = true; const { children } = node; for (let i = 0; i < children.length; i++) { const child = children[i]; // only plain elements & text calls are eligible for hoisting. if (child.type === 1 /* ELEMENT */ && child.tagType === 0 /* ELEMENT */) { const constantType = doNotHoistNode ? 
0 /* NOT_CONSTANT */ : getConstantType(child, context); if (constantType > 0 /* NOT_CONSTANT */) { if (constantType < 3 /* CAN_STRINGIFY */) { canStringify = false; } if (constantType >= 2 /* CAN_HOIST */) { child.codegenNode.patchFlag = -1 /* HOISTED */ + (` /* HOISTED */` ); child.codegenNode = context.hoist(child.codegenNode); hasHoistedNode = true; continue; } } else { // node may contain dynamic children, but its props may be eligible for // hoisting. const codegenNode = child.codegenNode; if (codegenNode.type === 13 /* VNODE_CALL */) { const flag = getPatchFlag(codegenNode); if ((!flag || flag === 512 /* NEED_PATCH */ || flag === 1 /* TEXT */) && getGeneratedPropsConstantType(child, context) >= 2 /* CAN_HOIST */) { const props = getNodeProps(child); if (props) { codegenNode.props = context.hoist(props); } } } } } else if (child.type === 12 /* TEXT_CALL */) { const contentType = getConstantType(child.content, context); if (contentType > 0) { if (contentType < 3 /* CAN_STRINGIFY */) { canStringify = false; } if (contentType >= 2 /* CAN_HOIST */) { child.codegenNode = context.hoist(child.codegenNode); hasHoistedNode = true; } } } // walk further if (child.type === 1 /* ELEMENT */) { const isComponent = child.tagType === 1 /* COMPONENT */; if (isComponent) { context.scopes.vSlot++; } walk(child, context); if (isComponent) { context.scopes.vSlot--; } } else if (child.type === 11 /* FOR */) { // Do not hoist v-for single child because it has to be a block walk(child, context, child.children.length === 1); } else if (child.type === 9 /* IF */) { for (let i = 0; i < child.branches.length; i++) { // Do not hoist v-if single child because it has to be a block walk(child.branches[i], context, child.branches[i].children.length === 1); } } } if (canStringify && hasHoistedNode && context.transformHoist) { context.transformHoist(children, context, node); } } function getConstantType(node, context) { const { constantCache } = context; switch (node.type) { case 1 /* ELEMENT */: if (node.tagType !== 0 /* ELEMENT */) { return 0 /* NOT_CONSTANT */; } const cached = constantCache.get(node); if (cached !== undefined) { return cached; } const codegenNode = node.codegenNode; if (codegenNode.type !== 13 /* VNODE_CALL */) { return 0 /* NOT_CONSTANT */; } const flag = getPatchFlag(codegenNode); if (!flag) { let returnType = 3 /* CAN_STRINGIFY */; // Element itself has no patch flag. However we still need to check: // 1. Even for a node with no patch flag, it is possible for it to contain // non-hoistable expressions that refers to scope variables, e.g. compiler // injected keys or cached event handlers. Therefore we need to always // check the codegenNode's props to be sure. const generatedPropsType = getGeneratedPropsConstantType(node, context); if (generatedPropsType === 0 /* NOT_CONSTANT */) { constantCache.set(node, 0 /* NOT_CONSTANT */); return 0 /* NOT_CONSTANT */; } if (generatedPropsType < returnType) { returnType = generatedPropsType; } // 2. its children. for (let i = 0; i < node.children.length; i++) { const childType = getConstantType(node.children[i], context); if (childType === 0 /* NOT_CONSTANT */) { constantCache.set(node, 0 /* NOT_CONSTANT */); return 0 /* NOT_CONSTANT */; } if (childType < returnType) { returnType = childType; } } // 3. if the type is not already CAN_SKIP_PATCH which is the lowest non-0 // type, check if any of the props can cause the type to be lowered // we can skip can_patch because it's guaranteed by the absence of a // patchFlag. 
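// constType is effectively a small lattice: 0 NOT_CONSTANT < 1 CAN_SKIP_PATCH
// < 2 CAN_HOIST < 3 CAN_STRINGIFY. The checks above and below always take the
// minimum over props and children, so any non-constant part poisons the node.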
if (returnType > 1 /* CAN_SKIP_PATCH */) { for (let i = 0; i < node.props.length; i++) { const p = node.props[i]; if (p.type === 7 /* DIRECTIVE */ && p.name === 'bind' && p.exp) { const expType = getConstantType(p.exp, context); if (expType === 0 /* NOT_CONSTANT */) { constantCache.set(node, 0 /* NOT_CONSTANT */); return 0 /* NOT_CONSTANT */; } if (expType < returnType) { returnType = expType; } } } } // only svg/foreignObject could be block here, however if they are // static then they don't need to be blocks since there will be no // nested updates. if (codegenNode.isBlock) { context.removeHelper(OPEN_BLOCK); context.removeHelper(CREATE_BLOCK); codegenNode.isBlock = false; context.helper(CREATE_VNODE); } constantCache.set(node, returnType); return returnType; } else { constantCache.set(node, 0 /* NOT_CONSTANT */); return 0 /* NOT_CONSTANT */; } case 2 /* TEXT */: case 3 /* COMMENT */: return 3 /* CAN_STRINGIFY */; case 9 /* IF */: case 11 /* FOR */: case 10 /* IF_BRANCH */: return 0 /* NOT_CONSTANT */; case 5 /* INTERPOLATION */: case 12 /* TEXT_CALL */: return getConstantType(node.content, context); case 4 /* SIMPLE_EXPRESSION */: return node.constType; case 8 /* COMPOUND_EXPRESSION */: let returnType = 3 /* CAN_STRINGIFY */; for (let i = 0; i < node.children.length; i++) { const child = node.children[i]; if (isString(child) || isSymbol(child)) { continue; } const childType = getConstantType(child, context); if (childType === 0 /* NOT_CONSTANT */) { return 0 /* NOT_CONSTANT */; } else if (childType < returnType) { returnType = childType; } } return returnType; default: return 0 /* NOT_CONSTANT */; } } function getGeneratedPropsConstantType(node, context) { let returnType = 3 /* CAN_STRINGIFY */; const props = getNodeProps(node); if (props && props.type === 15 /* JS_OBJECT_EXPRESSION */) { const { properties } = props; for (let i = 0; i < properties.length; i++) { const { key, value } = properties[i]; const keyType = getConstantType(key, context); if (keyType === 0 /* NOT_CONSTANT */) { return keyType; } if (keyType < returnType) { returnType = keyType; } if (value.type !== 4 /* SIMPLE_EXPRESSION */) { return 0 /* NOT_CONSTANT */; } const valueType = getConstantType(value, context); if (valueType === 0 /* NOT_CONSTANT */) { return valueType; } if (valueType < returnType) { returnType = valueType; } } } return returnType; } function getNodeProps(node) { const codegenNode = node.codegenNode; if (codegenNode.type === 13 /* VNODE_CALL */) { return codegenNode.props; } } function getPatchFlag(node) { const flag = node.patchFlag; return flag ? 
parseInt(flag, 10) : undefined; } function createTransformContext(root, { filename = '', prefixIdentifiers = false, hoistStatic = false, cacheHandlers = false, nodeTransforms = [], directiveTransforms = {}, transformHoist = null, isBuiltInComponent = NOOP, isCustomElement = NOOP, expressionPlugins = [], scopeId = null, slotted = true, ssr = false, ssrCssVars = ``, bindingMetadata = EMPTY_OBJ, inline = false, isTS = false, onError = defaultOnError }) { const nameMatch = filename.replace(/\?.*$/, '').match(/([^/\\]+)\.\w+$/); const context = { // options selfName: nameMatch && capitalize(camelize(nameMatch[1])), prefixIdentifiers, hoistStatic, cacheHandlers, nodeTransforms, directiveTransforms, transformHoist, isBuiltInComponent, isCustomElement, expressionPlugins, scopeId, slotted, ssr, ssrCssVars, bindingMetadata, inline, isTS, onError, // state root, helpers: new Map(), components: new Set(), directives: new Set(), hoists: [], imports: [], constantCache: new Map(), temps: 0, cached: 0, identifiers: Object.create(null), scopes: { vFor: 0, vSlot: 0, vPre: 0, vOnce: 0 }, parent: null, currentNode: root, childIndex: 0, // methods helper(name) { const count = context.helpers.get(name) || 0; context.helpers.set(name, count + 1); return name; }, removeHelper(name) { const count = context.helpers.get(name); if (count) { const currentCount = count - 1; if (!currentCount) { context.helpers.delete(name); } else { context.helpers.set(name, currentCount); } } }, helperString(name) { return `_${helperNameMap[context.helper(name)]}`; }, replaceNode(node) { /* istanbul ignore if */ { if (!context.currentNode) { throw new Error(`Node being replaced is already removed.`); } if (!context.parent) { throw new Error(`Cannot replace root node.`); } } context.parent.children[context.childIndex] = context.currentNode = node; }, removeNode(node) { if (!context.parent) { throw new Error(`Cannot remove root node.`); } const list = context.parent.children; const removalIndex = node ? list.indexOf(node) : context.currentNode ? 
context.childIndex : -1; /* istanbul ignore if */ if (removalIndex < 0) { throw new Error(`node being removed is not a child of current parent`); } if (!node || node === context.currentNode) { // current node removed context.currentNode = null; context.onNodeRemoved(); } else { // sibling node removed if (context.childIndex > removalIndex) { context.childIndex--; context.onNodeRemoved(); } } context.parent.children.splice(removalIndex, 1); }, onNodeRemoved: () => { }, addIdentifiers(exp) { }, removeIdentifiers(exp) { }, hoist(exp) { context.hoists.push(exp); const identifier = createSimpleExpression(`_hoisted_${context.hoists.length}`, false, exp.loc, 2 /* CAN_HOIST */); identifier.hoisted = exp; return identifier; }, cache(exp, isVNode = false) { return createCacheExpression(++context.cached, exp, isVNode); } }; return context; } function transform(root, options) { const context = createTransformContext(root, options); traverseNode(root, context); if (options.hoistStatic) { hoistStatic(root, context); } if (!options.ssr) { createRootCodegen(root, context); } // finalize meta information root.helpers = [...context.helpers.keys()]; root.components = [...context.components]; root.directives = [...context.directives]; root.imports = context.imports; root.hoists = context.hoists; root.temps = context.temps; root.cached = context.cached; } function createRootCodegen(root, context) { const { helper, removeHelper } = context; const { children } = root; if (children.length === 1) { const child = children[0]; // if the single child is an element, turn it into a block. if (isSingleElementRoot(root, child) && child.codegenNode) { // single element root is never hoisted so codegenNode will never be // SimpleExpressionNode const codegenNode = child.codegenNode; if (codegenNode.type === 13 /* VNODE_CALL */) { if (!codegenNode.isBlock) { removeHelper(CREATE_VNODE); codegenNode.isBlock = true; helper(OPEN_BLOCK); helper(CREATE_BLOCK); } } root.codegenNode = codegenNode; } else { // - single <slot/>, IfNode, ForNode: already blocks. // - single text node: always patched. // root codegen falls through via genNode() root.codegenNode = child; } } else if (children.length > 1) { // root has multiple nodes - return a fragment block. 
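// The multi-child root is wrapped in a STABLE_FRAGMENT block; when every child
// but one is a comment, the DEV_ROOT_FRAGMENT bit is OR-ed in as well (see the
// filter just below).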
let patchFlag = 64 /* STABLE_FRAGMENT */; let patchFlagText = PatchFlagNames[64 /* STABLE_FRAGMENT */]; // check if the fragment actually contains a single valid child with // the rest being comments if (children.filter(c => c.type !== 3 /* COMMENT */).length === 1) { patchFlag |= 2048 /* DEV_ROOT_FRAGMENT */; patchFlagText += `, ${PatchFlagNames[2048 /* DEV_ROOT_FRAGMENT */]}`; } root.codegenNode = createVNodeCall(context, helper(FRAGMENT), undefined, root.children, patchFlag + (` /* ${patchFlagText} */` ), undefined, undefined, true); } else ; } function traverseChildren(parent, context) { let i = 0; const nodeRemoved = () => { i--; }; for (; i < parent.children.length; i++) { const child = parent.children[i]; if (isString(child)) continue; context.parent = parent; context.childIndex = i; context.onNodeRemoved = nodeRemoved; traverseNode(child, context); } } function traverseNode(node, context) { context.currentNode = node; // apply transform plugins const { nodeTransforms } = context; const exitFns = []; for (let i = 0; i < nodeTransforms.length; i++) { const onExit = nodeTransforms[i](node, context); if (onExit) { if (isArray(onExit)) { exitFns.push(...onExit); } else { exitFns.push(onExit); } } if (!context.currentNode) { // node was removed return; } else { // node may have been replaced node = context.currentNode; } } switch (node.type) { case 3 /* COMMENT */: if (!context.ssr) { // inject import for the Comment symbol, which is needed for creating // comment nodes with `createVNode` context.helper(CREATE_COMMENT); } break; case 5 /* INTERPOLATION */: // no need to traverse, but we need to inject toString helper if (!context.ssr) { context.helper(TO_DISPLAY_STRING); } break; // for container types, further traverse downwards case 9 /* IF */: for (let i = 0; i < node.branches.length; i++) { traverseNode(node.branches[i], context); } break; case 10 /* IF_BRANCH */: case 11 /* FOR */: case 1 /* ELEMENT */: case 0 /* ROOT */: traverseChildren(node, context); break; } // exit transforms context.currentNode = node; let i = exitFns.length; while (i--) { exitFns[i](); } } function createStructuralDirectiveTransform(name, fn) { const matches = isString(name) ? 
(n) => n === name : (n) => name.test(n); return (node, context) => { if (node.type === 1 /* ELEMENT */) { const { props } = node; // structural directive transforms are not concerned with slots // as they are handled separately in vSlot.ts if (node.tagType === 3 /* TEMPLATE */ && props.some(isVSlot)) { return; } const exitFns = []; for (let i = 0; i < props.length; i++) { const prop = props[i]; if (prop.type === 7 /* DIRECTIVE */ && matches(prop.name)) { // structural directives are removed to avoid infinite recursion // also we remove them *before* applying so that it can further // traverse itself in case it moves the node around props.splice(i, 1); i--; const onExit = fn(node, prop, context); if (onExit) exitFns.push(onExit); } } return exitFns; } }; } const PURE_ANNOTATION = `/*#__PURE__*/`; function createCodegenContext(ast, { mode = 'function', prefixIdentifiers = mode === 'module', sourceMap = false, filename = `template.vue.html`, scopeId = null, optimizeImports = false, runtimeGlobalName = `Vue`, runtimeModuleName = `vue`, ssr = false }) { const context = { mode, prefixIdentifiers, sourceMap, filename, scopeId, optimizeImports, runtimeGlobalName, runtimeModuleName, ssr, source: ast.loc.source, code: ``, column: 1, line: 1, offset: 0, indentLevel: 0, pure: false, map: undefined, helper(key) { return `_${helperNameMap[key]}`; }, push(code, node) { context.code += code; }, indent() { newline(++context.indentLevel); }, deindent(withoutNewLine = false) { if (withoutNewLine) { --context.indentLevel; } else { newline(--context.indentLevel); } }, newline() { newline(context.indentLevel); } }; function newline(n) { context.push('\n' + ` `.repeat(n)); } return context; } function generate(ast, options = {}) { const context = createCodegenContext(ast, options); if (options.onContextCreated) options.onContextCreated(context); const { mode, push, prefixIdentifiers, indent, deindent, newline, scopeId, ssr } = context; const hasHelpers = ast.helpers.length > 0; const useWithBlock = !prefixIdentifiers && mode !== 'module'; // preambles // in setup() inline mode, the preamble is generated in a sub context // and returned separately. const preambleContext = context; { genFunctionPreamble(ast, preambleContext); } // enter render function const functionName = ssr ? `ssrRender` : `render`; const args = ssr ? ['_ctx', '_push', '_parent', '_attrs'] : ['_ctx', '_cache']; const signature = args.join(', '); { push(`function ${functionName}(${signature}) {`); } indent(); if (useWithBlock) { push(`with (_ctx) {`); indent(); // function mode const declarations should be inside with block // also they should be renamed to avoid collision with user properties if (hasHelpers) { push(`const { ${ast.helpers .map(s => `${helperNameMap[s]}: _${helperNameMap[s]}`) .join(', ')} } = _Vue`); push(`\n`); newline(); } } // generate asset resolution statements if (ast.components.length) { genAssets(ast.components, 'component', context); if (ast.directives.length || ast.temps > 0) { newline(); } } if (ast.directives.length) { genAssets(ast.directives, 'directive', context); if (ast.temps > 0) { newline(); } } if (ast.temps > 0) { push(`let `); for (let i = 0; i < ast.temps; i++) { push(`${i > 0 ? 
`, ` : ``}_temp${i}`); } } if (ast.components.length || ast.directives.length || ast.temps) { push(`\n`); newline(); } // generate the VNode tree expression if (!ssr) { push(`return `); } if (ast.codegenNode) { genNode(ast.codegenNode, context); } else { push(`null`); } if (useWithBlock) { deindent(); push(`}`); } deindent(); push(`}`); return { ast, code: context.code, preamble: ``, // SourceMapGenerator does have toJSON() method but it's not in the types map: context.map ? context.map.toJSON() : undefined }; } function genFunctionPreamble(ast, context) { const { ssr, prefixIdentifiers, push, newline, runtimeModuleName, runtimeGlobalName } = context; const VueBinding = runtimeGlobalName; const aliasHelper = (s) => `${helperNameMap[s]}: _${helperNameMap[s]}`; // Generate const declaration for helpers // In prefix mode, we place the const declaration at top so it's done // only once; But if we not prefixing, we place the declaration inside the // with block so it doesn't incur the `in` check cost for every helper access. if (ast.helpers.length > 0) { { // "with" mode. // save Vue in a separate variable to avoid collision push(`const _Vue = ${VueBinding}\n`); // in "with" mode, helpers are declared inside the with block to avoid // has check cost, but hoists are lifted out of the function - we need // to provide the helper here. if (ast.hoists.length) { const staticHelpers = [ CREATE_VNODE, CREATE_COMMENT, CREATE_TEXT, CREATE_STATIC ] .filter(helper => ast.helpers.includes(helper)) .map(aliasHelper) .join(', '); push(`const { ${staticHelpers} } = _Vue\n`); } } } genHoists(ast.hoists, context); newline(); push(`return `); } function genAssets(assets, type, { helper, push, newline }) { const resolver = helper(type === 'component' ? RESOLVE_COMPONENT : RESOLVE_DIRECTIVE); for (let i = 0; i < assets.length; i++) { let id = assets[i]; // potential component implicit self-reference inferred from SFC filename const maybeSelfReference = id.endsWith('__self'); if (maybeSelfReference) { id = id.slice(0, -6); } push(`const ${toValidAssetId(id, type)} = ${resolver}(${JSON.stringify(id)}${maybeSelfReference ? 
`, true` : ``})`); if (i < assets.length - 1) { newline(); } } } function genHoists(hoists, context) { if (!hoists.length) { return; } context.pure = true; const { push, newline, helper, scopeId, mode } = context; newline(); hoists.forEach((exp, i) => { if (exp) { push(`const _hoisted_${i + 1} = `); genNode(exp, context); newline(); } }); context.pure = false; } function isText$1(n) { return (isString(n) || n.type === 4 /* SIMPLE_EXPRESSION */ || n.type === 2 /* TEXT */ || n.type === 5 /* INTERPOLATION */ || n.type === 8 /* COMPOUND_EXPRESSION */); } function genNodeListAsArray(nodes, context) { const multilines = nodes.length > 3 || (nodes.some(n => isArray(n) || !isText$1(n))); context.push(`[`); multilines && context.indent(); genNodeList(nodes, context, multilines); multilines && context.deindent(); context.push(`]`); } function genNodeList(nodes, context, multilines = false, comma = true) { const { push, newline } = context; for (let i = 0; i < nodes.length; i++) { const node = nodes[i]; if (isString(node)) { push(node); } else if (isArray(node)) { genNodeListAsArray(node, context); } else { genNode(node, context); } if (i < nodes.length - 1) { if (multilines) { comma && push(','); newline(); } else { comma && push(', '); } } } } function genNode(node, context) { if (isString(node)) { context.push(node); return; } if (isSymbol(node)) { context.push(context.helper(node)); return; } switch (node.type) { case 1 /* ELEMENT */: case 9 /* IF */: case 11 /* FOR */: assert(node.codegenNode != null, `Codegen node is missing for element/if/for node. ` + `Apply appropriate transforms first.`); genNode(node.codegenNode, context); break; case 2 /* TEXT */: genText(node, context); break; case 4 /* SIMPLE_EXPRESSION */: genExpression(node, context); break; case 5 /* INTERPOLATION */: genInterpolation(node, context); break; case 12 /* TEXT_CALL */: genNode(node.codegenNode, context); break; case 8 /* COMPOUND_EXPRESSION */: genCompoundExpression(node, context); break; case 3 /* COMMENT */: genComment(node, context); break; case 13 /* VNODE_CALL */: genVNodeCall(node, context); break; case 14 /* JS_CALL_EXPRESSION */: genCallExpression(node, context); break; case 15 /* JS_OBJECT_EXPRESSION */: genObjectExpression(node, context); break; case 17 /* JS_ARRAY_EXPRESSION */: genArrayExpression(node, context); break; case 18 /* JS_FUNCTION_EXPRESSION */: genFunctionExpression(node, context); break; case 19 /* JS_CONDITIONAL_EXPRESSION */: genConditionalExpression(node, context); break; case 20 /* JS_CACHE_EXPRESSION */: genCacheExpression(node, context); break; // SSR only types case 21 /* JS_BLOCK_STATEMENT */: break; case 22 /* JS_TEMPLATE_LITERAL */: break; case 23 /* JS_IF_STATEMENT */: break; case 24 /* JS_ASSIGNMENT_EXPRESSION */: break; case 25 /* JS_SEQUENCE_EXPRESSION */: break; case 26 /* JS_RETURN_STATEMENT */: break; /* istanbul ignore next */ case 10 /* IF_BRANCH */: // noop break; default: { assert(false, `unhandled codegen node type: ${node.type}`); // make sure we exhaust all possible types const exhaustiveCheck = node; return exhaustiveCheck; } } } function genText(node, context) { context.push(JSON.stringify(node.content), node); } function genExpression(node, context) { const { content, isStatic } = node; context.push(isStatic ? 
JSON.stringify(content) : content, node); } function genInterpolation(node, context) { const { push, helper, pure } = context; if (pure) push(PURE_ANNOTATION); push(`${helper(TO_DISPLAY_STRING)}(`); genNode(node.content, context); push(`)`); } function genCompoundExpression(node, context) { for (let i = 0; i < node.children.length; i++) { const child = node.children[i]; if (isString(child)) { context.push(child); } else { genNode(child, context); } } } function genExpressionAsPropertyKey(node, context) { const { push } = context; if (node.type === 8 /* COMPOUND_EXPRESSION */) { push(`[`); genCompoundExpression(node, context); push(`]`); } else if (node.isStatic) { // only quote keys if necessary const text = isSimpleIdentifier(node.content) ? node.content : JSON.stringify(node.content); push(text, node); } else { push(`[${node.content}]`, node); } } function genComment(node, context) { { const { push, helper, pure } = context; if (pure) { push(PURE_ANNOTATION); } push(`${helper(CREATE_COMMENT)}(${JSON.stringify(node.content)})`, node); } } function genVNodeCall(node, context) { const { push, helper, pure } = context; const { tag, props, children, patchFlag, dynamicProps, directives, isBlock, disableTracking } = node; if (directives) { push(helper(WITH_DIRECTIVES) + `(`); } if (isBlock) { push(`(${helper(OPEN_BLOCK)}(${disableTracking ? `true` : ``}), `); } if (pure) { push(PURE_ANNOTATION); } push(helper(isBlock ? CREATE_BLOCK : CREATE_VNODE) + `(`, node); genNodeList(genNullableArgs([tag, props, children, patchFlag, dynamicProps]), context); push(`)`); if (isBlock) { push(`)`); } if (directives) { push(`, `); genNode(directives, context); push(`)`); } } function genNullableArgs(args) { let i = args.length; while (i--) { if (args[i] != null) break; } return args.slice(0, i + 1).map(arg => arg || `null`); } // JavaScript function genCallExpression(node, context) { const { push, helper, pure } = context; const callee = isString(node.callee) ? node.callee : helper(node.callee); if (pure) { push(PURE_ANNOTATION); } push(callee + `(`, node); genNodeList(node.arguments, context); push(`)`); } function genObjectExpression(node, context) { const { push, indent, deindent, newline } = context; const { properties } = node; if (!properties.length) { push(`{}`, node); return; } const multilines = properties.length > 1 || (properties.some(p => p.value.type !== 4 /* SIMPLE_EXPRESSION */)); push(multilines ? `{` : `{ `); multilines && indent(); for (let i = 0; i < properties.length; i++) { const { key, value } = properties[i]; // key genExpressionAsPropertyKey(key, context); push(`: `); // value genNode(value, context); if (i < properties.length - 1) { // will only reach this if it's multilines push(`,`); newline(); } } multilines && deindent(); push(multilines ? 
`}` : ` }`); } function genArrayExpression(node, context) { genNodeListAsArray(node.elements, context); } function genFunctionExpression(node, context) { const { push, indent, deindent, scopeId, mode } = context; const { params, returns, body, newline, isSlot } = node; if (isSlot) { // wrap slot functions with owner context push(`_${helperNameMap[WITH_CTX]}(`); } push(`(`, node); if (isArray(params)) { genNodeList(params, context); } else if (params) { genNode(params, context); } push(`) => `); if (newline || body) { push(`{`); indent(); } if (returns) { if (newline) { push(`return `); } if (isArray(returns)) { genNodeListAsArray(returns, context); } else { genNode(returns, context); } } else if (body) { genNode(body, context); } if (newline || body) { deindent(); push(`}`); } if (isSlot) { push(`)`); } } function genConditionalExpression(node, context) { const { test, consequent, alternate, newline: needNewline } = node; const { push, indent, deindent, newline } = context; if (test.type === 4 /* SIMPLE_EXPRESSION */) { const needsParens = !isSimpleIdentifier(test.content); needsParens && push(`(`); genExpression(test, context); needsParens && push(`)`); } else { push(`(`); genNode(test, context); push(`)`); } needNewline && indent(); context.indentLevel++; needNewline || push(` `); push(`? `); genNode(consequent, context); context.indentLevel--; needNewline && newline(); needNewline || push(` `); push(`: `); const isNested = alternate.type === 19 /* JS_CONDITIONAL_EXPRESSION */; if (!isNested) { context.indentLevel++; } genNode(alternate, context); if (!isNested) { context.indentLevel--; } needNewline && deindent(true /* without newline */); } function genCacheExpression(node, context) { const { push, helper, indent, deindent, newline } = context; push(`_cache[${node.index}] || (`); if (node.isVNode) { indent(); push(`${helper(SET_BLOCK_TRACKING)}(-1),`); newline(); } push(`_cache[${node.index}] = `); genNode(node.value, context); if (node.isVNode) { push(`,`); newline(); push(`${helper(SET_BLOCK_TRACKING)}(1),`); newline(); push(`_cache[${node.index}]`); deindent(); } push(`)`); } // these keywords should not appear inside expressions, but operators like // typeof, instanceof and in are allowed const prohibitedKeywordRE = new RegExp('\\b' + ('do,if,for,let,new,try,var,case,else,with,await,break,catch,class,const,' + 'super,throw,while,yield,delete,export,import,return,switch,default,' + 'extends,finally,continue,debugger,function,arguments,typeof,void') .split(',') .join('\\b|\\b') + '\\b'); // strip strings in expressions const stripStringRE = /'(?:[^'\\]|\\.)*'|"(?:[^"\\]|\\.)*"|`(?:[^`\\]|\\.)*\$\{|\}(?:[^`\\]|\\.)*`|`(?:[^`\\]|\\.)*`/g; /** * Validate a non-prefixed expression. * This is only called when using the in-browser runtime compiler since it * doesn't prefix expressions. */ function validateBrowserExpression(node, context, asParams = false, asRawStatements = false) { const exp = node.content; // empty expressions are validated per-directive since some directives // do allow empty expressions. if (!exp.trim()) { return; } try { new Function(asRawStatements ? ` ${exp} ` : `return ${asParams ? 
`(${exp}) => {}` : `(${exp})`}`); } catch (e) { let message = e.message; const keywordMatch = exp .replace(stripStringRE, '') .match(prohibitedKeywordRE); if (keywordMatch) { message = `avoid using JavaScript keyword as property name: "${keywordMatch[0]}"`; } context.onError(createCompilerError(43 /* X_INVALID_EXPRESSION */, node.loc, undefined, message)); } } const transformExpression = (node, context) => { if (node.type === 5 /* INTERPOLATION */) { node.content = processExpression(node.content, context); } else if (node.type === 1 /* ELEMENT */) { // handle directives on element for (let i = 0; i < node.props.length; i++) { const dir = node.props[i]; // do not process for v-on & v-for since they are special handled if (dir.type === 7 /* DIRECTIVE */ && dir.name !== 'for') { const exp = dir.exp; const arg = dir.arg; // do not process exp if this is v-on:arg - we need special handling // for wrapping inline statements. if (exp && exp.type === 4 /* SIMPLE_EXPRESSION */ && !(dir.name === 'on' && arg)) { dir.exp = processExpression(exp, context, // slot args must be processed as function params dir.name === 'slot'); } if (arg && arg.type === 4 /* SIMPLE_EXPRESSION */ && !arg.isStatic) { dir.arg = processExpression(arg, context); } } } } }; // Important: since this function uses Node.js only dependencies, it should // always be used with a leading !true check so that it can be // tree-shaken from the browser build. function processExpression(node, context, // some expressions like v-slot props & v-for aliases should be parsed as // function params asParams = false, // v-on handler values may contain multiple statements asRawStatements = false) { { { // simple in-browser validation (same logic in 2.x) validateBrowserExpression(node, context, asParams, asRawStatements); } return node; } } const transformIf = createStructuralDirectiveTransform(/^(if|else|else-if)$/, (node, dir, context) => { return processIf(node, dir, context, (ifNode, branch, isRoot) => { // #1587: We need to dynamically increment the key based on the current // node's sibling nodes, since chained v-if/else branches are // rendered at the same depth const siblings = context.parent.children; let i = siblings.indexOf(ifNode); let key = 0; while (i-- >= 0) { const sibling = siblings[i]; if (sibling && sibling.type === 9 /* IF */) { key += sibling.branches.length; } } // Exit callback. Complete the codegenNode when all children have been // transformed. return () => { if (isRoot) { ifNode.codegenNode = createCodegenNodeForBranch(branch, key, context); } else { // attach this branch's codegen node to the v-if root. const parentCondition = getParentCondition(ifNode.codegenNode); parentCondition.alternate = createCodegenNodeForBranch(branch, key + ifNode.branches.length - 1, context); } }; }); }); // target-agnostic transform used for both Client and SSR function processIf(node, dir, context, processCodegen) { if (dir.name !== 'else' && (!dir.exp || !dir.exp.content.trim())) { const loc = dir.exp ? 
dir.exp.loc : node.loc; context.onError(createCompilerError(27 /* X_V_IF_NO_EXPRESSION */, dir.loc)); dir.exp = createSimpleExpression(`true`, false, loc); } if (dir.exp) { validateBrowserExpression(dir.exp, context); } if (dir.name === 'if') { const branch = createIfBranch(node, dir); const ifNode = { type: 9 /* IF */, loc: node.loc, branches: [branch] }; context.replaceNode(ifNode); if (processCodegen) { return processCodegen(ifNode, branch, true); } } else { // locate the adjacent v-if const siblings = context.parent.children; const comments = []; let i = siblings.indexOf(node); while (i-- >= -1) { const sibling = siblings[i]; if (sibling && sibling.type === 3 /* COMMENT */) { context.removeNode(sibling); comments.unshift(sibling); continue; } if (sibling && sibling.type === 2 /* TEXT */ && !sibling.content.trim().length) { context.removeNode(sibling); continue; } if (sibling && sibling.type === 9 /* IF */) { // move the node to the if node's branches context.removeNode(); const branch = createIfBranch(node, dir); if (comments.length) { branch.children = [...comments, ...branch.children]; } // check if user is forcing same key on different branches { const key = branch.userKey; if (key) { sibling.branches.forEach(({ userKey }) => { if (isSameKey(userKey, key)) { context.onError(createCompilerError(28 /* X_V_IF_SAME_KEY */, branch.userKey.loc)); } }); } } sibling.branches.push(branch); const onExit = processCodegen && processCodegen(sibling, branch, false); // since the branch was removed, it will not be traversed. // make sure to traverse here. traverseNode(branch, context); // call on exit if (onExit) onExit(); // make sure to reset currentNode after traversal to indicate this // node has been removed. context.currentNode = null; } else { context.onError(createCompilerError(29 /* X_V_ELSE_NO_ADJACENT_IF */, node.loc)); } break; } } } function createIfBranch(node, dir) { return { type: 10 /* IF_BRANCH */, loc: node.loc, condition: dir.name === 'else' ? undefined : dir.exp, children: node.tagType === 3 /* TEMPLATE */ && !findDir(node, 'for') ? node.children : [node], userKey: findProp(node, `key`) }; } function createCodegenNodeForBranch(branch, keyIndex, context) { if (branch.condition) { return createConditionalExpression(branch.condition, createChildrenCodegenNode(branch, keyIndex, context), // make sure to pass in asBlock: true so that the comment node call // closes the current block. 
createCallExpression(context.helper(CREATE_COMMENT), [ '"v-if"' , 'true' ])); } else { return createChildrenCodegenNode(branch, keyIndex, context); } } function createChildrenCodegenNode(branch, keyIndex, context) { const { helper, removeHelper } = context; const keyProperty = createObjectProperty(`key`, createSimpleExpression(`${keyIndex}`, false, locStub, 2 /* CAN_HOIST */)); const { children } = branch; const firstChild = children[0]; const needFragmentWrapper = children.length !== 1 || firstChild.type !== 1 /* ELEMENT */; if (needFragmentWrapper) { if (children.length === 1 && firstChild.type === 11 /* FOR */) { // optimize away nested fragments when child is a ForNode const vnodeCall = firstChild.codegenNode; injectProp(vnodeCall, keyProperty, context); return vnodeCall; } else { let patchFlag = 64 /* STABLE_FRAGMENT */; let patchFlagText = PatchFlagNames[64 /* STABLE_FRAGMENT */]; // check if the fragment actually contains a single valid child with // the rest being comments if (children.filter(c => c.type !== 3 /* COMMENT */).length === 1) { patchFlag |= 2048 /* DEV_ROOT_FRAGMENT */; patchFlagText += `, ${PatchFlagNames[2048 /* DEV_ROOT_FRAGMENT */]}`; } return createVNodeCall(context, helper(FRAGMENT), createObjectExpression([keyProperty]), children, patchFlag + (` /* ${patchFlagText} */` ), undefined, undefined, true, false, branch.loc); } } else { const vnodeCall = firstChild .codegenNode; // Change createVNode to createBlock. if (vnodeCall.type === 13 /* VNODE_CALL */ && !vnodeCall.isBlock) { removeHelper(CREATE_VNODE); vnodeCall.isBlock = true; helper(OPEN_BLOCK); helper(CREATE_BLOCK); } // inject branch key injectProp(vnodeCall, keyProperty, context); return vnodeCall; } } function isSameKey(a, b) { if (!a || a.type !== b.type) { return false; } if (a.type === 6 /* ATTRIBUTE */) { if (a.value.content !== b.value.content) { return false; } } else { // directive const exp = a.exp; const branchExp = b.exp; if (exp.type !== branchExp.type) { return false; } if (exp.type !== 4 /* SIMPLE_EXPRESSION */ || (exp.isStatic !== branchExp.isStatic || exp.content !== branchExp.content)) { return false; } } return true; } function getParentCondition(node) { while (true) { if (node.type === 19 /* JS_CONDITIONAL_EXPRESSION */) { if (node.alternate.type === 19 /* JS_CONDITIONAL_EXPRESSION */) { node = node.alternate; } else { return node; } } else if (node.type === 20 /* JS_CACHE_EXPRESSION */) { node = node.value; } } } const transformFor = createStructuralDirectiveTransform('for', (node, dir, context) => { const { helper, removeHelper } = context; return processFor(node, dir, context, forNode => { // create the loop render function expression now, and add the // iterator on exit after all children have been traversed const renderExp = createCallExpression(helper(RENDER_LIST), [ forNode.source ]); const keyProp = findProp(node, `key`); const keyProperty = keyProp ? createObjectProperty(`key`, keyProp.type === 6 /* ATTRIBUTE */ ? createSimpleExpression(keyProp.value.content, true) : keyProp.exp) : null; const isStableFragment = forNode.source.type === 4 /* SIMPLE_EXPRESSION */ && forNode.source.constType > 0 /* NOT_CONSTANT */; const fragmentFlag = isStableFragment ? 64 /* STABLE_FRAGMENT */ : keyProp ? 
128 /* KEYED_FRAGMENT */ : 256 /* UNKEYED_FRAGMENT */; forNode.codegenNode = createVNodeCall(context, helper(FRAGMENT), undefined, renderExp, fragmentFlag + (` /* ${PatchFlagNames[fragmentFlag]} */` ), undefined, undefined, true /* isBlock */, !isStableFragment /* disableTracking */, node.loc); return () => { // finish the codegen now that all children have been traversed let childBlock; const isTemplate = isTemplateNode(node); const { children } = forNode; // check <template v-for> key placement if (isTemplate) { node.children.some(c => { if (c.type === 1 /* ELEMENT */) { const key = findProp(c, 'key'); if (key) { context.onError(createCompilerError(32 /* X_V_FOR_TEMPLATE_KEY_PLACEMENT */, key.loc)); return true; } } }); } const needFragmentWrapper = children.length !== 1 || children[0].type !== 1 /* ELEMENT */; const slotOutlet = isSlotOutlet(node) ? node : isTemplate && node.children.length === 1 && isSlotOutlet(node.children[0]) ? node.children[0] // api-extractor somehow fails to infer this : null; if (slotOutlet) { // <slot v-for="..."> or <template v-for="..."><slot/></template> childBlock = slotOutlet.codegenNode; if (isTemplate && keyProperty) { // <template v-for="..." :key="..."><slot/></template> // we need to inject the key to the renderSlot() call. // the props for renderSlot is passed as the 3rd argument. injectProp(childBlock, keyProperty, context); } } else if (needFragmentWrapper) { // <template v-for="..."> with text or multi-elements // should generate a fragment block for each loop childBlock = createVNodeCall(context, helper(FRAGMENT), keyProperty ? createObjectExpression([keyProperty]) : undefined, node.children, 64 /* STABLE_FRAGMENT */ + (` /* ${PatchFlagNames[64 /* STABLE_FRAGMENT */]} */` ), undefined, undefined, true); } else { // Normal element v-for. Directly use the child's codegenNode // but mark it as a block. childBlock = children[0] .codegenNode; if (isTemplate && keyProperty) { injectProp(childBlock, keyProperty, context); } if (childBlock.isBlock !== !isStableFragment) { if (childBlock.isBlock) { // switch from block to vnode removeHelper(OPEN_BLOCK); removeHelper(CREATE_BLOCK); } else { // switch from vnode to block removeHelper(CREATE_VNODE); } } childBlock.isBlock = !isStableFragment; if (childBlock.isBlock) { helper(OPEN_BLOCK); helper(CREATE_BLOCK); } else { helper(CREATE_VNODE); } } renderExp.arguments.push(createFunctionExpression(createForLoopParams(forNode.parseResult), childBlock, true /* force newline */)); }; }); }); // target-agnostic transform used for both Client and SSR function processFor(node, dir, context, processCodegen) { if (!dir.exp) { context.onError(createCompilerError(30 /* X_V_FOR_NO_EXPRESSION */, dir.loc)); return; } const parseResult = parseForExpression( // can only be simple expression because vFor transform is applied // before expression transform. dir.exp, context); if (!parseResult) { context.onError(createCompilerError(31 /* X_V_FOR_MALFORMED_EXPRESSION */, dir.loc)); return; } const { addIdentifiers, removeIdentifiers, scopes } = context; const { source, value, key, index } = parseResult; const forNode = { type: 11 /* FOR */, loc: dir.loc, source, valueAlias: value, keyAlias: key, objectIndexAlias: index, parseResult, children: isTemplateNode(node) ? 
node.children : [node] }; context.replaceNode(forNode); // bookkeeping scopes.vFor++; const onExit = processCodegen && processCodegen(forNode); return () => { scopes.vFor--; if (onExit) onExit(); }; } const forAliasRE = /([\s\S]*?)\s+(?:in|of)\s+([\s\S]*)/; // This regex doesn't cover the case if key or index aliases have destructuring, // but those do not make sense in the first place, so this works in practice. const forIteratorRE = /,([^,\}\]]*)(?:,([^,\}\]]*))?$/; const stripParensRE = /^\(|\)$/g; function parseForExpression(input, context) { const loc = input.loc; const exp = input.content; const inMatch = exp.match(forAliasRE); if (!inMatch) return; const [, LHS, RHS] = inMatch; const result = { source: createAliasExpression(loc, RHS.trim(), exp.indexOf(RHS, LHS.length)), value: undefined, key: undefined, index: undefined }; { validateBrowserExpression(result.source, context); } let valueContent = LHS.trim() .replace(stripParensRE, '') .trim(); const trimmedOffset = LHS.indexOf(valueContent); const iteratorMatch = valueContent.match(forIteratorRE); if (iteratorMatch) { valueContent = valueContent.replace(forIteratorRE, '').trim(); const keyContent = iteratorMatch[1].trim(); let keyOffset; if (keyContent) { keyOffset = exp.indexOf(keyContent, trimmedOffset + valueContent.length); result.key = createAliasExpression(loc, keyContent, keyOffset); { validateBrowserExpression(result.key, context, true); } } if (iteratorMatch[2]) { const indexContent = iteratorMatch[2].trim(); if (indexContent) { result.index = createAliasExpression(loc, indexContent, exp.indexOf(indexContent, result.key ? keyOffset + keyContent.length : trimmedOffset + valueContent.length)); { validateBrowserExpression(result.index, context, true); } } } } if (valueContent) { result.value = createAliasExpression(loc, valueContent, trimmedOffset); { validateBrowserExpression(result.value, context, true); } } return result; } function createAliasExpression(range, content, offset) { return createSimpleExpression(content, false, getInnerRange(range, offset, content.length)); } function createForLoopParams({ value, key, index }) { const params = []; if (value) { params.push(value); } if (key) { if (!value) { params.push(createSimpleExpression(`_`, false)); } params.push(key); } if (index) { if (!key) { if (!value) { params.push(createSimpleExpression(`_`, false)); } params.push(createSimpleExpression(`__`, false)); } params.push(index); } return params; } const defaultFallback = createSimpleExpression(`undefined`, false); // A NodeTransform that: // 1. Tracks scope identifiers for scoped slots so that they don't get prefixed // by transformExpression. This is only applied in non-browser builds with // { prefixIdentifiers: true }. // 2. Track v-slot depths so that we know a slot is inside another slot. // Note the exit callback is executed before buildSlots() on the same node, // so only nested slots see positive numbers. const trackSlotScopes = (node, context) => { if (node.type === 1 /* ELEMENT */ && (node.tagType === 1 /* COMPONENT */ || node.tagType === 3 /* TEMPLATE */)) { // We are only checking non-empty v-slot here // since we only care about slots that introduce scope variables. const vSlot = findDir(node, 'slot'); if (vSlot) { vSlot.exp; context.scopes.vSlot++; return () => { context.scopes.vSlot--; }; } } }; const buildClientSlotFn = (props, children, loc) => createFunctionExpression(props, children, false /* newline */, true /* isSlot */, children.length ? 
children[0].loc : loc); // Instead of being a DirectiveTransform, v-slot processing is called during // transformElement to build the slots object for a component. function buildSlots(node, context, buildSlotFn = buildClientSlotFn) { context.helper(WITH_CTX); const { children, loc } = node; const slotsProperties = []; const dynamicSlots = []; const buildDefaultSlotProperty = (props, children) => createObjectProperty(`default`, buildSlotFn(props, children, loc)); // If the slot is inside a v-for or another v-slot, force it to be dynamic // since it likely uses a scope variable. let hasDynamicSlots = context.scopes.vSlot > 0 || context.scopes.vFor > 0; // 1. Check for slot with slotProps on component itself. // <Comp v-slot="{ prop }"/> const onComponentSlot = findDir(node, 'slot', true); if (onComponentSlot) { const { arg, exp } = onComponentSlot; if (arg && !isStaticExp(arg)) { hasDynamicSlots = true; } slotsProperties.push(createObjectProperty(arg || createSimpleExpression('default', true), buildSlotFn(exp, children, loc))); } // 2. Iterate through children and check for template slots // <template v-slot:foo="{ prop }"> let hasTemplateSlots = false; let hasNamedDefaultSlot = false; const implicitDefaultChildren = []; const seenSlotNames = new Set(); for (let i = 0; i < children.length; i++) { const slotElement = children[i]; let slotDir; if (!isTemplateNode(slotElement) || !(slotDir = findDir(slotElement, 'slot', true))) { // not a <template v-slot>, skip. if (slotElement.type !== 3 /* COMMENT */) { implicitDefaultChildren.push(slotElement); } continue; } if (onComponentSlot) { // already has on-component slot - this is incorrect usage. context.onError(createCompilerError(36 /* X_V_SLOT_MIXED_SLOT_USAGE */, slotDir.loc)); break; } hasTemplateSlots = true; const { children: slotChildren, loc: slotLoc } = slotElement; const { arg: slotName = createSimpleExpression(`default`, true), exp: slotProps, loc: dirLoc } = slotDir; // check if name is dynamic. let staticSlotName; if (isStaticExp(slotName)) { staticSlotName = slotName ? slotName.content : `default`; } else { hasDynamicSlots = true; } const slotFunction = buildSlotFn(slotProps, slotChildren, slotLoc); // check if this slot is conditional (v-if/v-for) let vIf; let vElse; let vFor; if ((vIf = findDir(slotElement, 'if'))) { hasDynamicSlots = true; dynamicSlots.push(createConditionalExpression(vIf.exp, buildDynamicSlot(slotName, slotFunction), defaultFallback)); } else if ((vElse = findDir(slotElement, /^else(-if)?$/, true /* allowEmpty */))) { // find adjacent v-if let j = i; let prev; while (j--) { prev = children[j]; if (prev.type !== 3 /* COMMENT */) { break; } } if (prev && isTemplateNode(prev) && findDir(prev, 'if')) { // remove node children.splice(i, 1); i--; // attach this slot to previous conditional let conditional = dynamicSlots[dynamicSlots.length - 1]; while (conditional.alternate.type === 19 /* JS_CONDITIONAL_EXPRESSION */) { conditional = conditional.alternate; } conditional.alternate = vElse.exp ? createConditionalExpression(vElse.exp, buildDynamicSlot(slotName, slotFunction), defaultFallback) : buildDynamicSlot(slotName, slotFunction); } else { context.onError(createCompilerError(29 /* X_V_ELSE_NO_ADJACENT_IF */, vElse.loc)); } } else if ((vFor = findDir(slotElement, 'for'))) { hasDynamicSlots = true; const parseResult = vFor.parseResult || parseForExpression(vFor.exp, context); if (parseResult) { // Render the dynamic slots as an array and add it to the createSlot() // args. 
The runtime knows how to handle it appropriately. dynamicSlots.push(createCallExpression(context.helper(RENDER_LIST), [ parseResult.source, createFunctionExpression(createForLoopParams(parseResult), buildDynamicSlot(slotName, slotFunction), true /* force newline */) ])); } else { context.onError(createCompilerError(31 /* X_V_FOR_MALFORMED_EXPRESSION */, vFor.loc)); } } else { // check duplicate static names if (staticSlotName) { if (seenSlotNames.has(staticSlotName)) { context.onError(createCompilerError(37 /* X_V_SLOT_DUPLICATE_SLOT_NAMES */, dirLoc)); continue; } seenSlotNames.add(staticSlotName); if (staticSlotName === 'default') { hasNamedDefaultSlot = true; } } slotsProperties.push(createObjectProperty(slotName, slotFunction)); } } if (!onComponentSlot) { if (!hasTemplateSlots) { // implicit default slot (on component) slotsProperties.push(buildDefaultSlotProperty(undefined, children)); } else if (implicitDefaultChildren.length) { // implicit default slot (mixed with named slots) if (hasNamedDefaultSlot) { context.onError(createCompilerError(38 /* X_V_SLOT_EXTRANEOUS_DEFAULT_SLOT_CHILDREN */, implicitDefaultChildren[0].loc)); } else { slotsProperties.push(buildDefaultSlotProperty(undefined, implicitDefaultChildren)); } } } const slotFlag = hasDynamicSlots ? 2 /* DYNAMIC */ : hasForwardedSlots(node.children) ? 3 /* FORWARDED */ : 1 /* STABLE */; let slots = createObjectExpression(slotsProperties.concat(createObjectProperty(`_`, // 2 = compiled but dynamic = can skip normalization, but must run diff // 1 = compiled and static = can skip normalization AND diff as optimized createSimpleExpression(slotFlag + (` /* ${slotFlagsText[slotFlag]} */` ), false))), loc); if (dynamicSlots.length) { slots = createCallExpression(context.helper(CREATE_SLOTS), [ slots, createArrayExpression(dynamicSlots) ]); } return { slots, hasDynamicSlots }; } function buildDynamicSlot(name, fn) { return createObjectExpression([ createObjectProperty(`name`, name), createObjectProperty(`fn`, fn) ]); } function hasForwardedSlots(children) { for (let i = 0; i < children.length; i++) { const child = children[i]; switch (child.type) { case 1 /* ELEMENT */: if (child.tagType === 2 /* SLOT */ || (child.tagType === 0 /* ELEMENT */ && hasForwardedSlots(child.children))) { return true; } break; case 9 /* IF */: if (hasForwardedSlots(child.branches)) return true; break; case 10 /* IF_BRANCH */: case 11 /* FOR */: if (hasForwardedSlots(child.children)) return true; break; } } return false; } // some directive transforms (e.g. v-model) may return a symbol for runtime // import, which should be used instead of a resolveDirective call. const directiveImportMap = new WeakMap(); // generate a JavaScript AST for this element's codegen const transformElement = (node, context) => { // perform the work on exit, after all child expressions have been // processed and merged. return function postTransformElement() { node = context.currentNode; if (!(node.type === 1 /* ELEMENT */ && (node.tagType === 0 /* ELEMENT */ || node.tagType === 1 /* COMPONENT */))) { return; } const { tag, props } = node; const isComponent = node.tagType === 1 /* COMPONENT */; // The goal of the transform is to create a codegenNode implementing the // VNodeCall interface. const vnodeTag = isComponent ? 
resolveComponentType(node, context) : `"${tag}"`; const isDynamicComponent = isObject(vnodeTag) && vnodeTag.callee === RESOLVE_DYNAMIC_COMPONENT; let vnodeProps; let vnodeChildren; let vnodePatchFlag; let patchFlag = 0; let vnodeDynamicProps; let dynamicPropNames; let vnodeDirectives; let shouldUseBlock = // dynamic component may resolve to plain elements isDynamicComponent || vnodeTag === TELEPORT || vnodeTag === SUSPENSE || (!isComponent && // <svg> and <foreignObject> must be forced into blocks so that block // updates inside get proper isSVG flag at runtime. (#639, #643) // This is technically web-specific, but splitting the logic out of core // leads to too much unnecessary complexity. (tag === 'svg' || tag === 'foreignObject' || // #938: elements with dynamic keys should be forced into blocks findProp(node, 'key', true))); // props if (props.length > 0) { const propsBuildResult = buildProps(node, context); vnodeProps = propsBuildResult.props; patchFlag = propsBuildResult.patchFlag; dynamicPropNames = propsBuildResult.dynamicPropNames; const directives = propsBuildResult.directives; vnodeDirectives = directives && directives.length ? createArrayExpression(directives.map(dir => buildDirectiveArgs(dir, context))) : undefined; } // children if (node.children.length > 0) { if (vnodeTag === KEEP_ALIVE) { // Although a built-in component, we compile KeepAlive with raw children // instead of slot functions so that it can be used inside Transition // or other Transition-wrapping HOCs. // To ensure correct updates with block optimizations, we need to: // 1. Force keep-alive into a block. This avoids its children being // collected by a parent block. shouldUseBlock = true; // 2. Force keep-alive to always be updated, since it uses raw children. patchFlag |= 1024 /* DYNAMIC_SLOTS */; if (node.children.length > 1) { context.onError(createCompilerError(44 /* X_KEEP_ALIVE_INVALID_CHILDREN */, { start: node.children[0].loc.start, end: node.children[node.children.length - 1].loc.end, source: '' })); } } const shouldBuildAsSlots = isComponent && // Teleport is not a real component and has dedicated runtime handling vnodeTag !== TELEPORT && // explained above. 
vnodeTag !== KEEP_ALIVE; if (shouldBuildAsSlots) { const { slots, hasDynamicSlots } = buildSlots(node, context); vnodeChildren = slots; if (hasDynamicSlots) { patchFlag |= 1024 /* DYNAMIC_SLOTS */; } } else if (node.children.length === 1 && vnodeTag !== TELEPORT) { const child = node.children[0]; const type = child.type; // check for dynamic text children const hasDynamicTextChild = type === 5 /* INTERPOLATION */ || type === 8 /* COMPOUND_EXPRESSION */; if (hasDynamicTextChild && getConstantType(child, context) === 0 /* NOT_CONSTANT */) { patchFlag |= 1 /* TEXT */; } // pass directly if the only child is a text node // (plain / interpolation / expression) if (hasDynamicTextChild || type === 2 /* TEXT */) { vnodeChildren = child; } else { vnodeChildren = node.children; } } else { vnodeChildren = node.children; } } // patchFlag & dynamicPropNames if (patchFlag !== 0) { { if (patchFlag < 0) { // special flags (negative and mutually exclusive) vnodePatchFlag = patchFlag + ` /* ${PatchFlagNames[patchFlag]} */`; } else { // bitwise flags const flagNames = Object.keys(PatchFlagNames) .map(Number) .filter(n => n > 0 && patchFlag & n) .map(n => PatchFlagNames[n]) .join(`, `); vnodePatchFlag = patchFlag + ` /* ${flagNames} */`; } } if (dynamicPropNames && dynamicPropNames.length) { vnodeDynamicProps = stringifyDynamicPropNames(dynamicPropNames); } } node.codegenNode = createVNodeCall(context, vnodeTag, vnodeProps, vnodeChildren, vnodePatchFlag, vnodeDynamicProps, vnodeDirectives, !!shouldUseBlock, false /* disableTracking */, node.loc); }; }; function resolveComponentType(node, context, ssr = false) { const { tag } = node; // 1. dynamic component const isProp = isComponentTag(tag) ? findProp(node, 'is') : findDir(node, 'is'); if (isProp) { const exp = isProp.type === 6 /* ATTRIBUTE */ ? isProp.value && createSimpleExpression(isProp.value.content, true) : isProp.exp; if (exp) { return createCallExpression(context.helper(RESOLVE_DYNAMIC_COMPONENT), [ exp ]); } } // 2. built-in components (Teleport, Transition, KeepAlive, Suspense...) const builtIn = isCoreComponent(tag) || context.isBuiltInComponent(tag); if (builtIn) { // built-ins are simply fallthroughs / have special handling during ssr // so we don't need to import their runtime equivalents if (!ssr) context.helper(builtIn); return builtIn; } // 5. user component (resolve) context.helper(RESOLVE_COMPONENT); context.components.add(tag); return toValidAssetId(tag, `component`); } function buildProps(node, context, props = node.props, ssr = false) { const { tag, loc: elementLoc } = node; const isComponent = node.tagType === 1 /* COMPONENT */; let properties = []; const mergeArgs = []; const runtimeDirectives = []; // patchFlag analysis let patchFlag = 0; let hasRef = false; let hasClassBinding = false; let hasStyleBinding = false; let hasHydrationEventBinding = false; let hasDynamicKeys = false; let hasVnodeHook = false; const dynamicPropNames = []; const analyzePatchFlag = ({ key, value }) => { if (isStaticExp(key)) { const name = key.content; const isEventHandler = isOn(name); if (!isComponent && isEventHandler && // omit the flag for click handlers because hydration gives click // dedicated fast path. 
name.toLowerCase() !== 'onclick' && // omit v-model handlers name !== 'onUpdate:modelValue' && // omit onVnodeXXX hooks !isReservedProp(name)) { hasHydrationEventBinding = true; } if (isEventHandler && isReservedProp(name)) { hasVnodeHook = true; } if (value.type === 20 /* JS_CACHE_EXPRESSION */ || ((value.type === 4 /* SIMPLE_EXPRESSION */ || value.type === 8 /* COMPOUND_EXPRESSION */) && getConstantType(value, context) > 0)) { // skip if the prop is a cached handler or has constant value return; } if (name === 'ref') { hasRef = true; } else if (name === 'class' && !isComponent) { hasClassBinding = true; } else if (name === 'style' && !isComponent) { hasStyleBinding = true; } else if (name !== 'key' && !dynamicPropNames.includes(name)) { dynamicPropNames.push(name); } } else { hasDynamicKeys = true; } }; for (let i = 0; i < props.length; i++) { // static attribute const prop = props[i]; if (prop.type === 6 /* ATTRIBUTE */) { const { loc, name, value } = prop; let isStatic = true; if (name === 'ref') { hasRef = true; } // skip :is on <component> if (name === 'is' && isComponentTag(tag)) { continue; } properties.push(createObjectProperty(createSimpleExpression(name, true, getInnerRange(loc, 0, name.length)), createSimpleExpression(value ? value.content : '', isStatic, value ? value.loc : loc))); } else { // directives const { name, arg, exp, loc } = prop; const isBind = name === 'bind'; const isOn = name === 'on'; // skip v-slot - it is handled by its dedicated transform. if (name === 'slot') { if (!isComponent) { context.onError(createCompilerError(39 /* X_V_SLOT_MISPLACED */, loc)); } continue; } // skip v-once - it is handled by its dedicated transform. if (name === 'once') { continue; } // skip v-is and :is on <component> if (name === 'is' || (isBind && isComponentTag(tag) && isBindKey(arg, 'is'))) { continue; } // skip v-on in SSR compilation if (isOn && ssr) { continue; } // special case for v-bind and v-on with no argument if (!arg && (isBind || isOn)) { hasDynamicKeys = true; if (exp) { if (properties.length) { mergeArgs.push(createObjectExpression(dedupeProperties(properties), elementLoc)); properties = []; } if (isBind) { mergeArgs.push(exp); } else { // v-on="obj" -> toHandlers(obj) mergeArgs.push({ type: 14 /* JS_CALL_EXPRESSION */, loc, callee: context.helper(TO_HANDLERS), arguments: [exp] }); } } else { context.onError(createCompilerError(isBind ? 33 /* X_V_BIND_NO_EXPRESSION */ : 34 /* X_V_ON_NO_EXPRESSION */, loc)); } continue; } const directiveTransform = context.directiveTransforms[name]; if (directiveTransform) { // has built-in directive transform. const { props, needRuntime } = directiveTransform(prop, node, context); !ssr && props.forEach(analyzePatchFlag); properties.push(...props); if (needRuntime) { runtimeDirectives.push(prop); if (isSymbol(needRuntime)) { directiveImportMap.set(prop, needRuntime); } } } else { // no built-in transform, this is a user custom directive. 
runtimeDirectives.push(prop); } } } let propsExpression = undefined; // has v-bind="object" or v-on="object", wrap with mergeProps if (mergeArgs.length) { if (properties.length) { mergeArgs.push(createObjectExpression(dedupeProperties(properties), elementLoc)); } if (mergeArgs.length > 1) { propsExpression = createCallExpression(context.helper(MERGE_PROPS), mergeArgs, elementLoc); } else { // single v-bind with nothing else - no need for a mergeProps call propsExpression = mergeArgs[0]; } } else if (properties.length) { propsExpression = createObjectExpression(dedupeProperties(properties), elementLoc); } // patchFlag analysis if (hasDynamicKeys) { patchFlag |= 16 /* FULL_PROPS */; } else { if (hasClassBinding) { patchFlag |= 2 /* CLASS */; } if (hasStyleBinding) { patchFlag |= 4 /* STYLE */; } if (dynamicPropNames.length) { patchFlag |= 8 /* PROPS */; } if (hasHydrationEventBinding) { patchFlag |= 32 /* HYDRATE_EVENTS */; } } if ((patchFlag === 0 || patchFlag === 32 /* HYDRATE_EVENTS */) && (hasRef || hasVnodeHook || runtimeDirectives.length > 0)) { patchFlag |= 512 /* NEED_PATCH */; } return { props: propsExpression, directives: runtimeDirectives, patchFlag, dynamicPropNames }; } // Dedupe props in an object literal. // Literal duplicated attributes would have been warned during the parse phase, // however, it's possible to encounter duplicated `onXXX` handlers with different // modifiers. We also need to merge static and dynamic class / style attributes. // - onXXX handlers / style: merge into array // - class: merge into single expression with concatenation function dedupeProperties(properties) { const knownProps = new Map(); const deduped = []; for (let i = 0; i < properties.length; i++) { const prop = properties[i]; // dynamic keys are always allowed if (prop.key.type === 8 /* COMPOUND_EXPRESSION */ || !prop.key.isStatic) { deduped.push(prop); continue; } const name = prop.key.content; const existing = knownProps.get(name); if (existing) { if (name === 'style' || name === 'class' || name.startsWith('on')) { mergeAsArray(existing, prop); } // unexpected duplicate, should have emitted error during parse } else { knownProps.set(name, prop); deduped.push(prop); } } return deduped; } function mergeAsArray(existing, incoming) { if (existing.value.type === 17 /* JS_ARRAY_EXPRESSION */) { existing.value.elements.push(incoming.value); } else { existing.value = createArrayExpression([existing.value, incoming.value], existing.loc); } } function buildDirectiveArgs(dir, context) { const dirArgs = []; const runtime = directiveImportMap.get(dir); if (runtime) { // built-in directive with runtime dirArgs.push(context.helperString(runtime)); } else { { // inject statement for resolving directive context.helper(RESOLVE_DIRECTIVE); context.directives.add(dir.name); dirArgs.push(toValidAssetId(dir.name, `directive`)); } } const { loc } = dir; if (dir.exp) dirArgs.push(dir.exp); if (dir.arg) { if (!dir.exp) { dirArgs.push(`void 0`); } dirArgs.push(dir.arg); } if (Object.keys(dir.modifiers).length) { if (!dir.arg) { if (!dir.exp) { dirArgs.push(`void 0`); } dirArgs.push(`void 0`); } const trueExpression = createSimpleExpression(`true`, false, loc); dirArgs.push(createObjectExpression(dir.modifiers.map(modifier => createObjectProperty(modifier, trueExpression)), loc)); } return createArrayExpression(dirArgs, dir.loc); } function stringifyDynamicPropNames(props) { let propsNamesString = `[`; for (let i = 0, l = props.length; i < l; i++) { propsNamesString += JSON.stringify(props[i]); if (i < l - 1) 
propsNamesString += ', '; } return propsNamesString + `]`; } function isComponentTag(tag) { return tag[0].toLowerCase() + tag.slice(1) === 'component'; } const transformSlotOutlet = (node, context) => { if (isSlotOutlet(node)) { const { children, loc } = node; const { slotName, slotProps } = processSlotOutlet(node, context); const slotArgs = [ context.prefixIdentifiers ? `_ctx.$slots` : `$slots`, slotName ]; if (slotProps) { slotArgs.push(slotProps); } if (children.length) { if (!slotProps) { slotArgs.push(`{}`); } slotArgs.push(createFunctionExpression([], children, false, false, loc)); } if (context.scopeId && !context.slotted) { if (!slotProps) { slotArgs.push(`{}`); } if (!children.length) { slotArgs.push(`undefined`); } slotArgs.push(`true`); } node.codegenNode = createCallExpression(context.helper(RENDER_SLOT), slotArgs, loc); } }; function processSlotOutlet(node, context) { let slotName = `"default"`; let slotProps = undefined; const nonNameProps = []; for (let i = 0; i < node.props.length; i++) { const p = node.props[i]; if (p.type === 6 /* ATTRIBUTE */) { if (p.value) { if (p.name === 'name') { slotName = JSON.stringify(p.value.content); } else { p.name = camelize(p.name); nonNameProps.push(p); } } } else { if (p.name === 'bind' && isBindKey(p.arg, 'name')) { if (p.exp) slotName = p.exp; } else { if (p.name === 'bind' && p.arg && isStaticExp(p.arg)) { p.arg.content = camelize(p.arg.content); } nonNameProps.push(p); } } } if (nonNameProps.length > 0) { const { props, directives } = buildProps(node, context, nonNameProps); slotProps = props; if (directives.length) { context.onError(createCompilerError(35 /* X_V_SLOT_UNEXPECTED_DIRECTIVE_ON_SLOT_OUTLET */, directives[0].loc)); } } return { slotName, slotProps }; } const fnExpRE = /^\s*([\w$_]+|\([^)]*?\))\s*=>|^\s*function(?:\s+[\w$]+)?\s*\(/; const transformOn = (dir, node, context, augmentor) => { const { loc, modifiers, arg } = dir; if (!dir.exp && !modifiers.length) { context.onError(createCompilerError(34 /* X_V_ON_NO_EXPRESSION */, loc)); } let eventName; if (arg.type === 4 /* SIMPLE_EXPRESSION */) { if (arg.isStatic) { const rawName = arg.content; // for all event listeners, auto convert it to camelCase. See issue #2249 eventName = createSimpleExpression(toHandlerKey(camelize(rawName)), true, arg.loc); } else { // #2388 eventName = createCompoundExpression([ `${context.helperString(TO_HANDLER_KEY)}(`, arg, `)` ]); } } else { // already a compound expression. eventName = arg; eventName.children.unshift(`${context.helperString(TO_HANDLER_KEY)}(`); eventName.children.push(`)`); } // handler processing let exp = dir.exp; if (exp && !exp.content.trim()) { exp = undefined; } let shouldCache = context.cacheHandlers && !exp; if (exp) { const isMemberExp = isMemberExpression(exp.content); const isInlineStatement = !(isMemberExp || fnExpRE.test(exp.content)); const hasMultipleStatements = exp.content.includes(`;`); { validateBrowserExpression(exp, context, false, hasMultipleStatements); } if (isInlineStatement || (shouldCache && isMemberExp)) { // wrap inline statement in a function expression exp = createCompoundExpression([ `${isInlineStatement ? `$event` : `${``}(...args)`} => ${hasMultipleStatements ? `{` : `(`}`, exp, hasMultipleStatements ? 
`}` : `)` ]); } } let ret = { props: [ createObjectProperty(eventName, exp || createSimpleExpression(`() => {}`, false, loc)) ] }; // apply extended compiler augmentor if (augmentor) { ret = augmentor(ret); } if (shouldCache) { // cache handlers so that it's always the same handler being passed down. // this avoids unnecessary re-renders when users use inline handlers on // components. ret.props[0].value = context.cache(ret.props[0].value); } return ret; }; // v-bind without arg is handled directly in ./transformElements.ts due to it affecting // codegen for the entire props object. This transform here is only for v-bind // *with* args. const transformBind = (dir, node, context) => { const { exp, modifiers, loc } = dir; const arg = dir.arg; if (arg.type !== 4 /* SIMPLE_EXPRESSION */) { arg.children.unshift(`(`); arg.children.push(`) || ""`); } else if (!arg.isStatic) { arg.content = `${arg.content} || ""`; } // .prop is no longer necessary due to new patch behavior // .sync is replaced by v-model:arg if (modifiers.includes('camel')) { if (arg.type === 4 /* SIMPLE_EXPRESSION */) { if (arg.isStatic) { arg.content = camelize(arg.content); } else { arg.content = `${context.helperString(CAMELIZE)}(${arg.content})`; } } else { arg.children.unshift(`${context.helperString(CAMELIZE)}(`); arg.children.push(`)`); } } if (!exp || (exp.type === 4 /* SIMPLE_EXPRESSION */ && !exp.content.trim())) { context.onError(createCompilerError(33 /* X_V_BIND_NO_EXPRESSION */, loc)); return { props: [createObjectProperty(arg, createSimpleExpression('', true, loc))] }; } return { props: [createObjectProperty(arg, exp)] }; }; // Merge adjacent text nodes and expressions into a single expression // e.g. <div>abc {{ d }} {{ e }}</div> should have a single expression node as child. const transformText = (node, context) => { if (node.type === 0 /* ROOT */ || node.type === 1 /* ELEMENT */ || node.type === 11 /* FOR */ || node.type === 10 /* IF_BRANCH */) { // perform the transform on node exit so that all expressions have already // been processed. return () => { const children = node.children; let currentContainer = undefined; let hasText = false; for (let i = 0; i < children.length; i++) { const child = children[i]; if (isText(child)) { hasText = true; for (let j = i + 1; j < children.length; j++) { const next = children[j]; if (isText(next)) { if (!currentContainer) { currentContainer = children[i] = { type: 8 /* COMPOUND_EXPRESSION */, loc: child.loc, children: [child] }; } // merge adjacent text node into current currentContainer.children.push(` + `, next); children.splice(j, 1); j--; } else { currentContainer = undefined; break; } } } } if (!hasText || // if this is a plain element with a single text child, leave it // as-is since the runtime has dedicated fast path for this by directly // setting textContent of the element. // for component root it's always normalized anyway. (children.length === 1 && (node.type === 0 /* ROOT */ || (node.type === 1 /* ELEMENT */ && node.tagType === 0 /* ELEMENT */)))) { return; } // pre-convert text nodes into createTextVNode(text) calls to avoid // runtime normalization. for (let i = 0; i < children.length; i++) { const child = children[i]; if (isText(child) || child.type === 8 /* COMPOUND_EXPRESSION */) { const callArgs = []; // createTextVNode defaults to single whitespace, so if it is a // single space the code could be an empty call to save bytes. 
if (child.type !== 2 /* TEXT */ || child.content !== ' ') { callArgs.push(child); } // mark dynamic text with flag so it gets patched inside a block if (!context.ssr && getConstantType(child, context) === 0 /* NOT_CONSTANT */) { callArgs.push(1 /* TEXT */ + (` /* ${PatchFlagNames[1 /* TEXT */]} */` )); } children[i] = { type: 12 /* TEXT_CALL */, content: child, loc: child.loc, codegenNode: createCallExpression(context.helper(CREATE_TEXT), callArgs) }; } } }; } }; const seen = new WeakSet(); const transformOnce = (node, context) => { if (node.type === 1 /* ELEMENT */ && findDir(node, 'once', true)) { if (seen.has(node)) { return; } seen.add(node); context.helper(SET_BLOCK_TRACKING); return () => { const cur = context.currentNode; if (cur.codegenNode) { cur.codegenNode = context.cache(cur.codegenNode, true /* isVNode */); } }; } }; const transformModel = (dir, node, context) => { const { exp, arg } = dir; if (!exp) { context.onError(createCompilerError(40 /* X_V_MODEL_NO_EXPRESSION */, dir.loc)); return createTransformProps(); } const rawExp = exp.loc.source; const expString = exp.type === 4 /* SIMPLE_EXPRESSION */ ? exp.content : rawExp; // im SFC <script setup> inline mode, the exp may have been transformed into // _unref(exp) context.bindingMetadata[rawExp]; const maybeRef = !true /* SETUP_CONST */; if (!isMemberExpression(expString) && !maybeRef) { context.onError(createCompilerError(41 /* X_V_MODEL_MALFORMED_EXPRESSION */, exp.loc)); return createTransformProps(); } const propName = arg ? arg : createSimpleExpression('modelValue', true); const eventName = arg ? isStaticExp(arg) ? `onUpdate:${arg.content}` : createCompoundExpression(['"onUpdate:" + ', arg]) : `onUpdate:modelValue`; let assignmentExp; const eventArg = context.isTS ? `($event: any)` : `$event`; { assignmentExp = createCompoundExpression([ `${eventArg} => (`, exp, ` = $event)` ]); } const props = [ // modelValue: foo createObjectProperty(propName, dir.exp), // "onUpdate:modelValue": $event => (foo = $event) createObjectProperty(eventName, assignmentExp) ]; // modelModifiers: { foo: true, "bar-baz": true } if (dir.modifiers.length && node.tagType === 1 /* COMPONENT */) { const modifiers = dir.modifiers .map(m => (isSimpleIdentifier(m) ? m : JSON.stringify(m)) + `: true`) .join(`, `); const modifiersKey = arg ? isStaticExp(arg) ? `${arg.content}Modifiers` : createCompoundExpression([arg, ' + "Modifiers"']) : `modelModifiers`; props.push(createObjectProperty(modifiersKey, createSimpleExpression(`{ ${modifiers} }`, false, dir.loc, 2 /* CAN_HOIST */))); } return createTransformProps(props); }; function createTransformProps(props = []) { return { props }; } function getBaseTransformPreset(prefixIdentifiers) { return [ [ transformOnce, transformIf, transformFor, ...([transformExpression] ), transformSlotOutlet, transformElement, trackSlotScopes, transformText ], { on: transformOn, bind: transformBind, model: transformModel } ]; } // we name it `baseCompile` so that higher order compilers like // @vue/compiler-dom can export `compile` while re-exporting everything else. 
function baseCompile(template, options = {}) { const onError = options.onError || defaultOnError; const isModuleMode = options.mode === 'module'; /* istanbul ignore if */ { if (options.prefixIdentifiers === true) { onError(createCompilerError(45 /* X_PREFIX_ID_NOT_SUPPORTED */)); } else if (isModuleMode) { onError(createCompilerError(46 /* X_MODULE_MODE_NOT_SUPPORTED */)); } } const prefixIdentifiers = !true ; if (options.cacheHandlers) { onError(createCompilerError(47 /* X_CACHE_HANDLER_NOT_SUPPORTED */)); } if (options.scopeId && !isModuleMode) { onError(createCompilerError(48 /* X_SCOPE_ID_NOT_SUPPORTED */)); } const ast = isString(template) ? baseParse(template, options) : template; const [nodeTransforms, directiveTransforms] = getBaseTransformPreset(); transform(ast, extend({}, options, { prefixIdentifiers, nodeTransforms: [ ...nodeTransforms, ...(options.nodeTransforms || []) // user transforms ], directiveTransforms: extend({}, directiveTransforms, options.directiveTransforms || {} // user transforms ) })); return generate(ast, extend({}, options, { prefixIdentifiers })); } const noopDirectiveTransform = () => ({ props: [] }); const V_MODEL_RADIO = Symbol(`vModelRadio` ); const V_MODEL_CHECKBOX = Symbol(`vModelCheckbox` ); const V_MODEL_TEXT = Symbol(`vModelText` ); const V_MODEL_SELECT = Symbol(`vModelSelect` ); const V_MODEL_DYNAMIC = Symbol(`vModelDynamic` ); const V_ON_WITH_MODIFIERS = Symbol(`vOnModifiersGuard` ); const V_ON_WITH_KEYS = Symbol(`vOnKeysGuard` ); const V_SHOW = Symbol(`vShow` ); const TRANSITION$1 = Symbol(`Transition` ); const TRANSITION_GROUP = Symbol(`TransitionGroup` ); registerRuntimeHelpers({ [V_MODEL_RADIO]: `vModelRadio`, [V_MODEL_CHECKBOX]: `vModelCheckbox`, [V_MODEL_TEXT]: `vModelText`, [V_MODEL_SELECT]: `vModelSelect`, [V_MODEL_DYNAMIC]: `vModelDynamic`, [V_ON_WITH_MODIFIERS]: `withModifiers`, [V_ON_WITH_KEYS]: `withKeys`, [V_SHOW]: `vShow`, [TRANSITION$1]: `Transition`, [TRANSITION_GROUP]: `TransitionGroup` }); /* eslint-disable no-restricted-globals */ let decoder; function decodeHtmlBrowser(raw) { (decoder || (decoder = document.createElement('div'))).innerHTML = raw; return decoder.textContent; } const isRawTextContainer = /*#__PURE__*/ makeMap('style,iframe,script,noscript', true); const parserOptions = { isVoidTag, isNativeTag: tag => isHTMLTag(tag) || isSVGTag(tag), isPreTag: tag => tag === 'pre', decodeEntities: decodeHtmlBrowser , isBuiltInComponent: (tag) => { if (isBuiltInType(tag, `Transition`)) { return TRANSITION$1; } else if (isBuiltInType(tag, `TransitionGroup`)) { return TRANSITION_GROUP; } }, // https://html.spec.whatwg.org/multipage/parsing.html#tree-construction-dispatcher getNamespace(tag, parent) { let ns = parent ? 
parent.ns : 0 /* HTML */; if (parent && ns === 2 /* MATH_ML */) { if (parent.tag === 'annotation-xml') { if (tag === 'svg') { return 1 /* SVG */; } if (parent.props.some(a => a.type === 6 /* ATTRIBUTE */ && a.name === 'encoding' && a.value != null && (a.value.content === 'text/html' || a.value.content === 'application/xhtml+xml'))) { ns = 0 /* HTML */; } } else if (/^m(?:[ions]|text)$/.test(parent.tag) && tag !== 'mglyph' && tag !== 'malignmark') { ns = 0 /* HTML */; } } else if (parent && ns === 1 /* SVG */) { if (parent.tag === 'foreignObject' || parent.tag === 'desc' || parent.tag === 'title') { ns = 0 /* HTML */; } } if (ns === 0 /* HTML */) { if (tag === 'svg') { return 1 /* SVG */; } if (tag === 'math') { return 2 /* MATH_ML */; } } return ns; }, // https://html.spec.whatwg.org/multipage/parsing.html#parsing-html-fragments getTextMode({ tag, ns }) { if (ns === 0 /* HTML */) { if (tag === 'textarea' || tag === 'title') { return 1 /* RCDATA */; } if (isRawTextContainer(tag)) { return 2 /* RAWTEXT */; } } return 0 /* DATA */; } }; // Parse inline CSS strings for static style attributes into an object. // This is a NodeTransform since it works on the static `style` attribute and // converts it into a dynamic equivalent: // style="color: red" -> :style='{ "color": "red" }' // It is then processed by `transformElement` and included in the generated // props. const transformStyle = node => { if (node.type === 1 /* ELEMENT */) { node.props.forEach((p, i) => { if (p.type === 6 /* ATTRIBUTE */ && p.name === 'style' && p.value) { // replace p with an expression node node.props[i] = { type: 7 /* DIRECTIVE */, name: `bind`, arg: createSimpleExpression(`style`, true, p.loc), exp: parseInlineCSS(p.value.content, p.loc), modifiers: [], loc: p.loc }; } }); } }; const parseInlineCSS = (cssText, loc) => { const normalized = parseStringStyle(cssText); return createSimpleExpression(JSON.stringify(normalized), false, loc, 3 /* CAN_STRINGIFY */); }; function createDOMCompilerError(code, loc) { return createCompilerError(code, loc, DOMErrorMessages ); } const DOMErrorMessages = { [49 /* X_V_HTML_NO_EXPRESSION */]: `v-html is missing expression.`, [50 /* X_V_HTML_WITH_CHILDREN */]: `v-html will override element children.`, [51 /* X_V_TEXT_NO_EXPRESSION */]: `v-text is missing expression.`, [52 /* X_V_TEXT_WITH_CHILDREN */]: `v-text will override element children.`, [53 /* X_V_MODEL_ON_INVALID_ELEMENT */]: `v-model can only be used on <input>, <textarea> and <select> elements.`, [54 /* X_V_MODEL_ARG_ON_ELEMENT */]: `v-model argument is not supported on plain elements.`, [55 /* X_V_MODEL_ON_FILE_INPUT_ELEMENT */]: `v-model cannot be used on file inputs since they are read-only. Use a v-on:change listener instead.`, [56 /* X_V_MODEL_UNNECESSARY_VALUE */]: `Unnecessary value binding used alongside v-model. 
It will interfere with v-model's behavior.`, [57 /* X_V_SHOW_NO_EXPRESSION */]: `v-show is missing expression.`, [58 /* X_TRANSITION_INVALID_CHILDREN */]: `<Transition> expects exactly one child element or component.`, [59 /* X_IGNORED_SIDE_EFFECT_TAG */]: `Tags with side effect (<script> and <style>) are ignored in client component templates.` }; const transformVHtml = (dir, node, context) => { const { exp, loc } = dir; if (!exp) { context.onError(createDOMCompilerError(49 /* X_V_HTML_NO_EXPRESSION */, loc)); } if (node.children.length) { context.onError(createDOMCompilerError(50 /* X_V_HTML_WITH_CHILDREN */, loc)); node.children.length = 0; } return { props: [ createObjectProperty(createSimpleExpression(`innerHTML`, true, loc), exp || createSimpleExpression('', true)) ] }; }; const transformVText = (dir, node, context) => { const { exp, loc } = dir; if (!exp) { context.onError(createDOMCompilerError(51 /* X_V_TEXT_NO_EXPRESSION */, loc)); } if (node.children.length) { context.onError(createDOMCompilerError(52 /* X_V_TEXT_WITH_CHILDREN */, loc)); node.children.length = 0; } return { props: [ createObjectProperty(createSimpleExpression(`textContent`, true), exp ? createCallExpression(context.helperString(TO_DISPLAY_STRING), [exp], loc) : createSimpleExpression('', true)) ] }; }; const transformModel$1 = (dir, node, context) => { const baseResult = transformModel(dir, node, context); // base transform has errors OR component v-model (only need props) if (!baseResult.props.length || node.tagType === 1 /* COMPONENT */) { return baseResult; } if (dir.arg) { context.onError(createDOMCompilerError(54 /* X_V_MODEL_ARG_ON_ELEMENT */, dir.arg.loc)); } function checkDuplicatedValue() { const value = findProp(node, 'value'); if (value) { context.onError(createDOMCompilerError(56 /* X_V_MODEL_UNNECESSARY_VALUE */, value.loc)); } } const { tag } = node; const isCustomElement = context.isCustomElement(tag); if (tag === 'input' || tag === 'textarea' || tag === 'select' || isCustomElement) { let directiveToUse = V_MODEL_TEXT; let isInvalidType = false; if (tag === 'input' || isCustomElement) { const type = findProp(node, `type`); if (type) { if (type.type === 7 /* DIRECTIVE */) { // :type="foo" directiveToUse = V_MODEL_DYNAMIC; } else if (type.value) { switch (type.value.content) { case 'radio': directiveToUse = V_MODEL_RADIO; break; case 'checkbox': directiveToUse = V_MODEL_CHECKBOX; break; case 'file': isInvalidType = true; context.onError(createDOMCompilerError(55 /* X_V_MODEL_ON_FILE_INPUT_ELEMENT */, dir.loc)); break; default: // text type checkDuplicatedValue(); break; } } } else if (hasDynamicKeyVBind(node)) { // element has bindings with dynamic keys, which can possibly contain // "type". directiveToUse = V_MODEL_DYNAMIC; } else { // text type checkDuplicatedValue(); } } else if (tag === 'select') { directiveToUse = V_MODEL_SELECT; } else { // textarea checkDuplicatedValue(); } // inject runtime directive // by returning the helper symbol via needRuntime // the import will replaced a resolveDirective call. if (!isInvalidType) { baseResult.needRuntime = context.helper(directiveToUse); } } else { context.onError(createDOMCompilerError(53 /* X_V_MODEL_ON_INVALID_ELEMENT */, dir.loc)); } // native vmodel doesn't need the `modelValue` props since they are also // passed to the runtime as `binding.value`. removing it reduces code size. 
baseResult.props = baseResult.props.filter(p => !(p.key.type === 4 /* SIMPLE_EXPRESSION */ && p.key.content === 'modelValue')); return baseResult; }; const isEventOptionModifier = /*#__PURE__*/ makeMap(`passive,once,capture`); const isNonKeyModifier = /*#__PURE__*/ makeMap( // event propagation management `stop,prevent,self,` + // system modifiers + exact `ctrl,shift,alt,meta,exact,` + // mouse `middle`); // left & right could be mouse or key modifiers based on event type const maybeKeyModifier = /*#__PURE__*/ makeMap('left,right'); const isKeyboardEvent = /*#__PURE__*/ makeMap(`onkeyup,onkeydown,onkeypress`, true); const resolveModifiers = (key, modifiers) => { const keyModifiers = []; const nonKeyModifiers = []; const eventOptionModifiers = []; for (let i = 0; i < modifiers.length; i++) { const modifier = modifiers[i]; if (isEventOptionModifier(modifier)) { // eventOptionModifiers: modifiers for addEventListener() options, // e.g. .passive & .capture eventOptionModifiers.push(modifier); } else { // runtimeModifiers: modifiers that needs runtime guards if (maybeKeyModifier(modifier)) { if (isStaticExp(key)) { if (isKeyboardEvent(key.content)) { keyModifiers.push(modifier); } else { nonKeyModifiers.push(modifier); } } else { keyModifiers.push(modifier); nonKeyModifiers.push(modifier); } } else { if (isNonKeyModifier(modifier)) { nonKeyModifiers.push(modifier); } else { keyModifiers.push(modifier); } } } } return { keyModifiers, nonKeyModifiers, eventOptionModifiers }; }; const transformClick = (key, event) => { const isStaticClick = isStaticExp(key) && key.content.toLowerCase() === 'onclick'; return isStaticClick ? createSimpleExpression(event, true) : key.type !== 4 /* SIMPLE_EXPRESSION */ ? createCompoundExpression([ `(`, key, `) === "onClick" ? "${event}" : (`, key, `)` ]) : key; }; const transformOn$1 = (dir, node, context) => { return transformOn(dir, node, context, baseResult => { const { modifiers } = dir; if (!modifiers.length) return baseResult; let { key, value: handlerExp } = baseResult.props[0]; const { keyModifiers, nonKeyModifiers, eventOptionModifiers } = resolveModifiers(key, modifiers); // normalize click.right and click.middle since they don't actually fire if (nonKeyModifiers.includes('right')) { key = transformClick(key, `onContextmenu`); } if (nonKeyModifiers.includes('middle')) { key = transformClick(key, `onMouseup`); } if (nonKeyModifiers.length) { handlerExp = createCallExpression(context.helper(V_ON_WITH_MODIFIERS), [ handlerExp, JSON.stringify(nonKeyModifiers) ]); } if (keyModifiers.length && // if event name is dynamic, always wrap with keys guard (!isStaticExp(key) || isKeyboardEvent(key.content))) { handlerExp = createCallExpression(context.helper(V_ON_WITH_KEYS), [ handlerExp, JSON.stringify(keyModifiers) ]); } if (eventOptionModifiers.length) { const modifierPostfix = eventOptionModifiers.map(capitalize).join(''); key = isStaticExp(key) ? 
createSimpleExpression(`${key.content}${modifierPostfix}`, true) : createCompoundExpression([`(`, key, `) + "${modifierPostfix}"`]); } return { props: [createObjectProperty(key, handlerExp)] }; }); }; const transformShow = (dir, node, context) => { const { exp, loc } = dir; if (!exp) { context.onError(createDOMCompilerError(57 /* X_V_SHOW_NO_EXPRESSION */, loc)); } return { props: [], needRuntime: context.helper(V_SHOW) }; }; const warnTransitionChildren = (node, context) => { if (node.type === 1 /* ELEMENT */ && node.tagType === 1 /* COMPONENT */) { const component = context.isBuiltInComponent(node.tag); if (component === TRANSITION$1) { return () => { if (node.children.length && hasMultipleChildren(node)) { context.onError(createDOMCompilerError(58 /* X_TRANSITION_INVALID_CHILDREN */, { start: node.children[0].loc.start, end: node.children[node.children.length - 1].loc.end, source: '' })); } }; } } }; function hasMultipleChildren(node) { // #1352 filter out potential comment nodes. const children = (node.children = node.children.filter(c => c.type !== 3 /* COMMENT */)); const child = children[0]; return (children.length !== 1 || child.type === 11 /* FOR */ || (child.type === 9 /* IF */ && child.branches.some(hasMultipleChildren))); } const ignoreSideEffectTags = (node, context) => { if (node.type === 1 /* ELEMENT */ && node.tagType === 0 /* ELEMENT */ && (node.tag === 'script' || node.tag === 'style')) { context.onError(createDOMCompilerError(59 /* X_IGNORED_SIDE_EFFECT_TAG */, node.loc)); context.removeNode(); } }; const DOMNodeTransforms = [ transformStyle, ...([warnTransitionChildren] ) ]; const DOMDirectiveTransforms = { cloak: noopDirectiveTransform, html: transformVHtml, text: transformVText, model: transformModel$1, on: transformOn$1, show: transformShow }; function compile$1(template, options = {}) { return baseCompile(template, extend({}, parserOptions, options, { nodeTransforms: [ // ignore <script> and <tag> // this is not put inside DOMNodeTransforms because that list is used // by compiler-ssr to generate vnode fallback branches ignoreSideEffectTags, ...DOMNodeTransforms, ...(options.nodeTransforms || []) ], directiveTransforms: extend({}, DOMDirectiveTransforms, options.directiveTransforms || {}), transformHoist: null })); } // This entry is the "full-build" that includes both the runtime { initDev(); } const compileCache = Object.create(null); function compileToFunction(template, options) { if (!isString(template)) { if (template.nodeType) { template = template.innerHTML; } else { warn(`invalid template option: `, template); return NOOP; } } const key = template; const cached = compileCache[key]; if (cached) { return cached; } if (template[0] === '#') { const el = document.querySelector(template); if (!el) { warn(`Template element not found or is empty: ${template}`); } // __UNSAFE__ // Reason: potential execution of JS expressions in in-DOM template. // The user must make sure the in-DOM template is trusted. If it's rendered // by the server, the template should not contain any user data. template = el ? el.innerHTML : ``; } const { code } = compile$1(template, extend({ hoistStatic: true, onError(err) { { const message = `Template compilation error: ${err.message}`; const codeFrame = err.loc && generateCodeFrame(template, err.loc.start.offset, err.loc.end.offset); warn(codeFrame ? `${message}\n${codeFrame}` : message); } } }, options)); // The wildcard import results in a huge object with every export // with keys that cannot be mangled, and can be quite heavy size-wise. 
// In the global build we know `Vue` is available globally so we can avoid // the wildcard object. const render = (new Function(code)() ); render._rc = true; return (compileCache[key] = render); } registerRuntimeCompiler(compileToFunction); exports.BaseTransition = BaseTransition; exports.Comment = Comment; exports.Fragment = Fragment; exports.KeepAlive = KeepAlive; exports.Static = Static; exports.Suspense = Suspense; exports.Teleport = Teleport; exports.Text = Text; exports.Transition = Transition; exports.TransitionGroup = TransitionGroup; exports.callWithAsyncErrorHandling = callWithAsyncErrorHandling; exports.callWithErrorHandling = callWithErrorHandling; exports.camelize = camelize; exports.capitalize = capitalize; exports.cloneVNode = cloneVNode; exports.compile = compileToFunction; exports.computed = computed$1; exports.createApp = createApp; exports.createBlock = createBlock; exports.createCommentVNode = createCommentVNode; exports.createHydrationRenderer = createHydrationRenderer; exports.createRenderer = createRenderer; exports.createSSRApp = createSSRApp; exports.createSlots = createSlots; exports.createStaticVNode = createStaticVNode; exports.createTextVNode = createTextVNode; exports.createVNode = createVNode; exports.customRef = customRef; exports.defineAsyncComponent = defineAsyncComponent; exports.defineComponent = defineComponent; exports.defineEmit = defineEmit; exports.defineProps = defineProps; exports.getCurrentInstance = getCurrentInstance; exports.getTransitionRawChildren = getTransitionRawChildren; exports.h = h; exports.handleError = handleError; exports.hydrate = hydrate; exports.initCustomFormatter = initCustomFormatter; exports.inject = inject; exports.isProxy = isProxy; exports.isReactive = isReactive; exports.isReadonly = isReadonly; exports.isRef = isRef; exports.isRuntimeOnly = isRuntimeOnly; exports.isVNode = isVNode; exports.markRaw = markRaw; exports.mergeProps = mergeProps; exports.nextTick = nextTick; exports.onActivated = onActivated; exports.onBeforeMount = onBeforeMount; exports.onBeforeUnmount = onBeforeUnmount; exports.onBeforeUpdate = onBeforeUpdate; exports.onDeactivated = onDeactivated; exports.onErrorCaptured = onErrorCaptured; exports.onMounted = onMounted; exports.onRenderTracked = onRenderTracked; exports.onRenderTriggered = onRenderTriggered; exports.onUnmounted = onUnmounted; exports.onUpdated = onUpdated; exports.openBlock = openBlock; exports.popScopeId = popScopeId; exports.provide = provide; exports.proxyRefs = proxyRefs; exports.pushScopeId = pushScopeId; exports.queuePostFlushCb = queuePostFlushCb; exports.reactive = reactive; exports.readonly = readonly; exports.ref = ref; exports.registerRuntimeCompiler = registerRuntimeCompiler; exports.render = render; exports.renderList = renderList; exports.renderSlot = renderSlot; exports.resolveComponent = resolveComponent; exports.resolveDirective = resolveDirective; exports.resolveDynamicComponent = resolveDynamicComponent; exports.resolveTransitionHooks = resolveTransitionHooks; exports.setBlockTracking = setBlockTracking; exports.setDevtoolsHook = setDevtoolsHook; exports.setTransitionHooks = setTransitionHooks; exports.shallowReactive = shallowReactive; exports.shallowReadonly = shallowReadonly; exports.shallowRef = shallowRef; exports.ssrContextKey = ssrContextKey; exports.ssrUtils = ssrUtils; exports.toDisplayString = toDisplayString; exports.toHandlerKey = toHandlerKey; exports.toHandlers = toHandlers; exports.toRaw = toRaw; exports.toRef = toRef; exports.toRefs = toRefs; 
exports.transformVNodeArgs = transformVNodeArgs; exports.triggerRef = triggerRef; exports.unref = unref; exports.useContext = useContext; exports.useCssModule = useCssModule; exports.useCssVars = useCssVars; exports.useSSRContext = useSSRContext; exports.useTransitionState = useTransitionState; exports.vModelCheckbox = vModelCheckbox; exports.vModelDynamic = vModelDynamic; exports.vModelRadio = vModelRadio; exports.vModelSelect = vModelSelect; exports.vModelText = vModelText; exports.vShow = vShow; exports.version = version; exports.warn = warn; exports.watch = watch; exports.watchEffect = watchEffect; exports.withCtx = withCtx; exports.withDirectives = withDirectives; exports.withKeys = withKeys; exports.withModifiers = withModifiers; exports.withScopeId = withScopeId; Object.defineProperty(exports, '__esModule', { value: true }); return exports; }({}));
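The bundle above registers `compileToFunction` as the runtime compiler and exposes it as `exports.compile`, so string templates (or `#id` selectors) are compiled in the browser at runtime, turned into a render function via `new Function(code)`, and cached by their source. A minimal usage sketch of that path, assuming the global build has been loaded via a `<script>` tag; the `Vue` global exists in that build, but the element id and template string here are illustrative only:

```js
// Minimal sketch of the runtime-compilation path exercised by compileToFunction.
// Assumes vue.global.js is loaded via a <script> tag so `Vue` is a global;
// the "#app" element and the template string are illustrative.
const { createApp } = Vue;

createApp({
  // A string template forces the runtime compiler: compileToFunction() parses it,
  // runs the transforms shown above (v-for, v-on, v-model, ...), generates a
  // render function with new Function(code), and caches it keyed by the template.
  template: '<button @click="count++">Clicked {{ count }} times</button>',
  data: () => ({ count: 0 })
}).mount('#app');
```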
github_javascript
2025-12-09T18:34:56Z
https://github.com/zishuvo1/bus-ticket-booking/blob/4ae9090b812e578be9a9bfb37541f33511e3021f/vue.global.js
{}
# LaoWang Subscription (老王订阅管理系统) v1.2

> **A self-hosted tool for managing renewals and expiry reminders**

Do you keep running into problems like these?

- ❌ A domain quietly expires, and you only notice when the site stops loading.
- ❌ You forget to cancel a Netflix/Spotify trial and get charged for nothing.
- ❌ You run too many VPS servers to remember which one expires when.
- ❌ You hold too many credit cards and miss the interest-free repayment date.

**LaoWang Subscription** was built to solve exactly these problems. It is a full-stack subscription management system based on Vue 3 + Express. It is more than a simple Excel sheet: it has **a real backend that does the checking**, and it can remind you precisely through WeChat, Telegram, and other channels that it is "time to pay the protection fee"!

<p align="center"> <a href="https://test.199060.xyz/" target="_blank"> <img src="https://img.shields.io/badge/🔗_在线演示-test.199060.xyz-blue?style=for-the-badge&logo=google-chrome&logoColor=white" alt="Demo"> </a> </p>

---

## 📸 Screenshots

### List view (Girly theme)
![List view](docs/images/dashboard_light.png)

### Card view (Cartoon theme)
![Card view](docs/images/dashboard_dark.png)

---

## ✨ Core Features

### 1. 📅 Accurate subscription cycle management
- **Cycle types**: yearly, monthly, rolling daily, one-time
- **Auto-renew**: the period is extended automatically after expiry, no manual edits needed

### 2. 📢 Instant multi-channel notifications
- **Telegram Bot**: instant push messages
- **WeChat (WeCom)**: the most reliable push channel inside mainland China
- **Bark (iOS)**: a favorite for Apple users
- **Webhook**: custom integrations

### 3. 💰 Asset and cost statistics
- **Multi-currency support**: CNY, USD, HKD, JPY, EUR, and more
- **Renewal price display**: see each subscription's renewal cost at a glance

### 4. 🎨 Colorful theme system (added in v1.3)
- 🚀 **Space** - deep starry purple
- 💫 **Neon** - vivid red and yellow
- 🍬 **Candy** - soft rainbow
- 🌸 **Girly** - dreamy pink
- 🤖 **Sci-fi** - cyan/purple cyberpunk
- 🎨 **Cartoon** - cheerful and colorful

### 5. 🔄 Dual view toggle (added in v1.2)
- **≡ List view**: classic table layout
- **⊞ Card view**: modern grid layout

---

## 🚀 Changelog

### v1.2 (Latest)
> **Highlights**: new colorful themes + dual view toggle + fixed header + multiple bug fixes

#### 🆕 New features
- **8 colorful themes**: Space, Neon, Candy, Girly, Ocean, Sci-fi, Cartoon, plus light/dark variants
- **Card view**: switch between ≡ list and ⊞ cards with one click
- **Fixed header and toolbar**: only the content area scrolls; the header stays visible
- **Responsive on three form factors**: desktop (4 columns), tablet (2 columns), phone (single column)

#### 🔧 UI improvements
- "Price" renamed to "Renewal price"
- Price format changed to the `10 USD` style
- Cards now have uniform height and consistent alignment
- Removed the "List" button from the navigation bar (fixes a jitter bug)
- View toggle icons changed to ≡ and ⊞

#### 🐛 Bug fixes
- **Delete button not responding**: removed the confirm dialog that browsers were blocking
- **Page jumping on enable/disable**: local state is updated directly instead of reloading the list
- **Modal closing on outside click**: modals can now only be closed via their buttons
- **Misaligned card text**: labels are left-aligned, values are right-aligned

---

### v1.1
> **Important**: when upgrading from v1.0, the database is migrated automatically.

- 🆕 **New fields**: cycle, price, currency, auto-renew, notes
- 🆕 **Enhancements**: auto-renew logic, lunar calendar display toggle, improved weather source
- 🐛 **Bug fixes**: startup crash on missing database columns, duplicated date display

---

### v1.0
- 🎉 **Initial release**
- Basic subscription management
- Multi-channel notification support
- One-command Docker deployment

---

## 🚀 Deployment (Docker recommended)

### Option 1: Docker Run (fastest)

```bash
docker run -d \
  --name laowang-subscription \
  -p 8080:8080 \
  --restart always \
  -v $(pwd)/database:/app/database \
  -e TZ=Asia/Shanghai \
  ghcr.io/tony-wang1990/laowang-subscription:main
```

### Option 2: Docker Compose (recommended)

Create `docker-compose.yml`:

```yaml
version: '3'
services:
  app:
    image: ghcr.io/tony-wang1990/laowang-subscription:main
    container_name: laowang-subscription
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./database:/app/database
    environment:
      - TZ=Asia/Shanghai
```

Start it:

```bash
docker-compose up -d
```

### 🔄 Automatic updates

Use **Watchtower** for fully automatic updates:

```bash
docker run -d \
  --name watchtower \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --cleanup \
  --interval 3600 \
  laowang-subscription
```

> ⚠️ **About other container platforms**: this project uses a native SQLite module, so serverless platforms such as Zeabur, Railway, and Vercel may fail to build or crash at runtime. **Docker deployment is recommended.**

---

## ⚙️ Environment variables

| Variable | Default | Description |
| :--- | :--- | :--- |
| `PORT` | 8080 | Service port |
| `JWT_SECRET` | random | Session secret |
| `TZ` | UTC | Time zone (Asia/Shanghai recommended) |

---

## 🤝 Contributing & support

If you find it useful, please leave a ⭐️ Star! Issues are welcome.

License: MIT
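The cycle options (yearly, monthly, rolling daily, one-time) together with the auto-renew behavior described above ("the period is extended automatically after expiry") imply a simple date-rollover rule. The sketch below is an illustration of that rule in plain JavaScript, not code from laowang-subscription; the cycle names and function name are hypothetical:

```js
// Illustrative sketch only -- not laowang-subscription's actual code.
// Cycle names 'yearly' | 'monthly' | 'daily' are assumed for the example.
// Rolls an expired subscription forward by whole cycles until its expiry
// date lies in the future, matching the auto-renew behavior described above.
function rollForward(expiryISO, cycle, now = new Date()) {
  const next = new Date(expiryISO);
  const step = {
    yearly:  d => d.setFullYear(d.getFullYear() + 1),
    monthly: d => d.setMonth(d.getMonth() + 1),
    daily:   d => d.setDate(d.getDate() + 1),
  }[cycle];
  if (!step) return next;          // one-time subscriptions never renew
  while (next <= now) step(next);  // skip any cycles that were missed
  return next;
}

// e.g. a yearly domain that expired on 2024-01-01 rolls to the next Jan 1 in the future
console.log(rollForward('2024-01-01', 'yearly').toISOString());
```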
github_markdown
2025-12-13T12:06:04Z
https://github.com/tony-wang1990/laowang-subscription/blob/78f0bbca6d1b187cf4b84770454720daadca9705/README.md
{}
# React + TypeScript + Vite This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules. Currently, two official plugins are available: - [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Babel](https://babeljs.io/) (or [oxc](https://oxc.rs) when used in [rolldown-vite](https://vite.dev/guide/rolldown)) for Fast Refresh - [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh ## React Compiler The React Compiler is enabled on this template. See [this documentation](https://react.dev/learn/react-compiler) for more information. Note: This will impact Vite dev & build performances. ## Expanding the ESLint configuration If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules: ```js export default defineConfig([ globalIgnores(['dist']), { files: ['**/*.{ts,tsx}'], extends: [ // Other configs... // Remove tseslint.configs.recommended and replace with this tseslint.configs.recommendedTypeChecked, // Alternatively, use this for stricter rules tseslint.configs.strictTypeChecked, // Optionally, add this for stylistic rules tseslint.configs.stylisticTypeChecked, // Other configs... ], languageOptions: { parserOptions: { project: ['./tsconfig.node.json', './tsconfig.app.json'], tsconfigRootDir: import.meta.dirname, }, // other options... }, }, ]) ``` You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules: ```js // eslint.config.js import reactX from 'eslint-plugin-react-x' import reactDom from 'eslint-plugin-react-dom' export default defineConfig([ globalIgnores(['dist']), { files: ['**/*.{ts,tsx}'], extends: [ // Other configs... // Enable lint rules for React reactX.configs['recommended-typescript'], // Enable lint rules for React DOM reactDom.configs.recommended, ], languageOptions: { parserOptions: { project: ['./tsconfig.node.json', './tsconfig.app.json'], tsconfigRootDir: import.meta.dirname, }, // other options... }, }, ]) ```
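The template wires one of the two React plugins listed above into Vite. As a point of reference, here is a minimal sketch of the corresponding Vite config, assuming the Babel-based `@vitejs/plugin-react`; it omits the extra Babel configuration that enabling the React Compiler would require:

```js
// Minimal sketch of a Vite config using the Babel-based React plugin
// mentioned above (vite.config.js). The SWC variant would import
// '@vitejs/plugin-react-swc' instead.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()], // enables Fast Refresh and the JSX transform
});
```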
github_markdown
2025-12-11T09:55:24Z
https://github.com/Frazierboy81/frontend-project-manager/blob/5a51b83733c7b830e8edc9278443649d516ffc1f/README.md
{}
<p align="center"> <h1 align="center">🧹 Windows Cleaner CLI</h1> <p align="center"> <strong>Free & Open Source Windows cleanup tool</strong> </p> <p align="center"> Scan and remove junk files, caches, logs, and more — all from your terminal. </p> </p> <p align="center"> <a href="https://www.npmjs.com/package/windows-cleaner-cli"><img src="https://img.shields.io/npm/v/windows-cleaner-cli?color=cb3837&label=npm&logo=npm" alt="npm version"></a> <a href="https://www.npmjs.com/package/windows-cleaner-cli"><img src="https://img.shields.io/npm/dm/windows-cleaner-cli?color=cb3837&logo=npm" alt="npm downloads"></a> <a href="https://github.com/guhcostan/windows-cleaner-cli/actions/workflows/ci.yml"><img src="https://github.com/guhcostan/windows-cleaner-cli/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a> </p> <p align="center"> <a href="https://nodejs.org"><img src="https://img.shields.io/node/v/windows-cleaner-cli" alt="Node.js Version"></a> <a href="https://www.microsoft.com/windows"><img src="https://img.shields.io/badge/platform-Windows-0078D6?logo=windows" alt="Platform: Windows"></a> <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-5.3-3178c6?logo=typescript&logoColor=white" alt="TypeScript"></a> <a href="https://socket.dev/npm/package/windows-cleaner-cli"><img src="https://socket.dev/api/badge/npm/package/windows-cleaner-cli" alt="Socket Badge"></a> </p> <p align="center"> <a href="https://github.com/guhcostan/windows-cleaner-cli"><img src="https://img.shields.io/github/stars/guhcostan/windows-cleaner-cli?style=social" alt="GitHub Stars"></a> </p> <p align="center"> <a href="https://ko-fi.com/guhcostan"><img src="https://img.shields.io/badge/Ko--fi-Support_this_project-FF5E5B?style=for-the-badge&logo=ko-fi&logoColor=white" alt="Support on Ko-fi"></a> </p> <p align="center"> <strong>🍎 Also available for macOS:</strong> <a href="https://github.com/guhcostan/mac-cleaner-cli">mac-cleaner-cli</a> </p> --- ## ⚡ Quick Start ```bash npx windows-cleaner-cli ``` That's it! No installation needed. The CLI will: 1. 🔍 **Scan** your PC for cleanable files 2. 📋 **Show** you what was found with sizes 3. ✅ **Let you select** exactly what to clean 4. 🗑️ **Clean** the selected items safely ## 🎬 See It In Action ``` $ npx windows-cleaner-cli 🧹 Windows Cleaner CLI ────────────────────────────────────────────────────── Scanning your PC for cleanable files... Found 32.5 GB that can be cleaned: ? Select categories to clean (space to toggle, enter to confirm): ◉ 🟢 Recycle Bin 2.1 GB (45 items) ◉ 🟢 Browser Cache 1.5 GB (4 items) ◉ 🟢 Temporary Files 549.2 MB (622 items) ◉ 🟡 User Cache Files 12.5 GB (118 items) ◉ 🟡 Development Cache 15.9 GB (14 items) Summary: Items to delete: 803 Space to free: 32.5 GB ? Proceed with cleaning? (Y/n) ✓ Cleaning Complete! ────────────────────────────────────────────────────── Recycle Bin ✓ 2.1 GB freed Browser Cache ✓ 1.5 GB freed Temporary Files ✓ 549.2 MB freed User Cache Files ✓ 12.5 GB freed Development Cache ✓ 15.9 GB freed ────────────────────────────────────────────────────── 🎉 Freed 32.5 GB of disk space! 
Cleaned 803 items ``` ## ✨ Features | Feature | Description | |---------|-------------| | 🚀 **One Command** | Just run `npx windows-cleaner-cli` — no complex flags | | 🎯 **Interactive** | Select exactly what you want to clean with checkboxes | | 🛡️ **Safe by Default** | Risky items hidden unless you use `--risky` | | 🔍 **Smart Scanning** | Finds caches, logs, dev files, browser data, and more | | 📱 **App Remover** | Remove apps and their associated files | | 🔧 **Maintenance** | Flush DNS cache, run Disk Cleanup, clear caches | | 🔒 **Privacy First** | 100% offline — no data ever leaves your machine | | 📦 **Minimal Dependencies** | Only 5 runtime deps, all from trusted maintainers | ## 🎯 What It Cleans ### 🟢 Safe (always safe to delete) | Category | What it cleans | |----------|---------------| | `recycle-bin` | Files in the Recycle Bin | | `temp-files` | Temporary files in TEMP and Windows\Temp | | `browser-cache` | Chrome, Edge, Firefox, Brave cache | | `chocolatey` | Chocolatey/Scoop package manager cache | | `docker` | Unused Docker images, containers, volumes | ### 🟡 Moderate (generally safe) | Category | What it cleans | |----------|---------------| | `system-cache` | Application caches in AppData\Local | | `system-logs` | System and application logs | | `dev-cache` | npm, yarn, pip, NuGet, Gradle cache | | `node-modules` | Orphaned node_modules in old projects | | `windows-update` | Old Windows Update files | | `prefetch` | Windows Prefetch data | ### 🔴 Risky (use `--risky` flag) | Category | What it cleans | |----------|---------------| | `downloads` | Downloads older than 30 days | | `itunes-backups` | iPhone and iPad backup files from iTunes | | `duplicates` | Duplicate files (keeps newest) | | `large-files` | Files larger than 500MB | ## 📖 Usage ### Basic Usage ```bash # Interactive mode — scan, select, and clean npx windows-cleaner-cli # Include risky categories npx windows-cleaner-cli --risky ``` ### Remove Apps Remove applications with their preferences, caches, and support files: ```bash npx windows-cleaner-cli uninstall ``` ### Maintenance Tasks ```bash # Flush DNS cache npx windows-cleaner-cli maintenance --dns # Run Windows Disk Cleanup npx windows-cleaner-cli maintenance --disk # Clear thumbnail cache npx windows-cleaner-cli maintenance --thumbnails # Clear font cache (requires admin) npx windows-cleaner-cli maintenance --fonts ``` ### Other Commands ```bash # List all available categories npx windows-cleaner-cli categories # Manage configuration npx windows-cleaner-cli config --init npx windows-cleaner-cli config --show # Manage backups npx windows-cleaner-cli backup --list npx windows-cleaner-cli backup --clean ``` ## 💻 Global Installation If you use this tool frequently: ```bash npm install -g windows-cleaner-cli windows-cleaner-cli ``` ## 🔒 Security | | | |---|---| | ✅ **Open Source** | All code publicly available for audit | | ✅ **No Network** | Operates 100% offline | | ✅ **Minimal Deps** | Only 5 runtime dependencies | | ✅ **CI/CD** | Every release tested with TypeScript, ESLint, and automated tests | | ✅ **Socket.dev** | Dependencies monitored for supply chain attacks | Found a vulnerability? Report it via [GitHub Security Advisories](https://github.com/guhcostan/windows-cleaner-cli/security/advisories/new). 
## 🛠️ Development ```bash git clone https://github.com/guhcostan/windows-cleaner-cli.git cd windows-cleaner-cli npm install npm run dev # Run in dev mode npm test # Run tests npm run lint # Run linter npm run build # Build for production ``` ## 🤝 Contributing Contributions are welcome! Please feel free to submit a Pull Request. 1. Fork the repository 2. Create your feature branch (`git checkout -b feature/amazing-feature`) 3. Commit your changes (`git commit -m 'Add some amazing feature'`) 4. Push to the branch (`git push origin feature/amazing-feature`) 5. Open a Pull Request ## 💚 Support If this tool saved you time or disk space, consider supporting the project! <p align="center"> <a href="https://ko-fi.com/guhcostan"><img src="https://ko-fi.com/img/githubbutton_sm.svg" alt="Support on Ko-fi"></a> </p> Your support helps maintain and improve this tool. Thank you! 🙏 ## 📄 License MIT License — see [LICENSE](LICENSE) for details. --- <p align="center"> <strong>⚠️ Disclaimer</strong><br> This tool deletes files from your system. While we've implemented safety measures, always ensure you have backups of important data. </p> <p align="center"> Made with ❤️ for Windows users everywhere </p>
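For contributors working through the Development section above, here is a minimal sketch of how a cleanup category could be modeled in TypeScript. The names (`CleanupCategory`, `ScanResult`, `tempFiles`) are hypothetical illustrations, not the project's actual API:

```ts
import { promises as fs } from 'node:fs'
import * as os from 'node:os'
import * as path from 'node:path'

// Hypothetical shape of a cleanup category; the real project may differ.
type RiskLevel = 'safe' | 'moderate' | 'risky'

interface ScanResult {
  paths: string[]
  totalBytes: number
}

interface CleanupCategory {
  id: string
  risk: RiskLevel
  scan(): Promise<ScanResult>
}

// Example: a "temp files" category that only scans and reports, never deletes.
const tempFiles: CleanupCategory = {
  id: 'temp-files',
  risk: 'safe',
  async scan() {
    const dir = os.tmpdir()
    const entries = await fs.readdir(dir)
    const paths: string[] = []
    let totalBytes = 0
    for (const name of entries) {
      const p = path.join(dir, name)
      try {
        const stat = await fs.stat(p)
        if (stat.isFile()) {
          paths.push(p)
          totalBytes += stat.size
        }
      } catch {
        // Ignore entries we cannot stat (permissions, races).
      }
    }
    return { paths, totalBytes }
  },
}

tempFiles.scan().then((r) => console.log(`${r.paths.length} files, ${r.totalBytes} bytes`))
```

A real category in the project also needs a delete step and size formatting; this sketch only covers the scan half to keep the example self-contained.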
github_markdown
2025-12-08T19:30:21Z
https://github.com/guhcostan/windows-cleaner-cli/blob/8547b5a89f5789bb94a58cc8d30b40c2266064ac/README.md
{}
--- description: Generate implementation report for system review --- # Execution Report Review and deeply analyze the implementation you just completed. ## Context You have just finished implementing a feature. Before moving on, reflect on: - What you implemented - How it aligns with the plan - What challenges you encountered - What diverged and why ## Generate Report Save to: `.agents/execution-reports/[feature-name].md` ### Meta Information - Plan file: [path to plan that guided this implementation] - Files added: [list with paths] - Files modified: [list with paths] - Lines changed: +X -Y ### Validation Results - Syntax & Linting: ✓/✗ [details if failed] - Type Checking: ✓/✗ [details if failed] - Unit Tests: ✓/✗ [X passed, Y failed] - Integration Tests: ✓/✗ [X passed, Y failed] ### What Went Well List specific things that worked smoothly: - [concrete examples] ### Challenges Encountered List specific difficulties: - [what was difficult and why] ### Divergences from Plan For each divergence, document: **[Divergence Title]** - Planned: [what the plan specified] - Actual: [what was implemented instead] - Reason: [why this divergence occurred] - Type: [Better approach found | Plan assumption wrong | Security concern | Performance issue | Other] ### Skipped Items List anything from the plan that was not implemented: - [what was skipped] - Reason: [why it was skipped] ### Recommendations Based on this implementation, what should change for next time? - Plan command improvements: [suggestions] - Execute command improvements: [suggestions] - CLAUDE.md additions: [suggestions]
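Since the report above follows a fixed section layout, it can help to think of it as a data structure plus a renderer. Below is a minimal sketch in TypeScript covering only a subset of the sections; the type and function names are hypothetical and not part of any tooling shipped with this command:

```ts
import { mkdirSync, writeFileSync } from 'node:fs'

// Hypothetical model of part of the report structure described above.
interface Divergence {
  title: string
  planned: string
  actual: string
  reason: string
  type:
    | 'Better approach found'
    | 'Plan assumption wrong'
    | 'Security concern'
    | 'Performance issue'
    | 'Other'
}

interface ExecutionReport {
  featureName: string
  planFile: string
  filesAdded: string[]
  filesModified: string[]
  linesChanged: { added: number; removed: number }
  divergences: Divergence[]
  skipped: { item: string; reason: string }[]
}

// Render a subset of the sections into the markdown layout above.
function renderReport(r: ExecutionReport): string {
  return [
    '### Meta Information',
    `- Plan file: ${r.planFile}`,
    `- Files added: ${r.filesAdded.join(', ') || 'none'}`,
    `- Files modified: ${r.filesModified.join(', ') || 'none'}`,
    `- Lines changed: +${r.linesChanged.added} -${r.linesChanged.removed}`,
    '',
    '### Divergences from Plan',
    ...r.divergences.map(
      (d) =>
        `**${d.title}**\n- Planned: ${d.planned}\n- Actual: ${d.actual}\n- Reason: ${d.reason}\n- Type: ${d.type}`
    ),
    '',
    '### Skipped Items',
    ...r.skipped.map((s) => `- ${s.item}\n  - Reason: ${s.reason}`),
  ].join('\n')
}

// Save to the path convention used by the command above.
const report: ExecutionReport = {
  featureName: 'example-feature',
  planFile: 'plans/example-feature.md',
  filesAdded: ['src/example.ts'],
  filesModified: ['src/index.ts'],
  linesChanged: { added: 120, removed: 8 },
  divergences: [],
  skipped: [],
}
mkdirSync('.agents/execution-reports', { recursive: true })
writeFileSync(`.agents/execution-reports/${report.featureName}.md`, renderReport(report))
```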
github_markdown
2025-12-05T20:37:45Z
https://github.com/coleam00/MongoDB-RAG-Agent/blob/b048eeab220e43b2b8f8c97508e4d6d2c134a468/.claude/commands/validation/execution-report.md
{}
# EVE Preview Manager

EVE Preview Manager - Yet another EVE-O-Preview clone for Linux, written in Rust. A reimplementation of my older [EVE-L_Preview](https://github.com/h0lylag/EVE-L_Preview).

Inspired by [EVE-O-Preview](https://github.com/Proopai/eve-o-preview), [EVE-X-Preview](https://github.com/g0nzo83/EVE-X-Preview), [Nicotine](https://github.com/isomerc/nicotine), and [eve-l-preview](https://github.com/ilveth/eve-l-preview).

## Status

This project is under active development and should be working. It's primarily designed around my own workflow and environment on NixOS. While pre-built binaries are provided, if you encounter issues, building from source is always an option.

If you want to get notified of new releases, give feedback, or get help troubleshooting, join the Discord: https://discord.gg/MxdW5NCjwV

## Features

- Real-time thumbnail previews of all EVE client windows
- Per-character and cycle group hotkeys with configurable key bindings
- Customizable thumbnail appearance including size, opacity, fonts, colors, and borders
- Profile-based configuration system for managing multiple setups
- One-click character import for cycle groups
- Optional features: cycle through logged-off clients, auto-minimize inactive windows, position inheritance for new characters, disable thumbnails altogether

## Screenshots

<p align="center">
  <a href="https://i.imgur.com/ztw7B1Q.png">
    <img src="https://i.imgur.com/ztw7B1Q.png" alt="EVE Preview Manager in action" width="400">
  </a>
  <a href="https://i.imgur.com/tfztoAt.png">
    <img src="https://i.imgur.com/tfztoAt.png" alt="EVE Preview Manager Settings" width="400">
  </a>
</p>

## Usage

1. **Launch the Application**: Run `eve-preview-manager`. It starts in GUI mode and creates a system tray icon.
2. **Manage Profiles**: Use the GUI to create specific profiles for different activities (e.g., PvP, Mining). You can add, remove, or duplicate profiles to quickly switch between setups.
3. **Configure Display Settings**: Customize the look and feel of your thumbnails, including size, opacity, fonts, borders, and colors to match your preferences.
4. **Set Up Hotkeys**: Configure hotkeys to cycle between clients in your active group.
5. **Manage Characters**:
   - **Add Characters**: Click the "Add" button to include EVE characters in your cycle group. Active and previously detected clients will appear in the popup.
   - **Manual Entry**: Alternatively, switch to "Text Editor" mode to manually paste a list of character names (one per line).
   - **Individual Hotkeys**: Once added to the cycle group, you can bind specific hotkeys to individual characters for direct access.
6. **Save & Apply**: Click "Save & Apply" to save your current configuration and refresh the previews.
7. **Swap Profiles**: Swapping profiles can be done quickly by right-clicking the system tray icon and selecting the desired profile.

**Note**: Configuration is stored in `~/.config/eve-preview-manager/config.json`.

## System Requirements

- **Required:** OpenGL, fontconfig, dbus, libxkbcommon, libxcb (standard on most distros).
- **Recommended:** Wayland (via XWayland). Native X11 environments are supported, but users may experience issues with preview overlays fighting for Z-order and incorrect image offsets.
- **Optional:** If using evdev instead of X11 hotkeys, you will need to add your user to the `input` group. Not recommended unless you know what you're doing.

## Installation

### Pre-built Binaries (Ubuntu, Arch, Fedora, etc.)
Download the latest release from the [Releases](https://github.com/h0lylag/EVE-Preview-Manager/releases) page: ```bash unzip eve-preview-manager-v*.zip chmod +x ./eve-preview-manager ./eve-preview-manager ``` ### NixOS Add the repo to your flake inputs: ```nix { inputs.eve-preview-manager.url = "github:h0lylag/EVE-Preview-Manager"; } ``` Then add it to your packages: ```nix environment.systemPackages = [ eve-preview-manager.packages.${pkgs.stdenv.hostPlatform.system}.default ]; ``` ### Build from Source **Build dependencies:** Rust/Cargo, pkg-config, fontconfig, dbus, X11, libxkbcommon ```bash git clone https://github.com/h0lylag/EVE-Preview-Manager.git cd EVE-Preview-Manager cargo build --release ```
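The Usage section notes that configuration lives in `~/.config/eve-preview-manager/config.json` and is organized into profiles with hotkeys, characters, and thumbnail settings. Purely as an illustration of that profile-based layout, here is a sketch in TypeScript types; the field names are guesses and the real schema is defined by the Rust code:

```ts
import { existsSync, readFileSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

// Illustrative only: these field names are guesses at what a profile-based
// config *could* contain, based on the features listed in the README.
interface ThumbnailSettings {
  width: number
  height: number
  opacity: number // 0.0 to 1.0
  borderColor: string
}

interface CharacterBinding {
  name: string
  hotkey?: string // optional per-character hotkey
}

interface Profile {
  name: string // e.g. "PvP", "Mining"
  cycleHotkey: string
  thumbnails: ThumbnailSettings
  characters: CharacterBinding[]
}

interface Config {
  activeProfile: string
  profiles: Profile[]
}

// Reading such a file is plain JSON parsing.
const configPath = join(homedir(), '.config', 'eve-preview-manager', 'config.json')
if (existsSync(configPath)) {
  const config: Config = JSON.parse(readFileSync(configPath, 'utf8'))
  console.log(config.profiles.map((p) => p.name))
}
```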
github_markdown
2025-12-15T08:43:16Z
https://github.com/h0lylag/EVE-Preview-Manager/blob/3d9767c13230277458b07709f460edc9ac1f2c27/README.md
{}
# Protobuf VSC Extension Documentation Welcome to the comprehensive documentation for the Protobuf VSC extension. This documentation covers all features, how to use them, and configuration options. ## 📚 Documentation Index ### Core Features - [Diagnostics](./diagnostics.md) - Comprehensive validation and error checking - [Code Lens](./code-lens.md) - Reference counts and metadata display - [Document Links](./document-links.md) - Clickable import paths - [Hover Information](./hover.md) - Rich symbol information on hover - [Code Actions](./code-actions.md) - Quick fixes and refactoring - [Completions](./completions.md) - Smart IntelliSense suggestions (including CEL/protovalidate) - [Symbol Search](./symbol-search.md) - Fuzzy workspace symbol search - [Snippets](./snippets.md) - Code snippets library ### Advanced Features - [Buf.yaml Support](./buf-config.md) - Integration with Buf configuration - [Templates](./templates.md) - Proto file templates - [Breaking Changes](./breaking-changes.md) - Breaking change detection - [Schema Graph](./schema-graph.md) - Visual schema visualization - [Schema Diff](./schema-diff.md) - Compare proto files against Git references - [Migration](./migration.md) - Convert proto2 to proto3 - [gRPC Integration](./grpc.md) - gRPC service analysis and code generation - [Google API Support](./google-api.md) - HTTP annotations and Google API patterns ### Developer Tools - [Toolchain Management](./toolchain.md) - Install and manage protoc/buf tools - [Code Generation](./codegen.md) - Configure and run codegen profiles - [Playground](./playground.md) - Test gRPC services interactively - [Option Inspector](./option-inspector.md) - Browse and navigate options - [Registry Management](./registry.md) - Add Buf registry dependencies ### Configuration - [Settings Reference](./settings.md) - Complete settings documentation - [Configuration Examples](./configuration-examples.md) - Common configuration patterns ### Reference - [Complete Features List](./FEATURES.md) - Comprehensive list of all features --- ## Quick Links - [Main README](../README.md) - [GitHub Issues](https://github.com/DrBlury/protobuf-vsc-extension/issues)
github_markdown
2025-12-15T08:28:02Z
https://github.com/DrBlury/protobuf-vsc-extension/blob/64d096249e44727369a189f59d6431cbc2b4d195/docs/README.md
{}
# Example 05: Many-to-Many Relationship

## Description

Demonstrates a many-to-many relationship implemented through an association table.

## ER Diagrams

### Option 1: Simple association table (Post-Tag)

```mermaid
erDiagram
    POST }o--o{ TAG : "tagged with"

    POST {
        int id PK
        string title
        text content
    }

    POST_TAGS {
        int post_id PK,FK "references posts(id)"
        int tag_id PK,FK "references tags(id)"
    }

    TAG {
        int id PK
        string name UK
    }
```

**Explanation:**
- `}o--o{` denotes a many-to-many relationship
- `POST_TAGS` is the association (junction) table
- One post can have many tags
- One tag can belong to many posts

### Option 2: Association table with extra data (Student-Course)

```mermaid
erDiagram
    STUDENT }o--o{ COURSE : "enrolled in"

    STUDENT {
        int id PK
        string name
        string email
    }

    STUDENT_COURSES {
        int student_id PK,FK "references students(id)"
        int course_id PK,FK "references courses(id)"
        datetime enrolled_at
        int grade
        boolean completed
    }

    COURSE {
        int id PK
        string name
        int credits
    }
```

**Explanation:**
- `STUDENT_COURSES` is an association table with additional columns
- It stores not only the link but also data about the enrollment (grade, date, status)
- It is used as a full-fledged model in SQLAlchemy

## Examples

1. **Posts and Tags** - a simple M2M relationship
2. **Students and Courses** - M2M with additional data (grades, dates)

## How to Run

```bash
python many_to_many.py
```

## Key Concepts

### Simple association table

```python
post_tags = Table(
    'post_tags', Base.metadata,
    Column('post_id', Integer, ForeignKey('posts.id')),
    Column('tag_id', Integer, ForeignKey('tags.id'))
)
```

### Association table as a model

When you need extra data about the relationship:

```python
class StudentCourse(Base):
    student_id = Column(Integer, ForeignKey('students.id'), primary_key=True)
    course_id = Column(Integer, ForeignKey('courses.id'), primary_key=True)
    grade = Column(Integer)
    enrolled_at = Column(DateTime)
```

## Result

- The `many_to_many.db` database
- Examples of both approaches to M2M
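For readers coming from stacks other than Python/SQLAlchemy, the same junction-table idea can be sketched as plain TypeScript types. This mirrors the two diagrams above and is only an illustration of the pattern, not part of the tutorial code:

```ts
// Option 1: a pure link table; the composite key (postId, tagId) is the whole row.
interface PostTag {
  postId: number
  tagId: number
}

// Option 2: a link table that also carries data about the relationship,
// so it behaves like a first-class entity (SQLAlchemy's "association object").
interface StudentCourse {
  studentId: number
  courseId: number
  enrolledAt: Date
  grade: number | null
  completed: boolean
}

// Example rows
const tagging: PostTag = { postId: 1, tagId: 3 }
const enrollment: StudentCourse = {
  studentId: 7,
  courseId: 2,
  enrolledAt: new Date('2024-09-01'),
  grade: null,
  completed: false,
}
console.log(tagging, enrollment)
```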
github_markdown
2025-12-13T18:49:05Z
https://github.com/akhtyamovpavel/SQLAlchemyTutorial/blob/89890ba6f56064290a74e0eafd00bc037650dc00/example_05/README.md
{}
# PixelTerm-C - High Performance Terminal Image Viewer *English | [中文](README_zh.md)* 🖼️ A high-performance terminal image browser written in C, based on the Chafa library. ## Overview PixelTerm-C is a C implementation of the original PixelTerm application, designed to provide significantly better performance than the Python version while maintaining all the same functionality. By leveraging the Chafa library directly instead of using subprocess calls, we eliminate the overhead of Python interpretation and external process creation. Release notes: see `CHANGELOG.md`. ## 🌟 Features - 🖼️ **Multi-format Support** - Supports JPG, PNG, GIF, BMP, WebP, TIFF and other mainstream image formats - 📁 **Smart Browsing** - Automatically detects image files in directories with directory navigation support - ⌨️ **Keyboard Navigation** - Switch between images with arrow keys, supporting various terminal environments - 📏 **Adaptive Display** - Automatically adapts to terminal size changes - 🎨️ **Minimal Interface** - No redundant information, focused on image browsing experience - ⚡️ **High Performance** - 5-10x faster than Python version with significantly lower memory usage - 🔄 **Circular Navigation** - Seamless browsing with wrap-around between first and last images - 📊 **Detailed Information** - Toggle comprehensive image metadata display - 🎯 **Blue Filenames** - Color-coded filename display for better visibility - 🏗️ **Multi-architecture Support** - Native support for both amd64 and aarch64 (ARM64) architectures - 📦 **Preloading** - Optional image preloading for faster navigation - 📋 **Smart Help** - Automatically shows version and help information when no images are found ## Performance Improvements | Metric | Python Version | C Version | Improvement | |--------|---------------|-----------|-------------| | Startup Time | ~1-2s | ~0.1-0.3s | Several times faster | | Image Switching | ~200-500ms | ~50-150ms | 2-5x faster | | Memory Usage | ~50-100MB | ~15-35MB | 2-3x reduction | | CPU Usage | High (Python + subprocess) | Medium (pure C) | Noticeable reduction | ## 🚀 Quick Start ### Install Dependencies ```bash # Ubuntu/Debian sudo apt-get install libchafa-dev libglib2.0-dev libgdk-pixbuf2.0-dev pkg-config build-essential # Arch Linux sudo pacman -S chafa glib2 gdk-pixbuf2 pkgconf base-devel ``` ### Quick Install ```bash # Install from package manager (recommended) # Arch Linux: pacman -S pixelterm-c # Or download binary for your architecture and platform # Linux AMD64: wget https://github.com/zouyonghe/PixelTerm-C/releases/latest/download/pixelterm-amd64-linux chmod +x pixelterm-amd64-linux && sudo mv pixelterm-amd64-linux /usr/local/bin/pixelterm # Linux ARM64: wget https://github.com/zouyonghe/PixelTerm-C/releases/latest/download/pixelterm-arm64-linux chmod +x pixelterm-arm64-linux && sudo mv pixelterm-arm64-linux /usr/local/bin/pixelterm # macOS AMD64: wget https://github.com/zouyonghe/PixelTerm-C/releases/latest/download/pixelterm-amd64-macos chmod +x pixelterm-amd64-macos && sudo mv pixelterm-amd64-macos /usr/local/bin/pixelterm # macOS ARM64 (Apple Silicon): wget https://github.com/zouyonghe/PixelTerm-C/releases/latest/download/pixelterm-arm64-macos chmod +x pixelterm-arm64-macos && sudo mv pixelterm-arm64-macos /usr/local/bin/pixelterm # Note for macOS users: If the binary fails to start due to security restrictions, run: # xattr -dr com.apple.quarantine pixelterm-arm64-macos ``` ### Building from Source ```bash git clone https://github.com/zouyonghe/PixelTerm-C.git cd PixelTerm-C make # For 
cross-compilation to aarch64 make CC=aarch64-linux-gnu-gcc ARCH=aarch64 ``` ### Usage ```bash # Browse images in directory (launches directly into preview grid if images exist) ./pixelterm /path/to/images # View single image (opens image viewer) ./pixelterm /path/to/image.jpg # Run in current directory ./pixelterm # Show version ./pixelterm --version # Show help ./pixelterm --help # Disable preloading ./pixelterm --no-preload /path/to/images ``` Preview grid basics: - When opening a directory with images, the app starts in the preview grid by default; from single-image view press `Enter` or `p` to enter the grid. - Use arrows/hjkl/PgUp/PgDn to move, `+`/`-` to change thumbnail size (at least 2 columns), and `Enter` to open the selected image. ## 🎮 Controls | Key | Function | |-----|----------| | ←/→ | Previous/Next image | | ↑/↓ | Move selection (preview/file manager) | | h/j/k/l | Vim-style navigation (left/down/up/right) | | PgUp/PgDn | Page up/down in preview grid | | p/Enter | Enter preview grid mode (move with arrows/PgUp/PgDn, Enter to open selected) | | +/- | Increase/decrease preview thumbnail size | | TAB | Toggle file manager (or exit when no images are loaded) | | i | Toggle image information | | r | Delete current image | | q | Return to previous view (image view exits app) | | ESC | Exit program (always) | | Ctrl+C | Force exit | File Manager: - ↑/↓ to navigate, Enter/→ to open, ← to go to parent. - Any letter key (a–z/A–Z) jumps to the next entry starting with that letter. - q returns to previous view (image view exits app); TAB toggles file manager; ESC quits the program. ## 📄 License LGPL-3.0 or later - See LICENSE file for details This project is licensed under the same license as Chafa (LGPLv3+). --- **PixelTerm-C** - Making terminals excellent image viewers with lightning speed! 🖼️
github_markdown
2025-12-15T07:53:09Z
https://github.com/zouyonghe/PixelTerm-C/blob/2f6153a7caaaf9deb1a3bd66db60f2277868d3de/README.md
{}
# NymMix: An Advanced-Encryption Suite for Scalable, Secure, Distributed Data Anonymization

> An advanced Rust solution leveraging modern architecture patterns and cutting-edge technology.

NymMix empowers enterprises with scalable, secure, distributed data anonymization through homomorphic-encryption methodologies. It is designed to provide developers and professionals with a robust, efficient, and scalable solution for their Rust development needs. This implementation focuses on performance, maintainability, and ease of use, incorporating industry best practices and modern software architecture patterns.

The primary purpose of NymMix is to streamline development workflows and enhance productivity through innovative features and comprehensive functionality. Whether you're building enterprise applications, data processing pipelines, or interactive systems, NymMix provides the foundation you need for successful project implementation.

NymMix's key benefits include:

* **High-performance architecture**: Leveraging optimized algorithms and efficient data structures for maximum performance.
* **Modern development patterns**: Implementing contemporary software engineering practices and design patterns.
* **Comprehensive testing**: Extensive test coverage ensuring reliability and maintainability.

# Key Features

* **Memory-safe Rust implementation**
* **Async/await for concurrent processing**
* **Zero-cost abstractions**
* **Cross-platform compatibility**
* **High-performance algorithms**

Each feature is implemented with an emphasis on optimized performance and comprehensive error handling.

# Technology Stack

* **Rust**: Primary development language providing performance, reliability, and extensive ecosystem support.
* **Modern tooling**: Utilizing contemporary development tools and frameworks for enhanced productivity.
* **Testing frameworks**: Comprehensive testing infrastructure ensuring code quality and reliability.

# Installation

To install NymMix, follow these steps:

1. Clone the repository: `git clone https://github.com/muskitma/NymMix.git`
2. Follow the installation instructions in the documentation for your specific environment.

# Configuration

NymMix supports various configuration options to customize behavior and optimize performance for your specific use case. Configuration can be managed through environment variables, configuration files, or programmatic settings.

## Configuration Options

The following configuration parameters are available:

* **Verbose Mode**: Enable detailed logging for debugging purposes
* **Output Format**: Customize the output format (JSON, CSV, XML)
* **Performance Settings**: Adjust memory usage and processing threads
* **Network Settings**: Configure timeout and retry policies

# Contributing

Contributions to NymMix are welcome and appreciated! We value community input and encourage developers to help improve this project.

## How to Contribute

1. Fork the NymMix repository.
2. Create a new branch for your feature or fix.
3. Implement your changes, ensuring they adhere to the project's coding standards and guidelines.
4. Submit a pull request, providing a detailed description of your changes.

## Development Guidelines

* Follow the existing code style and formatting conventions
* Write comprehensive tests for new features
* Update documentation when adding new functionality
* Ensure all tests pass before submitting your pull request

# License

This project is licensed under the MIT License. See the [LICENSE](https://github.com/muskitma/NymMix/blob/main/LICENSE) file for details.
github_markdown
2025-12-14T16:37:34Z
https://github.com/muskitma/NymMix/blob/d9bf12973c17922efe7b19de7e6e39b196bbbfdf/README.md
{}
# GSQL - Generic SQL **Parametric polymorphism for SQL schemas** GSQL is a domain-specific language that brings the power of generics/templates to database schemas. Define common patterns once, instantiate them freely. > Read the background story: > [Parametric Polymorphism for SQL](https://barish.me/blog/parametric-polymorphism-for-sql/) ## The Problem When building relational databases, you often need to duplicate table structures with minor variations. For example, in a learning management system, you might need announcements for courses, lessons, and exams—the same pattern repeated three times. Current solutions force you to choose between: - **Separate tables** - Violates DRY principles, leads to maintenance nightmares - **Polymorphic associations** - Sacrifices foreign key integrity and type safety GSQL solves this by letting you define reusable schema templates (concepts) that compile to PostgreSQL with proper foreign key constraints. ## Quick Example Here is the "LMS Dilemma" (Courses, Lessons, Exams) solved with GSQL. We define an `Announcing` pattern once and apply it to three different tables, generating strictly typed foreign keys for each. ``` // Define reusable patterns (Mixins) schema Timestamps { created_at timestamptz nonull default(NOW()); updated_at timestamptz nonull default(NOW()); } // Define a Generic Concept // Accepts a 'Target' type parameter to create a relationship concept Announcing<Target> { schema Announcements mixin Timestamps { id serial pkey; // Template variables: {Target}_id becomes course_id, lesson_id, etc. {Target}_id integer nonull ref(Target.id) ondelete(cascade); title text nonull; body text nonull; index({Target}_id); } } // Define Concrete Schemas (in actual app these would also be concepts with generics) schema Courses mixin Timestamps { id serial pkey; name text; } schema Lessons mixin Timestamps { id serial pkey; topic text; } schema Exams mixin Timestamps { id serial pkey; score int; } // Actually create tables by instantiating the Schemas/Concepts courses = Courses; lessons = Lessons; exams = Exams; // Create specific announcement tables for each entity course_announcements = Announcing<courses[course]>; lesson_announcements = Announcing<lessons[lesson]>; exam_announcements = Announcing<exams[exam]>; // Add per-instance indexes if needed index(course_announcements, created_at); ``` This generates three announcement tables with proper foreign keys: ```sql CREATE TABLE course_announcements ( id serial PRIMARY KEY, course_id integer NOT NULL REFERENCES courses(id) ON DELETE CASCADE, title text NOT NULL, body text NOT NULL, created_at timestamptz NOT NULL DEFAULT NOW(), updated_at timestamptz NOT NULL DEFAULT NOW() ); CREATE INDEX ON course_announcements (course_id); --- ... ``` ## Key Features - **Schemas**: A table definition with columns, constraints, indexes, triggers - **Concepts**: Generic schema templates with type parameters - **Mixins**: Compose reusable schema fragments - **Template variables**: Automatic field name expansion - **Sibling references**: Multiple schemas within one concept can reference each other - **Per-instance indexes**: Add indexes after instantiation - **Type-safe foreign keys**: Proper FK constraints for polymorphic patterns - **PostgreSQL output**: Compiles to PostgreSQL, integrates with migration tools like Atlas ## Try It Out Try GSQL in your browser with the [online playground](https://gsql.barish.me). 
## Installation ```bash npm install @barishnamazov/gsql # or bun install @barishnamazov/gsql ``` ## Usage ### Command Line ```bash # Compile a GSQL file to SQL gsql compile schema.gsql -o schema.sql # Output to stdout gsql compile schema.gsql # Show help gsql --help ``` ### As a Library ```typescript import { compile, compileToSQL } from "@barishnamazov/gsql"; // Get detailed result const result = compile(source); if (result.success) { console.log(result.sql); } else { console.error(result.errors); } // Or just get SQL (throws on error) const sql = compileToSQL(source); ``` ## Syntax Reference ### Schemas ```gsql schema Name mixin Mixin1, Mixin2 { column_name type constraint1 constraint2; index(column1, column2) unique; check(expression); trigger name before update on each row execute function fn(); } ``` ### Concepts ```gsql concept Tagging<Target> { schema Tags { id serial pkey; name text; } schema Taggings { {Target}_id integer ref(Target.id); {Tags}_id integer ref(Tags.id); // sibling reference index({Target}_id, {Tags}_id) unique; } } users = Users; // {Target}_id becomes user_id // {Tags}_id becomes user_tag_id user_tags[user_tag], user_taggings = Tagging<users[user]>; ``` ### Instantiation ```gsql // Simple table_name = SchemaOrConcept; // With type arguments and aliases table_name = Concept<other_table[alias]>; // Multiple outputs table1, table2 = ConceptWithMultipleSchemas<type_arg>; ``` **Aliases:** When instantiating a concept: - **With alias:** `exams[examHello]` → uses `examHello_id` (preserves alias as-is) - **Without alias:** `authors` → uses `author_id` (snake_cased from parameter name `Author`) Example: ```gsql concept Announcing<Target, Author> { schema Announcements { {Target}_id integer nonull ref(Target.id); {Author}_id integer nonull ref(Author.id); } } schema Exams { id serial pkey; } schema Authors { id serial pkey; } exams = Exams; authors = Authors; // Creates table with exam_id and author_id columns // We don't need to alias authors, because the parameter name is Author announcements = Announcing<exams[exam], authors>; ``` ### Data Types - `serial`, `bigserial` - `integer`, `bigint`, `smallint` - `text`, `varchar(n)`, `char(n)` - `boolean` - `timestamptz`, `timestamp`, `date`, `time` - `jsonb`, `json` - `uuid`, `inet`, `citext` - `decimal`, `numeric`, `real` - `bytea` ### Constraints - `pkey` - Primary key - `nonull` - Not null - `unique` - Unique constraint - `default(value)` - Default value - `ref(Table.column)` - Foreign key reference - `ondelete(cascade|restrict|setnull|setdefault|noaction)` - `check(expression)` - Check constraint ## Development This is a monorepo with multiple packages: - **`packages/gsql`** - Core library and CLI (published as `@barishnamazov/gsql`) - **`packages/playground`** - Browser-based playground ### Building ```bash # Build everything npm run build # Build just the library npm run build:gsql # Build just the playground npm run build:playground ``` ### Testing ```bash npm test ``` ### Playground Development ```bash npm run dev:playground ``` After building, open `packages/playground/dist/index.html` in your browser. ### Linting and Formatting ```bash npm run lint npm run format npm run typecheck:all ``` ## License MIT
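As a small automation sketch (not part of the GSQL docs), the documented `gsql compile` CLI can also be driven from a script. The snippet below assumes the `gsql` binary from `@barishnamazov/gsql` is installed and on PATH, and it reuses the `Timestamps`/`Courses` schemas from the quick example; treat it as a sketch rather than an officially supported workflow.

```python
# Hedged sketch: drive the documented `gsql compile` CLI from Python.
# Assumes `gsql` (from @barishnamazov/gsql) is installed and on PATH.
import pathlib
import subprocess

SCHEMA = """
schema Timestamps {
  created_at timestamptz nonull default(NOW());
  updated_at timestamptz nonull default(NOW());
}

schema Courses mixin Timestamps {
  id serial pkey;
  name text;
}

courses = Courses;
"""

def compile_gsql(src: str, out_sql: str = "schema.sql") -> str:
    """Write a .gsql file and compile it to SQL via the documented CLI form."""
    gsql_path = pathlib.Path("schema.gsql")
    gsql_path.write_text(src)
    # `gsql compile <file> -o <out>` is the invocation shown in the Usage section.
    subprocess.run(["gsql", "compile", str(gsql_path), "-o", out_sql], check=True)
    return pathlib.Path(out_sql).read_text()

if __name__ == "__main__":
    print(compile_gsql(SCHEMA))
```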
github_markdown
2025-12-15T09:55:39Z
https://github.com/BarishNamazov/gsql/blob/ac3a93f2c3117cd6df0eee2548584fc85fb34b12/packages/gsql/README.md
{}
import os import re import argparse import ffmpeg import warnings warnings.filterwarnings('ignore') import torch import torch.nn as nn from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration import wan from wan.utils.utils import cache_video, cache_image from vace.models.wan.configs import WAN_CONFIGS from module import ProjMLP, WanVaceProj, process_vace_data def get_parser(): parser = argparse.ArgumentParser() parser.add_argument("--qwenvl_path", type=str, required=True) parser.add_argument("--vace_path", type=str, required=True) parser.add_argument("--proj_path", type=str, required=True) parser.add_argument("--prompt", type=str, required=True) parser.add_argument("--save_path", type=str, required=True) return parser def main(): args = get_parser().parse_args() cfg = WAN_CONFIGS["vace-1.3B"] prompt = args.prompt visual_content_list = re.findall(r'###(.*?)###', prompt) for visual_content_path in visual_content_list: if visual_content_path.endswith((".png", ".jpg", ".jpeg")): PAD_TOKEN = "<IMGPAD>" elif visual_content_path.endswith((".mp4", ".avi", ".mov")): PAD_TOKEN = "<VIDPAD>" else: assert False, "Unsupported file type" prompt = prompt.replace(f"###{visual_content_path}###", PAD_TOKEN) qwenvl_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(args.qwenvl_path, device_map="auto").to("cuda") qwenvl_processor = AutoProcessor.from_pretrained(args.qwenvl_path) wan_vace = WanVaceProj(config=cfg, checkpoint_dir=args.vace_path, device_id=0) wan_vace.model.text_len = 2048 projector = ProjMLP(input_dim=qwenvl_model.config.text_config.hidden_size, t5_dim=4096).to("cuda") state_dict = torch.load(args.proj_path) projector.load_state_dict({k.replace('module.', ''): v for k, v in state_dict.items()}) messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}] with torch.no_grad(): inputs = qwenvl_processor.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True, ) inputs = inputs.to("cuda") generated_ids = qwenvl_model.generate(**inputs, max_new_tokens=256) generated_ids_trimmed = [out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)] output_text = qwenvl_processor.batch_decode(generated_ids_trimmed, skip_special_tokens=False, clean_up_tokenization_spaces=False) output_text = output_text[0].replace('<|im_end|>', '').strip() size = (832, 480) src_mask = src_video = src_ref = None if '<CFI>' in output_text: print(output_text) num_frames = 1 if '<BORES>' in output_text: match = re.search(r'<BORES>([^<]+)<EORES>', output_text) match_str = match.group(1) width, height = map(int, match_str.split(',')) size = (width, height) if '<BOEDIT>' in output_text: match = re.search(r'<BOEDIT>([^<]+)<EOEDIT>', output_text) match_str = match.group(1) mask_id, source_id = map(int, match_str.split(',')) mask_path = visual_content_list[mask_id] src_path = visual_content_list[source_id] result = process_vace_data(task="inpainting", mode="mask", video=src_path, mask=mask_path, save_fps=24) src_mask = result['src_mask'] src_video = result['src_video'] if '<REF>' in output_text: ref_path = visual_content_list[0] result = process_vace_data(task="image_reference", mode="plain", image=ref_path) src_ref = result['src_ref_images'] elif '<CFV>' in output_text: print(output_text) num_frames = 81 if '<BORES>' in output_text: match = re.search(r'<BORES>([^<]+)<EORES>', output_text) match_str = match.group(1) width, height = map(int, match_str.split(',')) size = (width, height) if '<BONF>' in output_text: 
match = re.search(r'<BONF>([^<]+)<EONF>', output_text) match_str = match.group(1) num_frames = int(match_str) + 1 if '<BOFIDX>' in output_text: match = re.search(r'<BOFIDX>([^<]+)<EOFIDX>', output_text) match_str = match.group(1) mode = "firstframe" if match_str == "match_str" else "lastframe" result = process_vace_data(task="frameref", mode=mode, image=visual_content_list[0]) src_mask = result['src_mask'] src_video = result['src_video'] if '<BOEDIT>' in output_text: src_path = visual_content_list[0] result = process_vace_data(task="inpainting", mode="salient", video=src_path, save_fps=24) src_mask = result['src_mask'] src_video = result['src_video'] if '<REF>' in output_text: ref_path = visual_content_list[0] result = process_vace_data(task="image_reference", mode="plain", image=ref_path) src_ref = result['src_ref_images'] if '<CTRL>' in output_text: src_video = visual_content_list[0] else: if len(visual_content_list) == 0: messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}] else: messages = [{"role": "user", "content": [{"type": "text", "text": prompt}, {"type": "image", "image": visual_content_list[0]}]}] with torch.no_grad(): inputs = qwenvl_processor.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True, ) inputs = inputs.to("cuda") generated_ids = qwenvl_model.generate(**inputs, max_new_tokens=256) generated_ids_trimmed = [out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)] output_text = qwenvl_processor.batch_decode(generated_ids_trimmed, skip_special_tokens=False, clean_up_tokenization_spaces=False) output_text = output_text[0].replace('<|im_end|>', '').strip() print(output_text) return with torch.no_grad(): inputs_states = qwenvl_processor.apply_chat_template( messages, tokenize=True, add_generation_prompt=False, return_tensors="pt", padding=True ).to("cuda") qwenvl_outputs = qwenvl_model.model(inputs_states).last_hidden_state.to("cuda") qwenvl_proj_feat = projector(qwenvl_outputs[:, 1:, :].float()) src_video, src_mask, src_ref_images = wan_vace.prepare_source([src_video], [src_mask], [None if src_ref is None else src_ref.split(',')], num_frames, size, "cuda") video = wan_vace.generate( prompt, qwenvl_proj_feat, src_video, src_mask, src_ref_images, size=size, frame_num=num_frames, seed=0 ) os.makedirs(os.path.split(args.save_path)[0], exist_ok=True) if num_frames == 1: cache_image(tensor=video[:, 0, ...], save_file=args.save_path, nrow=1, normalize=True, value_range=(-1, 1)) else: cache_video(tensor=video[None], save_file=args.save_path, fps=24, nrow=1, normalize=True, value_range=(-1, 1)) if __name__ == "__main__": main()
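For reference, the script is driven entirely from the command line (`python inference.py --qwenvl_path ... --vace_path ... --proj_path ... --prompt "..." --save_path ...`), and the only prompt convention it relies on is wrapping visual file paths in `###...###`. The snippet below restates that pre-processing step in isolation, using the same regex and placeholder tokens as the script above; the demo prompt and file names are made up for illustration.

```python
# Standalone restatement of the prompt pre-processing step in inference.py:
# visual file paths wrapped in ###...### are collected, and each occurrence is
# replaced by a modality placeholder token before the prompt is sent to the planner.
import re

IMAGE_EXTS = (".png", ".jpg", ".jpeg")
VIDEO_EXTS = (".mp4", ".avi", ".mov")

def extract_visual_content(prompt: str):
    """Return (rewritten_prompt, ordered list of referenced file paths)."""
    paths = re.findall(r'###(.*?)###', prompt)
    for path in paths:
        if path.endswith(IMAGE_EXTS):
            token = "<IMGPAD>"
        elif path.endswith(VIDEO_EXTS):
            token = "<VIDPAD>"
        else:
            raise ValueError(f"Unsupported file type: {path}")
        prompt = prompt.replace(f"###{path}###", token)
    return prompt, paths

if __name__ == "__main__":
    demo = "Replace the dog in ###cat.mp4### with the animal in ###dog.png###"
    print(extract_visual_content(demo))
    # -> ('Replace the dog in <VIDPAD> with the animal in <IMGPAD>',
    #     ['cat.mp4', 'dog.png'])
```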
github_python
2025-12-11T13:37:46Z
https://github.com/ali-vilab/Unison/blob/2014063ce531f77cb0f6356c66de3f474c7b6f27/inference.py
{}
#!/usr/bin/env python3 """SQLite database for robust notification tracking.""" import sqlite3 import json from pathlib import Path from datetime import datetime, timedelta from typing import Set, Dict, List, Optional, Tuple import logging logger = logging.getLogger(__name__) class NotificationDB: """Database for tracking notification processing state.""" def __init__(self, db_path: str = "queue/notifications.db"): """Initialize the notification database.""" self.db_path = Path(db_path) self.db_path.parent.mkdir(exist_ok=True, parents=True) self.conn = None self._init_db() def _init_db(self): """Initialize database schema.""" self.conn = sqlite3.connect(self.db_path, check_same_thread=False) self.conn.row_factory = sqlite3.Row # Create main notifications table self.conn.execute(""" CREATE TABLE IF NOT EXISTS notifications ( uri TEXT PRIMARY KEY, indexed_at TEXT NOT NULL, processed_at TEXT, status TEXT NOT NULL DEFAULT 'pending', reason TEXT, author_handle TEXT, author_did TEXT, text TEXT, parent_uri TEXT, root_uri TEXT, error TEXT, metadata TEXT ) """) # Create indexes for faster lookups self.conn.execute(""" CREATE INDEX IF NOT EXISTS idx_indexed_at ON notifications(indexed_at DESC) """) self.conn.execute(""" CREATE INDEX IF NOT EXISTS idx_status ON notifications(status) """) self.conn.execute(""" CREATE INDEX IF NOT EXISTS idx_author_handle ON notifications(author_handle) """) # Create session tracking table self.conn.execute(""" CREATE TABLE IF NOT EXISTS sessions ( id INTEGER PRIMARY KEY AUTOINCREMENT, started_at TEXT NOT NULL, ended_at TEXT, last_seen_at TEXT, notifications_processed INTEGER DEFAULT 0, notifications_skipped INTEGER DEFAULT 0, notifications_error INTEGER DEFAULT 0 ) """) self.conn.commit() def add_notification(self, notif_dict: Dict) -> bool: """Add a notification to the database.""" try: # Handle None input if not notif_dict: return False # Extract key fields uri = notif_dict.get('uri', '') if not uri: return False indexed_at = notif_dict.get('indexed_at', '') reason = notif_dict.get('reason', '') author = notif_dict.get('author', {}) if notif_dict.get('author') else {} author_handle = author.get('handle', '') if author else '' author_did = author.get('did', '') if author else '' # Extract text from record if available (handle None records) record = notif_dict.get('record') or {} text = record.get('text', '')[:500] if record else '' # Extract thread info parent_uri = None root_uri = None if record and 'reply' in record and record['reply']: reply_info = record['reply'] if reply_info and isinstance(reply_info, dict): parent_info = reply_info.get('parent', {}) root_info = reply_info.get('root', {}) if parent_info: parent_uri = parent_info.get('uri') if root_info: root_uri = root_info.get('uri') # Store additional metadata as JSON metadata = { 'cid': notif_dict.get('cid'), 'labels': notif_dict.get('labels', []), 'is_read': notif_dict.get('is_read', False) } self.conn.execute(""" INSERT OR IGNORE INTO notifications (uri, indexed_at, reason, author_handle, author_did, text, parent_uri, root_uri, status, metadata) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 'pending', ?) """, (uri, indexed_at, reason, author_handle, author_did, text, parent_uri, root_uri, json.dumps(metadata))) self.conn.commit() return True except Exception as e: logger.error(f"Error adding notification to DB: {e}") return False def is_processed(self, uri: str) -> bool: """Check if a notification has been processed.""" cursor = self.conn.execute(""" SELECT status FROM notifications WHERE uri = ? 
""", (uri,)) row = cursor.fetchone() if row: return row['status'] in ['processed', 'ignored', 'no_reply'] return False def mark_processed(self, uri: str, status: str = 'processed', error: str = None): """Mark a notification as processed.""" try: self.conn.execute(""" UPDATE notifications SET status = ?, processed_at = ?, error = ? WHERE uri = ? """, (status, datetime.now().isoformat(), error, uri)) self.conn.commit() except Exception as e: logger.error(f"Error marking notification processed: {e}") def get_unprocessed(self, limit: int = 100) -> List[Dict]: """Get unprocessed notifications.""" cursor = self.conn.execute(""" SELECT * FROM notifications WHERE status = 'pending' ORDER BY indexed_at ASC LIMIT ? """, (limit,)) return [dict(row) for row in cursor] def get_latest_processed_time(self) -> Optional[str]: """Get the timestamp of the most recently processed notification.""" cursor = self.conn.execute(""" SELECT MAX(indexed_at) as latest FROM notifications WHERE status IN ('processed', 'ignored', 'no_reply') """) row = cursor.fetchone() return row['latest'] if row and row['latest'] else None def cleanup_old_records(self, days: int = 7): """Remove records older than specified days.""" cutoff_date = (datetime.now() - timedelta(days=days)).isoformat() deleted = self.conn.execute(""" DELETE FROM notifications WHERE indexed_at < ? AND status IN ('processed', 'ignored', 'no_reply', 'error') """, (cutoff_date,)).rowcount self.conn.commit() if deleted > 0: logger.info(f"Cleaned up {deleted} old notification records") # Vacuum to reclaim space self.conn.execute("VACUUM") def get_stats(self) -> Dict: """Get database statistics.""" stats = {} # Count by status cursor = self.conn.execute(""" SELECT status, COUNT(*) as count FROM notifications GROUP BY status """) for row in cursor: stats[f"status_{row['status']}"] = row['count'] # Total count cursor = self.conn.execute("SELECT COUNT(*) as total FROM notifications") stats['total'] = cursor.fetchone()['total'] # Recent activity (last 24h) yesterday = (datetime.now() - timedelta(days=1)).isoformat() cursor = self.conn.execute(""" SELECT COUNT(*) as recent FROM notifications WHERE indexed_at > ? """, (yesterday,)) stats['recent_24h'] = cursor.fetchone()['recent'] return stats def start_session(self) -> int: """Start a new processing session.""" cursor = self.conn.execute(""" INSERT INTO sessions (started_at, last_seen_at) VALUES (?, ?) """, (datetime.now().isoformat(), datetime.now().isoformat())) self.conn.commit() return cursor.lastrowid def update_session(self, session_id: int, processed: int = 0, skipped: int = 0, error: int = 0): """Update session statistics.""" self.conn.execute(""" UPDATE sessions SET last_seen_at = ?, notifications_processed = notifications_processed + ?, notifications_skipped = notifications_skipped + ?, notifications_error = notifications_error + ? WHERE id = ? """, (datetime.now().isoformat(), processed, skipped, error, session_id)) self.conn.commit() def end_session(self, session_id: int): """End a processing session.""" self.conn.execute(""" UPDATE sessions SET ended_at = ? WHERE id = ? """, (datetime.now().isoformat(), session_id)) self.conn.commit() def get_processed_uris(self, limit: int = 10000) -> Set[str]: """Get set of processed URIs for compatibility with existing code.""" cursor = self.conn.execute(""" SELECT uri FROM notifications WHERE status IN ('processed', 'ignored', 'no_reply') ORDER BY processed_at DESC LIMIT ? 
""", (limit,)) return {row['uri'] for row in cursor} def migrate_from_json(self, json_path: str = "queue/processed_notifications.json"): """Migrate data from the old JSON format.""" json_file = Path(json_path) if not json_file.exists(): return try: with open(json_file, 'r') as f: uris = json.load(f) migrated = 0 for uri in uris: # Add as processed with unknown timestamp self.conn.execute(""" INSERT OR IGNORE INTO notifications (uri, indexed_at, status, processed_at) VALUES (?, ?, 'processed', ?) """, (uri, datetime.now().isoformat(), datetime.now().isoformat())) migrated += 1 self.conn.commit() logger.info(f"Migrated {migrated} URIs from JSON to database") # Rename old file to backup backup_path = json_file.with_suffix('.json.backup') json_file.rename(backup_path) logger.info(f"Renamed old JSON file to {backup_path}") except Exception as e: logger.error(f"Error migrating from JSON: {e}") def close(self): """Close database connection.""" if self.conn: self.conn.close()
github_python
2025-12-06T01:34:40Z
https://github.com/letta-ai/example-social-agent/blob/e7a2a05485176719adcbf8b0426b756bbb9c3898/notification_db.py
{}
import pandas as pd from sklearn.model_selection import train_test_split from .config import ( RAW_DATA_PATH, PROCESSED_DATA_DIR, TARGET_COL, TIME_COL, TEST_SIZE, RANDOM_STATE, ) def load_raw(path: str | None = None) -> pd.DataFrame: csv_path = RAW_DATA_PATH if path is None else path df = pd.read_csv(csv_path) return df def engineer_time_features(df: pd.DataFrame) -> pd.DataFrame: """Add hour_of_day, day_of_week, is_weekend derived from timestamp.""" df = df.copy() df[TIME_COL] = pd.to_datetime(df[TIME_COL]) df["hour_of_day"] = df[TIME_COL].dt.hour df["day_of_week"] = df[TIME_COL].dt.dayofweek # Monday=0 df["is_weekend"] = df["day_of_week"].isin([5, 6]).astype(int) return df def split_train_test(df: pd.DataFrame): if TARGET_COL not in df.columns: raise ValueError(f"Target column '{TARGET_COL}' missing from data.") train_df, test_df = train_test_split( df, test_size=TEST_SIZE, stratify=df[TARGET_COL], random_state=RANDOM_STATE, ) return train_df, test_df def save_processed(train_df: pd.DataFrame, test_df: pd.DataFrame) -> None: train_path = PROCESSED_DATA_DIR / "sessions_train.csv" test_path = PROCESSED_DATA_DIR / "sessions_test.csv" train_df.to_csv(train_path, index=False) test_df.to_csv(test_path, index=False) print(f"Saved train data to: {train_path}") print(f"Saved test data to: {test_path}") def main() -> None: df = load_raw() print(f"Loaded raw data: {df.shape}") df_feat = engineer_time_features(df) print("Engineered time features:") print(df_feat[[TIME_COL, "hour_of_day", "day_of_week", "is_weekend"]].head()) train_df, test_df = split_train_test(df_feat) print(f"Train: {train_df.shape}, Test: {test_df.shape}") print("Train satisfaction distribution:") print(train_df[TARGET_COL].value_counts(normalize=True).rename("train_ratio")) print("Test satisfaction distribution:") print(test_df[TARGET_COL].value_counts(normalize=True).rename("test_ratio")) save_processed(train_df, test_df) if __name__ == "__main__": main()
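The feature-engineering step above depends on constants from `.config`; the toy example below restates the same transformations on an inline DataFrame so it runs standalone. The `timestamp` and `satisfied` column names are stand-ins for TIME_COL and TARGET_COL, chosen here only for illustration.

```python
# Minimal, self-contained illustration of the time-feature step in data_prep.py.
# "timestamp" stands in for TIME_COL from src/config.py; the toy rows are made up.
import pandas as pd

toy = pd.DataFrame({
    "timestamp": ["2025-01-03 09:15:00", "2025-01-04 22:40:00"],  # Friday, Saturday
    "satisfied": [1, 0],
})

toy["timestamp"] = pd.to_datetime(toy["timestamp"])
toy["hour_of_day"] = toy["timestamp"].dt.hour
toy["day_of_week"] = toy["timestamp"].dt.dayofweek        # Monday=0
toy["is_weekend"] = toy["day_of_week"].isin([5, 6]).astype(int)

print(toy[["timestamp", "hour_of_day", "day_of_week", "is_weekend"]])
# day_of_week comes out as 4 (Friday) and 5 (Saturday); is_weekend as 0 and 1.
```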
github_python
2025-12-05T16:30:45Z
https://github.com/AmirhosseinHonardoust/AI-Assistant-Satisfaction-Prediction-Engine/blob/8d20b9156cb6b40e36c98a32839069c349f9088a/src/data_prep.py
{}
# Copyright (c) 2015 CensoredUsername # This module provides tools for safely analyizing pickle files programmatically import sys PY3 = sys.version_info >= (3, 0) PY2 = not PY3 import types import pickle import struct try: # only available (and needed) from 3.4 onwards. from importlib.machinery import ModuleSpec except: pass if PY3: from io import BytesIO as StringIO else: from cStringIO import StringIO __all__ = [ "load", "loads", "safe_load", "safe_loads", "safe_dump", "safe_dumps", "fake_package", "remove_fake_package", "FakeModule", "FakePackage", "FakePackageLoader", "FakeClassType", "FakeClassFactory", "FakeClass", "FakeStrict", "FakeWarning", "FakeIgnore", "FakeUnpicklingError", "FakeUnpickler", "SafeUnpickler", "SafePickler", ] # Fake class implementation class FakeClassType(type): """The metaclass used to create fake classes. To support comparisons between fake classes and :class:`FakeModule` instances custom behaviour is defined here which follows this logic: If the other object does not have ``other.__name__`` set, they are not equal. Else if it does not have ``other.__module__`` set, they are equal if ``self.__module__ + "." + self.__name__ == other.__name__``. Else, they are equal if ``self.__module__ == other.__module__ and self.__name__ == other.__name__`` Using this behaviour, ``==``, ``!=``, ``hash()``, ``isinstance()`` and ``issubclass()`` are implemented allowing comparison between :class:`FakeClassType` instances and :class:`FakeModule` instances to succeed if they are pretending to be in the same place in the python module hierarchy. To create a fake class using this metaclass, you can either use this metaclass directly or inherit from the fake class base instances given below. When doing this, the module that this fake class is pretending to be in should be specified using the *module* argument when the metaclass is called directly or a :attr:``__module__`` class attribute in a class statement. This is a subclass of :class:`type`. """ # instance creation logic def __new__(cls, name, bases, attributes, module=None): # This would be a lie attributes.pop("__qualname__", None) # figure out what module we should say we're in # note that if no module is explicitly passed, the current module will be chosen # due to the class statement implicitly specifying __module__ as __name__ if module is not None: attributes["__module__"] = module if "__module__" not in attributes: raise TypeError( "No module has been specified for FakeClassType {0}".format(name) ) # assemble instance return type.__new__(cls, name, bases, attributes) def __init__(self, name, bases, attributes, module=None): type.__init__(self, name, bases, attributes) # comparison logic def __eq__(self, other): if not hasattr(other, "__name__"): return False if hasattr(other, "__module__"): return ( self.__module__ == other.__module__ and self.__name__ == other.__name__ ) else: return self.__module__ + "." + self.__name__ == other.__name__ def __ne__(self, other): return not self == other def __hash__(self): return hash(self.__module__ + "." 
+ self.__name__) def __instancecheck__(self, instance): return self.__subclasscheck__(instance.__class__) def __subclasscheck__(self, subclass): return self == subclass or ( bool(subclass.__bases__) and any(self.__subclasscheck__(base) for base in subclass.__bases__) ) # PY2 doesn't like the PY3 way of metaclasses and PY3 doesn't support the PY2 way # so we call the metaclass directly FakeClass = FakeClassType( "FakeClass", (), {"__doc__": """ A barebones instance of :class:`FakeClassType`. Inherit from this to create fake classes. """}, module=__name__, ) class FakeStrict(FakeClass, object): def __new__(cls, *args, **kwargs): self = FakeClass.__new__(cls) if args or kwargs: raise FakeUnpicklingError( "{0} was instantiated with unexpected arguments {1}, {2}".format( cls, args, kwargs ) ) return self def __setstate__(self, state): slotstate = None if ( isinstance(state, tuple) and len(state) == 2 and (state[0] is None or isinstance(state[0], dict)) and (state[1] is None or isinstance(state[1], dict)) ): state, slotstate = state if state: # Don't have to check for slotstate here since it's either None or a dict if not isinstance(state, dict): raise FakeUnpicklingError( "{0}.__setstate__() got unexpected arguments {1}".format( self.__class__, state ) ) else: self.__dict__.update(state) if slotstate: self.__dict__.update(slotstate) class FakeWarning(FakeClass, object): def __new__(cls, *args, **kwargs): self = FakeClass.__new__(cls) if args or kwargs: print( "{0} was instantiated with unexpected arguments {1}, {2}".format( cls, args, kwargs ) ) self._new_args = args return self def __setstate__(self, state): slotstate = None if ( isinstance(state, tuple) and len(state) == 2 and (state[0] is None or isinstance(state[0], dict)) and (state[1] is None or isinstance(state[1], dict)) ): state, slotstate = state if state: # Don't have to check for slotstate here since it's either None or a dict if not isinstance(state, dict): print( "{0}.__setstate__() got unexpected arguments {1}".format( self.__class__, state ) ) self._setstate_args = state else: self.__dict__.update(state) if slotstate: self.__dict__.update(slotstate) class FakeIgnore(FakeClass, object): def __new__(cls, *args, **kwargs): self = FakeClass.__new__(cls) if args: self._new_args = args if kwargs: self._new_kwargs = kwargs return self def __setstate__(self, state): slotstate = None if ( isinstance(state, tuple) and len(state) == 2 and (state[0] is None or isinstance(state[0], dict)) and (state[1] is None or isinstance(state[1], dict)) ): state, slotstate = state if state: # Don't have to check for slotstate here since it's either None or a dict if not isinstance(state, dict): self._setstate_args = state else: self.__dict__.update(state) if slotstate: self.__dict__.update(slotstate) class FakeClassFactory(object): """Factory of fake classses. It will create fake class definitions on demand based on the passed arguments. """ def __init__(self, special_cases=(), default_class=FakeStrict): """*special_cases* should be an iterable containing fake classes which should be treated as special cases during the fake unpickling process. This way you can specify custom methods and attributes on these classes as they're used during unpickling. *default_class* should be a FakeClassType instance which will be subclassed to create the necessary non-special case fake classes during unpickling. This should usually be set to :class:`FakeStrict`, :class:`FakeWarning` or :class:`FakeIgnore`. 
These classes have :meth:`__new__` and :meth:`__setstate__` methods which extract data from the pickle stream and provide means of inspecting the stream when it is not clear how the data should be interpreted. As an example, we can define the fake class generated for definition bar in module foo, which has a :meth:`__str__` method which returns ``"baz"``:: class bar(FakeStrict, object): def __str__(self): return "baz" special_cases = [bar] Alternatively they can also be instantiated using :class:`FakeClassType` directly:: special_cases = [FakeClassType(c.__name__, c.__bases__, c.__dict__, c.__module__)] """ self.special_cases = dict( ((i.__module__, i.__name__), i) for i in special_cases ) self.default = default_class self.class_cache = {} def __call__(self, name, module): """Return the right class for the specified *module* and *name*. This class will either be one of the special cases in case the name and module match, or a subclass of *default_class* will be created with the correct name and module. Created class definitions are cached per factory instance. """ # Check if we've got this class cached klass = self.class_cache.get((module, name), None) if klass is not None: return klass klass = self.special_cases.get((module, name), None) if not klass: # generate a new class def which inherits from the default fake class klass = type(name, (self.default,), {"__module__": module}) self.class_cache[(module, name)] = klass return klass # Fake module implementation class FakeModule(types.ModuleType): """An object which pretends to be a module. *name* is the name of the module and should be a ``"."`` separated alphanumeric string. On initialization the module is added to sys.modules so it can be imported properly. Further if *name* is a submodule and if its parent does not exist, it will automatically create a parent :class:`FakeModule`. This operates recursively until the parent is a top-level module or when the parent is an existing module. If any fake submodules are removed from this module they will automatically be removed from :data:`sys.modules`. Just as :class:`FakeClassType`, it supports comparison with :class:`FakeClassType` instances, using the following logic: If the object does not have ``other.__name__`` set, they are not equal. Else if the other object does not have ``other.__module__`` set, they are equal if: ``self.__name__ == other.__name__`` Else, they are equal if: ``self.__name__ == other.__module__ + "." + other.__name__`` Using this behaviour, ``==``, ``!=``, ``hash()``, ``isinstance()`` and ``issubclass()`` are implemented allowing comparison between :class:`FakeClassType` instances and :class:`FakeModule` instances to succeed if they are pretending to bein the same place in the python module hierarchy. It inherits from :class:`types.ModuleType`. """ def __init__(self, name): super(FakeModule, self).__init__(name) sys.modules[name] = self if "." 
in name: parent_name, child_name = name.rsplit(".", 1) try: __import__(parent_name) parent = sys.modules[parent_name] except: parent = FakeModule(parent_name) setattr(parent, child_name, self) def __repr__(self): return "<module '{0}' (fake)>".format(self.__name__) def __str__(self): return self.__repr__() def __setattr__(self, name, value): # If a fakemodule is removed we need to remove its entry from sys.modules if ( name in self.__dict__ and isinstance(self.__dict__[name], FakeModule) and not isinstance(value, FakeModule) ): self.__dict__[name]._remove() self.__dict__[name] = value def __delattr__(self, name): if isinstance(self.__dict__[name], FakeModule): self.__dict__[name]._remove() del self.__dict__[name] def _remove(self): """Removes this module from :data:`sys.modules` and calls :meth:`_remove` on any sub-FakeModules. """ for i in tuple(self.__dict__.keys()): if isinstance(self.__dict__[i], FakeModule): self.__dict__[i]._remove() del self.__dict__[i] del sys.modules[self.__name__] def __eq__(self, other): if not hasattr(other, "__name__"): return False othername = other.__name__ if hasattr(other, "__module__"): othername = other.__module__ + "." + other.__name__ return self.__name__ == othername def __ne__(self, other): return not self == other def __hash__(self): return hash(self.__name__) def __instancecheck__(self, instance): return self.__subclasscheck__(instance.__class__) def __subclasscheck__(self, subclass): return self == subclass or ( bool(subclass.__bases__) and any(self.__subclasscheck__(base) for base in subclass.__bases__) ) class FakePackage(FakeModule): """A :class:`FakeModule` subclass which lazily creates :class:`FakePackage` instances on its attributes when they're requested. This ensures that any attribute of this module is a valid FakeModule which can be used to compare against fake classes. """ __path__ = [] def __call__(self, *args, **kwargs): # This mainly exists to print a nicer error message when # someone tries to call a FakePackage instance raise TypeError( "'{0}' FakePackage object is not callable".format(self.__name__) ) def __getattr__(self, name): modname = self.__name__ + "." + name mod = sys.modules.get(modname, None) if mod is None: try: __import__(modname) except: mod = FakePackage(modname) else: mod = sys.modules[modname] return mod class FakePackageLoader(object): """A :term:`loader` of :class:`FakePackage` modules. When added to :data:`sys.meta_path` it will ensure that any attempt to import module *root* or its submodules results in a FakePackage. Together with the attribute creation from :class:`FakePackage` this ensures that any attempt to get a submodule from module *root* results in a FakePackage, creating the illusion that *root* is an actual package tree. This class is both a `finder` and a `loader` """ def __init__(self, root): self.root = root # the old way of loading modules. find_module returns a loader for the # given module. In this case, that is this object itself again. def find_module(self, fullname, path=None): if fullname == self.root or fullname.startswith(self.root + "."): return self else: return None # the new way of loading modules. It returns a ModuleSpec, that has # the loader attribute set to this class. def find_spec(self, fullname, path, target=None): if fullname == self.root or fullname.startswith(self.root + "."): return ModuleSpec(fullname, self) else: return None # loader methods. This loads the module. 
def load_module(self, fullname): return FakePackage(fullname) # Fake unpickler implementation class FakeUnpicklingError(pickle.UnpicklingError): """Error raised when there is not enough information to perform the fake unpickling process completely. It inherits from :exc:`pickle.UnpicklingError`. """ pass class FakeUnpickler(pickle.Unpickler if PY2 else pickle._Unpickler): """A forgiving unpickler. On uncountering references to class definitions in the pickle stream which it cannot locate, it will create fake classes and if necessary fake modules to house them in. Since it still allows access to all modules and builtins, it should only be used to unpickle trusted data. *file* is the :term:`binary file` to unserialize. The optional keyword arguments are *class_factory*, *encoding and *errors*. *class_factory* can be used to control how the missing class definitions are created. If set to ``None``, ``FakeClassFactory((), FakeStrict)`` will be used. In Python 3, the optional keyword arguments *encoding* and *errors* can be used to indicate how the unpickler should deal with pickle streams generated in python 2, specifically how to deal with 8-bit string instances. If set to "bytes" it will load them as bytes objects, otherwise it will attempt to decode them into unicode using the given *encoding* and *errors* arguments. It inherits from :class:`pickle.Unpickler`. (In Python 3 this is actually ``pickle._Unpickler``) """ if PY2: def __init__( self, file, class_factory=None, encoding="bytes", errors="strict" ): pickle.Unpickler.__init__( self, file, ) self.class_factory = class_factory or FakeClassFactory() else: def __init__( self, file, class_factory=None, encoding="bytes", errors="strict" ): super().__init__( file, fix_imports=False, encoding=encoding, errors=errors ) self.class_factory = class_factory or FakeClassFactory() def find_class(self, module, name): mod = sys.modules.get(module, None) if mod is None: try: __import__(module) except: mod = FakeModule(module) else: mod = sys.modules[module] klass = getattr(mod, name, None) if klass is None or isinstance(klass, FakeModule): klass = self.class_factory(name, module) setattr(mod, name, klass) return klass class SafeUnpickler(FakeUnpickler): """A safe unpickler. It will create fake classes for any references to class definitions in the pickle stream. Further it can block access to the extension registry making this unpickler safe to use on untrusted data. *file* is the :term:`binary file` to unserialize. The optional keyword arguments are *class_factory*, *safe_modules*, *use_copyreg*, *encoding* and *errors*. *class_factory* can be used to control how the missing class definitions are created. If set to ``None``, ``FakeClassFactory((), FakeStrict)`` will be used. *safe_modules* can be set to a set of strings of module names, which will be regarded as safe by the unpickling process, meaning that it will import objects from that module instead of generating fake classes (this does not apply to objects in submodules). *use_copyreg* is a boolean value indicating if it's allowed to use extensions from the pickle extension registry (documented in the :mod:`copyreg` module). In Python 3, the optional keyword arguments *encoding* and *errors* can be used to indicate how the unpickler should deal with pickle streams generated in python 2, specifically how to deal with 8-bit string instances. If set to "bytes" it will load them as bytes objects, otherwise it will attempt to decode them into unicode using the given *encoding* and *errors* arguments. 
    This class can be used to unpickle untrusted data safely with the default
    class_factory when *safe_modules* is empty and *use_copyreg* is False.

    *unsafe_modules* can be set to a set of module names or dotted ``module.name``
    strings; when the pickle stream references one of these, a warning is printed.

    It inherits from :class:`pickle.Unpickler`. (In Python 3 this is actually
    ``pickle._Unpickler``)

    It should be noted though that when the unpickler tries to get a nonexistent
    attribute of a safe module, an :exc:`AttributeError` will be raised.

    This inherits from :class:`FakeUnpickler`
    """

    def __init__(
        self,
        file,
        class_factory=None,
        safe_modules=(),
        unsafe_modules=(),
        use_copyreg=False,
        encoding="bytes",
        errors="strict",
    ):
        FakeUnpickler.__init__(
            self, file, class_factory, encoding=encoding, errors=errors
        )
        self.safe_modules = set(safe_modules)
        self.unsafe_modules = set(unsafe_modules)
        self.use_copyreg = use_copyreg
        self.has_blocked_unsafe_build_instr = False

        # Hook the BUILD opcode to our custom handler. Work on a per-instance copy
        # of the dispatch table so the class-level table shared by all unpicklers
        # is not mutated, and store the plain function: the unpickling loop calls
        # dispatch entries as ``dispatch[key](unpickler)``, so storing a bound
        # method here would pass the instance twice.
        self.dispatch = dict(self.dispatch)
        self.dispatch[pickle.BUILD[0]] = type(self).load_build

    def find_class(self, module, name):
        # __main__ can be manipulated so it's
        # never safe to load real classes from it.
        if module == "__main__":
            return self.class_factory(name, module)

        if (
            module in self.unsafe_modules
            or f"{module}.{name}" in self.unsafe_modules
        ):
            print(f"Warning: {module}.{name} is unsafe")

        if module in self.safe_modules:
            if not sys.modules.get(module):
                return self.class_factory(name, module)

            mod = sys.modules[module]
            if not hasattr(mod, "__all__") or name in mod.__all__:
                klass = getattr(mod, name)
                return klass

        return self.class_factory(name, module)

    def get_extension(self, code):
        if self.use_copyreg:
            return FakeUnpickler.get_extension(self, code)
        else:
            return self.class_factory("extension_code_{0}".format(code), "copyreg")

    def _state_contains_fake_class(self, obj):
        """Recursively check if an object or its contents are FakeClass instances."""
        if isinstance(obj, FakeClass):
            return True
        if isinstance(obj, (list, tuple, set)):
            return any(self._state_contains_fake_class(item) for item in obj)
        if isinstance(obj, dict):
            return any(
                self._state_contains_fake_class(k) or self._state_contains_fake_class(v)
                for k, v in obj.items()
            )
        return False

    def load_build(self):
        """Custom handler for the BUILD opcode to prevent setting state of a real
        object with a fake object (potentially dangerous).
        """
        state = self.stack.pop()
        if not state:
            return
        inst = self.stack[-1]

        # Prevent a real object from being configured with a fake one.
        if not isinstance(inst, FakeClass) and self._state_contains_fake_class(state):
            self.has_blocked_unsafe_build_instr = True
            # Return to prevent inst.__setstate__(state) from being called.
            return

        inst.__setstate__(state)


class SafePickler(pickle.Pickler if PY2 else pickle._Pickler):
    """A pickler which can repickle object hierarchies containing objects created
    by SafeUnpickler.

    Due to reasons unknown, Python's pickle implementation will normally check if a
    given class actually matches with the object specified at the __module__ and
    __name__ of the class. Since this check is performed with object identity
    instead of object equality we cannot fake this from the classes themselves, and
    we need to override the method used for normally saving classes.
    """

    def save_global(self, obj, name=None, pack=None):
        if isinstance(obj, FakeClassType):
            if PY2:
                self.write(pickle.GLOBAL + obj.__module__ + "\n" + obj.__name__ + "\n")
            elif self.proto >= 4:
                self.save(obj.__module__)
                self.save(obj.__name__)
                self.write(pickle.STACK_GLOBAL)
            else:
                # In Python 3 the pickle stream is bytes, so the module/name pair
                # has to be encoded, not decoded.
                self.write(
                    pickle.GLOBAL
                    + (obj.__module__ + "\n" + obj.__name__ + "\n").encode("utf-8")
                )
            self.memoize(obj)
            return

        if PY2:
            pickle.Pickler.save_global(self, obj, name)
        else:
            # super() already binds self; passing it explicitly would shift the
            # arguments.
            super().save_global(obj, name)


# the main API


def load(file, class_factory=None, encoding="bytes", errors="strict"):
    """Read a pickled object representation from the open binary :term:`file object`
    *file* and return the reconstituted object hierarchy specified therein,
    generating any missing class definitions at runtime. This is equivalent to
    ``FakeUnpickler(file).load()``.

    The optional keyword arguments are *class_factory*, *encoding* and *errors*.

    *class_factory* can be used to control how the missing class definitions are
    created. If set to ``None``, ``FakeClassFactory((), FakeStrict)`` will be used.

    In Python 3, the optional keyword arguments *encoding* and *errors* can be used
    to indicate how the unpickler should deal with pickle streams generated in
    python 2, specifically how to deal with 8-bit string instances. If set to
    "bytes" it will load them as bytes objects, otherwise it will attempt to decode
    them into unicode using the given *encoding* and *errors* arguments.

    This function should only be used to unpickle trusted data.
    """
    return FakeUnpickler(
        file, class_factory, encoding=encoding, errors=errors
    ).load()


def loads(string, class_factory=None, encoding="bytes", errors="strict"):
    """Similar to :func:`load`, but takes an 8-bit string (bytes in Python 3, str in
    Python 2) as its first argument instead of a binary :term:`file object`.
    """
    return FakeUnpickler(
        StringIO(string), class_factory, encoding=encoding, errors=errors
    ).load()


def safe_load(
    file,
    class_factory=None,
    safe_modules=(),
    use_copyreg=False,
    encoding="bytes",
    errors="strict",
):
    """Read a pickled object representation from the open binary :term:`file object`
    *file* and return the reconstituted object hierarchy specified therein,
    substituting any class definitions by fake classes, ensuring safety in the
    unpickling process. This is equivalent to ``SafeUnpickler(file).load()``.

    The optional keyword arguments are *class_factory*, *safe_modules*,
    *use_copyreg*, *encoding* and *errors*.

    *class_factory* can be used to control how the missing class definitions are
    created. If set to ``None``, ``FakeClassFactory((), FakeStrict)`` will be used.

    *safe_modules* can be set to a set of strings of module names, which will be
    regarded as safe by the unpickling process, meaning that it will import objects
    from that module instead of generating fake classes (this does not apply to
    objects in submodules).

    *use_copyreg* is a boolean value indicating if it's allowed to use extensions
    from the pickle extension registry (documented in the :mod:`copyreg` module).

    In Python 3, the optional keyword arguments *encoding* and *errors* can be used
    to indicate how the unpickler should deal with pickle streams generated in
    python 2, specifically how to deal with 8-bit string instances. If set to
    "bytes" it will load them as bytes objects, otherwise it will attempt to decode
    them into unicode using the given *encoding* and *errors* arguments.

    This function can be used to unpickle untrusted data safely with the default
    class_factory when *safe_modules* is empty and *use_copyreg* is False.
    """
    # Pass use_copyreg by keyword: SafeUnpickler takes unsafe_modules before
    # use_copyreg, so passing it positionally would land in the wrong parameter.
    return SafeUnpickler(
        file,
        class_factory,
        safe_modules,
        use_copyreg=use_copyreg,
        encoding=encoding,
        errors=errors,
    ).load()


def safe_loads(
    string,
    class_factory=None,
    safe_modules=(),
    unsafe_modules=(),
    use_copyreg=False,
    encoding="bytes",
    errors="strict",
):
    """Similar to :func:`safe_load`, but takes an 8-bit string (bytes in Python 3,
    str in Python 2) as its first argument instead of a binary :term:`file object`.
    """
    return SafeUnpickler(
        StringIO(string),
        class_factory,
        safe_modules,
        unsafe_modules,
        use_copyreg,
        encoding=encoding,
        errors=errors,
    ).load()


def safe_dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL):
    """A convenience function wrapping SafePickler. It functions similarly to
    pickle.dump
    """
    SafePickler(file, protocol).dump(obj)


def safe_dumps(obj, protocol=pickle.HIGHEST_PROTOCOL):
    """A convenience function wrapping SafePickler. It functions similarly to
    pickle.dumps
    """
    file = StringIO()
    SafePickler(file, protocol).dump(obj)
    return file.getvalue()


def fake_package(name):
    """Mounts a fake package tree with the name *name*. This causes any attempt to
    import module *name*, attributes of the module or submodules to return a
    :class:`FakePackage` instance which implements the same behaviour. These
    :class:`FakePackage` instances compare properly with :class:`FakeClassType`
    instances allowing you to code using FakePackages as if the modules and their
    attributes actually existed.

    This is implemented by creating a :class:`FakePackageLoader` instance with root
    *name* and inserting it in the first spot in :data:`sys.meta_path`. This ensures
    that importing the module and submodules will work properly. Further the
    :class:`FakePackage` instances take care of generating submodules as attributes
    on request.

    If a fake package tree with the same *name* is already registered, no new fake
    package tree will be mounted.

    This returns the :class:`FakePackage` instance *name*.
    """
    if name in sys.modules and isinstance(sys.modules[name], FakePackage):
        return sys.modules[name]
    else:
        loader = FakePackageLoader(name)
        sys.meta_path.insert(0, loader)
        return __import__(name)


def remove_fake_package(name):
    """Removes the fake package tree mounted at *name*.

    This works by first looking for any FakePackageLoaders in :data:`sys.meta_path`
    with their root set to *name* and removing them from sys.meta_path. Next it will
    find the top-level :class:`FakePackage` instance *name* and from this point
    traverse the tree of created submodules, removing them from :data:`sys.modules`
    and removing their attributes. After this the modules are not registered anymore
    and if they are not referenced from user code anymore they will be garbage
    collected.

    If no fake package tree *name* exists a :exc:`ValueError` will be raised.
    """
    # Get the package entry via its entry in sys.modules
    package = sys.modules.get(name, None)
    if package is None:
        raise ValueError("No fake package with the name {0} found".format(name))

    if not isinstance(package, FakePackage):
        raise ValueError("The module {0} is not a fake package".format(name))

    # Attempt to remove the loader from sys.meta_path
    loaders = [
        i for i in sys.meta_path if isinstance(i, FakePackageLoader) and i.root == name
    ]
    for loader in loaders:
        sys.meta_path.remove(loader)

    # Remove all module and submodule entries from sys.modules
    package._remove()

    # It is impossible to kill references to the modules, but all traces
    # of it have been removed from the import machinery and the submodule
    # tree structure has been broken up.
github_python
2025-12-10T15:26:04Z
https://github.com/google/saferpickle/blob/1e10532f78f2fe4231162136b4cef5d9086be197/third_party/corrupy/picklemagic.py
{}
""" HRM ACT V2: Transformer Baseline for Architecture Ablation This is an architecture ablation of the Hierarchical Reasoning Model (HRM). Key changes from V1: 1. REMOVED hierarchical split (no separate H and L levels) 2. REMOVED inner cycles (no H_cycles/L_cycles loops within reasoning) 3. KEPT ACT outer loop structure intact 4. KEPT all data preprocessing, embeddings, and evaluation infrastructure Architecture: Single-level transformer that processes the full 30x30 grid as a 900-token sequence, with the same positional encodings and sparse embeddings as V1. """ from typing import Tuple, List, Dict, Optional from dataclasses import dataclass import math import torch import torch.nn.functional as F from torch import nn from pydantic import BaseModel from trm.models.common import trunc_normal_init_ from trm.models.layers import rms_norm, SwiGLU, Attention, RotaryEmbedding, CosSin, CastedEmbedding, CastedLinear from trm.models.sparse_embedding import CastedSparseEmbedding @dataclass class Model_ACTV2InnerCarry: z_H: torch.Tensor @dataclass class Model_ACTV2Carry: inner_carry: Model_ACTV2InnerCarry steps: torch.Tensor halted: torch.Tensor current_data: Dict[str, torch.Tensor] class Model_ACTV2Config(BaseModel): batch_size: int seq_len: int puzzle_emb_ndim: int = 0 num_puzzle_identifiers: int vocab_size: int H_cycles: int H_layers: int # Transformer config hidden_size: int expansion: float num_heads: int pos_encodings: str rms_norm_eps: float = 1e-5 rope_theta: float = 10000.0 # Halting Q-learning config halt_max_steps: int halt_exploration_prob: float act_enabled: bool = True # If False, always run halt_max_steps (no early stopping during training) act_inference: bool = False # If True, use adaptive computation during inference forward_dtype: str = "bfloat16" class Model_ACTV2Block(nn.Module): def __init__(self, config: Model_ACTV2Config) -> None: super().__init__() self.self_attn = Attention( hidden_size=config.hidden_size, head_dim=config.hidden_size // config.num_heads, num_heads=config.num_heads, num_key_value_heads=config.num_heads, causal=False, ) self.mlp = SwiGLU( hidden_size=config.hidden_size, expansion=config.expansion, ) self.norm_eps = config.rms_norm_eps def forward(self, cos_sin: CosSin, hidden_states: torch.Tensor) -> torch.Tensor: # Post Norm # Self Attention hidden_states = rms_norm( hidden_states + self.self_attn(cos_sin=cos_sin, hidden_states=hidden_states), variance_epsilon=self.norm_eps, ) # Fully Connected hidden_states = rms_norm(hidden_states + self.mlp(hidden_states), variance_epsilon=self.norm_eps) return hidden_states class Model_ACTV2ReasoningModule(nn.Module): def __init__(self, layers: List[Model_ACTV2Block]): super().__init__() self.layers = torch.nn.ModuleList(layers) def forward(self, hidden_states: torch.Tensor, input_injection: torch.Tensor, **kwargs) -> torch.Tensor: # Input injection (add) hidden_states = hidden_states + input_injection # Layers for layer in self.layers: hidden_states = layer(hidden_states=hidden_states, **kwargs) return hidden_states class Model_ACTV2_Inner(nn.Module): def __init__(self, config: Model_ACTV2Config) -> None: super().__init__() self.config = config self.forward_dtype = getattr(torch, self.config.forward_dtype) # I/O self.embed_scale = math.sqrt(self.config.hidden_size) embed_init_std = 1.0 / self.embed_scale self.embed_tokens = CastedEmbedding( self.config.vocab_size, self.config.hidden_size, init_std=embed_init_std, cast_to=self.forward_dtype, ) self.lm_head = CastedLinear(self.config.hidden_size, self.config.vocab_size, 
bias=False) self.q_head = CastedLinear(self.config.hidden_size, 2, bias=True) self.puzzle_emb_len = -(self.config.puzzle_emb_ndim // -self.config.hidden_size) # ceil div if self.config.puzzle_emb_ndim > 0: # Zero init puzzle embeddings self.puzzle_emb = CastedSparseEmbedding( self.config.num_puzzle_identifiers, self.config.puzzle_emb_ndim, batch_size=self.config.batch_size, init_std=0, cast_to=self.forward_dtype, ) # LM Blocks if self.config.pos_encodings == "rope": self.rotary_emb = RotaryEmbedding( dim=self.config.hidden_size // self.config.num_heads, max_position_embeddings=self.config.seq_len + self.puzzle_emb_len, base=self.config.rope_theta, ) elif self.config.pos_encodings == "learned": self.embed_pos = CastedEmbedding( self.config.seq_len + self.puzzle_emb_len, self.config.hidden_size, init_std=embed_init_std, cast_to=self.forward_dtype, ) else: raise NotImplementedError() # Reasoning Layers self.H_level = Model_ACTV2ReasoningModule( layers=[Model_ACTV2Block(self.config) for _i in range(self.config.H_layers)] ) # Initial states self.H_init = nn.Buffer( trunc_normal_init_(torch.empty(self.config.hidden_size, dtype=self.forward_dtype), std=1), persistent=True, ) # Q head special init # Init Q to (almost) zero for faster learning during bootstrapping with torch.no_grad(): self.q_head.weight.zero_() self.q_head.bias.fill_(-5) # type: ignore def _input_embeddings(self, input: torch.Tensor, puzzle_identifiers: torch.Tensor): # Token embedding embedding = self.embed_tokens(input.to(torch.int32)) # Puzzle embeddings if self.config.puzzle_emb_ndim > 0: puzzle_embedding = self.puzzle_emb(puzzle_identifiers) pad_count = self.puzzle_emb_len * self.config.hidden_size - puzzle_embedding.shape[-1] if pad_count > 0: puzzle_embedding = F.pad(puzzle_embedding, (0, pad_count)) embedding = torch.cat( (puzzle_embedding.view(-1, self.puzzle_emb_len, self.config.hidden_size), embedding), dim=-2 ) # Position embeddings if self.config.pos_encodings == "learned": # scale by 1/sqrt(2) to maintain forward variance embedding = 0.707106781 * (embedding + self.embed_pos.embedding_weight.to(self.forward_dtype)) # Scale return self.embed_scale * embedding def empty_carry(self, batch_size: int): return Model_ACTV2InnerCarry( z_H=torch.empty( batch_size, self.config.seq_len + self.puzzle_emb_len, self.config.hidden_size, dtype=self.forward_dtype, ), ) def reset_carry(self, reset_flag: torch.Tensor, carry: Model_ACTV2InnerCarry): return Model_ACTV2InnerCarry( z_H=torch.where(reset_flag.view(-1, 1, 1), self.H_init, carry.z_H), ) def forward( self, carry: Model_ACTV2InnerCarry, batch: Dict[str, torch.Tensor] ) -> Tuple[Model_ACTV2InnerCarry, torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: seq_info = dict( cos_sin=self.rotary_emb() if hasattr(self, "rotary_emb") else None, ) # Input encoding input_embeddings = self._input_embeddings(batch["inputs"], batch["puzzle_identifiers"]) # 1-step grad z_H = self.H_level(carry.z_H, input_embeddings, **seq_info) # LM Outputs new_carry = Model_ACTV2InnerCarry( z_H=z_H.detach(), ) # New carry no grad output = self.lm_head(z_H)[:, self.puzzle_emb_len :] # Q head q_logits = self.q_head(z_H[:, 0]).to(torch.float32) return new_carry, output, (q_logits[..., 0], q_logits[..., 1]) class Model_ACTV2(nn.Module): """ACT wrapper.""" def __init__(self, config_dict: dict): super().__init__() self.config = Model_ACTV2Config(**config_dict) self.inner = Model_ACTV2_Inner(self.config) @property def puzzle_emb(self): return self.inner.puzzle_emb def initial_carry(self, batch: Dict[str, 
torch.Tensor]): batch_size = batch["inputs"].shape[0] return Model_ACTV2Carry( inner_carry=self.inner.empty_carry( batch_size ), # Empty is expected, it will be reseted in first pass as all sequences are halted. steps=torch.zeros((batch_size,), dtype=torch.int32), halted=torch.ones((batch_size,), dtype=torch.bool), # Default to halted current_data={k: torch.empty_like(v) for k, v in batch.items()}, ) def forward( self, carry: Model_ACTV2Carry, batch: Dict[str, torch.Tensor], compute_target_q: bool = False, ) -> Tuple[Model_ACTV2Carry, Dict[str, torch.Tensor]]: # Update data, carry (removing halted sequences) new_inner_carry = self.inner.reset_carry(carry.halted, carry.inner_carry) new_steps = torch.where(carry.halted, 0, carry.steps) new_current_data = { k: torch.where(carry.halted.view((-1,) + (1,) * (batch[k].ndim - 1)), batch[k], v) for k, v in carry.current_data.items() } # Forward inner model new_inner_carry, logits, (q_halt_logits, q_continue_logits) = self.inner( new_inner_carry, new_current_data ) outputs = {"logits": logits, "q_halt_logits":
""" HRM ACT V2: Transformer Baseline for Architecture Ablation This is an architecture ablation of the Hierarchical Reasoning Model (HRM). Key changes from V1: 1. REMOVED hierarchical split (no separate H and L levels) 2. REMOVED inner cycles (no H_cycles/L_cycles loops within reasoning) 3. KEPT ACT outer loop structure intact 4. KEPT all data preprocessing, embeddings, and evaluation infrastructure Architecture: Single-level transformer that processes the full 30x30 grid as a 900-token sequence, with the same positional encodings and sparse embeddings as V1. """ from typing import Tuple, List, Dict, Optional from dataclasses import dataclass import math import torch import torch.nn.functional as F from torch import nn from pydantic import BaseModel from trm.models.common import trunc_normal_init_ from trm.models.layers import rms_norm, SwiGLU, Attention, RotaryEmbedding, CosSin, CastedEmbedding, CastedLinear from trm.models.sparse_embedding import CastedSparseEmbedding @dataclass class Model_ACTV2InnerCarry: z_H: torch.Tensor @dataclass class Model_ACTV2Carry: inner_carry: Model_ACTV2InnerCarry steps: torch.Tensor halted: torch.Tensor current_data: Dict[str, torch.Tensor] class Model_ACTV2Config(BaseModel): batch_size: int seq_len: int puzzle_emb_ndim: int = 0 num_puzzle_identifiers: int vocab_size: int H_cycles: int H_layers: int # Transformer config hidden_size: int expansion: float num_heads: int pos_encodings: str rms_norm_eps: float = 1e-5 rope_theta: float = 10000.0 # Halting Q-learning config halt_max_steps: int halt_exploration_prob: float act_enabled: bool = True # If False, always run halt_max_steps (no early stopping during training) act_inference: bool = False # If True, use adaptive computation during inference forward_dtype: str = "bfloat16" class Model_ACTV2Block(nn.Module): def __init__(self, config: Model_ACTV2Config) -> None: super().__init__() self.self_attn = Attention( hidden_size=config.hidden_size, head_dim=config.hidden_size // config.num_heads, num_heads=config.num_heads, num_key_value_heads=config.num_heads, causal=False, ) self.mlp = SwiGLU( hidden_size=config.hidden_size, expansion=config.expansion, ) self.norm_eps = config.rms_norm_eps def forward(self, cos_sin: CosSin, hidden_states: torch.Tensor) -> torch.Tensor: # Post Norm # Self Attention hidden_states = rms_norm( hidden_states + self.self_attn(cos_sin=cos_sin, hidden_states=hidden_states), variance_epsilon=self.norm_eps, ) # Fully Connected hidden_states = rms_norm(hidden_states + self.mlp(hidden_states), variance_epsilon=self.norm_eps) return hidden_states class Model_ACTV2ReasoningModule(nn.Module): def __init__(self, layers: List[Model_ACTV2Block]): super().__init__() self.layers = torch.nn.ModuleList(layers) def forward(self, hidden_states: torch.Tensor, input_injection: torch.Tensor, **kwargs) -> torch.Tensor: # Input injection (add) hidden_states = hidden_states + input_injection # Layers for layer in self.layers: hidden_states = layer(hidden_states=hidden_states, **kwargs) return hidden_states class Model_ACTV2_Inner(nn.Module): def __init__(self, config: Model_ACTV2Config) -> None: super().__init__() self.config = config self.forward_dtype = getattr(torch, self.config.forward_dtype) # I/O self.embed_scale = math.sqrt(self.config.hidden_size) embed_init_std = 1.0 / self.embed_scale self.embed_tokens = CastedEmbedding( self.config.vocab_size, self.config.hidden_size, init_std=embed_init_std, cast_to=self.forward_dtype, ) self.lm_head = CastedLinear(self.config.hidden_size, self.config.vocab_size, 
bias=False) self.q_head = CastedLinear(self.config.hidden_size, 2, bias=True) self.puzzle_emb_len = -(self.config.puzzle_emb_ndim // -self.config.hidden_size) # ceil div if self.config.puzzle_emb_ndim > 0: # Zero init puzzle embeddings self.puzzle_emb = CastedSparseEmbedding( self.config.num_puzzle_identifiers, self.config.puzzle_emb_ndim, batch_size=self.config.batch_size, init_std=0, cast_to=self.forward_dtype, ) # LM Blocks if self.config.pos_encodings == "rope": self.rotary_emb = RotaryEmbedding( dim=self.config.hidden_size // self.config.num_heads, max_position_embeddings=self.config.seq_len + self.puzzle_emb_len, base=self.config.rope_theta, ) elif self.config.pos_encodings == "learned": self.embed_pos = CastedEmbedding( self.config.seq_len + self.puzzle_emb_len, self.config.hidden_size, init_std=embed_init_std, cast_to=self.forward_dtype, ) else: raise NotImplementedError() # Reasoning Layers self.H_level = Model_ACTV2ReasoningModule( layers=[Model_ACTV2Block(self.config) for _i in range(self.config.H_layers)] ) # Initial states self.H_init = nn.Buffer( trunc_normal_init_(torch.empty(self.config.hidden_size, dtype=self.forward_dtype), std=1), persistent=True, ) # Q head special init # Init Q to (almost) zero for faster learning during bootstrapping with torch.no_grad(): self.q_head.weight.zero_() self.q_head.bias.fill_(-5) # type: ignore def _input_embeddings(self, input: torch.Tensor, puzzle_identifiers: torch.Tensor): # Token embedding embedding = self.embed_tokens(input.to(torch.int32)) # Puzzle embeddings if self.config.puzzle_emb_ndim > 0: puzzle_embedding = self.puzzle_emb(puzzle_identifiers) pad_count = self.puzzle_emb_len * self.config.hidden_size - puzzle_embedding.shape[-1] if pad_count > 0: puzzle_embedding = F.pad(puzzle_embedding, (0, pad_count)) embedding = torch.cat( (puzzle_embedding.view(-1, self.puzzle_emb_len, self.config.hidden_size), embedding), dim=-2 ) # Position embeddings if self.config.pos_encodings == "learned": # scale by 1/sqrt(2) to maintain forward variance embedding = 0.707106781 * (embedding + self.embed_pos.embedding_weight.to(self.forward_dtype)) # Scale return self.embed_scale * embedding def empty_carry(self, batch_size: int): return Model_ACTV2InnerCarry( z_H=torch.empty( batch_size, self.config.seq_len + self.puzzle_emb_len, self.config.hidden_size, dtype=self.forward_dtype, ), ) def reset_carry(self, reset_flag: torch.Tensor, carry: Model_ACTV2InnerCarry): return Model_ACTV2InnerCarry( z_H=torch.where(reset_flag.view(-1, 1, 1), self.H_init, carry.z_H), ) def forward( self, carry: Model_ACTV2InnerCarry, batch: Dict[str, torch.Tensor] ) -> Tuple[Model_ACTV2InnerCarry, torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: seq_info = dict( cos_sin=self.rotary_emb() if hasattr(self, "rotary_emb") else None, ) # Input encoding input_embeddings = self._input_embeddings(batch["inputs"], batch["puzzle_identifiers"]) # 1-step grad z_H = self.H_level(carry.z_H, input_embeddings, **seq_info) # LM Outputs new_carry = Model_ACTV2InnerCarry( z_H=z_H.detach(), ) # New carry no grad output = self.lm_head(z_H)[:, self.puzzle_emb_len :] # Q head q_logits = self.q_head(z_H[:, 0]).to(torch.float32) return new_carry, output, (q_logits[..., 0], q_logits[..., 1]) class Model_ACTV2(nn.Module): """ACT wrapper.""" def __init__(self, config_dict: dict): super().__init__() self.config = Model_ACTV2Config(**config_dict) self.inner = Model_ACTV2_Inner(self.config) @property def puzzle_emb(self): return self.inner.puzzle_emb def initial_carry(self, batch: Dict[str, 
torch.Tensor]): batch_size = batch["inputs"].shape[0] return Model_ACTV2Carry( inner_carry=self.inner.empty_carry( batch_size ), # Empty is expected, it will be reseted in first pass as all sequences are halted. steps=torch.zeros((batch_size,), dtype=torch.int32), halted=torch.ones((batch_size,), dtype=torch.bool), # Default to halted current_data={k: torch.empty_like(v) for k, v in batch.items()}, ) def forward( self, carry: Model_ACTV2Carry, batch: Dict[str, torch.Tensor], compute_target_q: bool = False, ) -> Tuple[Model_ACTV2Carry, Dict[str, torch.Tensor]]: # Update data, carry (removing halted sequences) new_inner_carry = self.inner.reset_carry(carry.halted, carry.inner_carry) new_steps = torch.where(carry.halted, 0, carry.steps) new_current_data = { k: torch.where(carry.halted.view((-1,) + (1,) * (batch[k].ndim - 1)), batch[k], v) for k, v in carry.current_data.items() } # Forward inner model new_inner_carry, logits, (q_halt_logits, q_continue_logits) = self.inner( new_inner_carry, new_current_data ) outputs = {"logits": logits, "q_halt_logits": q_halt_logits, "q_continue_logits": q_continue_logits} with torch.no_grad(): # Step new_steps = new_steps + 1 is_last_step = new_steps >= self.config.halt_max_steps halted = is_last_step # Check if adaptive computation should be used use_adaptive = (self.config.halt_max_steps > 1) and ( (self.training and self.config.act_enabled) or (not self.training and self.config.act_inference) ) if use_adaptive: # Halt signal based on Q-values (but always halt at max steps) q_halt_signal = q_halt_logits > q_continue_logits halted = halted | q_halt_signal # Store actual steps used for logging (only during inference) if not self.training: outputs["actual_steps"] = new_steps.float() # Exploration (only during training) if self.training: min_halt_steps = ( torch.rand_like(q_halt_logits) < self.config.halt_exploration_prob ) * torch.randint_like(new_steps, low=2, high=self.config.halt_max_steps + 1) halted = halted & (new_steps >= min_halt_steps) # Compute target Q (only during training) # NOTE: No replay buffer and target networks for computing target Q-value. # As batch_size is large, there're many parallel envs. # Similar concept as PQN https://arxiv.org/abs/2407.04811 if self.training and compute_target_q: next_q_halt_logits, next_q_continue_logits = self.inner( new_inner_carry, new_current_data )[-1] outputs["target_q_continue"] = torch.sigmoid( torch.where( is_last_step, next_q_halt_logits, torch.maximum(next_q_halt_logits, next_q_continue_logits), ) ) return Model_ACTV2Carry( new_inner_carry, new_steps, halted, new_current_data ), outputs
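# --- Illustrative usage sketch (not part of the original file) ---
# Shows how the ACT wrapper is driven from the outside: build a config dict, create
# the initial carry for a batch, then call forward() repeatedly until every sequence
# has halted. All hyperparameter values are arbitrary placeholders chosen to keep the
# example tiny, not the settings used in the ablation, and it assumes the trm.models
# modules imported at the top of this file (and their attention kernels) are usable
# in the current environment.
def _demo_act_loop():
    import torch

    config = dict(
        batch_size=2,
        seq_len=16,
        puzzle_emb_ndim=0,
        num_puzzle_identifiers=1,
        vocab_size=12,
        H_cycles=1,
        H_layers=2,
        hidden_size=64,
        expansion=2.0,
        num_heads=4,
        pos_encodings="rope",
        halt_max_steps=4,
        halt_exploration_prob=0.1,
    )
    model = Model_ACTV2(config)
    model.eval()  # act_inference defaults to False, so eval runs all halt_max_steps

    batch = {
        "inputs": torch.randint(0, config["vocab_size"], (2, config["seq_len"])),
        "puzzle_identifiers": torch.zeros(2, dtype=torch.int32),
    }
    carry = model.initial_carry(batch)
    with torch.no_grad():
        # One outer ACT step per iteration; carry.halted marks finished sequences.
        for _ in range(config["halt_max_steps"]):
            carry, outputs = model(carry, batch)
            if bool(carry.halted.all()):
                break
    print(outputs["logits"].shape)  # (batch, seq_len, vocab_size)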
github_python
2025-12-15T05:03:22Z
https://github.com/alphaXiv/paper-implementations/blob/8baa0591a41119b4d0659667147737b1d1202fb5/TinyRecursiveModels/src/trm/models/architectures/transformers_baseline.py
{}
from langchain_community.vectorstores import FAISS from langchain_community.embeddings import HuggingFaceEmbeddings from langchain_core.prompts import ChatPromptTemplate from langchain_core.runnables import RunnablePassthrough from langchain_core.output_parsers import StrOutputParser from langchain_community.llms import Ollama import os VECTOR_DIR = "vectorstore" def get_rag_chain(user_level="beginner"): embeddings = HuggingFaceEmbeddings( model_name="sentence-transformers/all-MiniLM-L6-v2" ) db = FAISS.load_local( VECTOR_DIR, embeddings, allow_dangerous_deserialization=True ) retriever = db.as_retriever(search_kwargs={"k": 3}) system_prompt = f""" You are an intelligent assistant. Explain answers according to the user's level: {user_level}. Be clear, accurate, and concise. Use the provided context only. """ prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), ("human", "Context:\n{context}\n\nQuestion:\n{question}") ]) llm = Ollama(model="llama3") chain = ( { "context": retriever, "question": RunnablePassthrough() } | prompt | llm | StrOutputParser() ) return chain
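# --- Illustrative usage sketch (not part of the original module) ---
# Assumes the FAISS index in `vectorstore/` has already been built and that a local
# Ollama server is running with the "llama3" model pulled; the question string is a
# placeholder. The chain takes the raw question, retrieves the top-3 chunks as
# context, and returns the parsed model answer.
if __name__ == "__main__":
    chain = get_rag_chain(user_level="beginner")
    answer = chain.invoke("What does the indexing step of this project do?")
    print(answer)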
github_python
2025-12-14T18:59:31Z
https://github.com/Dark-Vinaal/RAG/blob/75c41f6aa78588c71eacc80e1516077f0bef2f86/rag_pipeline.py
{}
from typing import Union, List, Dict import unicodedata # 城市每月社保上限(养老、失业、医疗) CITY_SOCIAL_UPPER_LIMITS = { "北京": {"pension": 2864.88, "unemployment": 179.06, "medical": 716.22}, "杭州": {"pension": 1994.4, "unemployment": 124.65, "medical": 498.6}, "上海": {"pension": 2984.16, "unemployment": 186.51, "medical": 746.04}, "深圳": {"pension": 2200.08, "unemployment": 221.325, "medical": 673.32}, # 养老保险上限未更新仍按27501计算; 职工一档医保 } # 城市公积金基数上限 = 城市社保上限 * 公积金比例上限 CITY_HOUSING_FUND_LIMITS = { "北京": 35811, "杭州": 40694, "上海": 37302, # 上海2024年社平工资为12434元。故2025年度缴存基数上限据此计算为37302元,上海公积金比例上限7% "深圳": 44265, } # 年度综合所得税率表(保持不变,用于工资薪金计税) TAX_RATE_TABLE = [ (36000, 0.03, 0), (144000, 0.10, 2520), (300000, 0.20, 16920), (420000, 0.25, 31920), (660000, 0.30, 52920), (960000, 0.35, 85920), (float('inf'), 0.45, 181920), ] # 月度税率表(用于年终奖单独计税,按月换算后的综合所得税率表) MONTHLY_TAX_RATE_TABLE = [ (3000, 0.03, 0), (12000, 0.10, 210), (25000, 0.20, 1410), (35000, 0.25, 2660), (55000, 0.30, 4410), (80000, 0.35, 7160), (float('inf'), 0.45, 15160), ] # ================== 城市社保 + 房租专项附加配置 ================== CITY_CONFIG: Dict[str, Dict] = { 'Beijing': { 'shebao_cap': 35283, # 社保缴费上限 'shebao_min': 6821, # 社保缴费下限 'gongjijin_cap': 35283, # 公积金缴费上限 'rate_pension': 0.08, # 养老个人 'rate_medical': 0.02, # 医疗个人 'medical_fixed': 3, # 医疗 3 元大病统筹 'rate_unemploy': 0.005, # 失业个人 'rate_housing': 0.12, # 公积金个人 'rent_deduction': 1500 # 房租专项附加,单位:元/月 }, 'Hangzhou': { 'shebao_cap': 24930, 'shebao_min': 4462, 'gongjijin_cap': 39527, 'rate_pension': 0.08, 'rate_medical': 0.02, 'medical_fixed': 0, 'rate_unemploy': 0.005, 'rate_housing': 0.12, 'rent_deduction': 1500 # 房租专项附加,单位:元/月 } } STARTING_POINT_PER_MONTH = 5000 # ================== Offer ================== # stock_annual: 假设“当年归属”的股票票面价值(税前,元) # stock_flat_rate: 某些公司 粗暴按 20% 直接扣税;其余 None 走累进 # intern_percent: 实习月薪 = base * intern_percent + allowance;或者 intern_monthly 直接写实习月薪 COMPANIES: List[Dict] = [ { "name": "AAA", "base": 23000, "months": 15, "allowance": 0, "sign_on": 0, "city": "Beijing", "stock_annual": 0, "stock_flat_rate": None, }, { "name": "BBB", "base": 20000, "months": 13, "allowance": 0, "sign_on": 0, "city": "Beijing", "stock_annual": 200000, "stock_flat_rate": 0.2, }, ] def get_tax_rate(amount: float): """年度综合所得税率表(工资全年/股票按年算用这个)""" for upper, rate, qd in TAX_RATE_TABLE: if amount <= upper: return rate, qd return TAX_RATE_TABLE[-1][1], TAX_RATE_TABLE[-1][2] def get_monthly_tax_rate(amount: float): """月度税率表(年终奖换算 & 实习按月简单算税用这个)""" for upper, rate, qd in MONTHLY_TAX_RATE_TABLE: if amount <= upper: return rate, qd return MONTHLY_TAX_RATE_TABLE[-1][1], MONTHLY_TAX_RATE_TABLE[-1][2] def get_bonus_tax_rate(monthly_amount: float): return get_monthly_tax_rate(monthly_amount) def calculate_monthly_details( monthly_salaries: Union[float, List[float]], social_security_bases: Union[float, List[float]], city: str = "北京", five_insurance_rate: float = 0.105, housing_fund_rate: float = 0.12, ) -> Dict[str, List[Union[float, Dict]]]: """ 计算本年每月详细薪资数据 返回: - monthly: 每个月的当月值明细(表格用) - annual: 全年累计值汇总(年度区域用) """ if isinstance(monthly_salaries, (int, float)): monthly_salaries = [monthly_salaries] * 12 elif isinstance(monthly_salaries, list) and len(monthly_salaries) != 12: raise ValueError("月薪需为单个数值或12个元素的列表") if isinstance(social_security_bases, (int, float)): social_security_bases = [social_security_bases] * 12 elif isinstance(social_security_bases, list) and len(social_security_bases) != 12: raise ValueError("社保基数需为单个数值或12个元素的列表") cumulative_income = 0.0 cumulative_social_housing = 0.0 cumulative_housing_fund = 0.0 
    cumulative_tax = 0.0
    monthly_details = []
    annual_summary: Dict[str, float] = {}
    for month in range(1, 13):
        current_salary = monthly_salaries[month - 1]
        current_social_base = social_security_bases[month - 1]
        pension_upper = CITY_SOCIAL_UPPER_LIMITS[city]["pension"]
        pension = min(current_social_base * 0.08, pension_upper)
        medical_upper = CITY_SOCIAL_UPPER_LIMITS[city]["medical"]
        medical = min(current_social_base * 0.02, medical_upper)
        unemployment_upper = CITY_SOCIAL_UPPER_LIMITS[city]["unemployment"]
        unemployment = min(current_social_base * 0.005, unemployment_upper)
        social_total = pension + medical + unemployment
        housing_limit = CITY_HOUSING_FUND_LIMITS.get(city, float('inf'))
        housing_fund = min(current_social_base, housing_limit) * housing_fund_rate
        total_social_housing = social_total + housing_fund
        cumulative_income += current_salary
        cumulative_social_housing += total_social_housing
        cumulative_housing_fund += housing_fund
        cumulative_taxable_income = cumulative_income - 5000 * month - cumulative_social_housing
        cumulative_monthly_tax = 0.0
        for limit, rate, deduction in TAX_RATE_TABLE:
            if cumulative_taxable_income <= limit:
                cumulative_monthly_tax = cumulative_taxable_income * rate - deduction
                break
        current_month_tax = cumulative_monthly_tax - cumulative_tax
        current_month_tax = max(current_month_tax, 0.0)
        cumulative_tax = cumulative_monthly_tax
        takehome = current_salary - social_total - housing_fund - current_month_tax
        takehome = max(takehome, 0.0)
        monthly_details.append({
            "month": month,
            "pre_tax_income": round(current_salary, 2),
            "pension": round(pension, 2),
            "medical": round(medical, 2),
            "unemployment": round(unemployment, 2),
            "housing_fund": round(housing_fund, 2),
            "taxable_income": round(cumulative_taxable_income, 2),
            "current_tax": round(current_month_tax, 2),
            "takehome": round(takehome, 2),
        })

    total_pre_tax = round(cumulative_income, 2)
    total_housing_fund = round(cumulative_housing_fund, 2)
    total_tax = round(cumulative_tax, 2)
    total_takehome = round(cumulative_income - cumulative_social_housing - cumulative_tax, 2)
    total_takehome_with_housing = total_takehome + total_housing_fund * 2
    annual_summary = {
        "total_pre_tax": total_pre_tax,
        "total_housing_fund": total_housing_fund,
        "total_tax": total_tax,
        "total_takehome": total_takehome,
        "total_takehome_with_housing": total_takehome_with_housing,
    }
    return {"monthly": monthly_details, "annual": annual_summary}


def calculate_year_end_bonus(year_end_bonus: float) -> Dict[str, float]:
    """Compute the tax, tax rate, and after-tax amount for a year-end bonus taxed separately."""
    if year_end_bonus <= 0:
        raise ValueError("Year-end bonus must be greater than 0")
    monthly_income = year_end_bonus / 12
    tax_rate = MONTHLY_TAX_RATE_TABLE[-1][1]
    quick_deduction = MONTHLY_TAX_RATE_TABLE[-1][2]
    for limit, rate, deduction in MONTHLY_TAX_RATE_TABLE:
        if monthly_income <= limit:
            tax_rate = rate
            quick_deduction = deduction
            break
    bonus_tax = year_end_bonus * tax_rate - quick_deduction
    bonus_after_tax = year_end_bonus - bonus_tax
    return {
        "tax": round(bonus_tax, 2),
        "after_tax": round(bonus_after_tax, 2),
        "tax_rate": round(tax_rate * 100, 2),
    }


def calc_insurance(income: float, city: str) -> float:
    cfg = CITY_CONFIG.get(city, CITY_CONFIG['Beijing'])
    base_sb = max(min(income, cfg['shebao_cap']), cfg['shebao_min'])
    base_gjj = max(min(income, cfg['gongjijin_cap']), cfg['shebao_min'])
    deduction = (
        base_sb * (cfg['rate_pension'] + cfg['rate_unemploy'] + cfg['rate_medical'])
        + cfg['medical_fixed']
        + base_gjj * cfg['rate_housing']
    )
    return deduction


def display_width(text: str) -> int:
    width = 0
    for ch in str(text):
        width += 2 if unicodedata.east_asian_width(ch) in ('F', 'W') else 1
    return width


def pad(text: str, width: int, align: str = 'left') -> str:
    cur = display_width(text)
    if cur >= width:
        return text
    spaces = ' ' * (width - cur)
    if align == 'right':
        return spaces + text
    return text + spaces


def run_calculation(company_data: Dict) -> Dict:
    base = company_data['base']
    months = company_data['months']
    allowance = company_data['allowance']
    sign_on = company_data['sign_on']
    city = company_data['city']
    stock_annual = company_data.get('stock_annual', 0)
    stock_flat_rate = company_data.get('stock_flat_rate')
    monthly_fixed = base + allowance
    bonus_months = max(0, months - 12)
    year_end_bonus = base * bonus_months

    stock_tax = 0.0
    stock_net = 0.0
    if stock_annual > 0:
        stock_tax = stock_annual * 0.20
        stock_net = stock_annual - stock_tax

    # ---- Year-end bonus (taxed separately) ----
    bonus_tax = 0.0
    bonus_net = 0.0
    if year_end_bonus > 0:
        rate, qd = get_bonus_tax_rate(year_end_bonus / 12)
        bonus_tax = year_end_bonus * rate - qd
        bonus_net = year_end_bonus - bonus_tax

    # ---- Salary + sign-on bonus, using the cumulative withholding method ----
    cumulative_income_net = 0.0
    cumulative_taxable = 0.0
    cumulative_tax_paid = 0.0
    cumulative_social = 0.0
    cumulative_housing = 0.0
    monthly_nets: List[float] = []
    first_month_net = 0.0
    rent_deduction = CITY_CONFIG.get(city, CITY_CONFIG['Beijing']).get('rent_deduction', 0)
    for m in range(1, 13):
        # Fixed monthly income
        current_income = monthly_fixed
        # Simplification: the sign-on bonus is folded into the first month's salary
        if m == 1:
            current_income += sign_on
        # Social insurance and housing fund are based on the fixed monthly salary;
        # the sign-on bonus is not counted into the contribution base
        cfg = CITY_CONFIG.get(city, CITY_CONFIG['Beijing'])
        base_sb = max(min(monthly_fixed, cfg['shebao_cap']), cfg['shebao_min'])
        base_gjj = max(min(monthly_fixed, cfg['gongjijin_cap']), cfg['shebao_min'])
        housing_part = base_gjj * cfg['rate_housing']
        social_part = base_sb * (cfg['rate_pension'] + cfg['rate_unemploy'] + cfg['rate_medical']) + cfg['medical_fixed']
        insurance = social_part + housing_part
        # The only special additional deduction is rent: basic threshold + rent deduction
        taxable = max(0, current_income - STARTING_POINT_PER_MONTH - rent_deduction - insurance)
        cumulative_taxable += taxable
        rate, qd = get_tax_rate(cumulative_taxable)
        cum_tax = cumulative_taxable * rate - qd
        cur_tax = max(0, cum_tax - cumulative_tax_paid)
        net = current_income - cur_tax - insurance
        if m == 1:
            first_month_net = net
        cumulative_tax_paid += cur_tax
        cumulative_income_net += net
        cumulative_social += social_part
        cumulative_housing += housing_part
        monthly_nets.append(net)

    monthly_max = max(monthly_nets) if monthly_nets else 0.0
    monthly_min = min(monthly_nets) if monthly_nets else 0.0
    total_net = cumulative_income_net + bonus_net + stock_net
    return {
        "name": company_data['name'],
        "total_gross_w": (monthly_fixed * 12 + year_end_bonus + sign_on + stock_annual) / 10000,
        "stock_gross_w": stock_annual / 10000,
        "stock_net_w": stock_net / 10000,
        "salary_net_w": (cumulative_income_net + bonus_net) / 10000,
        "first_month_w": first_month_net / 10000,
        "monthly_min_w": monthly_min / 10000,
        "monthly_max_w": monthly_max / 10000,
        "annual_tax_w": cumulative_tax_paid,
        "annual_social_w": cumulative_social,
        "annual_housing_w": cumulative_housing,
        "total_net_w": total_net / 10000,
    }


def run_internship_calculation(company_data: Dict, months: int = 3) -> Dict | None:
    """
    Assume a 3-month internship starting in March:
    - no social insurance or housing fund contributions;
    - the rent special additional deduction applies;
    - tax is computed simply with the monthly rate table for salary income
      (not with the full-year cumulative method).
    """
    city = company_data['city']
    intern_monthly = company_data.get('intern_monthly')
    if intern_monthly is None:
        intern_percent = company_data.get('intern_percent')
        if intern_percent is None:
            return None
        intern_monthly = company_data['base'] * intern_percent + company_data.get('allowance', 0)
    rent_deduction = CITY_CONFIG.get(city, CITY_CONFIG['Beijing']).get('rent_deduction', 0)
    taxable_per_month = max(0, intern_monthly - STARTING_POINT_PER_MONTH - rent_deduction)
    monthly_nets: List[float] = []
    cumulative_taxable = 0.0
    cumulative_tax_paid = 0.0
    for _ in range(months):
        cumulative_taxable += taxable_per_month
        if cumulative_taxable <= 0:
            cur_tax = 0.0
        else:
            rate, qd = get_monthly_tax_rate(cumulative_taxable)
            cum_tax = cumulative_taxable * rate - qd
            cur_tax = max(0.0, cum_tax - cumulative_tax_paid)
        cumulative_tax_paid += cur_tax
        net = intern_monthly - cur_tax
        monthly_nets.append(net)

    if monthly_nets:
        net_month_avg = sum(monthly_nets) / len(monthly_nets)
        net_total = sum(monthly_nets)
    else:
        net_month_avg = 0.0
        net_total = 0.0
    return {
        "name": company_data['name'],
        "intern_city": city,
        "intern_months": months,
        "intern_gross_month_w": intern_monthly / 10000,
        "intern_net_month_w": net_month_avg / 10000,
        "intern_gross_total_w": intern_monthly * months / 10000,
        "intern_net_total_w": net_total / 10000,
    }


FULLTIME_COLUMNS = [
    {"key": "name", "title": "公司", "width": 16, "align": "left"},
    {"key": "total_gross_w", "title": "税前总包", "width": 10, "align": "right"},
    {"key": "stock_gross_w", "title": "税前股票", "width": 10, "align": "right"},
    {"key": "salary_net_w", "title": "工资到手", "width": 10, "align": "right"},
    {"key": "first_month_w", "title": "首月到手", "width": 12, "align": "right"},
    {"key": "monthly_min_w", "title": "月到手最小", "width": 12, "align": "right"},
    {"key": "monthly_max_w", "title": "月到手最大", "width": 12, "align": "right"},
    {"key": "stock_net_w", "title": "股票/期权到手", "width": 12, "align": "right"},
    {"key": "annual_tax_w", "title": "年个税", "width": 10, "align": "right"},
    {"key": "annual_social_w", "title": "年社保", "width": 10, "align": "right"},
    {"key": "annual_housing_w", "title": "年公积金", "width": 10, "align": "right"},
    # {"key": "intern_net_month_w", "title": "实习月均到手", "width": 12, "align": "right"},
    {"key": "total_net_w", "title": "总到手", "width": 10, "align": "right"},
]

INTERNSHIP_COLUMNS = [
    {"key": "name", "title": "公司", "width": 16, "align": "left"},
    {"key": "intern_city", "title": "城市", "width": 8, "align": "left"},
    {"key": "intern_gross_month_w", "title": "实习月薪", "width": 12, "align": "right"},
    {"key": "intern_net_month_w", "title": "实习月到手", "width": 12, "align": "right"},
    {"key": "intern_months", "title": "实习月数", "width": 8, "align": "right"},
    {"key": "intern_gross_total_w", "title": "实习总税前", "width": 12, "align": "right"},
    {"key": "intern_net_total_w", "title": "实习总到手", "width": 12, "align": "right"},
]


def main():
    header = " | ".join(
        pad(col["title"], col["width"], col["align"]) for col in FULLTIME_COLUMNS
    )
    separator_len = sum(col["width"] for col in FULLTIME_COLUMNS) + 3 * (len(FULLTIME_COLUMNS) - 1)
    print(header)
    print("-" * separator_len)
    results: List[Dict] = []
    for comp in COMPANIES:
        res = run_calculation(comp)
        # If there is an internship offer, attach the average monthly intern take-home
        # to the main table as well; otherwise set it to 0
        intern_res = run_internship_calculation(comp, months=3)
        if intern_res:
            res["intern_net_month_w"] = intern_res["intern_net_month_w"]
        else:
            res["intern_net_month_w"] = 0.0
        results.append(res)
    for res in results:
        row_items = []
        for col in FULLTIME_COLUMNS:
            key = col["key"]
            if key == "name":
                value = res[key]
            elif key in {"annual_tax_w", "annual_social_w", "annual_housing_w"}:
                value = f"{res[key]:.0f}"
            else:
                value = f"{res[key]:.1f}"
            row_items.append(pad(value, col["width"], col["align"]))
        print(" | ".join(row_items))
    print("-" * separator_len)
    best = max(results, key=lambda x: x['total_net_w'])


if __name__ == "__main__":
    main()
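The excerpt above relies on several module-level tables and helpers defined earlier in the file and not shown here (unicodedata, STARTING_POINT_PER_MONTH, TAX_RATE_TABLE, MONTHLY_TAX_RATE_TABLE, CITY_SOCIAL_UPPER_LIMITS, CITY_HOUSING_FUND_LIMITS, CITY_CONFIG, COMPANIES, get_tax_rate, get_monthly_tax_rate, get_bonus_tax_rate). A minimal sketch of the shapes these objects plausibly take, written purely as an assumption so the excerpt can be read on its own; the bracket values follow the standard Chinese individual income tax tables, but the city figures are illustrative only:

# Hypothetical definitions assumed by the excerpt above; the real file defines these elsewhere.
import unicodedata  # needed by display_width()
from typing import Dict, List, Tuple

STARTING_POINT_PER_MONTH = 5000  # monthly basic deduction used by the cumulative method

# Rows of (upper taxable limit, rate, quick deduction), ordered from the lowest bracket upward.
TAX_RATE_TABLE: List[Tuple[float, float, float]] = [
    (36000, 0.03, 0), (144000, 0.10, 2520), (300000, 0.20, 16920),
    (420000, 0.25, 31920), (660000, 0.30, 52920), (960000, 0.35, 85920),
    (float("inf"), 0.45, 181920),
]
MONTHLY_TAX_RATE_TABLE: List[Tuple[float, float, float]] = [
    (3000, 0.03, 0), (12000, 0.10, 210), (25000, 0.20, 1410),
    (35000, 0.25, 2660), (55000, 0.30, 4410), (80000, 0.35, 7160),
    (float("inf"), 0.45, 15160),
]

def get_tax_rate(cumulative_taxable: float) -> Tuple[float, float]:
    """Return (rate, quick deduction) from the annual cumulative table."""
    for limit, rate, deduction in TAX_RATE_TABLE:
        if cumulative_taxable <= limit:
            return rate, deduction
    return TAX_RATE_TABLE[-1][1], TAX_RATE_TABLE[-1][2]

def get_monthly_tax_rate(monthly_taxable: float) -> Tuple[float, float]:
    """Return (rate, quick deduction) from the monthly table."""
    for limit, rate, deduction in MONTHLY_TAX_RATE_TABLE:
        if monthly_taxable <= limit:
            return rate, deduction
    return MONTHLY_TAX_RATE_TABLE[-1][1], MONTHLY_TAX_RATE_TABLE[-1][2]

get_bonus_tax_rate = get_monthly_tax_rate  # the separately taxed year-end bonus uses the monthly brackets

CITY_CONFIG: Dict[str, Dict[str, float]] = {
    # Illustrative numbers only; the real file carries per-city caps, rates, and rent deductions.
    "Beijing": {
        "shebao_cap": 33891, "shebao_min": 6326, "gongjijin_cap": 33891,
        "rate_pension": 0.08, "rate_medical": 0.02, "rate_unemploy": 0.005,
        "medical_fixed": 3, "rate_housing": 0.12, "rent_deduction": 1500,
    },
}
COMPANIES: List[Dict] = []  # offer dicts with name/base/months/allowance/sign_on/city/...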
github_python
2025-12-12T04:15:45Z
https://github.com/RubiaCx/CNTaxCalculator/blob/f17c505f14bf2ad85d64ee3917aa194dce01e276/tax.py
{}
# darwiniabridge.py
"""
Main module for DarwiniaBridge application.
"""

import argparse
import logging
import sys
from typing import Optional


class DarwiniaBridge:
    """Main class for DarwiniaBridge functionality."""

    def __init__(self, verbose: bool = False):
        """Initialize with verbosity setting."""
        self.verbose = verbose
        self.logger = self._setup_logging()

    def _setup_logging(self) -> logging.Logger:
        """Configure logging based on verbosity."""
        logger = logging.getLogger(__name__)
        level = logging.DEBUG if self.verbose else logging.INFO
        logger.setLevel(level)
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))
        logger.addHandler(handler)
        return logger

    def run(self) -> bool:
        """Main execution method."""
        try:
            self.logger.info("Starting DarwiniaBridge processing")
            # Add your main logic here
            self.logger.info("Processing completed successfully")
            return True
        except Exception as e:
            self.logger.error("Processing failed: %s", str(e), exc_info=self.verbose)
            return False


def main():
    """Command line entry point."""
    parser = argparse.ArgumentParser(description="DarwiniaBridge - A powerful utility")
    parser.add_argument('-v', '--verbose', action='store_true',
                        help='Enable verbose logging')
    args = parser.parse_args()
    app = DarwiniaBridge(verbose=args.verbose)
    if not app.run():
        sys.exit(1)


if __name__ == "__main__":
    main()
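The run() method above is a placeholder. Purely as an illustration of how the class is meant to be driven outside the CLI (nothing here is defined by the repository beyond the class itself):

# Hypothetical programmatic usage of the skeleton above; the processing logic
# inside run() is still a placeholder in this file.
from darwiniabridge import DarwiniaBridge

bridge = DarwiniaBridge(verbose=True)   # DEBUG-level logging
ok = bridge.run()                       # True on success, False on failure
print("succeeded" if ok else "failed")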
github_python
2025-12-14T18:06:13Z
https://github.com/astabrutie8/DarwiniaBridge/blob/99e7ee6df8002ea280219aecdbc2ca056734d8b6/darwiniabridge.py
{}
import json
import time
import uuid
import os
import requests
import asyncio
from typing import Optional, List, Dict, Any, Union


class FlowClient:
    def __init__(self, cookies: Dict[str, str] = None):
        self.base_url = "https://aisandbox-pa.googleapis.com"
        self.api_key = "AIzaSyBtrm0o5ab1c-Ec8ZuLcGt3oJAA5VWt3pY"  # Public key from logs
        self.tool_name = "PINHOLE"
        self.cookies = cookies or {}

        # Load cookies from file if not provided
        if not self.cookies:
            self.load_cookies()

        self.session = requests.Session()

        # 1. Construct Cookie String
        cookie_str = "; ".join([f"{k}={v}" for k, v in self.cookies.items()])

        # 2. Extract CSRF Token (Next-Auth specific)
        # Format often: "token|hash" -> we need "token"
        csrf_token = None
        for k, v in self.cookies.items():
            if "csrf-token" in k:
                if "|" in v:
                    csrf_token = v.split("|")[0]
                else:
                    csrf_token = v
                break

        # 3. Build Headers
        headers = {
            "Content-Type": "application/json",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
            "Origin": "https://labs.google",
            "Referer": "https://labs.google/",
            "Cookie": cookie_str,
            "x-goog-authuser": "0"  # Try default auth user
        }
        if csrf_token:
            headers["x-csrf-token"] = csrf_token
            headers["x-next-auth-csrf-token"] = csrf_token
            print(f"🔑 Extracted CSRF Token: {csrf_token[:10]}...")

        # 4. Load Bearer Token Override
        if os.path.exists("auth_token.json"):
            try:
                with open("auth_token.json", "r") as f:
                    auth_data = json.load(f)
                    if "Authorization" in auth_data:
                        headers["Authorization"] = auth_data["Authorization"]
                        print("🔑 Loaded Bearer Token from auth_token.json")
            except Exception as e:
                print(f"⚠️ Failed to load auth_token: {e}")

        self.session.headers.update(headers)
        self.project_id = None
        self.session_id = f";{int(time.time() * 1000)}"

        # Auto-validate
        self.validate_auth()

    def validate_auth(self):
        """Checks if the cookies are valid by fetching a simple resource."""
        try:
            # Using create_project as a test, or better, fetch user history which is read-only-ish
            print("🔍 Verifying API connection...")
            # We use fetchUserHistoryDirectly as it's a safe GET request used in polling
            url = "https://labs.google/fx/api/trpc/media.fetchUserHistoryDirectly"
            params = {
                "input": '{"json":{"type":"ASSET_MANAGER","pageSize":1,"responseScope":"RESPONSE_SCOPE_UNSPECIFIED","cursor":null},"meta":{"values":{"cursor":["undefined"]}}}'
            }
            resp = self.session.get(url, params=params)
            if resp.status_code == 200:
                print("✅ API Connection Verified! Cookies are valid.")
            elif resp.status_code == 401:
                print("❌ Authentication Failed (401). Your cookies might be expired or incomplete.")
                print(" 👉 Please export ALL cookies from labs.google and update them.")
            else:
                print(f"⚠️ API Connection Warning: HTTP {resp.status_code}")
                # print(resp.text[:200])
        except Exception as e:
            print(f"⚠️ Validation skipped due to error: {e}")

    def load_cookies(self, path: str = "cookies.json"):
        """Load cookies from a JSON file."""
        try:
            with open(path, "r", encoding="utf-8") as f:
                cookie_list = json.load(f)
                for cookie in cookie_list:
                    self.cookies[cookie['name']] = cookie['value']
            print(f"✅ Loaded {len(self.cookies)} cookies.")
        except Exception as e:
            print(f"⚠️ Failed to load cookies: {e}")

    def _get_client_context(self) -> Dict[str, Any]:
        """Returns the standard client context for requests."""
        if not self.project_id:
            # Try to create a project first if not set, or use a default/random one
            # For now, we'll try to create one or use a placeholder if creation fails
            try:
                self.create_project()
            except:
                print("⚠️ Project creation failed, using temporary ID")
                self.project_id = str(uuid.uuid4())
        return {
            "sessionId": self.session_id,
            "projectId": self.project_id,
            "tool": self.tool_name,
            "userPaygateTier": "PAYGATE_TIER_ONE"
        }

    def create_project(self, title: str = None) -> str:
        """Creates a new project and returns its ID."""
        if not title:
            title = f"Project - {int(time.time())}"
        url = "https://labs.google/fx/api/trpc/project.createProject"
        payload = {
            "json": {
                "projectTitle": title,
                "toolName": self.tool_name
            }
        }
        resp = self.session.post(url, json=payload)
        resp.raise_for_status()
        data = resp.json()
        self.project_id = data["result"]["data"]["json"]["result"]["projectId"]
        print(f"✅ Created project: {self.project_id}")
        return self.project_id

    def generate_video(
        self,
        prompt: str,
        aspect_ratio: str = "VIDEO_ASPECT_RATIO_LANDSCAPE",
        model: str = "veo_3_1_t2v_fast",
        count: int = 1,
        seed: int = None
    ) -> List[str]:
        """
        Generates a video from text.
        Returns a list of operation IDs (one per request).
        """
        url = f"{self.base_url}/v1/video:batchAsyncGenerateVideoText"
        requests_list = []
        for i in range(count):
            # Use different seeds for variation
            current_seed = (seed + i) if seed else int(time.time() * 1000 + i) % 2147483647
            requests_list.append({
                "aspectRatio": aspect_ratio,
                "seed": current_seed,
                "textInput": {
                    "prompt": prompt
                },
                "videoModelKey": model,
                "metadata": {
                    "sceneId": str(uuid.uuid4())
                }
            })
        payload = {
            "clientContext": self._get_client_context(),
            "requests": requests_list
        }
        print(f"🚀 Sending generation request... (Model: {model}, Count: {count})")
        resp = self.session.post(url, json=payload)
        if resp.status_code != 200:
            print(f"❌ Error: {resp.text}")
            resp.raise_for_status()
        data = resp.json()
        print(f"[GENERATE] Full response: {json.dumps(data, indent=2)[:1000]}...")
        ops = data.get("operations", [])
        # Return both operation IDs and the full response for debugging
        op_info = []
        for op in ops:
            info = {
                "name": op.get("operation", {}).get("name"),
                "sceneId": op.get("sceneId"),
                "status": op.get("status")
            }
            op_info.append(info)
            print(f"[GENERATE] Operation: {info}")
        return op_info if op_info else []

    def generate_video_from_image(
        self,
        start_image_id: str,
        prompt: str,
        end_image_id: str = None,
        aspect_ratio: str = "VIDEO_ASPECT_RATIO_LANDSCAPE",
        model: str = "veo_3_1_i2v_s_fast_fl",
        seed: int = None
    ) -> List[str]:
        """
        Generates a video from a start image (and optional end image).
        Note: image_id is the mediaId from uploaded assets (e.g. "CAMaJD...")
        """
        url = f"{self.base_url}/v1/video:batchAsyncGenerateVideoStartAndEndImage"
        if seed is None:
            seed = int(time.time() * 1000) % 2147483647
        req_data = {
            "aspectRatio": aspect_ratio,
            "seed": seed,
            "textInput": {"prompt": prompt},
            "videoModelKey": model,
            "startImage": {"mediaId": start_image_id},
            "metadata": {"sceneId": str(uuid.uuid4())}
        }
        if end_image_id:
            req_data["endImage"] = {"mediaId": end_image_id}
        payload = {
            "clientContext": self._get_client_context(),
            "requests": [req_data]
        }
        print(f"🚀 Sending image-to-video request... (Model: {model})")
        resp = self.session.post(url, json=payload)
        if resp.status_code != 200:
            print(f"❌ Error: {resp.text}")
            resp.raise_for_status()
        data = resp.json()
        ops = data.get("operations", [])
        return [op["operation"]["name"] for op in ops]

    def generate_image(
        self,
        prompt: str,
        aspect_ratio: str = "IMAGE_ASPECT_RATIO_LANDSCAPE",
        model: str = "GEM_PIX_2",
        count: int = 4,
        seed: int = None
    ) -> Dict[str, Any]:
        """
        Generates images.
        Unlike video, this might return results immediately or a job ID.
        Based on logs, it returns a 'media' list directly if successful.
        """
        if not self.project_id:
            self._get_client_context()  # Ensure project ID exists
        url = f"{self.base_url}/v1/projects/{self.project_id}/flowMedia:batchGenerateImages"
        requests_list = []
        for i in range(count):
            current_seed = (seed + i) if seed else int(time.time() * 1000 + i) % 2147483647
            requests_list.append({
                "clientContext": {
                    "sessionId": self.session_id,
                    "projectId": self.project_id,
                    "tool": self.tool_name
                },
                "seed": current_seed,
                "imageModelName": model,
                "imageAspectRatio": aspect_ratio,
                "prompt": prompt,
                "imageInputs": []
            })
        payload = {"requests": requests_list}
        print(f"🚀 Sending image generation request... (Count: {count})")
        print(f"[IMAGE API] Model: {model}, Ratio: {aspect_ratio}")
        resp = self.session.post(url, json=payload)
        if resp.status_code != 200:
            print(f"❌ Error: {resp.status_code}")
            # Save full error and request to file for debugging
            with open('image_error.txt', 'w', encoding='utf-8') as f:
                f.write(f"Status: {resp.status_code}\n")
                f.write(f"Response: {resp.text}\n")
                f.write(f"\n--- Request Payload ---\n")
                f.write(f"Model: {model}\n")
                f.write(f"Ratio: {aspect_ratio}\n")
                f.write(f"Count: {count}\n")
            print(f"❌ Full error saved to image_error.txt")
            resp.raise_for_status()
        return resp.json()

    def get_generation_result(self, operation_id: str) -> Optional[Dict[str, Any]]:
        """
        Checks the status of an operation (media generation).

        The API seems to use the media ID to check status, but the operation ID might be
        a temporary handle. Actually, looking at the logs, the polling happens via:
        GET /v1/media/{mediaGenerationId}?key=...
        BUT, the initial response gives us an 'operation' name like "a13bd...".
        We need to transform this or wait for it.

        Wait! The operation name IS the ID we track. But where do we poll it?
        Usually Google APIs use /v1/operations/{name}, but the logs showed polling
        /v1/media/{mediaGenerationId}.

        Looking closely at the logs:
        Response to generate:
            "operations": [{"operation": {"name": "OP_ID"}, "sceneId": "...", "status": "PENDING"}]
        Then immediately:
            GET /v1/media/CAUSJ... (long base64-like ID)

        The 'mediaGenerationId' is NOT the operation ID. We might need to fetch the operation
        status to GET the mediaGenerationId. However, in the logs, the `mediaGenerationId`
        appears in the `GET` request. Where did the client get it?

        Ah, looking at the logs again... The `generate` response ONLY has `operation.name`.
        There must be an endpoint to check the operation status which returns the
        `mediaGenerationId`. Let's try to poll `/v1/operations/{name}` or similar if it exists
        (standard Google pattern). Actually, let's look at the logs again for "operations" in
        the URL. No grep results for "operations/" URL.

        Wait, I see "mediaGenerationId" in the GET request. Maybe the operation name IS the
        mediaId? Let's check the format.
        Operation Name: "a13bd1b5a45cda8eb8124ef9629a023f" (Hex string)
        Media ID: "CAUSJDc3Mj..." (Base64 string)
        They are different. There MUST be a missing link. How does the client know the media ID?
        Maybe I missed a log entry or the client calculates it? Unlikely to calculate.

        Let's assume there is a `getOperation` or `listOperations` call I missed, OR maybe the
        `batchAsync` returns it in a field I missed? Let's look at the `batchAsync` response
        body again.

        ```json
        {
            "operations": [
                {
                    "operation": {"name": "a13bd..."},
                    "sceneId": "ba4c...",
                    "status": "MEDIA_GENERATION_STATUS_PENDING"
                }
            ]
        }
        ```
        Nothing else.

        Hypothesis: The client polls `https://labs.google/fx/api/trpc/media.fetchUserHistory`
        or `media.get` using TRPC? Let's check the logs for `fetchUserHistory` or similar right
        after generation. Yes! Line 1177: `media.fetchUserHistoryDirectly`. And Line 1207
        response contains `userWorkflows` with `mediaGenerationId`.

        So the flow is:
        1. Call Generate -> Get Operation ID (and implicit success)
        2. Poll `media.fetchUserHistoryDirectly` (or `fetchProjectWorkflows`) to see new items.
        3. Match the new item (maybe by time or just take the latest) to get `mediaGenerationId`.
        4. Poll `/v1/media/{mediaGenerationId}` for the actual URL.

        Let's implement `fetch_latest_media`.
        """
        pass

    def fetch_latest_workflow(self, project_id: str = None, media_type: str = "VIDEO") -> Optional[Dict[str, Any]]:
        """
        Fetches the latest workflow/media from history.

        Args:
            project_id: Optional project ID to filter by
            media_type: "VIDEO" uses PINHOLE, "IMAGE" uses ASSET_MANAGER

        Returns:
            The workflow dict containing 'name' (the media ID) and 'media' details
        """
        url = "https://labs.google/fx/api/trpc/media.fetchUserHistoryDirectly"
        # CRITICAL: Use "PINHOLE" for VIDEO history, "ASSET_MANAGER" for IMAGE
        type_param = "PINHOLE" if media_type == "VIDEO" else "ASSET_MANAGER"
        input_data = {
            "json": {
                "type": type_param,
                "pageSize": 10,  # Get more items to find our generation
                "responseScope": "RESPONSE_SCOPE_UNSPECIFIED",
                "cursor": None
            },
            "meta": {"values": {"cursor": ["undefined"]}}
        }
        params = {"input": json.dumps(input_data)}
        print(f"[HISTORY] Querying {type_param} history...")
        resp = self.session.get(url, params=params)
        if resp.status_code != 200:
            print(f"[HISTORY] ⚠️ Failed to fetch history: {resp.status_code}")
            return None
        try:
            data = resp.json()
            workflows = data.get("result", {}).get("data", {}).get("json", {}).get("result", {}).get("userWorkflows", [])
            print(f"[HISTORY] Found {len(workflows)} workflows")
            if not workflows:
                return None
            # If filtering by project, find matching workflow
            if project_id:
                for wf in workflows:
                    wf_pid = wf.get('media', {}).get('mediaGenerationId', {}).get('projectId')
                    if wf_pid == project_id:
                        media_id = wf.get('name', '')
                        print(f"[HISTORY] ✅ Found matching project! MediaID: {media_id[:40]}...")
                        return wf
                print(f"[HISTORY] No workflow matching projectId={project_id[:20]}...")
            # Return first workflow
            wf = workflows[0]
            media_id = wf.get('name', '')
            print(f"[HISTORY] First workflow MediaID: {media_id[:40]}...")
            return wf
        except Exception as e:
            print(f"[HISTORY] ⚠️ Error parsing history: {e}")
            import traceback
            traceback.print_exc()
            return None

    def fetch_workflows(self, project_id: str = None, media_type: str = "VIDEO", limit: int = 20) -> List[Dict]:
        """
        Fetches a list of recent user workflows.
        """
        url = "https://labs.google/fx/api/trpc/media.fetchUserHistoryDirectly"
        type_param = "PINHOLE" if media_type == "VIDEO" else "ASSET_MANAGER"
        input_data = {
            "json": {
                "type": type_param,
                "pageSize": limit,
                "responseScope": "RESPONSE_SCOPE_UNSPECIFIED",
                "cursor": None
            },
            "meta": {"values": {"cursor": ["undefined"]}}
        }
        # Only include projectId if provided
        if project_id:
            input_data["json"]["projectId"] = project_id
        params = {"input": json.dumps(input_data)}
        try:
            resp = self.session.get(url, params=params)
            if resp.status_code != 200:
                print(f"[HISTORY] ⚠️ Failed: {resp.status_code}")
                return []
            data = resp.json()
            workflows = data.get("result", {}).get("data", {}).get("json", {}).get("result", {}).get("userWorkflows", [])
            # Filter by project_id if provided and not handled by API
            if project_id:
                filtered = []
                for wf in workflows:
                    wf_pid = wf.get('media', {}).get('mediaGenerationId', {}).get('projectId')
                    if wf_pid == project_id:
                        filtered.append(wf)
                return filtered
            return workflows
        except Exception as e:
            print(f"[HISTORY] Error: {e}")
            return []

    def get_video_status(self, media_generation_id: str) -> Dict[str, Any]:
        """
        Gets the status and URL of a specific media generation.
        """
        url = f"{self.base_url}/v1/media/{media_generation_id}"
        params = {
            "key": self.api_key,
            "clientContext.tool": self.tool_name,
            "returnUriOnly": "true"
        }
        resp = self.session.get(url, params=params)
        if resp.status_code != 200:
            return {"status": "ERROR", "error": resp.text}
        return resp.json()

    async def poll_for_completion(self, media_generation_id: str, timeout: int = 300) -> Dict[str, Any]:
        """
        Polls until the video is ready or fails.
        """
        start_time = time.time()
        while time.time() - start_time < timeout:
            result = self.get_video_status(media_generation_id)
            # Check if video object exists and has URL
            if "video" in result:
                video_info = result["video"]
                if "fifeUrl" in video_info:
                    print("✅ Generation Complete!")
                    return video_info
            # If it's an image generation, it might be different structure (handled in generate_image mostly)
            # But the API for media/{id} returns consistent structure usually.
            # TODO: robust status checking.
            # The API doesn't explicitly say "PENDING" in the GET /media response in the logs I saw?
            # Actually, I didn't see a "PENDING" response for GET /media in the logs snippet (it was 200 OK with URL).
            # It might return 404 or a different status if not ready.
            print("⏳ Waiting for generation...")
            await asyncio.sleep(5)
        return {"status": "TIMEOUT"}


# Test execution (optional, can be called from main)
if __name__ == "__main__":
    client = FlowClient()
    # Test project creation
    # pid = client.create_project()
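The investigation notes in get_generation_result describe the intended flow: generate, find the new workflow in history to obtain its media ID, then poll the media endpoint. A minimal driver along those lines, written only as a sketch on the assumption that a valid cookies.json is present and that history has registered the workflow by the time it is queried:

# Hypothetical driver following the flow described in the get_generation_result docstring.
# Assumes cookies.json holds valid labs.google cookies.
import asyncio
import time

async def demo():
    client = FlowClient()                      # loads cookies.json and validates auth
    client.create_project("API demo")          # sets client.project_id
    client.generate_video("a red fox running through snow", count=1)

    # Give the backend a moment to register the new workflow, then look it up in history.
    time.sleep(10)
    wf = client.fetch_latest_workflow(project_id=client.project_id, media_type="VIDEO")
    if wf is None:
        print("No workflow found yet")
        return
    media_id = wf.get("name", "")
    result = await client.poll_for_completion(media_id, timeout=300)
    print("Result:", result.get("fifeUrl", result))

# To run: asyncio.run(demo())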
github_python
2025-12-12T05:47:14Z
https://github.com/396001000/flow-Gemini-api/blob/9ff26bdee9fc8a35c42f5217afd9fb1abda8f19a/flow_api.py
{}
"""Vanilla Java->Python translation using state machine.""" import asyncio import re from dataclasses import dataclass from pathlib import Path from encompass.llm.ollama import OllamaModel llm = OllamaModel(model="llama3.1") def extract(text): if m := re.search(r"```python\s*(.*?)\s*```", text, re.DOTALL | re.I): return m.group(1).strip() if m := re.search(r"```\s*(.*?)\s*```", text, re.DOTALL): return m.group(1).strip() return text.strip() if text.strip().startswith(("class ", "def ", "import ")) else "" def valid(code): try: compile(code, "<s>", "exec") return bool(code) except Exception: return False @dataclass class State: java: str name: str step: str = "translate" result: str = "" tries: int = 0 class Machine: def __init__(self, java, name, max_tries=3): self.s = State(java=java, name=name) self.max = max_tries async def run(self): while True: if self.s.step == "translate": r = await llm.generate( f"Translate to Python:\n```java\n{self.s.java}\n```", max_tokens=2048 ) self.s.result = extract(r) self.s.step = "validate" elif self.s.step == "validate": if valid(self.s.result): return self.s.result self.s.tries += 1 self.s.step = "translate" if self.s.tries < self.max else "done" else: return self.s.result if valid(self.s.result) else "" async def main(): src = Path("examples/code_translation/input/jMinBpe/src/com/minbpe") out = Path("examples/code_translation/simple_experiment/output/vanilla") out.exists() and __import__("shutil").rmtree(out) out.mkdir(parents=True, exist_ok=True) for f in sorted(src.rglob("*.java")): print(f"{f.name}...", end=" ", flush=True) code = await Machine(f.read_text(), f.stem).run() if code: (out / f"{f.stem}.py").write_text(code) print(f"✓ {len(code.splitlines())} lines") else: print("✗") if __name__ == "__main__": asyncio.run(main())
"""Vanilla Java->Python translation using state machine.""" import asyncio import re from dataclasses import dataclass from pathlib import Path from encompass.llm.ollama import OllamaModel llm = OllamaModel(model="llama3.1") def extract(text): if m := re.search(r"```python\s*(.*?)\s*```", text, re.DOTALL | re.I): return m.group(1).strip() if m := re.search(r"```\s*(.*?)\s*```", text, re.DOTALL): return m.group(1).strip() return text.strip() if text.strip().startswith(("class ", "def ", "import ")) else "" def valid(code): try: compile(code, "<s>", "exec") return bool(code) except Exception: return False @dataclass class State: java: str name: str step: str = "translate" result: str = "" tries: int = 0 class Machine: def __init__(self, java, name, max_tries=3): self.s = State(java=java, name=name) self.max = max_tries async def run(self): while True: if self.s.step == "translate": r = await llm.generate( f"Translate to Python:\n```java\n{self.s.java}\n```", max_tokens=2048 ) self.s.result = extract(r) self.s.step = "validate" elif self.s.step == "validate": if valid(self.s.result): return self.s.result self.s.tries += 1 self.s.step = "translate" if self.s.tries < self.max else "done" else: return self.s.result if valid(self.s.result) else "" async def main(): src = Path("examples/code_translation/input/jMinBpe/src/com/minbpe") out = Path("examples/code_translation/simple_experiment/output/vanilla") out.exists() and __import__("shutil").rmtree(out) out.mkdir(parents=True, exist_ok=True) for f in sorted(src.rglob("*.java")): print(f"{f.name}...", end=" ", flush=True) code = await Machine(f.read_text(), f.stem).run() if code: (out / f"{f.stem}.py").write_text(code) print(f"✓ {len(code.splitlines())} lines") else: print("✗") if __name__ == "__main__": asyncio.run(main())
github_python
2025-12-14T21:08:02Z
https://github.com/nitin966/encompass/blob/a14327711683594e053d05e290d2b7e1ff7ad8b1/examples/code_translation/simple_experiment/baseline_vanilla.py
{}
Luang Phor Kring Hamso

Phra Kru Sunthon Anukit (Kring Hamso) (พระครูสุนทรานุกิจ (กริ่ง หํโส); 23 July 1898 – 26 December 1987), affectionately known as Luang Pho Kring Wat Sam Chuk or Luang Pho Kring Wat Yang, was a Thai Theravāda Buddhist monk and respected ecclesiastical administrator in Suphan Buri Province. He served as abbot of both Wat Yang (Si Prachan District) and Wat Sam Chuk (Sam Chuk District), and for over forty years was the District Monk Leader (Chao Khana Amphoe) of Sam Chuk. He is regarded as one of the most prominent monks of Suphan Buri in the 20th century.

Early life

Kring Rungratri was born on 23 July 1898 (B.E. 2441) at house No. 3, Tambon Si Prachan, Si Prachan District, Suphan Buri Province, the eldest of five children of Mr. Yam and Mrs. Tong Rungratri. He was a relative of Phra Methee Thammasarn (Sai Thammasaro), the long-serving District Monk Leader of Si Prachan and abbot of Wat Ban Krang. As a boy he learned traditional Khmer and Thai script at a local temple. In 1909 (B.E. 2452) he attended Wat Kaeo School in Bangkok (later incorporated into Wat Samphanthawong, Bangkok), where he completed primary education.

Monastic career

On 27 April 1915 (B.E. 2458), at the age of 17, he was ordained as a samanera (novice) at Wat Yang in Si Prachan District by Phra Ajahn Phong Phrommasaro, the abbot, who became his principal teacher. He received full bhikkhu ordination on 20 May 1918 (B.E. 2461) at the same temple. His preceptor (upajjhāya) was Phra Kru Pluem, abbot of Wat Phrao. He was given the Pali name Hamso (หํโส). After ordination he continued his studies in Bangkok, first at Wat Suthat Thepwararam under Somdet Phra Mahā Samana Chao Kromma Phra Prohmmuni (later Somdet Phra Ariyavongsagatanana (Pa Tissatevo)), and later at Wat Thepthidaram at the invitation of his relative Phra Samu Sai Thammasaro. He began formal Pali studies but was forced to return to Suphan Buri upon the death of his teacher Phra Ajahn Phong. Local villagers requested that he assume responsibility for Wat Yang; he became acting abbot around 1924–1926 and was officially appointed abbot in 1933 (B.E. 2476). In 1946 (B.E. 2489), Somdet Phra Vanarat (Phuen Tissadatto) appointed him acting District Monk Leader of Sam Chuk District. He relocated to Wat Sam Chuk, where he later became the confirmed District Monk Leader and abbot.

Ecclesiastical ranks

1933 – Abbot of Wat Yang; titled Phra Athikan Kring Hamso
1936 – Granted the honorary title Phra Kru Prathuan
1947 – Appointed Phra Kru Sanyabat, Tri class, with the title Phra Kru Sunthon Anukit
1949 – Promoted to Tho class
1957 – Promoted to Ek class
1966 – Promoted to Special class (the highest rank for a Phra Kru Sanyabat)

Later years and death

In his later years Luang Pho Kring suffered a stroke and received treatment at the Monks' Hospital in Bangkok and at Wat Thepthidaram. From 1985 (B.E. 2528) onward he resided permanently at Wat Yang for easier medical care. He died peacefully at Wat Yang on 26 December 1987 (B.E. 2530) at 5:30 p.m., aged 89, having completed 70 vassa (rains retreats). His royal-sponsored cremation ceremony was held at Wat Sam Chuk on 23 April 1988.

Legacy

A life-size statue of Luang Pho Kring, consecrated on 21 March 1987, is enshrined in the memorial pavilion for former abbots at Wat Yang, Si Prachan District.
wikipedia_english
2025-12-01T02:18:43Z
https://en.wikipedia.org/wiki/Luang_Phor_Kring_Hamso
{"title": "Luang Phor Kring Hamso", "entry_created_at": "2025-12-01T02:18:43Z", "crawled_at": "2025-12-15T12:52:17Z"}
Video about production of Mykhailo Dobkin's campaign ad

2007 controversy involving a Ukrainian mayoral candidate

A still from the video, featuring Dobkin

On 27 September 2007, a video was published on YouTube that contained a montage of leaked takes made in December 2005 in preparation for an electoral campaign ad of Mykhailo Dobkin. Dobkin ("Dopa", Допа), the person in the video, was running for city mayor of Kharkiv, the second-largest city in Ukraine, while Hennadiy Kernes ("Gyepa", Гепа), then city council secretary and later mayor of the city, was directing the filming. The identity of the editor behind the compilation is unknown, but Rostyslav Kasianenko, then editor-in-chief of Gorodskoy dozor, a local news outlet close to the campaign, alleged that the video appeared as part of a rival candidate's revenge campaign, begun after propaganda on a Dobkin-aligned local TV station effectively ended that candidate's run for mayor. The duo's profanity-laden Russian-language dialogue went viral on social media. The next day, it reached the top 10 of the most-viewed YouTube videos in the world for that day, and it has become a source of many quotes and memes circulated in the Russian internet community. A follow-up video of unclear origin, supposedly also including fragments from the production scene, was published in January 2008.

Leak

City council secretary and filming director Hennadiy Kernes (left) and mayoral candidate Mykhailo Dobkin (right) are the main figures in the video (here photographed in 2012)

Mykhailo Dobkin was running for the office of mayor of Kharkiv, the second-largest city in Ukraine. In December 2005, Dobkin and his assistant Hennadiy Kernes, then city council secretary, were recording a speech to the inhabitants of the city. Meanwhile, Vladyslav Protas, a local businessman, was running as a candidate informally supported by the rival camp led by the then-governor of Kharkiv Oblast, Arsen Avakov. According to a 2017 statement that Rostyslav Kasianenko, then editor-in-chief of Gorodskoy dozor, a local news outlet close to the campaign, gave to a journalist, Protas did not pay his film editor, so the disgruntled employee went to the newspaper and leaked Protas's preparations for his address. The kompromat was then repeatedly aired on Channel 7, a local TV station then owned by Kernes and Avakov. This destroyed the campaign of the businessman, who vowed revenge. Dobkin, supported by the pro-Russian Party of Regions, won the election on 26 March 2006, easily defeating incumbent Volodymyr Shumilkin (ru) after the latter chose to go against the city's political trends and side with President Viktor Yushchenko's Our Ukraine party. Kasianenko alleged that somebody agreed to download the uncut two-hour recording from the TV channel's servers and give it to Protas for $500, and that this was distilled into the original video. The journalist then suggested that Protas paid about $80,000 to promote the story at various outlets, but that Kernes himself then began testing the leak on focus groups to see their reaction, which was apparently one of "sincere and innocent laughter". On 27 September 2007, the cuts were leaked to several local TV stations, which broadcast censored versions, and to YouTube, this time with the original soundtrack. This happened just three days before the snap election to the Verkhovna Rada, Ukraine's parliament.

Contents

The video features several cuts of Dobkin trying to read a prepared speech in Russian, repeatedly struggling to do it properly and complaining about the quality of the speechwriting. He and Kernes, the main voice from behind the scenes, engage in a dialogue that contains ample profanity. The YouTube video was published under the generic title "Mikhail Dobkin, mayor of the city of Kharkiv". The two protagonists later became known as "Gyepa and Dopa": the former was Kernes's prison moniker and coincidentally is a Ukrainian word meaning "buttocks", while the latter nickname is close to Ukrainian дупа, a vulgar term for the same body part. Some video duplicates are thus known under the duo's nicknames. Some excerpts from the dialogue, with their immediate context, are shown in the table below, along with citations for the news outlets that noted the quotes.

Reactions

The video became an instant hit in the Runet-adjacent world and went viral. On 28 September, it reached eighth place in the worldwide ranking of most-viewed YouTube videos for that day, at over 100,000 views, and reportedly stayed in the ranking at least until 6 October. As of December 2025, the original video has reached 8.5 million views, not counting duplicates or remakes. Dobkin's and Kernes's public profile rose so much that it arguably could compete with the prominence of leaders from the Commonwealth of Independent States. The BBC, citing a Ukrainian blogger, attributed the "epic" character of the video not only to the vulgar catchphrases but also to the impression that Dobkin had only a dim understanding of what the speech was about, and to his terrible reading even after being coached. The video spawned numerous parodies imitating the dialogue, including in settings like a Mortal Kombat-styled battle. The dialogue was also emulated in Election Day 2 (ru), a Russian satirical comedy released in 2016. In spring 2011, Hennadiy Minaiev (uk), the mayor of Sumy, explicitly compared the YouTube popularity of his own gaffe with that of Dobkin's, arguing that his metrics were better. The Party of Regions' campaign director, Borys Kolesnikov, said at the time of publication that while he was laughing at the video, he would not change his mind about the leadership in Kharkiv.

From protagonists

Shortly after the video's publication, Dobkin implied that it had been doctored, accused the leakers of trying to destroy the relationship between Kernes and Dobkin, and estimated the sale price for the leak at $60,000. Dobkin admitted in retrospect that his attitude varied from "anger to fully positive relationship" and that he used to rewatch it often, though as of 2017 he no longer did. In 2011, Dobkin said: "well, we laughed a month or two but what's the result? Where are we and where are those who made it?" (At the time, Dobkin served as appointed governor of Kharkiv Oblast while Kernes was elected mayor of Kharkiv, a position he held until his death in 2020.) The next year, Russian director Nikita Mikhalkov used the video incident in his documentary about political philosopher Ivan Ilyin to convince viewers that holding elections in Russia is harmful because, he argued, there was no other way that Russian politicians were elected and elections did not prevent "Mishas with his PR-men" from getting to power. Dobkin was reportedly offended by the comparison. He wrote back that he was "not expecting that [he] rated [my] humble ability to change the choices of your compatriots so highly".

Kernes, on the other hand, embraced the fame. The politician repeatedly insisted that there was nothing wrong with that style of communication. Kernes said that he knew the video "by heart" and that his favorite quote was the one about the "dull face". In a 2013 interview with Dmitry Gordon, he recalled that he had seen the video before Dobkin did and was not agitated upon seeing it, and that he advised Dobkin that "politics starts from scandal; here you have a scandal." He thought that the controversial fame was actually to Dobkin's benefit. When asked whether the authors of the "idiotic" texts were punished, Kernes disagreed with Dobkin's description. However, in an apparently leaked phone conversation with oligarch Ihor Kolomoyskyi, Kernes suggested that there was no profit from the video, just the "pain in the ass". In the call, Kolomoyskyi badgered Kernes with a proposal to sign an agreement transferring copyright for the video so that he could reap 75% of the profits; Kernes repeatedly dodged the question before ultimately declining.
wikipedia_english
2025-12-10T01:25:05Z
https://en.wikipedia.org/wiki/Video_about_production_of_Mykhailo_Dobkin's_campaign_ad
{"title": "Video about production of Mykhailo Dobkin's campaign ad", "entry_created_at": "2025-12-10T01:25:05Z", "crawled_at": "2025-12-15T12:52:18Z"}
Jason Eaton (footballer)

Former English footballer

Jason Eaton (born 29 January 1969) is a former English footballer who most prominently played for Cheltenham Town.

Eaton was born in Bristol, England. At age 17, he signed for local side Bristol Rovers as a central forward. After he had played one season at Rovers, manager Gerry Francis did not renew his contract. He then played for Clevedon and Trowbridge before signing for Bristol City, the club he had supported as a child, where he spent half a season. In November 1990 he signed for semi-professional team Gloucester City for a fee of around £10,000. He played for Gloucester for just under two years before signing for rivals Cheltenham Town for £19,000. Eaton had his most successful spell at Cheltenham, winning the 1998 FA Trophy final with them and scoring a 79th-minute header against Southport at Wembley Stadium. After six seasons with Cheltenham, Eaton signed for Yeovil Town and played for a handful of local teams before rejoining Gloucester in 2004. In his final season at Gloucester he scored only once before being released by the club.
wikipedia_english
2025-12-07T02:04:45Z
https://en.wikipedia.org/wiki/Jason_Eaton_(footballer)
{"title": "Jason Eaton (footballer)", "entry_created_at": "2025-12-07T02:04:45Z", "crawled_at": "2025-12-15T12:52:18Z"}
NGC 6209 Galaxy in the constellation Apus NGC 6209 is a spiral galaxy in the constellation of Apus. Its velocity with respect to the cosmic microwave background is 5,916±11 km/s, which corresponds to a Hubble distance of 284.6 ± 19.9 Mly (87.26 ± 6.11 Mpc). However, 13 non-redshift measurements give a closer mean distance of 247.90 ± 5.50 Mly (76.008 ± 1.685 Mpc). It was discovered by British astronomer John Herschel on 28 June 1835. NGC 6209 is a Seyfert II galaxy, i.e. it has a quasar-like nucleus with very high surface brightnesses whose spectra reveal strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable. Additionally, NGC 6209 has a possible active galactic nucleus, i.e. it has a compact region at the center of a galaxy that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by the stars. Supernovae Two supernovae have been observed in NGC 6209: SN 1998cx (Type Ia, mag. 17.8) was discovered by Alexander Wassilieff on 4 July 1998. SN 2009fz (Type IIb, mag. 16.5) was discovered by the CHilean Automatic Supernova sEarch (CHASE) on 8 June 2009.
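The Hubble distance quoted above is simply the recessional velocity divided by the Hubble constant, followed by a unit conversion from megaparsecs to millions of light-years. A minimal Python sketch of that arithmetic follows; the Hubble constant of 67.8 km/s/Mpc and the Mpc-to-Mly factor are assumptions for illustration, since the entry does not state which values were used.

# Hubble-law distance check for NGC 6209 (illustrative sketch only).
# Assumed values, not taken from the entry: H0 and the Mpc-to-Mly factor.
V_CMB_KM_S = 5916.0          # velocity relative to the CMB, from the entry
H0_KM_S_PER_MPC = 67.8       # assumed Hubble constant, km/s per Mpc
MLY_PER_MPC = 3.2616         # millions of light-years per megaparsec

d_mpc = V_CMB_KM_S / H0_KM_S_PER_MPC   # about 87.3 Mpc
d_mly = d_mpc * MLY_PER_MPC            # about 284.6 Mly
print(f"{d_mpc:.2f} Mpc, {d_mly:.1f} Mly")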
wikipedia_english
2025-12-01T19:48:32Z
https://en.wikipedia.org/wiki/NGC_6209
{"title": "NGC 6209", "entry_created_at": "2025-12-01T19:48:32Z", "crawled_at": "2025-12-15T12:52:19Z"}
L'Adone L'Adone (Adonis) is an epic poem in Italian by Giovan Battista Marino, first published in Paris in 1623 by Olivier de Varennes (1598-1666) and dedicated to Louis XIII. It tells the love story of Venus and the eponymous Adonis and, with 5,124 ottave and 40,992 verses, is one of the longest poems in Italian literature: it is slightly longer than Orlando furioso and around three times as long as the Divina Commedia and Gerusalemme liberata. Before its twenty canti, the volume also contains a preface by the French critic Jean Chapelain justifying the poem as an epic but not heroic "poem of peace", followed by an introductory letter addressed to queen Marie de' Medici (then ruling as regent for her son Louis) asking her to intercede with the king on the poet's behalf. Each canto is preceded by prose 'Argomenti' ('Arguments') by Fortuniano Sanvitale and 'Allegorie' ('Allegories') attributed to don Lorenzo Scoto, both intended to demonstrate the text's moral significance and its message that (as stated in the preface) "immoderate pleasure ends in pain". The last nineteen canti each have a title and a six-octave preface, while the first canto has a twelve-octave preface. Writing Marino took his whole life to write the work, starting during his time in Naples and finally publishing it in Paris. Its progress can be tracked through mentions in his letters and in prefaces to his other works. He wrote of it as an idyllic poem in 1584, including descriptions of Adonis's loves and of his death. In 1605 the work seems to have first been published in three canti, one each on falling in love, loves and death. As of 1614 it consisted of "just over a thousand verses" in four canti (loves, amusements, departure, death), and the following year Marino wrote to Fortuniano Sanvitale from Turin, stating that the poem was divided into twelve canti and was as long as Gerusalemme liberata, and that he intended to publish it when he reached Paris. On arrival in Paris in 1616 Marino wrote a letter stating that the poem "is divided up into 24 canti and is as long as Orlando Furioso", although, as mentioned above, it was later published in 20 canti, not 24. This clearly indicates how complex the poem already was even at that stage. The poem's state at that time is shown by a manuscript now in Paris, known as "Adone 1616", which contains its first three canti and is dedicated to Maria de' Medici and her favourite Concino Concini. However, the abrupt change in the political situation at court (with Louis seizing power, removing his mother and having Concini violently killed) forced Marino to revisit the poem. It was completely revised between 1617 and 1621 and expanded to the huge twenty canti of the final version, dedicated to Louis with Maria's intercession, with the original ottave dedicated to Concini downgraded to become canto XI. A 1614 document known as the 'Claretti Letter' shows where Marino got the material for this full revision. It is a dedicatory letter which opens the third part of Marino's Rime, signed by Onorato Claretti but in Marino's handwriting.
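The verse total quoted above follows directly from the stanza count: assuming the standard ottava rima form of eight lines per stanza (the entry itself quotes only the totals), a one-line Python check reproduces the figure.

# Stanza arithmetic for L'Adone (illustrative sketch only).
# Assumption: standard ottava rima, i.e. eight hendecasyllabic lines per stanza.
OTTAVE = 5124
LINES_PER_OTTAVA = 8
print(OTTAVE * LINES_PER_OTTAVA)  # prints 40992, matching the verse total in the entry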
Some scattered references in the poet's letters speak at length of the many poetic projects he was then drafting, of which little or no other trace remains, namely Trasformazioni, Gerusalemme distrutta (of which only canto VII survives, published posthumously), Polifemo and Polinnia. The descriptions the poet gives of these lost works match many parts of L'Adone, and they were probably recycled in various ways, possibly in canti V-VII and XIX-XX. Some letters (such as a 1619 letter from Marino to Ottavio Magnanini stating that Adonis would be killed by Mars in the form of a boar, something that does not happen in the poem) and the vastness of the errata (in which whole sequences of stanzas are sometimes added) show that Marino, like Ariosto, kept reworking the poem until the very end, even while it was being printed. Plot Problem with sources Poetic structure Reception Church As soon as the poem was published Marino returned to Italy, where he still had unresolved matters pending with the Inquisition. A work like L'Adone was a poor fit for Italy under the newly-elected Pope Urban VIII and sumptuous Barberini Rome. Urban's intervention against Marino was one of the first acts of his pontificate and aimed at clearing up the ambiguous relations between the intelligentsia and the Church and discouraging the spread of certain cultural attitudes. It could even be seen as the first step on the road to the condemnation of Galileo Galilei in 1633. On 22 April 1624 cardinal Giannettino Doria lodged a complaint against the poem (though ironically he was the dedicatee of Lira III), followed by a condemnation from Urban (who would sign all three decrees against the poem) on 11 June. That condemnation did allow for the possibility of Marino correcting the poem and left open the question of publishing the work in Rome. Marino was very keen on publishing the work in Rome but had no more than a month to spare to bring it about. He made some corrections before leaving Rome for Naples, leaving further corrections to Antonio Bruni and Girolamo Preti under the instruction of Father Vincenzo Martinelli, 'socius' of the Master of the Sacro Palazzo (the top papal censorship body). However, nothing was done until the year was almost over: Marino did not continue to make corrections, and Martinelli, Bruni and Preti do not appear to have rewritten a single line, even though Martinelli had officially been put in charge of correcting the work. Clearly Marino's friends and the Holy Office itself had got to know the work well enough to realise that lasciviousness played a minor role in its overall structure and that the incriminating passages were among the least serious even by the criteria of the time (a view supported by Giovanni Pozzi). Even so, the sacred material continually alluded to behind the poem's plot certainly annoyed the Catholic elite. Finally, on 27 November, the poem was condemned as having "corrupt morals due to its extreme obscenity", though the harsh sentence was not made public. Marino's friend cardinal Carlo Emanuel Pio of Savoy took on the case of L'Adone. A second condemnation followed on 17 July 1625, after Marino's death, leading his friends and men of letters (especially from the Accademia degli Umoristi) to begin a decades-long campaign to seek a compromise from the Holy Office. Nothing concrete remains of what would eventually be done to the body of L'Adone.
The campaigners fought on several fronts, firstly in a series of hagiographies, before gradually concentrating their energies on the quarrel sparked by "L'Occhiale" by Tommaso Stigliani (1627). A third condemnation on 5 November 1626 was definitive, though the work continued to be republished for the rest of the century both abroad (especially by Elsevier) and in Venice, and in 1789 in Livorno (with the false publication place of London). Even so, this third condemnation had a broader and longer significance that transcended even the demands of that climate and that papacy. Urban issued another condemnation against the rest of the work on 12 April 1688, as did pope Innocent XI on 27 September 1678. The work was hugely successful even after all these condemnations and remained in print. Literary critics As with Gerusalemme liberata, the poem unleashed a storm as soon as it was released: Tommaso Stigliani's pamphlet L'occhiale (1627, Venice) denounced its literary "thefts" and the incoherence of its general plot. This was followed by several more works over the course of the century, both against Marino and supporting him: Agostino Lampugnani, Antiocchiale; Andrea Barbazza, Le Strigliate a Tommaso Stigliani per Robusto Pogommega (1629) and Le Staffilate di Giovanni Capponi (1637); Girolamo Aleandro il Giovane, Difesa dell'Adone (1629); Gauges de Gozze, Vaglio etrusco and a Difesa d'alcuni luoghi principali dell'Adone, both of which remained in manuscript; Scipione Errico, L'Occhiale appannato (1629); Nicola Villani, Uccellatura di Vincenzo Foresi all'Occhiale del cavalier Tommaso Stigliani (1630) and Considerationi di Messer Fagiano sopra la seconda parte dell'Occhiale del cavalier Stigliani (1631); Angelico Aprosio, Il vaglio critico di Masoto Galistoni da Terama, sopra Il mondo nuovo del cavalier Tomaso Stigliani da Matera (1637), Il buratto (1642), L'Occhiale stritolato (1642), La sferza poetica di Sapricio Saprici... per risposta alla Prima censura dell'Adone del Cavalier Marino fatta del Cavalier Tommaso Stigliani (1643) and Del veratro: apologia di Sapricio Saprici per risposta alla seconda censura dell'Adone del cavalier Marino, fatta dal cavalier Tommaso Stigliani (whose two parts appeared in reverse order, Part I in 1645 and Part II in 1647); Teofilo Gallaccini, Considerazioni sopra l'Occhiale; Giovanni Pietro D'Alessandro, Difesa dell'Adone; Francesco Busenello, La Coltre, ovvero Lo Stigliani sbalzato. The criticisms usually centred on three aspects of the work: the lack of unity in its general plot, with a large number of interruptions and subplots; the frankly erotic tone of parts of the work, also connected with its religious themes (for example, the Christological traits in the portrayal of Adonis could have been meant to ridicule Catholicism, but more probably the author's exclusive classicism meant he lent them no significance); and the literary imitations and plagiarism flaunted in it, especially those to the detriment of contemporaries such as Stigliani himself. Marino himself only replied to these criticisms privately and indirectly in his letters, lending little credence to the opinions of the 'pedantuzzi', the various comparisons with Gerusalemme liberata and the accusations of plagiarism. Musical adaptations Ottavio Tronsarelli, La catena d'Adone, favola boschereccia (Ciotti, Venice 1626), set to music by Domenico Mazzocchi, revived and recorded in the modern era; Paolo Vendramin, Adone. Tragedia musicale rappresentata in Venezia l'anno 1639 nel teatro de' SS.
Giovanni e Paolo (Sarzina, Venice 1640); Giovan Matteo Giannini, L'Adone. Drama per musica (Venice 1676); L'Adone. Intermedio musicale per l'Accademia degl'Uniti (Bosio, Venice c. 1690); Rinaldo Cialli, La Falsirena. Drama per musica da rappresentarsi nel teatro di S. Angelo l'anno 1690 (Nicolini, Venice, c. 1690). Critical editions L'Adone, critical edition with commentary by G. Pozzi, Milan, Mondadori, 1976; L'Adone, critical edition by M. Pieri, Bari, Laterza, 1975-1977; L'Adone, critical edition with commentary by M. Pieri, Rome, Istituto Poligrafico dello Stato, 1996; L'Adone, critical edition with revised and corrected commentary by M. Pieri, La Finestra editrice, Lavis, 2007, ISBN 978-8888097-69-5; L'Adone, critical edition with commentary by E. Russo, Milan, Rizzoli, 2013.
wikipedia_english
2025-12-01T19:47:35Z
https://en.wikipedia.org/wiki/L'Adone
{"title": "L'Adone", "entry_created_at": "2025-12-01T19:47:35Z", "crawled_at": "2025-12-15T12:52:20Z"}
Germiston South (House of Assembly of South Africa constituency) Germiston South (Afrikaans: Germiston-Suid) was a short-lived constituency in the Transvaal Province of South Africa, which existed only for the 1938 general election. It covered a part of the East Rand centred on the southern part of Germiston. It elected one member to the House of Assembly and one to the Transvaal Provincial Council. Franchise notes When the Union of South Africa was formed in 1910, the electoral qualifications in use in each pre-existing colony were kept in place. In the Transvaal Colony, and its predecessor the South African Republic, the vote was restricted to white men, and as such, elections in the Transvaal Province were held on a whites-only franchise from the beginning. The franchise was also restricted by property and education qualifications until the 1933 general election, following the passage of the Women's Enfranchisement Act, 1930 and the Franchise Laws Amendment Act, 1931. From then on, the franchise was given to all white citizens aged 21 or over. Non-whites remained disenfranchised until the end of apartheid and the introduction of universal suffrage in 1994. History Germiston South was only contested once, in 1938, when it was won by the United Party's J. G. N. Strauss over a divided field. In 1943, Germiston was reconfigured into a "town" seat and a District seat, and Strauss stood for and won election in Germiston District. Members Detailed results Elections in the 1930s
wikipedia_english
2025-12-01T19:46:51Z
https://en.wikipedia.org/wiki/Germiston_South_(House_of_Assembly_of_South_Africa_constituency)
{"title": "Germiston South (House of Assembly of South Africa constituency)", "entry_created_at": "2025-12-01T19:46:51Z", "crawled_at": "2025-12-15T12:52:20Z"}
Alfredo Erquicia Spanish military officer (1897–1978) In this Spanish name, the first or paternal surname is Erquicia and the second or maternal family name is Aranda. Alfredo Erquicia Aranda (24 October 1897 – 22 October 1978) was a Spanish military officer. Biography He entered the Toledo Infantry Academy in 1913, was promoted to lieutenant in 1918 and was assigned to the Infantry Regiment "Asturias" No. 31, participating in the Rif War, during which he commanded a company of the Moroccan indigenous police. After the outbreak of the Spanish Civil War, he joined the Nationalist faction. In the early days of the conflict, he organized the so-called "Volunteer Mounted Police Group" in Seville, an auxiliary surveillance force specializing in rearguard repression, which Erquicia himself commanded. At the beginning of 1937, he participated in the conquest of Málaga, leading the "Antequera–Abdalajís" column, which operated in the north of the Province of Málaga. During the course of the war, he also commanded the 2nd Brigade of the 32nd Division and the 2nd Brigade of the 102nd Division. Later, after rising to the rank of colonel, he obtained command of the 22nd Division, which covered the Córdoba front. During the Francoist dictatorship he continued his military career, serving as head of the divisional infantry of the 23rd and 22nd divisions, and later as commander of the Armoured Division No. 1 "Brunete", based in Madrid. In 1959, after rising to the rank of lieutenant general, he was appointed Captain General of the Canary Islands. In 1962, he was appointed head of the Spanish Army of North Africa and Governor General of the plazas de soberanía. He went into the reserve in 1967. He died in Jerez de la Frontera on 22 October 1978. Awards Grand Cross of the Royal and Military Order of Saint Hermenegild (1947) Grand Cross of Military Merit (1949) Grand Cross of Naval Merit (1966)
wikipedia_english
2025-12-01T23:52:12Z
https://en.wikipedia.org/wiki/Alfredo_Erquicia
{"title": "Alfredo Erquicia", "entry_created_at": "2025-12-01T23:52:12Z", "crawled_at": "2025-12-15T12:52:23Z"}
Halecania lobulata Species of lichen-forming fungus Halecania lobulata is a species of crustose lichen in the family Leprocaulaceae. The lichen forms small rosettes up to 5 mm across that grow over other crustose lichens on calcareous rocks, with a greyish to yellowish-brown surface that breaks into tiny leaf-like lobes at the margins. It produces isousnic acid and related compounds that give a yellow reaction when tested with potassium hydroxide solution. The species is known from North Korea and South Korea, where it grows on exposed rock outcrops in mixed forests and near waterfalls. Taxonomy Halecania lobulata was described as a new species in 2005 by Pieter van den Boom and John Elix, as part of their study of rock-dwelling Halecania from Asia; in that paper they introduced both H. lobulata and H. pakistanica, noting that the former is also lichenicolous (lichen-dwelling). The type specimen of H. lobulata was collected by Siegfried Huneck in 1986 on a near-vertical granite wall beside a waterfall in the Isoonam-Tal, Mount Myohyang (North Korea), where it was growing over crustose lichens allied to Hymenelia; the holotype is deposited in the Graz herbarium (GZU). Within Halecania, the species is characterised by its small, rosette-forming, distinctly lobulate thallus and its lichenicolous habit on Hymenelia on calcareous rock. In overall appearance it resembles species of the genus Solenopsora, but van den Boom and Elix pointed out that H. lobulata is readily distinguished by its halonate, 1-septate ascospores and by its chemistry. Description The thallus of Halecania lobulata forms small, tightly attached rosettes up to about 0.5 cm across and a few tenths of a millimetre thick. The centre is cracked to areolate, breaking towards the margin into small, leaf-like lobes that are flat to slightly convex and often slightly raised at the tips. The upper surface is smooth, matt to faintly shiny and greyish to yellowish brown, becoming a little paler when wet, and lacks a distinct upper cortex. In section, the upper part of the thallus is made up of dark brown fungal hyphae containing many tiny refractive granules, especially around the apothecia. The photobiont layer is diffuse, with green algal cells filling most of the upper thallus, while the underside is pale brown and composed of loosely interwoven hyphae, sometimes bordered by a dark bluish-black prothallus. Apothecia are usually common near the centre, small (generally up to about 0.5 mm in diameter) and closely attached, with dark brown to almost black, non-pruinose discs surrounded by a thin, pale yellowish-brown thalline margin. The asci are of the Catillaria type and contain eight oblong-ellipsoid, 1-septate ascospores, each surrounded by a gelatinous halo that swells in potassium hydroxide (KOH) solution. Tiny immersed pycnidia produce short rod-shaped conidia, and chemically the thallus gives a K+ (yellow) reaction and contains isousnic acid as its main lichen product, together with smaller amounts of atranorin, confluentic, stictic and cryptostictic acids. Habitat and distribution Halecania lobulata is a lichenicolous species that grows over Hymenelia on calcareous rock. The type material consists of numerous very small fragments scraped from crustose thalli on a granite wall, and no other associated lichen species were present in the collection.
The locality is in the Myohyangsan area of North Korea; van den Boom and Elix noted that the summit of Myohyangsan reaches 1,909 m, but the precise altitude of the collection site was not specified. In 2015 Halecania lobulata was recorded from South Korea, on rock outcrops on Hoemunsan in North Jeolla Province at around 470 m elevation. At this site it grew on exposed rock on a steep, south-west-facing slope in mixed forest dominated by Quercus variabilis and Pinus densiflora, with other broadleaved trees and shrubs, where rock surfaces made up roughly a third of the slope.
wikipedia_english
2025-12-01T23:47:52Z
https://en.wikipedia.org/wiki/Halecania_lobulata
{"title": "Halecania lobulata", "entry_created_at": "2025-12-01T23:47:52Z", "crawled_at": "2025-12-15T12:52:24Z"}
Nationalist Republican Left Party Political party in Spain The Nationalist Republican Left Party (Catalan: Partit Nacionalista Republicà d'Esquerra, PNRE) was a Catalan political party founded in October 1933 as a result of a schism from the Republican Left of Catalonia (ERC). The party quickly rebuilt bridges with the ERC and entered the ERC-led government in 1934, as a result of which a number of prominent leaders were arrested and imprisoned following the events of 6th October 1934. The PNRE was ultimately reabsorbed into the ERC in February 1936. Members of the party were known colloquially as lluhins (from the surname of party leader Joan Lluhí), or panarres (a humorous adaptation of the initials of the party). Background The founding group of the party included several government ministers for the ERC within the Generalitat de Catalunya, Joan Lluhí i Vallescà, Josep Tarradellas, Joan Casanelles i Ibars, Antoni Xirau i Palau and Carles Martí i Feced, as well as several other prominent ERC members. Together they were known as the L'Opinió group, after the socialist, republican and federalist Catalan-language newspaper of the same name, with which the members of the group were closely associated. The group aligned itself with the French Radical-Socialist tradition, combining this ideology with a Catalan nationalism that advocated autonomy within a Spanish federalist framework. The group had key ideological divisions with the dominant Estat Català faction of the party (with whom the group had originally joined to form the ERC): that faction sought Catalan independence rather than federalism within a Spanish republic, and was not as ideologically left-wing as the L'Opinió group. The group's members considered themselves faithful to the original ideals of the ERC, and were strongly critical of what they saw as the authoritarianism of Francesc Macià and of the "scandalous lack of discipline" of leaders and militants of the Estat Català faction. These criticisms laid the groundwork for their expulsion from the ERC. Expulsion from the ERC The L'Opinió group's discontent was heightened when Macià rejected the proposal by the government of the Second Spanish Republic to name Tarradellas, one of their own, as the civil governor of Barcelona, instead preferring the nomination of Claudi Ametlla. Their rift with the Estat Català faction grew ever stronger, with Lluhí claiming on the eve of their expulsion that he feared that group would turn the ERC into an "antidemocratic, fascist and separatist" party. As a result of the L'Opinió group's constant critiques of the actions of the Estat Català faction and of the lack of internal democracy within the ERC, their condemnation of corruption cases, and their criticism of the ERC's performance of its duties both within the Catalan Parliament and the City Council of Barcelona, the group was ultimately expelled from the ERC in September 1933. Party formation and first election campaign Following their expulsion from the ERC, the group held a constitutional assembly in Barcelona on 15th October 1933, inaugurating the PNRE as a new party. They were also able to form their own youth wing, led by Josep Maria Lladó i Figueras, Alfred Cabanes, Rafael Font Farran, and Ferran Ludevid i Celestí Morlans, breaking with Joventuts d'Esquerra Republicana-Estat Català (JEREC), the ERC's own youth wing, which was led at the time by Josep Dencàs and Miquel Badia, militants loyal to the Estat Català faction.
The calling of the 1933 Spanish general election for November of that year took the newly-formed PNRE by surprise, and they had little time to mount an effective electoral campaign. While the PNRE was able to field a prominent and experienced leadership team for the new party, it could not produce a charismatic leader who could compete with the ERC's Lluís Companys. It also proved impossible to unite the Catalan left in one electoral coalition, and the PNRE found itself turning to more centrist parties. As such, the PNRE contested the 1933 election in coalition with the Partit Catalanista Republicà (PCR) and Acció Catalana Republicana (ACR) and failed to win a single seat, but it did manage to weaken the performance of the ERC's candidates at the election, which was itself an electoral goal for the nascent party. The death of Francesc Macià and the formation of Lluís Companys' new government of the Generalitat laid the ground for a rapprochement with the ERC, and thus, in January 1934, Joan Lluhí entered the ERC-led Catalan government as Minister for Justice and Law. Government, events of 6 October and return to the ERC The PNRE's participation in the short-lived Companys government was not without controversy, and the party was marked by internal division in the months leading up to the events of 6th October 1934. Despite the PNRE's anti-separatist position, Lluhí reluctantly led the party to support the 6th October declaration of an independent Catalan State, particularly fearing that Catalan autonomy within Spain might be under threat following the 1933 election of a right-wing Spanish government. Lluhí also claimed that he hoped the new Catalan State might be merely the beginning of a Spanish Republic, and that its declaration would help to promote a revolutionary anti-monarchist sentiment throughout Spain. Following the events of 6th October, there was strong criticism and indeed repression of the Companys-led government in which the PNRE had, albeit reluctantly, participated. As a result, publication of the PNRE-linked newspaper L'Opinió was suspended on account of its critical tone towards the right-wing government of the Spanish republic, and Tarradellas was detained together with several other leading figures of the party. Joan Lluhí himself was sentenced to life imprisonment and confined to the prison of El Puerto de Santa María, where he was held together with the ERC's Lluís Companys and the PSUC's Joan Comorera, although in May 1935 Lluhí was moved to house arrest at his home in Barcelona, before eventually being pardoned by the new left-wing Spanish republican government of 1936. At the 1936 Spanish general election, the PNRE formed part of the Front d'Esquerres electoral coalition and, despite the recent scandal, achieved greater electoral success than at the previous election, winning two seats in the Cortes Generales. The party was formally reintegrated into the ERC in May 1936, although a faithful minority of the party's membership continued to use the name of the PNRE through to the end of the Spanish Civil War.
wikipedia_english
2025-12-02T23:58:17Z
https://en.wikipedia.org/wiki/Nationalist_Republican_Left_Party
{"title": "Nationalist Republican Left Party", "entry_created_at": "2025-12-02T23:58:17Z", "crawled_at": "2025-12-15T12:52:25Z"}
Evens Julmis Bahamian football referee Evens Julmis (born 1999 or 2000) is a Bahamian football referee who is the only FIFA-listed referee from The Bahamas as of 2025. Career Julmis began his refereeing career in 2017, while still attending CR Walker High School in Nassau. He played for a local school team in competitions organized by the Government Secondary Schools Sports Association, and was also a member of the Nassau-based soccer team Dynamos FC. In May 2022, at the age of 22, Julmis received his first CONCACAF appointment when he was selected as a fourth official for the 2022–23 CONCACAF Nations League B game between Haiti and Bermuda, played on 4 June 2022 at the Bermuda National Stadium in Devonshire Parish. FIFA included Julmis in its 2025 list of referees, making him the only Bahamian central referee at that level. In January 2025, he was selected for a training program and seminar in Costa Rica, as part of CONCACAF requirements for further tournaments, including the CONCACAF Gold Cup and the CONCACAF Champions Cup. After the seminar, Julmis was appointed to referee matches in CONCACAF U-17 World Cup qualification, including a match between Saint Kitts and Nevis U-17 and the United States U-17 in Costa Rica. Prior to becoming a professional referee, Julmis was part of the Bahamian national team as well as the national beach soccer squad.
wikipedia_english
2025-12-02T23:53:17Z
https://en.wikipedia.org/wiki/Evens_Julmis
{"title": "Evens Julmis", "entry_created_at": "2025-12-02T23:53:17Z", "crawled_at": "2025-12-15T12:52:25Z"}