Possible bug in the OBTD calculations
So there is a possible bug that I am trying to wrap my head around. In the OBTD log files from KSHELL, the OBTDs are listed in blocks where each block represents one initial and one final state, i.e. one specific transition. For example:
  w.f.  J1=  0/2( 3)  J2=  2/2( 1)
  B(L;=>), B(L ;<=)    0.00622    0.00207
  <||L||>         3   1    -0.13590    0.11892
    i   j      OBTD    <i||L||j>   OBTD*<||>
    1   1   0.00244     5.79655     0.01416
    1   2  -0.00012     1.54919    -0.00018
    1   9   0.00000     0.00000     0.00000
    1  10   0.00000     0.00000     0.00000
  ...
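The header line encodes twice the angular momentum and the index of the state within its spin group. A minimal sketch of pulling those four numbers out of the header; the regex and function name are my own assumptions, not anything from KSHELL:

```python
import re

# Matches a KSHELL OBTD block header like "w.f.  J1=  0/2( 3)  J2=  2/2( 1)".
# The pattern below is an illustrative assumption about the layout.
HEADER_RE = re.compile(r"J1=\s*(\d+)/2\(\s*(\d+)\)\s*J2=\s*(\d+)/2\(\s*(\d+)\)")

def parse_header(line):
    """Return (2*J_initial, initial_index, 2*J_final, final_index)."""
    m = HEADER_RE.search(line)
    if m is None:
        raise ValueError(f"not an OBTD header: {line!r}")
    return tuple(int(g) for g in m.groups())

print(parse_header("w.f.  J1=  0/2( 3)  J2=  2/2( 1)"))
# (0, 3, 2, 1): the 3rd 2J=0 state and the 1st 2J=2 state
```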
This block represents the OBTDs for the transition from the 3rd 0- state to the 1st 1- state (the parity is not listed in the block, but I know it is negative). Now, there is no guarantee that the 3rd 0- state lies higher in energy than the 1st 1- state. It might, but it is not guaranteed. If the initial state has a lower excitation energy than the final state, then the OBTD indices in the block ($i, j$) have to be swapped so that the transition is expressed with the final state as the lower-energy one.
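The swap itself can be sketched like this; the function name, the row layout, and the energy arguments are simplified assumptions, not KSHELL code:

```python
def order_block(Ex_initial, Ex_final, obtd_rows):
    """Swap the single-particle indices (i, j) of each OBTD row when the
    nominal initial state actually lies below the nominal final state,
    so the transition always decays toward the lower-energy state.
    obtd_rows is a list of (i, j, obtd) tuples; Ex_* are excitation
    energies in MeV. A sketch of the idea only."""
    if Ex_initial < Ex_final:
        return [(j, i, obtd) for (i, j, obtd) in obtd_rows]
    return obtd_rows

print(order_block(1.2, 3.4, [(1, 2, -0.00012)]))
# [(2, 1, -0.00012)]
```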
So I wrote a short script which runs through all of the OBTD blocks and looks up the excitation energies of the initial and final state of each block. The result: 628337 of 1280000 blocks (49.09 %) had an initial-state excitation energy lower than that of the final state. That's good and as expected: some of the final states lie lower in energy and some lie higher. The puzzle, however, is that when I select only the transitions which fall within the $E_\gamma = [0, 3]$ MeV region of the $M1$ GSF, then only 3 of 230397 transitions (0.00 %) have an initial energy lower than the final energy. I don't understand where in the process of calculating the GSF this kind of energy sorting happens.
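The check the script performs can be sketched as follows; the state identifiers, the block representation, and the function name are simplified stand-ins for the real log data:

```python
def count_lower_initial(blocks, Ex):
    """Count blocks whose initial state lies below its final state.
    blocks: iterable of (initial_id, final_id) pairs, one per OBTD block;
    Ex: dict mapping a state id to its excitation energy in MeV.
    Returns (n_lower, n_total). A sketch, not the actual script."""
    blocks = list(blocks)
    n_lower = sum(1 for ini, fin in blocks if Ex[ini] < Ex[fin])
    return n_lower, len(blocks)

# Toy example with made-up energies:
Ex = {"0-_3": 2.1, "1-_1": 3.0, "1-_2": 1.5}
n, total = count_lower_initial([("0-_3", "1-_1"), ("0-_3", "1-_2")], Ex)
print(f"{n} of {total} ({100 * n / total:.2f} %)")
# 1 of 2 (50.00 %)
```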
I think I have an idea…! TBC
Discussion