CMS Pixel Detector Miscellaneous
Layer 1 Replacement Elog, all entries
ID   Date   Author   Category   Subject
266   Fri May 15 17:15:34 2020   Andrey Starodumov   XRay HR tests   Analysis of HRT: M1630, M1632, M1636, M1638
Krunal provided the test results of four modules and Dinko analysed them.
M1630: Grade A, VCal calibration: Slope=43.5e-/Vcal, Offset=-145.4e-
M1632: Grade A, VCal calibration: Slope=45.4e-/Vcal, Offset=-290.8e-
M1636: Grade A, VCal calibration: Slope=45.9e-/Vcal, Offset=-255.2e-
M1638: Grade A, VCal calibration: Slope=43.3e-/Vcal, Offset=-183.1e-
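The VCal calibration is a linear conversion from VCal DAC units to charge in electrons. As an illustration using the M1630 numbers above (the helper name is made up, not part of any of our tools):

```python
def vcal_to_electrons(vcal, slope, offset):
    """Convert a VCal DAC value to charge in electrons: Q = slope*VCal + offset."""
    return slope * vcal + offset

# M1630 calibration quoted above: Slope = 43.5 e-/VCal, Offset = -145.4 e-
# e.g. a setting of VCal 50 corresponds to 43.5*50 - 145.4 = 2029.6 electrons
q = vcal_to_electrons(50, 43.5, -145.4)
```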

A few comments:
1) Rates. One should distinguish the X-ray rate from the hit rate seen/measured by a ROC (as Maren correctly mentioned).
The X-ray rate vs. tube current has been calibrated, and the histogram titles roughly reflect the X-ray rate. One can notice that the
number of hits per pixel, again roughly, scales with the X-ray rate (histogram title).
2) M1638 ROC7 and ROC10 show new pixel failures that were not observed in the cold box tests. In this case it is not critical, since only
65/25 pixels are unresponsive already at the lowest rate. But we may have cases with more unresponsive pixels.
3) M1638 ROC0: the number of defects in the cold box test is 3, but with X-rays the summary table shows only 1. At the same time, if one looks at the ROC0
summary page, in all Efficiency Maps and even in Hit Maps one can see 3 unresponsive pixels. We should check in MoReWeb why this is so.
4) It is not critical, but it would be good to understand why the "col uniformity ratio" histogram is not filled properly. This check was introduced
to identify cases where a column's performance degrades with hit rate.
5) PROC V4 is not as noisy as PROC V2, but nevertheless I think we should introduce a proper cut on the pixel noise value and activate grading on
the total number of noisy pixels in a ROC (in MoReWeb). For a given threshold and acceptable noise rate one can calculate the noise level
above which pixels should be counted as defective.
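A minimal sketch of the calculation mentioned in point 5, assuming Gaussian pixel noise (the function name, numbers, and bisection approach are illustrative, not the MoReWeb implementation): for a threshold THR in electrons and an acceptable per-pixel noise-hit probability p, any pixel whose noise exceeds the sigma at which P(fluctuation > THR) = p would be counted as defective.

```python
import math

def max_allowed_noise(threshold_e, p_acceptable):
    """Largest Gaussian noise sigma (in electrons) such that the probability of a
    noise fluctuation crossing the threshold stays below p_acceptable.
    Solves 0.5*erfc(threshold/(sigma*sqrt(2))) = p_acceptable by bisection."""
    def tail_prob(sigma):
        # one-sided Gaussian tail probability above the threshold
        return 0.5 * math.erfc(threshold_e / (sigma * math.sqrt(2.0)))
    lo, hi = 1e-3, float(threshold_e)  # tail_prob grows monotonically with sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tail_prob(mid) < p_acceptable:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, a 2000 e- threshold with p = 1e-6 gives a maximum sigma of roughly 420 e-, since the 1e-6 one-sided tail sits at about 4.75 sigma.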
269   Fri May 22 16:06:43 2020   Andrey Starodumov   XRay HR tests   Analysis of HRT: M1623, M1632, M1634, M1636-M1639, M1640
Module  #defects ColdBox  #defects XRay  max #noisy pix  VCal calibration  Grade
M1623        130               151             91         45xVCal-67e-       B
M1630         80                 1            385         43xVcal-290e-      A
M1632         14                 9            124         45xVcal-145e-      A
M1634         33                81            339         43xVcal-347e-      B
M1636         10                14            109         46xVcal-255e-      A
M1637         71                45            175         45xVcal-182e-      A
M1638         12                95            269         43xVcal-183e-      B
M1639         21                96            482         43xVcal-441e-      B
M1640         30                22            115         44xVcal-134e-      A
272   Mon May 25 16:58:23 2020   Andrey Starodumov   XRay HR tests   a few comments
This is just to record the information:
1. The measured hit rates at which the Efficiency and X-ray Hit Maps are taken are 40-50% of the 50-400 MHz/cm2 quoted in the titles of the corresponding plots.
2. In 2016 the maximum rate was 400 MHz/cm2, but at that rate (or rather, with the corresponding settings of the HV and current of the X-ray tube), in double columns of certain modules (likely depending on the module position with respect to the X-ray beam spot) the measured rate was smaller than 300 MHz/cm2, which is the target rate. Sometimes the extrapolation of the efficiency curve to 300 MHz/cm2 is too large. I think this is not correct. Unfortunately, higher rates cause too many readout errors, which prevent a proper measurement of the hit efficiency. Maybe a new DTB with more than 1.2 A maximum digital current will help.
3.
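The concern in point 2 can be illustrated: if the highest properly measured rate is below 300 MHz/cm2, the efficiency quoted at 300 MHz/cm2 comes from an extrapolation rather than a measurement. A toy sketch with made-up numbers (not the actual MoReWeb fit):

```python
def extrapolate_efficiency(rates, effs, target_rate):
    """Linearly extrapolate measured efficiencies to a target rate using the
    last two points; crude on purpose, to show how a value quoted beyond the
    highest measured rate depends on the assumed shape of the curve."""
    (r1, e1), (r2, e2) = sorted(zip(rates, effs))[-2:]
    slope = (e2 - e1) / (r2 - r1)
    return e2 + slope * (target_rate - r2)

# If the measured rate tops out at ~250 MHz/cm2, the efficiency at the
# 300 MHz/cm2 target is extrapolated (here to about 98.1%):
eff_300 = extrapolate_efficiency([150, 200, 250], [99.2, 98.9, 98.5], 300)
```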
279   Fri May 29 15:04:04 2020   Andrey Starodumov   XRay HR tests   Analysis of HRT: M1555-M1561 and M1564
Module  #defects ColdBox  #defects XRay  max #noisy pix  VCal calibration  Grade
M1555         24                44            375         44xVcal-388e-      A
M1556         54                74            507         43xVcal-370e-      B
M1557        109                63            145         43xVcal-216e-      A
M1558        127               113            103         44xVcal-145e-      B
M1559         69                59             92         46xVcal-119e-      A
M1560         35                54             66         46xVcal-123e-      A
M1561         33                44            129         45xVcal-244e-      A
M1564         26                36             93         47xVcal-29e-       A
299   Fri Aug 28 12:02:48 2020   Andrey Starodumov   XRay HR tests   M1599
ROC5 has eff=93.65% and should be graded C. Somehow efficiency was not taken into account for HR test grading???
3   Tue Aug 6 16:01:17 2019   Matej Roguljic   Software   Wrong dac settings - elComandante
If it seems that elComandante is taking wrong dac settings for tests like Reception test or full qualification, one should remember that it does NOT read values from module specific folders like "M1523", but rather from tbm-specific folders like "tbm10d". The folders from which the dacs are taken are listed in "elComandante.config", the lines which look like "tbm10d:tbm10d".
14   Thu Sep 26 15:51:30 2019   Dinko Ferencek   Software   pXar code updated
pXar code in /home/l_tester/L1_SW/pxar/ on the lab PC was updated yesterday from https://github.com/psi46/pxar/tree/15b956255afb6590931763fd07ed454fb9837fc0 to the latest version https://github.com/psi46/pxar/tree/e17df08c7bbeb8472e8f56ccd2b9d69a113ccdc3 which among other things contains updated DAC settings for ROCs.

All the configs will have to be regenerated before the start of the module qualification.
17   Thu Sep 26 22:09:14 2019   Dinko Ferencek   Software   DAC configuration update
In accordance with the agreement made in an email thread initiated by Danek, the following changes to DAC settings for ROCs

vsh: 30 -> 8
vclorbias: 30 -> 120
ctrlreg: 0 -> 9

were propagated into existing configuration files in

/home/l_tester/L1_SW/pxar/data/tbm10c/
/home/l_tester/L1_SW/pxar/data/tbm10d/
/home/l_tester/L1_SW/pxar/data/M1522/

on the lab PC at PSI.

It should be noted that ctrlreg was changed to the recommended value for PROC V3. For PROC V4, ctrlreg needs to be set to 17, so it is important to keep this in mind when using configuration files and modules built with different versions of PROC.
21   Wed Oct 2 12:50:52 2019   Dinko Ferencek   Software   Problem with elComandante Keithley client during full qualification
Full qualification was attempted for M1532 on Oct. 1. After the second Fulltest at -20 C finished, the Keithley client crashed with the following error:
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 147, in check_busy
    self.check_busy(data[1:])
  File "/home/l_tester/L1_SW/elComandante/keithleyClient/keithleyInterface.py", line 139, in check_busy
    if data[0] == '\x11': # XON
RuntimeError: maximum recursion depth exceeded in cmp

Because of this, the IV measurement never started (the main elComandante process was simply hanging, waiting for the Keithley client to report it was ready) and the main elComandante process had to be interrupted.
22   Sat Oct 5 22:59:58 2019   Dinko Ferencek   Software   Problem with elComandante Keithley client during full qualification
A new attempt to run the full qualification for M1532 was made on Friday, Oct. 4, but the Keithley client crashed with the same error message. This time we managed to see from the log files that the crash happened after the first IV measurement at -20 C was complete and the Keithley was reset to -150 V. Unfortunately, the log files were not saved for the test on Tuesday, so we couldn't confirm that the crash occurred at the same point.
23   Mon Oct 28 17:01:05 2019   Matej Roguljic   Software   Investigating the bug with the Keithley client
We took module 1529 and tried recreating the issue observed in the beginning of October. To do this in a reasonable amount of time, a "shorttest" procedure was defined which consists only of pretest and pixelalive. Three runs were taken

Run number 1: Shorttest@10,IV@10
Run number 2: Shorttest@10, IV@10,Cycle(n=1, between 10 and -10), Shorttest@10, IV@10
Run number 3: Shorttest@10, IV@10, Cycle(n=5, between 10 and -10), Shorttest@10, IV@10

In runs 1 and 2, IV was done from -5 to -155 in steps of 10
In run 3, IV was done from -5 to -405 in steps of 10

No issues were observed during the tests themselves.

Running MoReWeb shows the Temperature, Humidity and Sum of Currents plots, while individual tests show only the Pixel Alive map. The IV plot is missing in the MoReWeb output; however, it is present in the ivCurve.log file. We tried investigating why the IV is not shown but couldn't get to the bottom of it.

Tomorrow we'll use M1529 and M1530 in a FullTest to check if the problem would appear.
26   Tue Oct 29 16:38:32 2019   Matej Roguljic   Software   Investigating the bug with the Keithley client

Matej Roguljic wrote:
We took module 1529 and tried recreating the issue observed in the beginning of October. To do this in a reasonable amount of time, a "shorttest" procedure was defined which consists only of pretest and pixelalive. Three runs were taken

Run number 1: Shorttest@10,IV@10
Run number 2: Shorttest@10, IV@10,Cycle(n=1, between 10 and -10), Shorttest@10, IV@10
Run number 3: Shorttest@10, IV@10, Cycle(n=5, between 10 and -10), Shorttest@10, IV@10

In runs 1 and 2, IV was done from -5 to -155 in steps of 10
In run 3, IV was done from -5 to -405 in steps of 10

No issues were observed during the tests themselves.

Running MoReWeb shows the Temperature, Humidity and Sum of Currents plots while individual tests show only Pixel Alive map. IV plot is missing in the MoReWeb output, however, it is present in the ivCurve.log file. Tried investigating why IV is not shown, but couldn't get to the bottom of it.

Tomorrow we'll use M1529 and M1530 in a FullTest to check if the problem would appear.



M1530 was put in the coldbox and qualification was started, but the pXar output was full of deserializer errors. The continuous stream of errors made the log file very large and slowed down the execution of the qualification, so it was aborted. Later it was noticed that there are green depositions on the module, like those seen on the HDIs irradiated at the 60Co facility in Zagreb, which is probably why there were so many errors.

M1521 was used instead of M1530 along with M1529. Full qualification was launched around 11:00.
30   Wed Oct 30 16:54:58 2019   Matej Roguljic   Software   Investigating the bug with the Keithley client

Matej Roguljic wrote:

Matej Roguljic wrote:
We took module 1529 and tried recreating the issue observed in the beginning of October. To do this in a reasonable amount of time, a "shorttest" procedure was defined which consists only of pretest and pixelalive. Three runs were taken

Run number 1: Shorttest@10,IV@10
Run number 2: Shorttest@10, IV@10,Cycle(n=1, between 10 and -10), Shorttest@10, IV@10
Run number 3: Shorttest@10, IV@10, Cycle(n=5, between 10 and -10), Shorttest@10, IV@10

In runs 1 and 2, IV was done from -5 to -155 in steps of 10
In run 3, IV was done from -5 to -405 in steps of 10

No issues were observed during the tests themselves.

Running MoReWeb shows the Temperature, Humidity and Sum of Currents plots while individual tests show only Pixel Alive map. IV plot is missing in the MoReWeb output, however, it is present in the ivCurve.log file. Tried investigating why IV is not shown, but couldn't get to the bottom of it.

Tomorrow we'll use M1529 and M1530 in a FullTest to check if the problem would appear.



M1530 was put in the coldbox and qualification was started, but the pXar output was full of deserializer errors. The continuous stream of errors made the log file very large and slowed down the execution of the qualification, so it was aborted. Later it was noticed that there are green depositions on the module, like those seen on the HDIs irradiated at the 60Co facility in Zagreb, which is probably why there were so many errors.

M1521 was used instead of M1530 along with M1529. Full qualification was launched around 11:00. It ended around 6:30 without any issues.


Running full qualification on 30.10. in the morning on modules 1510, 1529 and 1521. A fourth module was not included since there was an issue with the 4th DTB, which will be investigated later. Like the previous day, the qualification ended around 17:00 without issues. In conclusion, the issue is not present in the qualification setup at the moment.
36   Wed Nov 27 18:25:08 2019   Dinko Ferencek   Software   pXar code updated
pXar code in /home/l_tester/L1_SW/pxar/ on the lab PC was updated on Monday, Nov. 25 from https://github.com/psi46/pxar/tree/e17df08c7bbeb8472e8f56ccd2b9d69a113ccdc3 to https://github.com/psi46/pxar/tree/9c3b81791738e1b8ec7dd9f0d1b68f8800f8416c which pulled in the latest updates to the pulse height optimization test. Today a few remaining updates were pulled in by going to the current HEAD of the master branch https://github.com/psi46/pxar/tree/5d358c5ebbf095a7d118cdde9e1e509c41ccc615.

The sequence of commands to update the code was the following:
cd /home/l_tester/L1_SW/pxar/
git pull origin master
cd build
cmake ..
make -j6 install
37   Wed Nov 27 21:50:02 2019   Dinko Ferencek   Software   Reorganized pXar configuration files for pixel modules
Due to different DAC settings needed for PROC V3 and V4, a single folder containing module configuration files cannot cover both ROC types. To address this problem, the existing folder 'tbm10d' in /home/l_tester/L1_SW/pxar/data/ containing configuration for PROC V4 was renamed to 'tbm10d_procv4' and a new folder for PROC V3, 'tbm10d_procv3', was created. For backward compatibility, a symbolic link 'tbm10d' was created that points to 'tbm10d_procv4'.

The difference in the configurations is in the CtrlReg DAC value which is 9 for V3 and 17 for V4.

The following lines were also added in the [defaultParameters] section in /home/l_tester/L1_SW/elComandante/config/elComandante.conf

tbm10d_procv3: tbm10d_procv3
tbm10d_procv4: tbm10d_procv4
39   Thu Nov 28 18:23:31 2019   Dinko Ferencek   Software   Reorganized pXar configuration files for pixel modules

Dinko Ferencek wrote:
Due to different DAC settings needed for PROC V3 and V4, a single folder containing module configuration files cannot cover both ROC types. To address this problem, the existing folder 'tbm10d' in /home/l_tester/L1_SW/pxar/data/ containing configuration for PROC V4 was renamed to 'tbm10d_procv4' and a new folder for PROC V3, 'tbm10d_procv3', was created. For backward compatibility, a symbolic link 'tbm10d' was created that points to 'tbm10d_procv4'.

The difference in the configurations is in the CtrlReg DAC value which is 9 for V3 and 17 for V4.

The following lines were also added in the [defaultParameters] section in /home/l_tester/L1_SW/elComandante/config/elComandante.conf

tbm10d_procv3: tbm10d_procv3
tbm10d_procv4: tbm10d_procv4


Since the new PH optimization code was added under the already existing PH test, the old configuration files would have to be updated to have the correct parameter values set for the expanded PH test. For this purpose, the old configuration folders were renamed

tbm10d_procv3 --> tbm10d_procv3_old
tbm10d_procv4 --> tbm10d_procv4_old

and new configuration files were re-generated from scratch

./mkConfig -d ../data/tbm10d_procv3 -t TBM10D -r proc600v3 -m
./mkConfig -d ../data/tbm10d_procv4 -t TBM10D -r proc600v4 -m

In addition, in both sets of configurations files, the configuration for the BB2 tab in the pXar GUI was moved from moreTestParameters.dat to testParameters.dat to have the BB2 tab available by default when starting pXar using these configuration files.
40   Thu Nov 28 18:34:00 2019   Dinko Ferencek   Software   pXar code updated

Dinko Ferencek wrote:
pXar code in /home/l_tester/L1_SW/pxar/ on the lab PC was updated on Monday, Nov. 25 from https://github.com/psi46/pxar/tree/e17df08c7bbeb8472e8f56ccd2b9d69a113ccdc3 to https://github.com/psi46/pxar/tree/9c3b81791738e1b8ec7dd9f0d1b68f8800f8416c which pulled in the latest updates to the pulse height optimization test. Today a few remaining updates were pulled in by going to the current HEAD of the master branch https://github.com/psi46/pxar/tree/5d358c5ebbf095a7d118cdde9e1e509c41ccc615.

The sequence of commands to update the code was the following:
cd /home/l_tester/L1_SW/pxar/
git pull origin master
cd build
cmake ..
make -j6 install


pXar code updated to the current HEAD of the master branch https://github.com/psi46/pxar/tree/9eb0f3844e9c7f98d7701629c5af339632c5d84a to pick up the latest update that allows running of the fullTest() method of each of the tests from the command line and consequently also from elComandante.
41   Thu Nov 28 18:58:30 2019   Dinko Ferencek   Software   Updated Fulltest configuration
The Fulltest definition used on the lab PC and stored in /home/l_tester/L1_SW/elComandante/config/tests/Fulltest had the following content

pretest
FullTest
bb2
exit

where FullTest is defined in https://github.com/psi46/pxar/blob/9eb0f3844e9c7f98d7701629c5af339632c5d84a/tests/PixTestFullTest.cc#L82-L121. For added flexibility, we would like to have individual tests specified in the definition file. However, by default this implies calling the doTest() method for each of the tests while FullTest actually calls the fullTest() method. For Scurves and GainPedestal the two methods are distinct. After the latest updates from Urs to the pXar code, we are now able to change the definition to

pretest
readback
alive
bb
bb2
scurves:fulltest
trim
ph
gainpedestal:fulltest
exit

Note that readback was placed first because in the past it was observed that pXar had a tendency to crash when this test was run last.
45   Mon Jan 20 13:47:33 2020   Dinko Ferencek   Software   pXar code updated

Dinko Ferencek wrote:

Dinko Ferencek wrote:
pXar code in /home/l_tester/L1_SW/pxar/ on the lab PC was updated on Monday, Nov. 25 from https://github.com/psi46/pxar/tree/e17df08c7bbeb8472e8f56ccd2b9d69a113ccdc3 to https://github.com/psi46/pxar/tree/9c3b81791738e1b8ec7dd9f0d1b68f8800f8416c which pulled in the latest updates to the pulse height optimization test. Today a few remaining updates were pulled in by going to the current HEAD of the master branch https://github.com/psi46/pxar/tree/5d358c5ebbf095a7d118cdde9e1e509c41ccc615.

The sequence of commands to update the code was the following:
cd /home/l_tester/L1_SW/pxar/
git pull origin master
cd build
cmake ..
make -j6 install


pXar code updated to the current HEAD of the master branch https://github.com/psi46/pxar/tree/9eb0f3844e9c7f98d7701629c5af339632c5d84a to pick up the latest update that allows running of the fullTest() method of each of the tests from the command line and consequently also from elComandante.


pXar code updated to the current HEAD of the master branch https://github.com/psi46/pxar/tree/f6e42c17c0bb3a44bdb3fa13d8f8afb6cae62a81 to pick up the latest PH optimization updates.
49   Wed Jan 22 11:39:48 2020   Dinko Ferencek   Software   BB2 configuration
When attempting to run the FullQualification this morning, pXar crashed while running the BB2 test. After some investigation, we realized the source of the problem was the BB2 configuration missing from testParameters.dat. The configuration is available in moreTestParameters.dat but by default the mkConfig script does not put it in testParameters.dat. Since we run BB2 in the FullQualification, the configuration needs to be copied by hand every time the configuration files are regenerated.
50   Wed Jan 22 11:50:20 2020   Dinko Ferencek   Software   Module configuration files for interactive tests
There are two module configuration folders set up for elComandante

/home/l_tester/L1_SW/pxar/data/tbm10d_procv3/
/home/l_tester/L1_SW/pxar/data/tbm10d_procv4/

These folders can also be used for interactive tests with pXar. However, in that case they get filled with pxar.log and pxar.root files. For some reason, when elComandante copies the module configuration files it also copies all pxar.log files. To prevent unnecessary duplication of junk files, two new module configuration folders were set up for interactive tests with pXar

/home/l_tester/L1_SW/pxar/data/tbm10d_procv3_pxar/
/home/l_tester/L1_SW/pxar/data/tbm10d_procv4_pxar/

The configurations are identical to those used by elComandante. However, one needs to remember to keep the test folders in sync with the elComandante folders once they get updated. This can be done as follows

rsync -avPSh /home/l_tester/L1_SW/pxar/data/tbm10d_procv3/ /home/l_tester/L1_SW/pxar/data/tbm10d_procv3_pxar/
rsync -avPSh /home/l_tester/L1_SW/pxar/data/tbm10d_procv4/ /home/l_tester/L1_SW/pxar/data/tbm10d_procv4_pxar/

To check if there are any extraneous pXar files present in the configuration folders, run

find /home/l_tester/L1_SW/pxar/data/tbm10d_procv?/ -type f \( -name 'pxar*.log' -o -name 'pxar*.root' \)

If any, they can be deleted by running

find /home/l_tester/L1_SW/pxar/data/tbm10d_procv?/ -type f \( -name 'pxar*.log' -o -name 'pxar*.root' \) -delete
52   Thu Jan 23 15:15:45 2020   Dinko Ferencek   Software   Trimming Vcal value changed from 35 to 40
Trimming Vcal value changed from 35 to 40 in the testParameters.dat files stored in

/home/l_tester/L1_SW/pxar/data/tbm10d_procv3/
/home/l_tester/L1_SW/pxar/data/tbm10d_procv3_test/
/home/l_tester/L1_SW/pxar/data/tbm10d_procv4/
/home/l_tester/L1_SW/pxar/data/tbm10d_procv4_test/

This is the value we will use for the FullQualification.
53   Fri Jan 24 18:12:00 2020   Dinko Ferencek   Software   MoReWeb code updates
MoReWeb code updated to include the ROC wafer info in the XrayCalibration and XRayHRQualification pages:
https://gitlab.cern.ch/CMS-IRB/MoReWeb/commit/4de8ea39050600367ae0b8e959fcc00f29be45d8

In addition, two BB2 plots with wrong axis labels were fixed:
https://gitlab.cern.ch/CMS-IRB/MoReWeb/commit/3cfa18fc3aab1a29859ba9f0f81aa0ed4c59d5c8
https://gitlab.cern.ch/CMS-IRB/MoReWeb/commit/bb6627e6ab3f4d08eb54970a7e3d3b751983e15a
108   Mon Mar 16 10:05:23 2020   Matej Roguljic   Software   PhQualification change
Urs made a change in pXar, in the PhOptimization algorithm. One of the changes is in the testParameters.dat where vcalhigh is set to 100 instead of 255. This was implemented on the PC used to run full qualification. A separate procedure for elComandante was created, "PhQualification.ini", which runs pretest, pixelalive, trimming, ph and gainpedestal. This procedure will need to be run on all the modules qualified before this change was made and later merged with previous full qualification results.
112   Wed Mar 18 15:16:47 2020   Andrey Starodumov   Software   Change in trimming algorithm
Urs modified the algorithm yesterday and tested it. From today we are using it. The main change is that the threshold for trimming is no longer fixed to VthrComp=79 but calculated as the bottom tornado value + 20. CalDel is also optimised. This allows us to avoid failures in the trim bit tests due to too high a threshold: e.g. if the bottom of the tornado is close to 70, a fraction of pixels in a chip could have a threshold above 70 and hence fail the test.
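A minimal sketch of the new rule (names and the sample value are illustrative; the actual implementation is in pXar):

```python
def trim_vthrcomp(tornado_bottom, margin=20):
    """New scheme: the trimming VthrComp follows the measured bottom of the
    tornado plot plus a safety margin, instead of the old fixed VthrComp = 79."""
    return tornado_bottom + margin

OLD_FIXED_VTHRCOMP = 79

# e.g. a tornado bottom at 45 gives VthrComp = 65 rather than the fixed 79,
# keeping the working point a constant margin above the tornado bottom
new_setting = trim_vthrcomp(45)
```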
198   Wed Apr 8 15:02:37 2020   Andrey Starodumov   Software   Change of CtrlReg for RT
So far, the tbm10d_procv3 parameters were used for RT in the blue box.
This is wrong, since CtrlReg=9 is better for -20 C, while for +10 C or higher
CtrlReg=17 is better.
From now on, tbm10d_procv4 will be used for RT.
243   Thu Apr 30 16:47:00 2020   Matej Roguljic   Software   MoReWeb empty DAC plots
Some of the DAC parameters plots were empty in the total production overview page. All the empty plots had the number "35" in them (e.g. DAC distribution m20_1 vana 35). The problem was tracked down to the trimming configuration. Moreweb was expecting us to trim to Vcal 35, while we decided to trim to Vcal 50. I "grepped" where this was hardcoded and changed 35->50.

The places where I made changes:
  • Analyse/AbstractClasses/TestResultEnvironment.py
    'trimThr':35
  • Analyse/Configuration/GradingParameters.cfg.default
    trimThr = 35
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    TrimThresholds = ['', '35']
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    self.SubPages.append({"InitialAttributes" : {"Anchor": "DACDSpread35", "Title": "DAC parameter spread per module - 35"}, "Key": "Section","Module": "Section"})


It's interesting to note that someone had already made the change in "Analyse/Configuration/GradingParameters.cfg"
244   Thu Apr 30 17:24:57 2020   Dinko Ferencek   Software   MoReWeb empty DAC plots

Matej Roguljic wrote:
Some of the DAC parameters plots were empty in the production overview page. All the empty plots had the number "35" in them (e.g. DAC distribution m20_1 vana 35). The problem was tracked down to the trimming configuration. Moreweb was expecting us to trim to Vcal 35, while we decided to trim to Vcal 50. I "grepped" where this was hardcoded and changed 35->50.

The places where I made changes:
  • Analyse/AbstractClasses/TestResultEnvironment.py
    'trimThr':35
  • Analyse/Configuration/GradingParameters.cfg.default
    trimThr = 35
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    TrimThresholds = ['', '35']
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    self.SubPages.append({"InitialAttributes" : {"Anchor": "DACDSpread35", "Title": "DAC parameter spread per module - 35"}, "Key": "Section","Module": "Section"})


It's interesting to note that someone had already made the change in "Analyse/Configuration/GradingParameters.cfg"


As far as I can remember, the changes in Analyse/AbstractClasses/TestResultEnvironment.py, Analyse/Configuration/GradingParameters.cfg.default and Analyse/Configuration/GradingParameters.cfg were there from before, probably made by Andrey. It is possible that you looked at the files when I was preparing logically separate commits affecting the same files which required temporarily undoing and later reapplying some of the changes to be able to separate the commits. The commits are now on GitLab https://gitlab.cern.ch/CMS-IRB/MoReWeb/-/commits/L1replacement, specifically:

435ffb98: grading parameters related to the trimming threshold updated from 35 to 50 VCal units
1987ff18: updates in the production overview page related to a change in the trimming threshold
245   Thu Apr 30 17:33:04 2020   Andrey Starodumov   Software   MoReWeb empty DAC plots

Matej Roguljic wrote:
Some of the DAC parameters plots were empty in the total production overview page. All the empty plots had the number "35" in them (e.g. DAC distribution m20_1 vana 35). The problem was tracked down to the trimming configuration. Moreweb was expecting us to trim to Vcal 35, while we decided to trim to Vcal 50. I "grepped" where this was hardcoded and changed 35->50.

The places where I made changes:
  • Analyse/AbstractClasses/TestResultEnvironment.py
    'trimThr':35
  • Analyse/Configuration/GradingParameters.cfg.default
    trimThr = 35
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    TrimThresholds = ['', '35']
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    self.SubPages.append({"InitialAttributes" : {"Anchor": "DACDSpread35", "Title": "DAC parameter spread per module - 35"}, "Key": "Section","Module": "Section"})


It's interesting to note that someone had already made the change in "Analyse/Configuration/GradingParameters.cfg"

I have changed:
1) StandardVcal2ElectronConversionFactor from 50 to 44, since the VCal calibration of PROC600 V4 is 44 e-/VCal
2) TrimBitDifference from 2 to -2, so as not to take into account failed trim bit tests that are an artifact of the trim bit test SW.
253   Thu May 7 00:10:15 2020   Dinko Ferencek   Software   MoReWeb empty DAC plots

Andrey Starodumov wrote:

Matej Roguljic wrote:
Some of the DAC parameters plots were empty in the total production overview page. All the empty plots had the number "35" in them (e.g. DAC distribution m20_1 vana 35). The problem was tracked down to the trimming configuration. Moreweb was expecting us to trim to Vcal 35, while we decided to trim to Vcal 50. I "grepped" where this was hardcoded and changed 35->50.

The places where I made changes:
  • Analyse/AbstractClasses/TestResultEnvironment.py
    'trimThr':35
  • Analyse/Configuration/GradingParameters.cfg.default
    trimThr = 35
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    TrimThresholds = ['', '35']
  • Analyse/OverviewClasses/CMSPixel/ProductionOverview/ProductionOverviewPage/ProductionOverviewPage.py
    self.SubPages.append({"InitialAttributes" : {"Anchor": "DACDSpread35", "Title": "DAC parameter spread per module - 35"}, "Key": "Section","Module": "Section"})


It's interesting to note that someone had already made the change in "Analyse/Configuration/GradingParameters.cfg"

I have changed:
1) StandardVcal2ElectronConversionFactor from 50 to 44, since the VCal calibration of PROC600 V4 is 44 e-/VCal
2) TrimBitDifference from 2 to -2, so as not to take into account failed trim bit tests that are an artifact of the trim bit test SW.


1) is committed in 74b1038e.
2) was made on Mar. 24 (for more details, see this elog) and is currently left in Analyse/Configuration/GradingParameters.cfg; it might be committed in the future depending on what is decided about the usage of the Trim Bit Test in module grading:
$ diff Analyse/Configuration/GradingParameters.cfg.default Analyse/Configuration/GradingParameters.cfg
45c45
< TrimBitDifference = 2.
---
> TrimBitDifference = -2.

There were a few other code updates related to a change of the warm test temperature from 17 to 10 C. Those were committed in 3a98fef8.
255   Thu May 7 00:56:50 2020   Dinko Ferencek   Software   MoReWeb updates related to the BB2 test
Andrey noticed that results of the BB2 test (here example for ROC 12 in M1675)



were not properly propagated to the ROC Summary



This was fixed in d9a1258a. However, looking at the summary for ROC 5 in the same module after the fix





it became apparent that dead pixels were double-counted under the dead bumps despite the fact they were supposed to be subtracted here. From the following debugging printout
Chip 5 Pixel Defects Grade A
        total:    5
        dead:     2
        inef:     0
        mask:     0
        addr:     0
        bump:     2
        trim:     1
        tbit:     0
        nois:     0
        gain:     0
        par1:     0
        total: set([(5, 4, 69), (5, 3, 68), (5, 37, 30), (5, 38, 31), (5, 4, 6)])
        dead:  set([(5, 37, 30), (5, 3, 68)])
        inef:  set([])
        mask:  set([])
        addr:  set([])
        bump:  set([(5, 4, 69), (5, 38, 31)])
        trim:  set([(5, 4, 6)])
        tbit:  set([])
        nois:  set([])
        gain:  set([])
        par1:  set([])

it became apparent that the column and row addresses for pixels with bump defects were shifted by one. This was fixed in 415eae00



However, there was still a problem with the pixel defects info in the production overview page, which was still using the BB test results



After switching to the BB2 tests results in ac9e8844, the pixel defects info looked better



but it was still not in complete sync with the info presented in the FullQualification Summary 1



This is due to double-counting of dead pixels which still needs to be fixed for the Production Overview.
256   Thu May 7 01:51:03 2020   Dinko Ferencek   Software   Strange bug/feature affecting Pixel Defects info in the Production Overview page
It was observed that sometimes the Pixel Defects info in the Production Overview page is missing



It turned out this was happening for those modules for which the MoReWeb analysis was run more than once. The solution is to remove all info from the database for the affected modules

python Controller.py -d

type in the module ID (e.g. M1668), then, when prompted, type 'all', press ENTER, and confirm that you want to delete all entries. After that, run

python Controller.py -m M1668

followed by

python Controller.py -p

The missing info should now be visible.
  260   Mon May 11 21:32:20 2020 Dinko FerencekSoftwareFixed double-counting of pixel defects in the production overview page
As a follow-up to this elog, double-counting of pixel defects in the production overview page was fixed in 3a2c6772.
  261   Mon May 11 21:37:43 2020 Dinko FerencekSoftwareFixed the BB defects plots in the production overview page
0407e04c: attempting to fix the BB defects plots in the production overview page (seems mostly related to the 17 to 10 C change)
f2d554c5: it appears that BB2 defect maps were not processed correctly
  262   Mon May 11 21:41:15 2020 Dinko FerencekSoftware17 to 10 C changes in the production overview page
0c513ab8: a few more updates on the main production overview page related to the 17 to 10 C change
  265   Wed May 13 23:16:37 2020 Dinko FerencekSoftwareFixed double-counting of pixel defects in the production overview page

Dinko Ferencek wrote:
As a follow-up to this elog, double-counting of pixel defects in the production overview page was fixed in 3a2c6772.


A few extra adjustments were made in:

38eaa5d6: also removed double-counting of pixel defects in module maps in the production overview page
51aadbd7: adjusted the trimmed threshold selection to the L1 replacement conditions
  270   Fri May 22 16:19:01 2020 Andrey StarodumovSoftwareChange in MoreWeb GradingParameters.cfg
Xray noise:
grade B moved from 300e to 400e
grade C moved from 400e to 500e
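The effect of the cut change can be sketched as a simple grading function (the function name and structure are illustrative, not MoReWeb's actual code; only the cut values come from this entry):

```python
# Hedged sketch of the X-ray noise grading cuts described above.
# Old cuts: B at 300e, C at 400e; new cuts: B at 400e, C at 500e.

def xray_noise_grade(noise_e, cut_b=400.0, cut_c=500.0):
    """Grade a ROC by its X-ray noise in electrons (illustrative)."""
    if noise_e < cut_b:
        return "A"
    if noise_e < cut_c:
        return "B"
    return "C"

# A ROC with 450e noise was grade C under the old cuts and B under the new ones:
print(xray_noise_grade(450.0, cut_b=300.0, cut_c=400.0))  # old cuts -> "C"
print(xray_noise_grade(450.0))                            # new cuts -> "B"
```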
  273   Mon May 25 17:24:05 2020 Andrey StarodumovSoftwareChange in MoreWeb ColumnUniformityPerColumn.py
In the file
~/L1_SW/MoReWeb/Analyse/TestResultClasses/CMSPixel/QualificationGroup/XRayHRQualification/Chips/Chip/ColumnUniformityPerColumn/ColumnUniformityPerColumn.py
the high and low rates at which double-column uniformity is checked are hard-coded. The Layer 2 rates were there; the following correction has now been added:
# Layer2 settings
# HitRateHigh = 150
# HitRateLow = 50
# Layer1 settings
HitRateHigh = 250
HitRateLow = 150
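Rather than hard-coding, the layer-dependent rates could be kept in a small lookup table; a sketch (the table and function names are illustrative, not MoReWeb's actual code; the rate values are the ones quoted above):

```python
# Layer-dependent rates for the double-column uniformity check
# (values from the snippet above).
RATE_SETTINGS = {
    "Layer1": {"HitRateHigh": 250, "HitRateLow": 150},
    "Layer2": {"HitRateHigh": 150, "HitRateLow": 50},
}

def uniformity_rates(layer):
    """Return (HitRateHigh, HitRateLow) for the given layer."""
    settings = RATE_SETTINGS[layer]
    return settings["HitRateHigh"], settings["HitRateLow"]
```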
  60   Mon Feb 3 15:53:32 2020 Andrey StarodumovReception test5 modules RT: 1545, 1547, 1548, 1549, 1550
Feb 03
- RT is done and OK for all modules. All 5 modules graded A.
  61   Mon Feb 3 17:09:36 2020 Andrey StarodumovReception test2 modules RT failed: 1544, 1546
Feb 03
- RT failed
-- 1544:
--- all ROCs are programmable
--- ROC14 no reliable Threshold found, ROC15 no threshold at all

-- 1546
--- all ROCs are programmable
--- no threshold found for all ROCs
  69   Thu Feb 6 17:19:12 2020 Dinko FerencekReception testRT for 6 modules: 1551, 1552, 1553, 1554, 1555, 1556
Today the reception test was run for 6 modules and looks OK for all of them. The modules were graded as follows:

1551: A
1552: A
1553: B (Electrical grade B)
1554: A
1555: B (IV grade B)
1556: B (IV grade B)

Protective caps were glued to these modules.
  71   Fri Feb 7 19:40:27 2020 Dinko FerencekReception testRT for 6 modules: 1557, 1558, 1559, 1560, 1561, 1562
Reception test is done and all 6 modules were graded A.

Protective caps were glued to 4 modules: 1557, 1558, 1559, 1560
  78   Tue Feb 11 10:25:32 2020 Dinko FerencekReception testRT for 6 modules: 1563, 1564, 1565, 1566, 1567, 1568
Feb. 10

1563: Grade C, ROCs 8 and 10 not programmable, no obvious problems with wire bonds
1564: Grade B, I > 2 uA (2.17 uA)

Feb. 11

1565: Grade A
1566: Grade A
1567: Grade C, no data from ROCs 8-11, looks like a problem with one TBM1 core
1568: Grade A
  82   Wed Feb 12 15:40:13 2020 Dinko FerencekReception testRT for 3 modules: 1569, 1570, 1571; 1572 bad
1569: Grade A
1570: Grade A
1571: Grade B, I > 2 uA (3.09 uA)

1572 from the same batch of 4 assembled modules was not programmable and was not run through the reception test.
  83   Thu Feb 13 21:44:28 2020 Dinko FerencekReception testRT for 6 modules: 1573, 1574, 1575, 1576, 1577, 1578
1573: Grade A
1574: Grade B, I(150)/I(100) > 2 (4.63)
1575: Grade C, problem with one TBM core
1576: Grade A
1577: Grade A
1578: Grade A
  86   Fri Feb 14 14:40:59 2020 Dinko FerencekReception testRT for 2 modules: 1579, 1580
1579: Grade A
1580: Grade A
  116   Thu Mar 19 17:17:27 2020 Andrey StarodumovReception testM1609-M1612
Reception test done and caps are glued to modules M1609-M1612.
M1611 graded B due to double column failure on ROC13. Others graded A.
  118   Fri Mar 20 14:46:31 2020 Andrey StarodumovReception testM1615 failed
M1615 is programmable but "no working phases found"
Visual inspection of wire bonds - OK.
To module doctor!
  119   Fri Mar 20 14:53:52 2020 Andrey StarodumovReception testM1616 failed
ROC10 of M1616 is not programmable, no readout from ROC10 and ROC11.
Visual inspection of wire bonds - OK
To module doctor!
  120   Fri Mar 20 17:05:58 2020 Andrey StarodumovReception testM1617 failed
ROC8 of M1617 is programmable but no readout from it.
Silvan noticed that a corner of one ROC of this module is broken,
this is exactly ROC8.
  123   Sun Mar 22 13:57:25 2020 Danek KotlinskiReception testM1617 failed

Andrey Starodumov wrote:
ROC8 of M1617 is programmable but no readout from it.
Silvan noticed that a corner of one ROC of this module is broken,
this is exactly ROC8.


Interesting that the phase finding works fine; the width of the valid region is 4, so quite
good. ROC8 indeed does not give any hits, but the token passes through it, so the overall
readout works fine. There are no readout errors.
The crack on the corner of this ROC is clearly visible.
I wonder how this module passed the tests in Helsinki?
  124   Sun Mar 22 14:02:20 2020 Danek KotlinskiReception testM1616 failed

Andrey Starodumov wrote:
ROC10 of M1616 is not programmable, no readout from ROC10 and ROC11.
Visual inspection of wire bonds - OK
To module doctor!


For me ROC10 is programmable.
It looks like there is no token pass through ROC11.
This affects the readout of ROCs 11 & 10.
Findphases fails because of the missing ROC10&11 readout.
  125   Sun Mar 22 14:05:20 2020 Danek KotlinskiReception testM1615 failed

Andrey Starodumov wrote:
M1615 is programmable but "no working phases found"
Visual inspection of wire bonds - OK.
To module doctor!


For me this module is working fine.
I could run phasefinding and obtained a perfect PixelAlive.
I left this module connected in the blue-box in order to run more advanced tests from home.
  127   Tue Mar 24 15:18:59 2020 Andrey StarodumovReception testM1621 failed Reception
On M1621 ROC8 is not programmable.
  128   Tue Mar 24 15:20:18 2020 Andrey StarodumovReception testM1623 failed Reception
M1623: all ROCs are programmable but no readout from ROC0-ROC3.
Visual inspection is OK.
To module doctor.
  130   Tue Mar 24 16:08:58 2020 Andrey StarodumovReception testM1625 failed Reception
M1625: all ROCs are programmable but no readout from ROC0-ROC3.
The same symptom as for M1623.
Visual inspection is OK.
To module doctor.
  133   Wed Mar 25 14:11:35 2020 Andrey StarodumovReception testRT of M1613 and M1614 on Mar 20
Reception test for these modules has been done on Mar 20.
Grade A for both modules.
  136   Wed Mar 25 14:44:37 2020 Andrey StarodumovReception testRT of M1627 and M1628
Both modules graded A.
  144   Thu Mar 26 17:41:52 2020 Andrey StarodumovReception testRT of M1629-1632
All modules M1629, M1630, M1631 and M1632 graded A
  148   Fri Mar 27 14:28:14 2020 Andrey StarodumovReception testM1633 failed Reception
All ROCs are programmable but there is a permanent DESERIALISER ERROR, Ch6 and Ch7 event ID mismatch.
Visual inspection is OK
To module doctor
  152   Fri Mar 27 17:27:09 2020 Andrey StarodumovReception testM1635 failed Reception
All ROCs are programmable but DESER400 trailer error bits: "NO DATA" or "IDLE DATA".
ROC8-11 are affected (no data)
Visual inspection is OK
To module doctor
  160   Mon Mar 30 17:49:31 2020 Andrey StarodumovReception testRT of M1637-1640
All modules (M1637, M1638, M1639, M1640) are graded A.
  165   Tue Mar 31 17:32:47 2020 Andrey StarodumovReception testRT of M1641-M1644
All modules graded A.
  166   Tue Mar 31 17:33:37 2020 Andrey StarodumovReception testRT of M1546
This is a module with a broken TBM. Silvan put a new one on top of the broken TBM.
The module is graded A after reception test.
I'm still not sure that the wire-bonds of the new TBM are lower than the capacitors. I'll try to glue a cap tomorrow
to see whether we could substitute the TBMs on another 6 modules with broken TBMs.
  170   Wed Apr 1 14:36:37 2020 Andrey StarodumovReception testM1646 failed Reception
M1646 showed Idig=1A, ROC6 is not programmable.
Visual inspection: scratch on a periphery (between bonding pads) of ROC6.
To module doctor!
  171   Wed Apr 1 15:32:43 2020 Andrey StarodumovReception testRT of M1546

Andrey Starodumov wrote:
This is a module with a broken TBM. Silvan put a new one on top of the broken TBM.
The module is graded A after reception test.
I'm still not sure that the wire-bonds of the new TBM are lower than the capacitors. I'll try to glue a cap tomorrow
to see whether we could substitute the TBMs on another 6 modules with broken TBMs.

Cap has been glued to M1546. No damaged wire-bonds. Vthr-CalDel and PixelAlive are OK.
Module to be (FT) tested tomorrow.
  173   Thu Apr 2 17:06:19 2020 Andrey StarodumovReception testM1652 failed Reception
ROC1 of M1652 is not programmable.
Put in the "Bad" tray as C module.
  177   Fri Apr 3 13:55:48 2020 Andrey StarodumovReception testM1653 failed Reception
ROC12-15 are not programmable.
Visual inspection is OK, nothing found.
To module doctor!
  179   Fri Apr 3 14:18:15 2020 Andrey StarodumovReception testRT of M1651 on April 2
Due to a damaged module adapter the first Reception test failed and was graded C after the MoReWeb analysis.
After the second Reception test (with a properly connected cable) the grade is A.
To keep grade A instead of C in the MoReWeb summary table I removed the directory of the first Reception:
~/L1_DATA/M1651_Reception_2020-04-02_16h09m_1585836596 (but the .tar file is still there), ran python Controller.py -d (and removed the row with grade C from GlobalFinalResult), and reran python Controller.py -m M1651
  186   Mon Apr 6 14:27:31 2020 Andrey StarodumovReception testM1593
Silvan has substituted the TBM0 of M1593.
I had to substitute a cable that had residues and with which the Reception test failed completely.
The long cable has been attached.
Reception test grade: A
  187   Mon Apr 6 14:42:43 2020 Andrey StarodumovReception testRT of M1575 failed
Silvan has substituted the TBM0 of M1575.
Still no hits in ROC0-ROC3: "NO DATA" "IDLE DATA" warnings
The long cable has been attached.
To module doctor!
  188   Mon Apr 6 14:52:09 2020 Andrey StarodumovReception testRT of M1657 failed
No hits in ROC14 and ROC15.
Here is an error:
"ERROR: <datapipe.cc/CheckEventValidity:L524> Channel 5 Number of ROCs (1) != Token Chain Length (2)"
To module doctor!
  191   Tue Apr 7 14:26:48 2020 Andrey StarodumovReception testRT of M1567 failed
After the exchange of TBM1 the result is the same:

WAS:
1567: Grade C, no data from ROCs 8-11, looks like a problem with one TBM1 core

NOW:
-during ThrComp-CalDel scan: WARNING: Detected DESER400 trailer error bits: "IDLE DATA"
- result: INFO: CalDel: 135 134 126 121 158 141 119 133 _ 124 _ 107 _ 126 _ 108 133 91 138 120
ROC8-ROC11 no hits!
  192   Tue Apr 7 15:25:21 2020 Andrey StarodumovReception testRT of M1662 failed
Address decoding of M1662 failed in one double column of ROC4.
To be decided what to do with this module.
Currently in C* tray.
  195   Tue Apr 7 18:02:07 2020 Andrey StarodumovReception testRT of 1654
After Silvan removed a cap and re-bonded the broken and bent wires, M1654 has been tested again.
RT grade is A.
Protection cap to be glued and to be (F)tested.
  196   Wed Apr 8 13:40:45 2020 Andrey StarodumovReception testRT of M1625 failed
Silvan substituted TBM on this module.
Now ROC0 does not have hits.
Grade C, goes to "Bad" tray.
  197   Wed Apr 8 13:43:15 2020 Andrey StarodumovReception testRT of M1623 failed
M1623: all ROCs are programmable but no readout from ROC8-ROC11.
Visual inspection is OK.
To module doctor.
  199   Wed Apr 8 15:20:38 2020 Andrey StarodumovReception testRT of M1665
With CtrlReg=9 the RT grade was B due to 90 noisy pixels in ROC5.
Noisy in this case means that one pixel in a 2x2 cluster in a few columns got 40 hits instead of 10.
With CtrlReg=17 this problem is gone. RT grade is A.
  201   Wed Apr 8 17:17:13 2020 Andrey StarodumovReception testRT of M1662 failed

Andrey Starodumov wrote:
Address decoding of M1662 failed in one double column of ROC4.
To be decided what to do with this module.
Currently in C* tray.

With CtrlReg=17 the grade of the module is B. There are still "noisy" pixels in one double column: they get hits from the other 3 pixels of a cluster.
Stay in C* tray. To come back later.
  203   Thu Apr 9 14:36:10 2020 Andrey StarodumovReception testRT of M1669 and M1670
Both modules graded A.
  204   Thu Apr 9 14:49:07 2020 Andrey StarodumovReception testRT of M1650 failed
Module has been tested on April 2nd.
Under Molex connector in ROC12 471 dead or noisy pixels.
Grade C.
  207   Thu Apr 9 17:36:31 2020 Andrey StarodumovReception testRT of M1671 failed
ROC12-ROC15 no hits.
A candidate to TBM0 substitution, hence Grade C*.
To module doctor!
  209   Tue Apr 14 15:49:05 2020 Andrey StarodumovReception testRT of M1623, M1657, M1673, M1674
M1623: Grade A, should be B due to 71 bump defects in ROC4
M1657: Grade B, due to 51 dead pixels in ROC12
M1673: Grade A, again 31 dead bumps in ROC12
M1674: Grade A, again 39 dead bumps in ROC12
  220   Mon Apr 20 15:20:07 2020 Andrey StarodumovReception testChange TBMs on M1635, M1653, M1671

Andrey Starodumov wrote:
M1635: no data from ROC8-ROC11 => change TBM1
M1653: ROC12-ROC15 not programmable => change TBM0
M1671: no data from ROC12-ROC15 => change TBM0

Modules to be given to Silvan


After TBMs have been changed:
M1635: the same no data from ROC8-ROC11
M1653: reception test Grade A
M1671: the same no data from ROC12-ROC15

M1635 and M1671 to module doctor for final decision
  294   Fri Jul 10 11:24:37 2020 Urs LangeneggerReception testproc600V3 modules
This week I tested 12 modules built with proc600v3. The module numbers are M1722 - M1733.
The results are summarized at the usual place:
http://cms.web.psi.ch/L1Replacement/WebOutput/MoReWeb/Overview/Overview.html

Cheers,
--U.
  95   Tue Mar 3 14:05:13 2020 Andrey StarodumovRe-gradingRegrading C modules: due to Noise
Urs realized that the VCal-to-electron conversion used by MoReWeb is still 50e/VCal,
while a recent calibration done by Maren with a few new modules suggests that this conversion is 43.7 electrons per 1 VCal (the number from Danek).
I re-ran the MoReWeb analysis with 44e/VCal for the modules that were graded C for high noise (>300e).

1) Change 50-->44 in
(1) Analyse/AbstractClasses/TestResultEnvironment.py: 'StandardVcal2ElectronConversionFactor':50,
(2) Analyse/Configuration/GradingParameters.cfg:StandardVcal2ElectronConversionFactor = 50
(3) Analyse/Configuration/GradingParameters.cfg.default:StandardVcal2ElectronConversionFactor = 50

2) remove all SCurve_C*.dat files (otherwise new fit results are not written in these files)
3) run python Controller.py -r -m M1591


M1591:
- all three T grade C due to Mean Noise > 300 for ROC0 and/or both ROC0 and ROC1
- after rerunning MoReWeb all but one grade are B on the individual FT pages, but in the Summary pages the grades are still C???
- second time at -20C: grade C is due to trimming fails in ROC0: 211 pixels have too large threshold after trimming

Trimming to be checked!!!
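Numerically, the rescaling works as follows (a sketch; the 50 -> 44 e/VCal factors and the 300e noise cut are from this entry, while the 6.5 VCal example noise value is invented for illustration):

```python
def noise_in_electrons(noise_vcal, e_per_vcal):
    """Convert a fitted S-curve noise from VCal units to electrons."""
    return noise_vcal * e_per_vcal

noise_vcal = 6.5  # example fitted noise in VCal units (invented for illustration)
old = noise_in_electrons(noise_vcal, 50.0)  # 325.0 e: above the 300e noise cut -> grade C
new = noise_in_electrons(noise_vcal, 44.0)  # 286.0 e: below the cut -> no longer grade C
```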
  97   Wed Mar 4 16:00:05 2020 Andrey StarodumovRe-gradingRegrading C M1591: due to Noise

Andrey Starodumov wrote:
Urs realized that the VCal-to-electron conversion used by MoReWeb is still 50e/VCal,
while a recent calibration done by Maren with a few new modules suggests that this conversion is 43.7 electrons per 1 VCal (the number from Danek).
I re-ran the MoReWeb analysis with 44e/VCal for the modules that were graded C for high noise (>300e).

1) Change 50-->44 in
(1) Analyse/AbstractClasses/TestResultEnvironment.py: 'StandardVcal2ElectronConversionFactor':50,
(2) Analyse/Configuration/GradingParameters.cfg:StandardVcal2ElectronConversionFactor = 50
(3) Analyse/Configuration/GradingParameters.cfg.default:StandardVcal2ElectronConversionFactor = 50

2) remove all SCurve_C*.dat files (otherwise new fit results are not written in these files)
3) run python Controller.py -r -m M1591


M1591:
- all three T grade C due to Mean Noise > 300 for ROC0 and/or both ROC0 and ROC1
- after rerunning MoReWeb all but one grade are B on the individual FT pages, but in the Summary pages the grades are still C???
- second time at -20C: grade C is due to trimming fails in ROC0: 211 pixels have too large threshold after trimming

Trimming to be checked!!!


To correct the Summary page one needs to remove the rows with grade C from the previous data analysis (which for some reason stayed in the DB)
using python Controller.py -d, and then run python Controller.py -m MXXXX

To the directory ~/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/M1591_FullQualification_2020-02-28_08h03m_1582873436/QualificationGroup/ModuleFulltest_m20_2
a file grade.txt with the content "2" (corresponding to grade B) has been added. Hence this test grade has been changed from C to B. The reason that 211 pixels failed trimming (threshold outside the boundary) is a too-low trim threshold: VCal=40. From now on 50 will be used.
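The manual override can be sketched as below; only the fact that "2" corresponds to grade B is stated in this entry, so the full grade-code mapping is an assumption:

```python
import os

# Grade codes: B="2" is confirmed by the entry above; A="1" and C="3" are assumed.
GRADE_CODES = {"A": "1", "B": "2", "C": "3"}

def override_grade(test_dir, grade):
    """Write a grade.txt file into the test directory to override its grade."""
    path = os.path.join(test_dir, "grade.txt")
    with open(path, "w") as f:
        f.write(GRADE_CODES[grade])
    return path
```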
  103   Wed Mar 11 17:59:48 2020 Andrey StarodumovRe-gradingRegrading C M1542: due to Noise
Follow the instruction from M1591 regrading log:
"To correct the Summary page one needs to remove rows from DB with C grade from previous data analysis (that for some reason stayed in DB)
using python Controller -d and then run python Controller -m MXXXX"
The mean noise remains the same but the threshold is scaled according to the new VCal calibration (44 instead of 50).
To get the corrected mean noise one needs to refit the S-curves, i.e. run the MoReWeb analysis with the -r flag: "Controller -m MXXXX -r"
The module is still graded C due to the relative gain spread. It will be re-tested tomorrow with the new PH optimization/calibration procedure.
  132   Tue Mar 24 18:11:52 2020 Andrey StarodumovRe-gradingReanalysed test results
The test results of several modules have been re-analysed without grading on trim-bit failure.
M1614: C->B
M1613: C->A
M1609: C->B
M1618: B->A
M1612: B->B
M1610: B->B
M1608: C->B
M1606: C->C (too many badly trimmed pixels)
M1605: C->B

Most of the B gradings and the one C are due to badly trimmed pixels. The threshold distribution after trimming usually has 3(!) separate peaks.
We should understand this feature.
  139   Wed Mar 25 18:31:46 2020 Andrey StarodumovRe-gradingReanalysed test results

Andrey Starodumov wrote:
The test results of several modules have been re-analysed without grading on trim-bit failure.
M1614: C->B
M1613: C->A
M1609: C->B
M1618: B->A
M1612: B->B
M1610: B->B
M1608: C->B
M1606: C->C (too many badly trimmed pixels)
M1605: C->B

Most of the B gradings and the one C are due to badly trimmed pixels. The threshold distribution after trimming usually has 3(!) separate peaks.
We should understand this feature.


More modules have been re-analysed:
1604: C->B
1603: C->B
1602: C->B
1601: B->A
1545: C->C (too many pixels on ROC14 are badly trimmed, to be retested tomorrow)
  278   Fri May 29 13:56:35 2020 Andrey StarodumovRe-gradingM1630
In ROC1 of M1630 gain calibration failed massively (3883 pixels) at +10C with CtrlReg 9.
A special test of M1630 only at +10C and with CtrlReg 17 showed no problem.
p10_1 from ~/L1_DATA/ExtraTests_ToBeKept/M1630_FullTestp10_2020-04-17_15h54m_1587131688/000_Fulltest_p10
copied to ~/L1_DATA/M1630_FullQualification_2020-04-06_08h35m_1586154934/005_Fulltest_p10
and original files from ~/L1_DATA/M1630_FullQualification_2020-04-06_08h35m_1586154934/005_Fulltest_p10
copied to ~/L1_DATA/ExtraTests_ToBeKept/p10RemovedFrom_M1630_FullQualification_2020-04-06_08h35m_1586154934/005_Fulltest_p10
This is done to have a clean ranking of modules based on # of defects.
  109   Mon Mar 16 11:04:41 2020 Matej RoguljicPhQualificationPhQualification 14.-15.3.
I ran PhQualification over the weekend with changes pulled from git (described here https://elrond.irb.hr/elog/Layer+1+Replacement/108).

14.3. M1554, M1555, M1556, M1557

First run included software changes, but I forgot to change the vcalhigh in testParameters.dat
The summary can be seen in ~/L1_SW/pxar/ana/T-20/VcalHigh255 (change T-20 to T+10 to see results for +10 degrees)

Second run was with vcalhigh 100.
The summary can be seen in ~/L1_SW/pxar/ana/T-20/Vcal100

15.3. M1558, M1559, M1560

I only ran 3 modules because DTB2 (WRE1O5) or its adapter was not working. Summary is in ~/L1_SW/pxar/ana/T-20/Vcal100





The full data from the tests are in ~/L1_DATA/MXXXX_PhQualification_...
  111   Mon Mar 16 15:20:03 2020 Matej RoguljicPhQualificationPhQualification on 16.3.
PhQualification was run on modules M1561, M1564, M1565, M1566.
  295   Wed Jul 29 17:19:43 2020 danek kotlinskiPhQualificationChange configuration for PH qualification
Preparing the new PH optimization, I had to make the following modifications:

1) in elCommandante.ini
change ModuleType definition from tbm10d to tbm10d_procv4
in order to use CtrlReg=17 setting

2) in pxar/data/tbm10d_procv4
change the Vsh setting from 8 to 10 in all dacParameter*.dat files.

I hope these are the right locations.

D.
  298   Fri Aug 28 11:53:23 2020 Andrey StarodumovPhQualificationM1555
There is no new PH optimisation for this module?!
To be checked!
  300   Fri Aug 28 14:07:24 2020 Andrey StarodumovPhQualificationM1539
There is no new PH optimisation for this module?!
To be checked!
  302   Sat Sep 12 23:45:56 2020 Dinko FerencekPOSPOS configuration files created
pXar parameter files were converted to POS configuration files by executing the following commands on the lab PC at PSI

Step 1 (needs to be done only once, should be repeated only if there are changes in modules and/or their locations)
cd /home/l_tester/L1_SW/MoReWeb/scripts/
python queryModuleLocation.py -o module_locations.txt -f
Next, check that /home/l_tester/L1_DATA/POS_files/Folder_links/ is empty. If not, delete any folder links contained in it and run the following command
python prepareDataForPOS.py -i module_locations.txt -p /home/l_tester/L1_DATA/ -m /home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/ -l /home/l_tester/L1_DATA/POS_files/Folder_links/

Step 2
cd /home/l_tester/L1_SW/pxar2POS/
for i in `cat /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt | awk '{print $1}'`; do ./pxar2POS.py -m $i -T 50 -o /home/l_tester/L1_DATA/POS_files/Configuration_files/ -s /home/l_tester/L1_DATA/POS_files/Folder_links/ -p /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt; done

The POS configuration files are located in /home/l_tester/L1_DATA/POS_files/Configuration_files/
  305   Fri Nov 6 07:28:42 2020 danek kotlinskiPOSPOS configuration files created

Dinko Ferencek wrote:
pXar parameter files were converted to POS configuration files by executing the following commands on the lab PC at PSI

cd /home/l_tester/L1_SW/MoReWeb/scripts/
python queryModuleLocation.py -o module_locations.txt -f
python prepareDataForPOS.py -i module_locations.txt -p /home/l_tester/L1_DATA/ -m /home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/ -l /home/l_tester/L1_DATA/POS_files/Folder_links/

cd /home/l_tester/L1_SW/pxar2POS/
for i in `cat /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt | awk '{print $1}'`; do ./pxar2POS.py -m $i -T 50 -o /home/l_tester/L1_DATA/POS_files/Configuration_files/ -s /home/l_tester/L1_DATA/POS_files/Folder_links/ -p /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt; done

The POS configuration files are located in /home/l_tester/L1_DATA/POS_files/Configuration_files/


Dinko

I have finally looked more closely at the files you have generated. They seem fine except for 2 points:
1) Some TBM settings (e.g. pkam related) differ from P5 values.
This is not a problem since we will have to adjust them anyway.

2) There is one DAC setting missing.
This is DAC number 13, between VcThr and PHOffset.
This is the tricky one because it has a different name in PXAR and P5 setup files.

DAC 13: PXAR-name = "vcolorbias" P5-name="VIbias_bus"

its value is fixed to 120.

Can you please insert it.
D.
  306   Tue Nov 10 00:50:47 2020 Dinko FerencekPOSPOS configuration files created

danek kotlinski wrote:

Dinko Ferencek wrote:
pXar parameter files were converted to POS configuration files by executing the following commands on the lab PC at PSI

cd /home/l_tester/L1_SW/MoReWeb/scripts/
python queryModuleLocation.py -o module_locations.txt -f
python prepareDataForPOS.py -i module_locations.txt -p /home/l_tester/L1_DATA/ -m /home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/ -l /home/l_tester/L1_DATA/POS_files/Folder_links/

cd /home/l_tester/L1_SW/pxar2POS/
for i in `cat /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt | awk '{print $1}'`; do ./pxar2POS.py -m $i -T 50 -o /home/l_tester/L1_DATA/POS_files/Configuration_files/ -s /home/l_tester/L1_DATA/POS_files/Folder_links/ -p /home/l_tester/L1_SW/MoReWeb/scripts/module_locations.txt; done

The POS configuration files are located in /home/l_tester/L1_DATA/POS_files/Configuration_files/


Dinko

I have finally looked more closely at the files you have generated. They seem fine except for 2 points:
1) Some TBM settings (e.g. pkam related) differ from P5 values.
This is not a problem since we will have to adjust them anyway.

2) There is one DAC setting missing.
This is DAC number 13, between VcThr and PHOffset.
This is the tricky one because it has a different name in PXAR and P5 setup files.

DAC 13: PXAR-name = "vcolorbias" P5-name="VIbias_bus"

its value is fixed to 120.

Can you please insert it.
D.


Hi Danek,

I think I implemented everything that was missing. The full list of code updates is here.

Best,
Dinko
  308   Tue Jan 19 15:12:12 2021 Dinko FerencekPOSPOS configuration files created
M1560 in position bpi_sec1_lyr1_ldr1_mod3 was replaced by M1613.

The POS configuration files were re-generated and placed in /home/l_tester/L1_DATA/POS_files/Configuration_files/.

The old version of the files was moved to /home/l_tester/L1_DATA/POS_files/Configuration_files_20201110/.
  309   Mon Jan 25 13:03:22 2021 Dinko FerencekPOSPOS configuration files created
The output POS configuration files had '_Bpix_' instead of '_BPix_' in their names. The culprit was identified to be the C3 cell in the 'POS' sheet of the Module_bookkeeping-L1_2020 Google spreadsheet, which contained 'Bpix' instead of 'BPix' and was messing up the file names. This has now been fixed and the configuration files regenerated.

The WBC values were also changed to 164 for all modules using the following commands

cd /home/l_tester/L1_SW/pxar2POS/
./pxar2POS.py --do "dac:set:WBC:164" -o /home/l_tester/L1_DATA/POS_files/Configuration_files/ -i 1

This created a new set of configuration files with ID 2 in /home/l_tester/L1_DATA/POS_files/Configuration_files/.

The WBC values stored in ID 1 were taken from the pXar dacParameters*_C*.dat files and the above procedure makes a copy of the ID 1 files and overwrites the WBC values.
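The copy-and-overwrite step amounts to a small text edit of each dacParameters file; a sketch, assuming the pXar "<index> <name> <value>" line format (the function name is illustrative, not pxar2POS's actual code):

```python
def set_wbc(dac_lines, new_value=164):
    """Return dacParameters lines with the WBC value replaced.

    Assumes whitespace-separated "<index> <name> <value>" lines, as in pXar
    dacParameters*_C*.dat files; all other lines pass through unchanged.
    """
    out = []
    for line in dac_lines:
        fields = line.split()
        if len(fields) == 3 and fields[1].lower() == "wbc":
            fields[2] = str(new_value)
            line = " ".join(fields)
        out.append(line)
    return out
```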
  4   Tue Aug 6 16:06:45 2019 Matej RoguljicOtherVsh and ctrlreg for v4 chips
The recommended default value for Vsh is 8, but the current version of pXar has it as 30. One should remember this when making new configuration folders from mkConfig. The recommended value of ctrlreg is 17.
  8   Tue Sep 10 15:18:20 2019 Matej RoguljicOtherModules 1504, 1505, 1520 irradiation report
Modules 1504, 1505 and 1520 were taken to Zagreb for irradiation on 13.08.19. The goal was to check the v4 behavior after irradiation. They were irradiated to 1.2 MGy and returned to PSI on 9th of September. Upon testing them at PSI, they all had issues. The ROCs on M1504 and M1520 were not programmable at all; changing Vana had no effect. Vana could be set on M1505 while targeting Iana = 28 mA/ROC; however, no timing could be found for it and no working pixel could be found. Andrey and Matej took the modules under the microscope and saw greenish deposits of unknown origin on the HDI metal pads. There was also a bit of liquid near the HDI ID on M1520, but not on the others. The residue could be shorting some pads, causing issues on the modules. It is still unclear whether the residue comes from the HDI or the cap, or was introduced during irradiation by something in Zagreb.
  10   Fri Sep 13 15:06:51 2019 Andrey StarodumovOtherModules 1504, 1505, 1520 irradiation report
It was discovered that these modules were stored in Zagreb in the climatic lab, where T was about +22C. The modules had been transported from the Co60 irradiation facility to the lab in open air with T>30C and RH>70%. Water in the air under the cap condensed on the surface of the HDIs and diluted residues (from soldering, passivation etc.) that afterwards remained liquid or crystallised.
The irradiation itself does not cause any damage. This is also confirmed by the fact that after the two previous irradiations of modules and HDIs, in Jan and Jul 2019, the samples remained in good shape without any residues on the HDI surfaces. Those samples had been kept in an office where T and RH were similar to the outside, and not in the climatic lab.

We consider the case understood and closed.
  12   Thu Sep 19 00:52:24 2019 Dinko FerencekOtherModules 1504, 1505, 1520 irradiation report

Andrey Starodumov wrote:
It was discovered that these modules were stored in Zagreb in the climatic lab, where T was about +22C. The modules had been transported from the Co60 irradiation facility to the lab in open air with T>30C and RH>70%. Water in the air under the cap condensed on the surface of the HDIs and diluted residues (from soldering, passivation etc.) that afterwards remained liquid or crystallised.
The irradiation itself does not cause any damage. This is also confirmed by the fact that after the two previous irradiations of modules and HDIs, in Jan and Jul 2019, the samples remained in good shape without any residues on the HDI surfaces. Those samples had been kept in an office where T and RH were similar to the outside, and not in the climatic lab.

We consider the case understood and closed.


Just to clarify: the modules were not transported from the Co60 irradiation facility to the lab in open air but inside a closed Petri dish. Otherwise, there would have been no risk of water condensation, since the air surrounding the modules would have been allowed to quickly mix with the air-conditioned lab air. Here the problem arose from the fact that not only the modules were brought inside the lab, but also a pocket of the outside air around them. A closed Petri dish is not airtight, but it significantly reduces mixing of the air inside the dish with the surrounding lab air, making that mixing slower than the rate at which the Petri dish and the module inside it cool down once brought inside the lab. This could have led to water condensation if the pocket of air trapped inside the Petri dish was warm and humid, with a dew point above the lab air temperature. The solution is relatively simple: open the Petri dish and uncover the modules before bringing them inside the lab. That way the exchange of air is faster and the risk of condensation basically disappears, because a warm module is quickly surrounded by lab air, which will not condense on a warmer surface.

However, there is an additional twist in this particular case. On Aug. 22, when the modules were brought in the lab, there was a thunderstorm in Zagreb in the early afternoon (https://www.zagreb.info/aktualno/zagreb-je-zahvatila-oluja-munje-i-jaka-kisa-nad-vecim-dijelom-grada/227156) with temperature around 21.5 C and RH around 75% at the time the modules were transported (around 15:20), and the whole day was relatively fresh and humid. The outside air on that day would definitely not lead to water condensation in the lab. However, before being brought in the lab, the modules were sitting in a room in the building where the Co60 irradiation facility is located so the air inside the Petri dish was likely similar to the air inside that room (the modules were sitting there for a while and there was enough time for the air temperature and humidity to equalize) and there was not much time to mix with the outside when being transported from one building to another. Unfortunately, there are no measurements of the air temperature and humidity in that room. However, it is worth mentioning that the previous day, Aug. 21, was not very hot and humid with midday temperature around 26 C and RH around 50%. It is therefore likely that the air inside that room and consequently inside the Petri dish was not very hot and humid, making the hypothesis of water condensation in the lab, if not improbable, certainly less likely.
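A quick dew-point estimate supports this argument (a sketch using the Magnus approximation; the coefficients below are the standard Magnus constants, and the weather numbers are the ones quoted above):

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point in deg C via the Magnus formula."""
    a, b = 17.62, 243.12  # standard Magnus coefficients for water
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Aug 22 outside air (21.5 C, 75% RH): dew point ~16.9 C, well below a ~22 C lab,
# so that air could not condense in the lab.
# Aug 21 midday (26 C, 50% RH): dew point ~14.8 C, also safe.
print(round(dew_point_c(21.5, 75), 1), round(dew_point_c(26.0, 50), 1))
```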

Either way, more careful handling of modules will be needed.
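The dew-point argument above can be made quantitative with the Magnus approximation (a common parameterization with a = 17.62 and b = 243.12 C; the 26 C / 50% RH input is the rough Aug. 21 value quoted above, and the lab temperature in the low twenties is an assumption, not a measurement):

```python
import math

def dew_point(temp_c, rh_percent, a=17.62, b=243.12):
    """Dew point in degC via the Magnus approximation."""
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Air similar to the Aug. 21 midday conditions (~26 C, ~50% RH):
td_trapped = dew_point(26.0, 50.0)  # ~14.8 C

# Condensation in the lab is expected only if the dew point of the air
# trapped in the Petri dish is above the lab air temperature
# (assumed here to be in the low twenties):
print(f"dew point of trapped air: {td_trapped:.1f} C")
```

With a dew point well below any plausible lab temperature, the trapped-air hypothesis indeed looks unlikely for that day.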
Attachment 1: ZG-FER_temp_hum_2019.08.16.-09.16.png
ZG-FER_temp_hum_2019.08.16.-09.16.png
Attachment 2: pixellab-main-room-1.png
pixellab-main-room-1.png
  99   Wed Mar 11 15:34:57 2020 Urs LangeneggerOtherM1586: issues with MOLEX?
Module M1586 had passed the full qualification on 20/02/27. I had had to re-insert the cable in the Molex connector for it to become programmable.

On 2020/03/09, I tried to re-test M1586, but it was not programmable. Visual inspection revealed nothing to me. I did re-insert the cable once again, but this time this did not help.

Maybe one should try again re-inserting the cable.

Maybe these issues are an indication that the module (MOLEX) is flaky.
  100   Wed Mar 11 16:48:10 2020 Andrey StarodumovOtherM1586: issues with MOLEX?

Urs Langenegger wrote:
Module M1586 had passed the full qualification on 20/02/27. I had had to re-insert the cable in the Molex connector for it to become programmable.

On 2020/03/09, I tried to re-test M1586, but it was not programmable. Visual inspection revealed nothing to me. I did re-insert the cable once again, but this time this did not help.

Maybe one should try again re-inserting the cable.

Maybe these issues are an indication that the module (MOLEX) is flaky.


Most likely the cable contacts caused this behavior, since they look damaged.
After changing the cable, the module does not show any more problems.
  155   Sun Mar 29 18:14:59 2020 danek kotlinskiOtherModules 1544 & 1563 in gelpack
The two bad modules:
M1544
M1563
have been moved to gel-packs.
  175   Thu Apr 2 17:10:08 2020 Andrey StarodumovOtherCap glued to M1618
M1618 was tested (FT) on March 23 without a protection cap.
I have no idea how that happened...
Today the cap was glued and the module passed the Reception test again; it is grade A.
I think we do not need to repeat the FT for this module.
I put it in a tray with good modules.
  210   Tue Apr 14 16:54:31 2020 Andrey StarodumovOtherM1633 cable disconnected
We do not have any more module holders.
M1633 was in "Module doctor" tray.
I took the module off the holder and put it in a gel-pak.
  237   Wed Apr 29 08:48:06 2020 Urs LangeneggerOtherM1539
M1539 showed no readout. I tried, all without success,
- reconnecting the cable to the adapter multiple times
- connecting to the adapter in the blue box
- reconnecting the cable to the MOLEX on the module
  258   Mon May 11 14:14:05 2020 danek kotlinskiOtherM1606
On Friday I tested M1606 at room temperature in the red cold box.
Previously it was reported that trimming does not work for ROC2.

In this test trimming was fine; only 11 pixels failed it.
See the attached 1D and 2D histograms. There is a small side peak at about vcal=56 with ~100 pixels.
But this should not be too big a problem?

Also the Pulse height map looks good and the reconstructed pulse height at vcal=70
gives vcal=68.1 with rms=4.2, see the attached plot.

So I conclude that this module is fine.
Attachment 1: m1606_roc2_thr_1d.png
m1606_roc2_thr_1d.png
Attachment 2: m1606_roc2_thr_2d.png
m1606_roc2_thr_2d.png
Attachment 3: m1606_roc2_ph70.png
m1606_roc2_ph70.png
  259   Mon May 11 14:40:16 2020 danek kotlinskiOtherM1582
On Friday I tested module M1582 at room temperature in the blue box.
The report in MoreWeb says that this module has problems with trimming 190 pixels in ROC1.

I see no problem in ROC1. The average threshold is 50 with rms=1.37. Only 1 pixel is in the 0 bin.
See the attached 1d and 2d plots.

Also the PH looks good. The vcal 70 PH map is reconstructed at vcal 70.3 with an rms of 3.9.
5159 pixels have valid gain calibrations.

I conclude that this module is fine.
Maybe it is again a DTB problem, as reported by Andrey.
D.
Attachment 1: m1582_roc1_thr_1d.png
m1582_roc1_thr_1d.png
Attachment 2: m1582_roc1_thr_2d.png
m1582_roc1_thr_2d.png
Attachment 3: m1582_roc1_ph70.png
m1582_roc1_ph70.png
  264   Wed May 13 17:57:45 2020 Andrey StarodumovOtherL1_DATA backup
L1_DATA files are backed up to the LaCie disk
  275   Tue May 26 23:00:50 2020 Dinko FerencekOtherProblem with external disk filling up too quickly
The external hard disk (LaCie) used to back up the L1 replacement data completely filled up after transferring ~70 GB worth of data even though its capacity is 2 TB. The backup consists of copying all .tar files and the WebOutput/ subfolder from /home/l_tester/L1_DATA/ to /media/l_tester/LaCie/L1_DATA/. The corresponding rsync command is

rsync -avPSh --include="M????_*.tar" --include="WebOutput/***" --exclude="*" /home/l_tester/L1_DATA/ /media/l_tester/LaCie/L1_DATA/

It was discovered that /home/l_tester/L1_DATA/WebOutput/ was by mistake duplicated inside /media/l_tester/LaCie/L1_DATA/WebOutput/. However, this still could not explain the full disk.

The size of the tar files looked fine but /media/l_tester/LaCie/L1_DATA/WebOutput/ was 1.8 TB in size while /home/l_tester/L1_DATA/WebOutput/ was taking up only 50 GB and apart from the above-mentioned duplication, there was no other obvious duplication.

It turned out the file system on the external hard disk had a block size of 512 KB, which is unusually large; it is typically set to 4 KB. In practice this meant that every file (and even every folder), no matter how small, always occupied at least 512 KB on the disk. For example, I saw the following

l_tester@pc11366:~$ cd /media/l_tester/LaCie/
l_tester@pc11366:/media/l_tester/LaCie$ du -hs Warranty.pdf
512K Warranty.pdf
l_tester@pc11366:/media/l_tester/LaCie$ du -hs --apparent-size Warranty.pdf
94K Warranty.pdf

And in a case like ours, with a lot of subfolders and files, many of which are small, a lot of disk space is effectively wasted.

The file system used on the external disk was exFAT. According to this page, the default block size (there called the cluster size) for the exFAT file system scales with the drive size, which is the likely reason why 512 KB was used (although 512 KB is still larger than the largest default block size listed). The main partition on the external disk was finally reformatted as follows

sudo mkfs.exfat -n LaCie -s 8 /dev/sdd2

which set the block size to 4 KB.
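The wasted space is easy to illustrate with a small sketch. The 94 KB file is the Warranty.pdf example above; the file count at the end is purely hypothetical, just to show the scale of the overhead:

```python
import math

KB = 1024

def allocated(size_bytes, block_bytes):
    """Bytes a file actually occupies on disk: whole blocks, minimum one."""
    return max(1, math.ceil(size_bytes / block_bytes)) * block_bytes

# The Warranty.pdf example: 94 KB apparent size, 512 KB on disk
assert allocated(94 * KB, 512 * KB) == 512 * KB
# With a 4 KB block size the same file occupies only 96 KB
assert allocated(94 * KB, 4 * KB) == 96 * KB

# With, say, a few million small files/folders (hypothetical count), the
# per-file overhead alone explains how ~50 GB of data can fill a 2 TB disk:
n_entries = 4_000_000
print(f"{n_entries * allocated(1, 512 * KB) / 1024**4:.1f} TiB at 512 KB blocks")
```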
  303   Fri Sep 18 15:45:21 2020 Andrey StarodumovOtherM2211 and M2122
I made a mistake and used the ID M2211 instead of M2122 in the .ini file.
Hence we now have no entry for M2122 but two entries for M2211: one from Sept 18 and another one, called old, from Sept 17.
Test results of Sep 17 are for M2211.
Test results of Sep 18 are for M2122.

To be corrected later.
  307   Mon Jan 18 13:53:44 2021 Andrey StarodumovOtherDTB tests
M2217
--->DTB 154 (one of the red cold box setup):
- flat cable (with HV):
>adctest
clk low = -231.0 mV high= 212.0 mV amplitude = 443.0 mVpp (differential)
ctr low = -246.0 mV high= 235.0 mV amplitude = 481.0 mVpp (differential)
sda low = -250.0 mV high= 223.0 mV amplitude = 473.0 mVpp (differential)
rda low = -65.0 mV high= 48.0 mV amplitude = 113.0 mVpp (differential)
sdata1 low = -171.0 mV high= 155.0 mV amplitude = 326.0 mVpp (differential)
sdata2 low = -173.0 mV high= 149.0 mV amplitude = 322.0 mVpp (differential)

- twisted pairs cable:
several ERRORS during VthrCompCalDel and PixelAlive: ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (119) != TBM ID (120)
>adctest
clk low = -196.0 mV high= 177.0 mV amplitude = 373.0 mVpp (differential)
ctr low = -245.0 mV high= 236.0 mV amplitude = 481.0 mVpp (differential)
sda low = -249.0 mV high= 224.0 mV amplitude = 473.0 mVpp (differential)
rda low = -65.0 mV high= 55.0 mV amplitude = 120.0 mVpp (differential)
sdata1 low = -120.0 mV high= 101.0 mV amplitude = 221.0 mVpp (differential)
sdata2 low = -124.0 mV high= 102.0 mV amplitude = 226.0 mVpp (differential)

GOOD with flat cable but r/o ERRORs with twisted pairs cable


--->DTB 126 twisted pairs cable:
VthrCompCalDel test: many errors like
ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (68) != TBM ID (69),
test not converged!
>adctest
clk low = -193.0 mV high= 181.0 mV amplitude = 374.0 mVpp (differential)
ctr low = -252.0 mV high= 242.0 mV amplitude = 494.0 mVpp (differential)
sda low = -243.0 mV high= 227.0 mV amplitude = 470.0 mVpp (differential)
rda low = -67.0 mV high= 47.0 mV amplitude = 114.0 mVpp (differential)
sdata1 low = -122.0 mV high= 103.0 mV amplitude = 225.0 mVpp (differential)
sdata2 low = -120.0 mV high= 107.0 mV amplitude = 227.0 mVpp (differential)

BAD DTB!!!


--->DTB 140
VthrCompCalDel test: many errors like
ERROR:[14:19:38.397] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:19:38.398] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (4) != TBM ID (5)
test not converged!
>adctest
clk low = -194.0 mV high= 184.0 mV amplitude = 378.0 mVpp (differential)
ctr low = -248.0 mV high= 248.0 mV amplitude = 496.0 mVpp (differential)
sda low = -231.0 mV high= 230.0 mV amplitude = 461.0 mVpp (differential)
rda low = -67.0 mV high= 56.0 mV amplitude = 123.0 mVpp (differential)
sdata1 low = -123.0 mV high= 98.0 mV amplitude = 221.0 mVpp (differential)
sdata2 low = -117.0 mV high= 109.0 mV amplitude = 226.0 mVpp (differential)

BAD DTB!!!

--->DTB 172
VthrCompCalDel and PixelAlive tests: many errors like
[14:23:49.222] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:23:49.222] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (192) != TBM ID (193)
[14:24:19.902] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:24:19.902] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (72) != TBM ID (73)
but converged!

DTB WORKS but with ERRORs!!!


--->DTB 139
VthrCompCalDel and PixelAlive tests: many errors like
[14:29:57.160] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:29:57.160] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (159) != TBM ID (160)
and inefficient tornados for ROC8-15
PixelAlive: zero efficiency for all ROCs and many errors

[14:31:25.335] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:31:25.335] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (61) != TBM ID (63)
>adctest
clk low = -191.0 mV high= 194.0 mV amplitude = 385.0 mVpp (differential)
ctr low = -248.0 mV high= 254.0 mV amplitude = 502.0 mVpp (differential)
sda low = -239.0 mV high= 237.0 mV amplitude = 476.0 mVpp (differential)
rda low = -50.0 mV high= 66.0 mV amplitude = 116.0 mVpp (differential)
sdata1 low = -114.0 mV high= 113.0 mV amplitude = 227.0 mVpp (differential)
sdata2 low = -111.0 mV high= 116.0 mV amplitude = 227.0 mVpp (differential)

BAD DTB!!!



--->DTB 94
VthrCompCalDel and PixelAlive tests: many errors like
[14:38:28.017] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:38:28.017] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (133) != TBM ID (134)
and inefficient tornados for all ROC
PixelAlive: zero efficiency for all ROCs and many errors

BAD DTB!!!


--->DTB 65:
[14:44:25.326] ERROR: <PixMonitorFrame.cc/Update:L121> analog current reading unphysical
[14:44:25.326] ERROR: <PixMonitorFrame.cc/Update:L124> digital current reading unphysical

BAD DTB!!!

---> DTB 136 (from my drawer)
[14:47:23.231] ERROR: <hal.cc/status:L112> Testboard not initialized yet!
[14:47:23.231] ERROR: <PixMonitorFrame.cc/Update:L121> analog current reading unphysical
[14:47:23.231] ERROR: <PixMonitorFrame.cc/Update:L124> digital current reading unphysical

BAD DTB!!!


---> DTB 63 (from my drawer)
NO ERRORs!!!
>adctest
clk low = -148.0 mV high= 175.0 mV amplitude = 323.0 mVpp (differential)
ctr low = -245.0 mV high= 230.0 mV amplitude = 475.0 mVpp (differential)
sda low = -251.0 mV high= 225.0 mV amplitude = 476.0 mVpp (differential)
rda low = -61.0 mV high= 60.0 mV amplitude = 121.0 mVpp (differential)
sdata1 low = -121.0 mV high= 101.0 mV amplitude = 222.0 mVpp (differential)
sdata2 low = -124.0 mV high= 102.0 mV amplitude = 226.0 mVpp (differential)

VERY GOOD DTB: no errors with twisted pairs cable!!!


---> DTB 162 (from my drawer)
VthrComp test: many errors like
[14:54:07.161] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:54:07.161] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (195) != TBM ID (196)
[14:54:07.162] ERROR: <datapipe.cc/CheckEventID:L486> Channel 3 Event ID mismatch: local ID (195) != TBM ID (196)
[14:54:07.162] WARNING: Channel 2 ROC 0: Readback start marker after 15 readouts!
and inefficient tornados for ROC8-15

PixelAlive:
many errors like
[14:55:27.941] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[14:55:27.941] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (193) != TBM ID (194)
[14:55:27.941] ERROR: <datapipe.cc/CheckEventID:L486> Channel 3 Event ID mismatch: local ID (193) != TBM ID (194)
[14:55:27.941] WARNING: Channel 2 ROC 0: Readback start marker after 29 readouts!
and inefficient maps for ROC8-15

>adctest
clk low = -197.0 mV high= 178.0 mV amplitude = 375.0 mVpp (differential)
ctr low = -255.0 mV high= 236.0 mV amplitude = 491.0 mVpp (differential)
sda low = -245.0 mV high= 230.0 mV amplitude = 475.0 mVpp (differential)
rda low = -72.0 mV high= 49.0 mV amplitude = 121.0 mVpp (differential)
sdata1 low = -124.0 mV high= 97.0 mV amplitude = 221.0 mVpp (differential)
sdata2 low = -125.0 mV high= 102.0 mV amplitude = 227.0 mVpp (differential)

WORKS but with ERRORS!!!

---> DTB 170 (from my drawer)
VthrComp test: many errors like
[15:04:20.259] WARNING: Detected DESER400 trailer error bits: "CODE ERROR"
[15:04:20.259] ERROR: <datapipe.cc/CheckEventID:L486> Channel 2 Event ID mismatch: local ID (30) != TBM ID (31)
test not converged!

BAD DTB!!!


--->DTB 18 (fixed on the red cold box setup):
NO ERRORs!!!
>adctest
clk low = -203.0 mV high= 187.0 mV amplitude = 390.0 mVpp (differential)
ctr low = -252.0 mV high= 236.0 mV amplitude = 488.0 mVpp (differential)
sda low = -248.0 mV high= 230.0 mV amplitude = 478.0 mVpp (differential)
rda low = -60.0 mV high= 48.0 mV amplitude = 108.0 mVpp (differential)
sdata1 low = -114.0 mV high= 111.0 mV amplitude = 225.0 mVpp (differential)
sdata2 low = -120.0 mV high= 109.0 mV amplitude = 229.0 mVpp (differential)

VERY GOOD DTB: no errors with twisted pairs cable!!!
  301   Wed Sep 2 16:09:24 2020 Matej RoguljicModules for PM1595 switch with M1558
Module 1595 was foreseen to go on the inner ladder 5, position -2 (negative two). During the pre-installation test, we saw it had a high leakage current, ~8 microamps at room temperature. Therefore, we decided to place module 1558 in its position instead.
  62   Tue Feb 4 19:35:38 2020 Dinko FerencekModule transfer7 modules prepared for transfer to ETH: 1536, 1537, 1538, 1540, 1541, 1542, 1543
The following modules have been prepared today for transport to ETH for x-ray qualification:
1536, 1537, 1538, 1540, 1541, 1542, 1543

The FullQualification tar files for these modules have been copied to the common CERNBox folder shared with ETH.
  66   Wed Feb 5 16:18:30 2020 Dinko FerencekModule transferModule location tracking added to the assembly spreadsheet
To keep track of the module location, two columns were added to the module assembly spreadsheet https://docs.google.com/spreadsheets/d/12m2ESCLPH5AyWuuYV4KV1c5O_NEy7uP5PQKzVsdr3wQ/edit?usp=sharing that contain the dates when a given module is shipped to ETHZ and when it is returned to PSI.
  80   Wed Feb 12 10:43:04 2020 Dinko FerencekModule transfer9 modules sent to ETH: 1539, 1545, 1547, 1548, 1549, 1550, 1551, 1552, 1553
Yesterday the following modules were sent to ETH:
1539, 1545, 1547, 1548, 1549, 1550, 1551, 1552, 1553

The FullQualification tar files for these modules have been copied to the common CERNBox folder shared with ETH and the transfer dates have been added to the assembly spreadsheet.
  92   Thu Feb 27 10:14:09 2020 danek kotlinskiModule transferM1562 to DESY
M1562 was not qualified because the full test did not complete.
Looking at the module I see that roc 2 has a lot of noise in row 79.
I tried to mask all pixels in this row but the masking does not work.
With this feature one can run pixel alive, but trimming fails completely for roc 2.
The only way to trim the module is to disable the whole roc 2.

Since it seems to me that we will never want to use this module in the detector,
I have given it to Jory to be used in the DESY test beam.
She already has 2-3 modules from the pre-production, i.e. with older versions of HDIs & ROCs.

I have also looked at 2 other modules which were classified as C: 1557 and 1558.
Both look fine to me but I have not run the PH optimization on them.

D.
  135   Wed Mar 25 14:42:55 2020 danek kotlinskiModule transfermodules 1545 & 1542 back from ETH
M1545 & M1542 were returned from ETH to PSI for further testing.
  202   Thu Apr 9 14:19:40 2020 danek kotlinskiModule transferModules moved to gel-pack
The following good modules (class A/B) have been moved to gel-packs:
1607, 1608, 1609, 1610, 1612, 1613, 1614, 1618, 1619, 1620, 1622, 1624, 1626, 1627, 1628.

The following bad modules (not working and class C) have been moved to gel-packs:
1544, 1563, 1567, 1675, 1593, 1594, 1611, 1615, 1616, 1617, 1621, 1625, 1646, 1650, 1652
  205   Thu Apr 9 16:13:51 2020 danek kotlinskiModule transferModules moved to gel-pack

danek kotlinski wrote:
The following good module class A/B have been moved to gel-packs:
1607, 1608, 1609, 1610, 1612, 1613, 1614, 1618, 1619, 1620, 1622, 1624, 1626, 1627, 1628.

The following bad module, not-working and C class have been moved to gel-packs:
1544, 1563, 1567, 1675, 1593, 1594, 1611, 1615, 1616, 1617, 1621, 1625, 1646, 1650, 1652


A few corrections to the list of bad modules:
- On April 7 M1593 was (F)-tested with grade B due to the Rel. gain width and mean noise of a few ROCs.
- 1611 should be C* since ROC13 has a non-working double column (160 pixels) + 27 trimbit failures but a very good trimmed threshold distribution, so these 27 trimbit failures should be ignored.
  214   Wed Apr 15 17:14:42 2020 danek kotlinskiModule transfermove 4 modules to gel-packs
Moved to gel-packs:
1629 B
1631 B
1660 C
1665 classified as B in MoreWeb but has 170 pixel failures
  216   Wed Apr 15 17:33:53 2020 danek kotlinskiModule transfermove 4 modules to gel-packs

danek kotlinski wrote:
Moved to gel-apcks:
1629 B
1631 B
1660 C
1665 classifed as B in MoreWeb but has 170 pixel failures


M1665 is graded B since there is not a single ROC with >4% damaged pixels (max 120 in ROC5). There are 170 pixel failures in total in the module.
  242   Thu Apr 30 15:38:43 2020 danek kotlinskiModule transferM1635 & M1671 transferred to gel-pack
Two bad modules have been placed in gel-packs: 1635 & 1671.
  267   Mon May 18 14:05:01 2020 Andrey StarodumovModule transfer8 modules shipped to ETHZ
M1555, M1556, M1557,
M1558, M1559, M1560,
M1561, M1564
  268   Tue May 19 13:43:55 2020 Andrey StarodumovModule transfer9 modules shipped to PSI
Quick check: Leakage current, set Vana, VthrCompCalDel and PixelAlive
Module Current@-150V Programmable Readout
M1623 -0.335uA OK OK
M1630 -0.430uA OK OK
M1632 -0.854uA OK OK
M1634 -0.243uA OK OK
M1636 -0.962uA OK OK
M1637 -0.452uA OK OK
M1638 -0.440uA OK OK
M1639 -0.760uA OK OK
M1640 -0.354uA OK OK
  276   Thu May 28 14:38:43 2020 Andrey StarodumovModule transfer18 Modules shipped to ETHZ
1536, 1537, 1539, 1540, 1541,
1543, 1545, 1547, 1548, 1550,
1551, 1552, 1553, 1554, 1565,
1566, 1568, 1569
  277   Thu May 28 14:40:57 2020 Andrey StarodumovModule transfer8 modules shipped to PSI
Quick check: Leakage current, set Vana, VthrCompCalDel and PixelAlive
Module Current@-150V Programmable Readout
M1555 -6.000uA OK OK Current is rising up to 11uA with time after PixelAlive is done at +22C !!!
M1556 -1.282uA OK OK
M1557 -0.600uA OK OK
M1558 -0.835uA OK OK
M1559 -0.930uA OK OK
M1560 -0.745uA OK OK
M1561 -0.770uA OK OK
M1564 -1.755uA OK OK
  284   Thu Jun 4 15:32:50 2020 Andrey StarodumovModule transfer18 Modules shipped to PSI
Quick check: Leakage current, set Vana, VthrCompCalDel and PixelAlive
T=+24C

Module Current@-150V Programmable Readout

M1569 -0.690uA OK OK
M1568 -0.530uA OK OK
M1566 -1.530uA OK OK
M1565 -1.430uA OK OK
M1554 -0.970uA OK OK
M1553 -1.300uA OK OK
M1552 -0.919uA OK OK
M1551 -1.310uA OK OK
M1550 -1.708uA OK OK
M1548 -0.470uA OK OK
M1547 -1.510uA OK OK
M1545 -0.770uA OK OK
M1543 -0.800uA OK OK
M1541 -0.750uA OK OK
M1540 -1.440uA OK OK
M1539 -0.680uA OK OK
M1537 -0.816uA OK OK
M1536 -4.270uA OK OK
  287   Tue Jun 9 17:19:00 2020 danek kotlinskiModule transfergel-pack transfers
The following modules, which were already X-ray tested and came back from ETHZ,
have been transferred to gel-packs:
1623
1630
1632
1634
1636
1637
1639
1640

The following "good" modules have been transferred from gel-packs to frames,
they still have to go to ETHZ for X-ray testing:
1620
1622
1624
1626
1627
1628
1629
1631

D.
  290   Tue Jun 16 19:23:37 2020 danek kotlinskiModule transfermodules from and to ETH, list 3
The 27 modules from list 2 have been transferred back to PSI.

The following 18 modules have been transferred to ETH:

1605
1606
1607
1608
1609
1610
1612
1614
1615

1613
1618
1619
1629
1622
1624
1626
1627
1628
  291   Thu Jun 18 20:04:41 2020 danek kotlinskiModule transferModules to ETH, transport #6
Today Lea has delivered another group of 18 modules to the ETH:
1604
1603
1602
1601
1600
1599
1598
1597
1596

1629
1631
1641
1642
1643
1644
1645
1647
1648

No modules were brought back, so there are now 18+18=36 modules at ETH.
  292   Thu Jul 2 13:48:20 2020 danek kotlinskiModule transferTransport #7 to ETH
Lea brought the last batch of 8 modules to ETH:
1542
1593
1665
1672
1673
1674
1675
1676
  304   Fri Oct 23 13:33:04 2020 danek kotlinskiModule transferL1 & L2 modules to go to P5
The following modules will be transported to CERN/P5 as spares.

1) L1
1538
1542
1608
1613

also take a D (broken) module 1671 for setup testing.

2) L2
2278
2160
2258
2293
2078
2026
2122
2155
2035
2036
2298
2244

2269 was at the top of Andrey's list but I just discovered that it does not have the cap,
so we leave it at PSI.

D.K.
  76   Sun Feb 9 18:36:13 2020 Dinko FerencekModule gradingManual grading
The procedure for manual grading is described in https://github.com/psi46/MoReWeb/pull/120.

In short, inside the test folder it is necessary to place a text file called grade.txt that contains just one number representing the manual grade:

1=A
2=B
3=C


Addendum:
It appears that manual grading does not work properly if one reruns MoReWeb with grade.txt added. It is necessary to first delete any entries related to the FullQualification using the following command
python Controller -d

after which you need to specify for which module you want to delete entries, e.g. M1537. Once done, you need to run MoReWeb for this module, e.g.
python Controller -m M1537
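A minimal helper for writing grade.txt could look as follows. The helper itself is illustrative and not part of MoReWeb; only the grade.txt filename and the 1/2/3 mapping come from the procedure above:

```python
from pathlib import Path

# Numeric grades used by MoReWeb's manual grading (from the list above)
GRADES = {"A": 1, "B": 2, "C": 3}

def write_manual_grade(test_folder, letter):
    """Place a grade.txt containing just the numeric grade in the test folder."""
    grade = GRADES[letter.upper()]
    Path(test_folder, "grade.txt").write_text(f"{grade}\n")
    return grade
```

For example, `write_manual_grade("M1537_FullQualification", "B")` (the folder name here is hypothetical) writes a grade.txt containing `2`.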
  81   Wed Feb 12 11:15:04 2020 Dinko FerencekModule gradingProblem with dead trimbits understood
It was noticed that some modules are graded C because of typically just one ROC having a large number of dead trimbits. One example is ROC 10 in M1537



At the same time, for this same ROC the Vcal Threshold Trimmed distribution looks fine


as well as the distribution of trim bits



It turns out that the trim bit test is failing in this and other similar cases because of the tornado plot that is shifted up more than is typical

In the trim bit test code, the Vthrcomp was raised from 35 to 50 but for cases like this one, this is still not high enough. We therefore further increased the value of Vthrcomp to 70.

Grading procedure:
We will manually regrade to B all those modules graded as C due to dead trimbits provided the Vcal Threshold Trimmed distribution looks fine, the number of trimming problems is below the threshold for grade C (167), the distribution of trim bits looks reasonable, and they are not graded C for any other reason.
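The regrading rule above can be sketched as a predicate (the function and argument names are mine; the 167 cut is the grade-C trimming threshold quoted in the procedure):

```python
def regrade_to_b(graded_c_for_trimbits_only, vcal_thr_trimmed_ok,
                 n_trim_problems, trimbit_dist_ok, c_threshold=167):
    """Regrade a C module to B only if all conditions of the procedure hold."""
    return (graded_c_for_trimbits_only      # not graded C for any other reason
            and vcal_thr_trimmed_ok         # trimmed threshold distribution fine
            and n_trim_problems < c_threshold
            and trimbit_dist_ok)            # trim bit distribution reasonable

assert regrade_to_b(True, True, 10, True)
assert not regrade_to_b(True, True, 200, True)   # too many trimming problems
assert not regrade_to_b(False, True, 10, True)   # graded C for another reason
```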

Side remark:
It should be noted that the Trim Bit Test distribution and the way the number of dead trimbits is counted will not always catch cases when the trim bit test algorithm failed. For instance, in the MoReWeb output for ROC 10 in M1537 one does not immediately see that there are many underflow entries

which arise from the fact that the untrimmed threshold used in the test is too high (Vthrcomp value too low), so a Vcal that passes the threshold is not found and is left at the default value of 0


This problem of not spotting the algorithmic failure is particularly severe in the case of ROC 2 of the same module M1537 where it goes completely unnoticed in the summary table

it is very hard to spot from the plot (because there is no statistics box showing the underflow)

but the problem is there for a large number of pixels



Even from the tornado plot one would not expect problems

but this tornado is for one particular pixel (column 12, row 22) and there is no guarantee that for other pixels the trim bit test won't fail.
  129   Tue Mar 24 15:41:29 2020 Andrey StarodumovModule gradingChange in MoreWeb GradingParameters.cfg
On March 3rd the Vcal calibration parameter was changed:
- StandardVcal2ElectronConversionFactor from 50 to 44 (electrons/VCal)

On March 24 the following changes were made:
- trimThr from 35 to 50 (to synchronize with the current target trimming threshold of Vcal=50)
- TrimBitDifference from 2. to -2. This means that differences between trimmed and untrimmed
thresholds close to 0 (<2, as the cut was before) will be ignored.
  142   Thu Mar 26 10:41:24 2020 danek kotlinskiModule gradingM1606
I have looked more closely at M1606.
Trimming shows problems in ROC2: 8 pixels have 0 threshold and 61 pixels have thr=-1.
Strange, because from the S-curves the 61 have a high threshold of ~62 but no failure.
PixelAlive and PH are OK for these 61 pixels; the 8 are just completely dead.
The trimbit test shows those 61 pixels failing.
So maybe this is the first time we see a real trimbit failure!

I suggest to grade this module C+ (or *).
  143   Thu Mar 26 15:37:18 2020 UrsModule gradingM1606
attached find the threshold difference distributions for all four trimbits for all ROCs
Attachment 1: anaTrim-thrDiff-data__M1606__pxar.pdf
anaTrim-thrDiff-data__M1606__pxar.pdf
  161   Tue Mar 31 06:54:12 2020 danek kotlinskiModule gradingM1542
I have tested M1542.
It does not look bad, but there is a group of pixels in the lower/left corner of ROC6 which behaves differently.
One can also see that something is not right in some PH optimisation plots.
See the attached plot.

We should probably leave this module as grade C+.
D.
Attachment 1: Canvas_1.png
Canvas_1.png
  246   Fri May 1 19:34:01 2020 danek kotlinskiModule gradingM1582
M1582 was classified as C because of 167 pixels failing trimming in ROC1.
I have tested this module.
The attached plots show the 1d & 2d threshold distributions.
The average threshold is 49.98 with rms=1.39; there is 1 pixel failing (at 0) and 1 pixel with a very low threshold of 37.
I think this ROC is OK, actually it is very nice.
D.
Attachment 1: m1582_roc1_thr.png
m1582_roc1_thr.png
Attachment 2: m1582_roc1_thr_2d.png
m1582_roc1_thr_2d.png
  254   Thu May 7 00:27:41 2020 Dinko FerencekModule gradingComment about TrimBitDifference and its impact on the Trim Bit Test
To expand on the following elog, on Mar. 24 Andrey changed the TrimBitDifference parameter in Analyse/Configuration/GradingParameters.cfg from 2 to -2
$ diff Analyse/Configuration/GradingParameters.cfg.default Analyse/Configuration/GradingParameters.cfg
45c45
< TrimBitDifference = 2.
---
> TrimBitDifference = -2.

From the way this parameter is used in the code, one can see that setting the TrimBitDifference to any negative value effectively turns off the test.
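Why a negative cutoff disables the test can be seen from a simplified paraphrase of the check (this is my sketch, not the actual MoReWeb code): a trim bit is flagged as dead when the threshold shift it produces is below the cutoff, and a shift, being non-negative, can never fall below a negative cutoff.

```python
def trimbit_dead(threshold_shift, trim_bit_difference):
    """Simplified paraphrase of the check: flag a dead trim bit when the
    measured threshold shift is smaller than the configured cutoff."""
    return threshold_shift < trim_bit_difference

# With the old cutoff of 2, a shift close to 0 is flagged:
assert trimbit_dead(0.5, 2.0)
# With a negative cutoff, no non-negative shift is ever flagged,
# i.e. the test is effectively turned off:
assert not trimbit_dead(0.0, -2.0)
```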

More details about problems with the Trim Bit Test can be found in this elog.
  285   Fri Jun 5 13:47:11 2020 Andrey StarodumovModule gradingM1613
The FT of this module was done twice, on March 23 and March 25, in both cases with the final test SW. On March 23 the module was graded C due to many trim bit failures; that is why it was retested. But after trimbit failures were excluded from the grading, the FT of Mar 23 looks better than the later FT of Mar 25. That is why the FT of Mar 23 is kept.
  297   Fri Aug 28 11:13:33 2020 Andrey StarodumovModule gradingM1615
M1615: one pixel in ROC10 is unmaskable, hence the module should be graded C. Otherwise the module is grade A.
To be checked!
  141   Wed Mar 25 20:17:16 2020 danek kotlinskiModule doctorThe list of modules tested today by Wolfram
M1542 nothing on sdata3 (12-15), bad output or wire-bond
M1544 ROC15 timing off, can be made to work (roc 15 vdig 11, ..), but probably not usable for the detector
M1545 full readout, ok, upgraded to C*
M1546 TBM1 output alpha broken (sdata1) => replace TBM1
M1563 ROC 8 thinks he is roc 10 ==> check address wire bond on roc 8?
M1567 roc 11 missing, sdata2 (b-channel)
M1572 sticker says : both TBMs broken, no SDA => replace both TBMs?
M1593 tbm0, core B (roc 0-3), no decodable output, core/stack working ok, output beta+ high ~ 2V
M1594 roc 0 dead
M1615 nothing on sdata 3 (12-15), very bad cable (corrosion?) re-running with new cable
M1616 ROC 10 not programmable, no roc headers on sdata2 (b) ==> TODO check clock to roc 10
M1617 readout but no hits in ROC 8, mechanical damage on chip edge
M1621 ROC 8 not programmable, no roc headers or tbm trailers on sdata2a (tbm1b) ==> TODO check clock on ROC 8
M1623 probably no cal-trig-reset to roc 0-3
M1625 definitely not cal-trig-reset to roc 0-3 (CTR- at 0V), damaged when trying to pull wire-bonds
M1575 no readout from rocs 0,1 (tbm0b, sdata4), all rocs programmable ==> TODO check CTR roc 0
  150   Fri Mar 27 14:46:04 2020 Andrey StarodumovModule doctorWolfram's summary from Mar 25
M1542 nothing on sdata3 (12-15), bad output or wire-bond
M1544 ROC15 timing off, can be made to work (roc 15 vdig 11, ..), but probably not usable for the detector
M1545 full readout, ok, upgraded to C*
M1546 TBM1 output alpha broken (sdata1) => replace TBM1
M1563 ROC 8 thinks he is roc 10 ==> check address wire bond on roc 8?
M1567 roc 11 missing, sdata2 (b-channel)
M1572 sticker says : both TBMs broken, no SDA => replace both TBMs?
M1575 no readout from rocs 0,1 (tbm0b, sdata4), all rocs programmable ==> TODO check CTR roc 0
M1593 tbm0, core B (roc 0-3), no decodable output, core/stack working ok, output beta+ high ~ 2V
M1594 roc 0 dead
M1615 nothing on sdata 3 (12-15), very bad cable (corrosion?) re-running with new cable
M1616 ROC 10 not programmable, no roc headers on sdata2 (b) ==> TODO check clock to roc 10
M1617 readout but no hits in ROC 8, mechanical damage on chip edge
M1621 ROC 8 not programmable, no roc headers or tbm trailers on sdata2a (tbm1b) ==> TODO check clock on ROC 8
M1623 probably no cal-trig-reset to roc 0-3
M1625 definitely not cal-trig-reset to roc 0-3 (CTR- at 0V), damaged when trying to pull wire-bonds
  151   Fri Mar 27 16:41:16 2020 Andrey StarodumovModule doctorWolfram's summary from Mar 25

Andrey Starodumov wrote:
M1542 nothing on sdata3 (12-15), bad output or wire-bond
M1544 ROC15 timing off, can be made to work (roc 15 vdig 11, ..), but probably not usable for the detector
M1545 full readout, ok, upgraded to C*
M1546 TBM1 output alpha broken (sdata1) => replace TBM1
M1563 ROC 8 thinks he is roc 10 ==> check address wire bond on roc 8?
M1567 roc 11 missing, sdata2 (b-channel)
M1572 sticker says : both TBMs broken, no SDA => replace both TBMs?
M1575 no readout from rocs 0,1 (tbm0b, sdata4), all rocs programmable ==> TODO check CTR roc 0
M1593 tbm0, core B (roc 0-3), no decodable output, core/stack working ok, output beta+ high ~ 2V
M1594 roc 0 dead
M1615 nothing on sdata 3 (12-15), very bad cable (corrosion?) re-running with new cable
M1616 ROC 10 not programmable, no roc headers on sdata2 (b) ==> TODO check clock to roc 10
M1617 readout but no hits in ROC 8, mechanical damage on chip edge
M1621 ROC 8 not programmable, no roc headers or tbm trailers on sdata2a (tbm1b) ==> TODO check clock on ROC 8
M1623 probably no cal-trig-reset to roc 0-3
M1625 definitely not cal-trig-reset to roc 0-3 (CTR- at 0V), damaged when trying to pull wire-bonds


M1542 is fully OK. It is graded C due to the very broad relative gain distribution of ROC6.
If we have time, it would be useful to take a close look at this module.
It remains graded C.
  183   Fri Apr 3 17:39:19 2020 Andrey StarodumovModule doctorM1654
After a protection cap was glued to M1654, I observed a few strongly bent wire bonds and probably some shorts on ROC8. One of them (VD+) is even detached (on the HDI side). Pads from 25/26 to 35 are affected.
Nevertheless, the Reception test gave the same result as before the cap gluing: grade A.
It would be useful if Wolfram took a look and decided what to do. Silvan proposed to remove the cap and repair the wire-bonds.
I could test the procedure on a dummy module (I have one) with a glued cap and, if successful, do the same with M1654.
For the moment the module is placed in the Module doctor tray.
  213   Wed Apr 15 08:48:09 2020 danek kotlinskiModule doctorWolfram's tests from 14/4/20
Two of them are fixed and can be re-tested:
M1623 tbm bond
M1657 bond roc 15


The others need further investigation: maybe a new TBM, maybe a closer wire-bond inspection.

M1633 roc 0-3 programmable, but no readout, unclear
M1635 roc 12-15 programmable, but no readout (except for roc 12), no token passed
M1653 roc 12-15 not programmable, otherwise ok, sda? tbm?
M1671 some problem with roc 14/15, unclear
  5   Wed Aug 7 11:49:07 2019 Matej RoguljicModule assemblyHDI glue irradiation tests
Six different glues were used to glue the cap on six dummy modules. They were all irradiated to 120 MRad in Zagreb after which they were taken to PSI and tested with the tweezers under the microscope.

Standard glue (Dow Corning), used in CMS - it separated into two components: a solid part on the capacitors and a liquid part on the cap. Surface tension was actually holding the cap quite strongly. Silvan deemed it the second-best glue.

Two component epoxy adhesive - the strongest glue out of all tested. The only drawback of it was that during the application of the glue to the capacitors, it left puddles around some of them since it is quite liquid before curing. This might have been prevented if we had a proper glue stamp. The glue stamp used at the time contained too much excess glue, and if another one was printed, with small grooves where the capacitors are, the amount of glue transferred should be lower and no puddles should appear. This glue was graded the best of all the tested glues. EDIT: the glue stamp with smaller grooves was printed and now there are no puddles anymore.

SG-20 (black) - sticks, but not really well. It was also quite soft and malleable. Because it doesn't stick very well, it is not recommended for gluing the cap.

WS-200 - almost identical to the SG-20, just a different color.

Terosan MS939 - doesn't stick, not recommended.

Ergo 6521 - doesn't stick, not recommended.

Took photos of them under the microscope. There are two photos for each glue: the first with the cap at rest, the second with an upward force applied with the tweezers. All of them except the two-component epoxy adhesive detached when the force was applied. With the epoxy adhesive, the module started lifting, but the glue held.
Attachment 1: dowCornig_con.jpg
dowCornig_con.jpg
Attachment 2: dowCornig_split.jpg
dowCornig_split.jpg
Attachment 3: epoxy_2component_con.jpg
epoxy_2component_con.jpg
Attachment 4: epoxy_2component_split.jpg
epoxy_2component_split.jpg
Attachment 5: sg20_con.jpg
sg20_con.jpg
Attachment 6: sg20_split.jpg
sg20_split.jpg
Attachment 7: terosan_ms939_con.jpg
terosan_ms939_con.jpg
Attachment 8: terosan_ms939_split.jpg
terosan_ms939_split.jpg
Attachment 9: ws200_con.jpg
ws200_con.jpg
Attachment 10: ws200_split.jpg
ws200_split.jpg
  11   Wed Sep 18 15:45:45 2019 Matej RoguljicModule assemblyHDI sparking under HV test
HDI 8010 passed all tests except the HV test. Under the HV test, some sparks could be heard and even seen by eye on the right TBM (the one closer to the HV pad!). Testing it again showed that it was indeed damaged by the spark. Sparking craters could be seen with the microscope between pads 4 and 5 of the TBM.

The same thing happened a month later on HDI 8012. Again, it passed all the tests until HV, when sparks appeared and damaged the TBM and, curiously enough, the damage was in the same position on the TBM as with 8010.

HDI 8011 was put under the HV test as well; however, this one didn't have any wires bonded to the TBMs because gold plating was missing on several of the wire-bond pads. Sparking could be heard, but we couldn't locate where the damage happened. There is one potential candidate, shown in the photo attached to this log entry.

On 18.09. a possible explanation for the sparking was found. Wolfram and Matej put one of the "faulty" HDIs under the HV test and noticed that sometimes a spark could be seen between the HDI handle and the aluminum base plate! It looks like the HV pin rests near the border of the HV pad and the edge of the HDI. Because of that, there is a discharge from the HV pin to the HDI handle. The discharge couples to the ROC wire-bond pads closest to the HV pad, providing a path to the TBM! This is further supported by the fact that the damaged TBM pad is connected to the first four ROCs closest to the HV pad. To test this, we put kapton tape on the HDI handle near the HV pad, tested the HDI again, and there were no sparks.

The HDIs to be used for L1 replacement are slightly longer to prevent HV jumping to the sensor. Incidentally, this might also solve the sparking issue. If the issue is not solved on the longer HDIs, HDI holders will have to be further isolated in the region close to the HV pad.
Attachment 1: HDI_8010.jpg
HDI_8010.jpg
Attachment 2: HDI_8010_unzoom.jpg
HDI_8010_unzoom.jpg
Attachment 3: HDI_8012.jpg
HDI_8012.jpg
Attachment 4: HDI_8012_2.jpg
HDI_8012_2.jpg
Attachment 5: HDI_8011_1.jpg
HDI_8011_1.jpg
Attachment 6: HDI_8011_2.jpg
HDI_8011_2.jpg
Attachment 7: HDI_8011_3.jpg
HDI_8011_3.jpg
  16   Thu Sep 26 21:48:56 2019 Dinko FerencekModule assemblyCap gluing training
Today I performed my first cap gluing. As an exercise, it was first done on two dummy modules and later on the pre-production module M1522. Before the cap gluing, module M1522 was visually inspected and a Reception test was run; everything looked fine. After the cap gluing, module M1522 was again visually inspected, and it looked like the wire bonds in one of the corners might have been slightly bent and some glue got deposited on some of the wire bonds. The Reception test will be repeated tomorrow.
  19   Fri Sep 27 23:22:02 2019 Dinko FerencekModule assemblyCap gluing training
Attached are the bump bonding threshold maps before and after cap gluing and the (before - after) difference for M1522.
Attachment 1: M1522_beforeGluing.png
M1522_beforeGluing.png
Attachment 2: M1522_afterGluing.png
M1522_afterGluing.png
Attachment 3: M1522_diff.png
M1522_diff.png
  20   Wed Oct 2 12:41:16 2019 Dinko FerencekModule assemblyFirst production modules assembled
First production modules (M1530, M1531, M1532) were built on Monday, Sep. 30 2019.

There was a problem with the wire bonding of ROC5 on M1530. There is a small crater on one of the ROC pads which appears to have been created by the BareModule probe-card needle.

After initial tests of the 3 modules with pXar, we noticed the following:

M1530: data missing from ROCs 4 and 5 (expected based on the wire bond problem on ROC5) but otherwise looks fine
M1531: could set Vana, but setvthrcompcaldel and pixelalive show no data
M1532: looks fine.

Caps were glued to M1530 and M1532 (damaged cap was glued to M1530) and Reception tests were run before and after gluing. Attached are the bump bonding threshold maps before and after cap gluing and the (before - after) difference for M1530 and M1532.
Attachment 1: M1530_beforeGluing.png
M1530_beforeGluing.png
Attachment 2: M1530_afterGluing.png
M1530_afterGluing.png
Attachment 3: M1530_diff.png
M1530_diff.png
Attachment 4: M1532_beforeGluing.png
M1532_beforeGluing.png
Attachment 5: M1532_afterGluing.png
M1532_afterGluing.png
Attachment 6: M1532_diff.png
M1532_diff.png
  25   Tue Oct 29 16:36:23 2019 Matej RoguljicModule assemblyAssembly lab cap gluing jigs
There are two jigs for cap gluing in the assembly lab, circled in red and blue in the photo in the attachment. The one closer to the door (red) was used to glue a Phase 2 HDI to a dummy sensor and four Phase 2 ROCs. Before changing the head, the settings of the alignment screws were noted.

The jig closer to the doors (red): Top two pins - 5.75; Left pin - 5.5
The jig further from the doors (blue): Top two pins - 5.71, Left pin - 3.25
Attachment 1: IMG_20191029_150037.jpg
IMG_20191029_150037.jpg
  54   Fri Jan 24 18:16:09 2020 Dinko FerencekModule assemblyProtective cap gluing
Today I additionally practiced protective cap gluing by adding a protective cap to module M1536.
  63   Tue Feb 4 19:51:10 2020 Dinko FerencekModule assemblyCap gluing
Today protective caps have been glued to modules 1545, 1547, 1548, 1549, and 1550.
  65   Wed Feb 5 16:07:03 2020 Dinko FerencekModule assemblyModule assembly spreadsheet migrated to Google Sheets
Excel file stored on CERNBox containing the module assembly spreadsheet has been replaced with a Google Sheet https://docs.google.com/spreadsheets/d/12m2ESCLPH5AyWuuYV4KV1c5O_NEy7uP5PQKzVsdr3wQ/edit?usp=sharing. This change should make common editing of the spreadsheet easier.

The new spreadsheet has also been linked from the main L1 replacement web page http://cms.web.psi.ch/L1Replacement/
  68   Thu Feb 6 17:09:50 2020 Dinko FerencekModule assemblyGluing stamp improvements
Silvan has made additional grooves on the gluing stamp which now sits stably on top of the module without any rocking motion and also allows better glue distribution to all the HDI components that are supposed to receive the glue.
  75   Sun Feb 9 12:05:47 2020 Dinko FerencekModule assemblyCap gluing
Today protective caps have been glued to modules 1561 and 1562.
  85   Fri Feb 14 14:05:16 2020 Dinko FerencekModule assemblyProduction yield so far
Of the 41 modules produced and tested so far (1536-1576), 6 modules were found to be bad before or during the reception test, 5 were graded C after the full qualification (one of which is possibly C*), and the remaining 30 modules were graded B (of which 4 were manually regraded from C to B).

The overall yield of good modules ((A+B)/Total) produced so far is 30/41 = 73%.
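As a quick sanity check on the numbers above (the counts are taken from this entry; the variable names are only illustrative):

```python
# Yield check for the counts quoted above (modules 1536-1576).
produced = 41   # modules produced and tested so far
bad_early = 6   # bad before or during the reception test
grade_c = 5     # graded C after the full qualification
good = produced - bad_early - grade_c   # remaining good (A+B) modules
yield_pct = 100.0 * good / produced
print(good, round(yield_pct))  # 30 73
```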
  87   Mon Feb 24 13:53:32 2020 Urs LangeneggerModule assemblySetup changes in week 8
Setup changes in week 8

Silvan considered the single-module gluing approach suboptimal. It was changed:
- we switched sides, the cap gluing is now done on the window-side of the aisle
- there are now two jigs set up for cap gluing

The procedure itself for gluing did not change: Apply the stamp with the glue to module 1, release, put module 1 into its jig, insert module 2 into gluing jig, apply stamp (because of the glue fluidity, the marks from module 1 will have disappeared and there is no need to re-apply any glue to the stamp). The caps are lowered onto the modules only once the glue has been applied to both modules.

Silvan recommended applying the weight of only one copper(?) bar, to be placed centered on the top of the z-stage.

Silvan also recommended against using ethanol; instead we are supposed to use acetone. As a result, the plastic tweezers should not be used anymore, but rather the metallic tweezers.

Finally, Silvan changed the Peltiers and applied insulation inside Red October. We observed cooling times from 10C to -20C of 6 minutes.
  88   Mon Feb 24 14:01:29 2020 Urs LangeneggerModule assemblyCap gluing in W8
In week 8 the following modules were glued and tested:

M1581
M1582
M1583
M1584
M1585
M1586
M1587
M1588
M1589
M1591

M1590 was not glued last week because it had one HV bond broken (Silvan fixed this on Monday, 20/02/24).

M1586 initially had problems with the readout after gluing, but this was fixed by properly closing the MOLEX connector. In the reception test, M1586 was graded 'B'.

All other modules were graded 'A' in the reception test.

Overall the double-gluing setup works very well. The modules above were glued basically in one day.
  89   Tue Feb 25 17:11:18 2020 Urs LangeneggerModule assemblyModules glued and tested Feb 24/25
Modules glued and tested Feb 24/25

The letters after the module name indicate the reception test grade.

M1590 A (after Silvan fixed broken HV bond)
M1592 A
M1595 B
M1596 A
M1597 A
M1599 C (leakage current: 16uA)
M1600 A

M1598 had a broken clock wire bond; it was diagnosed by Wolfram and fixed by Silvan. It will be processed together with the other modules that had problems after being taken out of the storage box (M1593 and M1594), in case they can be fixed.
  110   Mon Mar 16 15:17:18 2020 Matej RoguljicModule assemblyM1601 and M1602
Andrey and I assembled modules 1601 and 1602 on Friday, 13.3. On Monday, 16.3. I ran reception on them (both graded A) and then glued protection caps on them.
  114   Wed Mar 18 17:43:41 2020 Andrey StarodumovModule assemblyM1605-M1608
M1605 and M1606: reception done on Mar 17
M1607 and M1608: reception done today
All: grade A

Caps glued to M1605-M1608
  98   Mon Mar 9 16:16:48 2020 Andrey StarodumovHDI testHDIs: 3001, 3002, 3003, 3004 and 4033
Today I tested 5 HDIs: 3001, 3002, 3003, 3004 and 4033
Results:
3001
- electrically OK
- HV OK
- I heard 10+ sparks during 60sec
--> HDI did not pass tests
Danek took it for HV test at his setup

3002-3004, 4033
- all showed different patterns of electrical test failures in Quadrant 0 and 3 (Q0, Q3)
- either all 3 tests (clock, CTR, SDA) fail on Q0 and/or Q3, or only some of them, or only Ch1 (out of the two channels) fails.
--> HDIs did not pass tests
Since the patterns were similar, the reason could be misalignment of the contacts. Under the microscope, in some cases one could see marks outside the contact pads.
Conclusion:
we have to re-align the pin head of the jig with respect to the HDI and repeat the tests of HDIs 3002-3004 and 4033.
  101   Wed Mar 11 17:12:05 2020 Matej RoguljicHDI testHDI testing procedure change
There is an additional test that will be used from 11.03. on all HDIs. It involves measuring the voltage between ground and the HV pin of the needle card with the goal of checking whether proper voltage is delivered to the HDI. The HDI testing script has been updated and it now prompts the user to set the voltage to -800 V, measure the voltage on the pin and write this into the test results. After this, another instruction has been added which tells the user to raise the Z-stage (needle card) before setting voltage to -1100 V. This is done to prevent sparking from the HV pad to the HV pin if the alignment is slightly off.

While measuring the voltage on the HV pin, one should keep in mind the proper settings on the Keithley. When the voltmeter probe is connected to the pin, the current reading on the Keithley goes up, and if the current readout range is low, it will limit the voltage even before hitting compliance. This was observed during the initial testing of the new procedure. The range should be set to maximum (100 uA) and the compliance to 105 uA. This is high enough that it is not reached while measuring the voltage at -800 V.
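The pin-voltage check the updated script asks for boils down to comparing the measured value against the supply. A minimal sketch, not the actual HDI testing script (the function name and the 20 V tolerance are assumptions for illustration):

```python
# Hypothetical check: is the voltage measured on the HV pin consistent
# with the supplied -800 V? The tolerance value is an assumption.
def hv_pin_ok(measured_v, supplied_v=-800.0, tolerance_v=20.0):
    """Return True if the pin voltage is within tolerance of the supply,
    e.g. a reading of about -790 V with good contact."""
    return abs(measured_v - supplied_v) <= tolerance_v

print(hv_pin_ok(-790.0))  # True: good contact
print(hv_pin_ok(-500.0))  # False: voltage not delivered (bad cable/contact)
```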
  102   Wed Mar 11 17:22:57 2020 Matej RoguljicHDI test8 HDI tests on 11.3.
I tested 8 HDIs today: 5029,5030,5031,6034,6036,6035,1031 and 1032
All of them tested fine.
  104   Thu Mar 12 12:19:43 2020 Matej RoguljicHDI test5 HDI tests on 12.3.
I tested 5 HDIs on 12.3.: 1030, 1029, 1040, 1039 and 1038. 1039 failed all the electrical tests; all the others passed. HDI 1038 has one wire bond which is connected to the pad on the HDI and then extends back up (like this: TBM./\.HDI/). The connection is good, but I want to check with Silvan later whether this is a problem.
  121   Fri Mar 20 18:09:47 2020 Andrey StarodumovHDI test3 HDIs tested
5008 failed, but most likely due to pin-head misalignment. The test is to be repeated.
5007 and 5008: passed the tests, but during HV measurements (-800V) with a multimeter I heard high-frequency noise and instead of -800V measured either -100V or -500V, with the voltage jumping significantly. This effect is to be understood.
For the moment these HDIs will not be used in production.
  126   Tue Mar 24 15:16:56 2020 Andrey StarodumovHDI test8 HDIs tested on Mar 23
1014, 1016, 2001, 2003,
2004, 4013, 4015, 5014
All are good.
  138   Wed Mar 25 18:30:23 2020 Andrey StarodumovHDI test4+2 HDI tested
2 suspicious HDIs retested and found OK: 5006/5007
5013, 5016, 3017, 3019 are OK.
  145   Thu Mar 26 17:43:06 2020 Andrey StarodumovHDI test6+1 HDI tested
HDIs: 6021, 3014, 2005, 2009, 2006 and 2010 are OK
HDI 6024 failed: SDA0 and SDA2 on Ch1 missing

It looks like the lower-than-800V measurements are related to the cable.
After the cable was changed, all measurements read 790V.
  149   Fri Mar 27 14:32:11 2020 Andrey StarodumovHDI testList of suspicious HDIs
Here are the HDIs that, when tested for the first time, showed 500-600V measured on the pin or directly on the HDI pad instead of 800V:
5006, 5007, 2005
3014, 3017, 3019, 6021

I put them in the box labeled "For Wolfram ...." on Wolfram's desk in the lab.

Nevertheless, I think this is an artificial issue that might be caused by a cable. I retested 5006, 5007 and 2005 with a new cable and in all cases measured HV=790V, which is normal.
  159   Mon Mar 30 17:40:35 2020 Andrey StarodumovHDI test3+2 HDIs (re-)tested
For 3 HDIs: 3014, 3017, 6021, HV has been re-measured. In all cases 790V is measured with 800V supplied.
2 HDIs: 1026, 5022 were tested. Both OK.
All 5 HDIs will be used in module assembly.
  168   Tue Mar 31 18:14:47 2020 Andrey StarodumovHDI test3 HDI tested
HDIs: 1027, 5002, 5003 are tested. All OK.
  174   Thu Apr 2 17:08:03 2020 Andrey StarodumovHDI test6 HDIs tested
3013, 1046, 1047, 1048, 5038, 5039 are tested. All are OK.
  184   Fri Apr 3 18:06:11 2020 Andrey StarodumovHDI test2 HDIs tested
HDIs 6018 and 6019 passed the tests.
  189   Mon Apr 6 17:07:23 2020 Andrey StarodumovHDI test8 HDIs tested
HDIs
4937 5043 5042 4040
2029 4019 4018 4017
are tested. All OK.
  194   Tue Apr 7 17:59:27 2020 Andrey StarodumovHDI test6 HDIs tested
HDIs
2030, 2031, 1021, 1022, 1023, 1035
are tested. All OK.
  208   Thu Apr 9 17:43:54 2020 Andrey StarodumovHDI test4 HDIs tested
HDIs 3029, 3031, 3032, 4042 passed tests. All OK.
  221   Mon Apr 20 17:09:12 2020 Andrey StarodumovHDI test4 HDIs tested
After staying at 90%+ RH, 2 HDIs became flat. The first one was easy to mount on an HDI holder.
After 1-2 hrs the second HDI became bent again, but it still remained flexible, so it was also easy to mount.
I put 2 more HDIs in the same conditions and after 2 hrs was able to mount and test them.
4041, 4043, 3033, 3041 tested. All OK.
  224   Wed Apr 22 17:28:42 2020 Andrey StarodumovHDI test11 HDIs tested
After keeping HDIs at very high RH for a few hours (24 is fine), they became flat and could be fixed on the HDI holder by vacuum.
11 HDIs were tested, all good:
3036, 3043, 3034, 3044, 1034, 6006, 6007, 6005,
6002, 6004, 6001
  229   Thu Apr 23 18:01:16 2020 Andrey StarodumovHDI test4 HDIs tested
The following HDIs are tested:
6007: OK
1034: Failed due to not working Channel 1 in CLK0, CTR0, SDA0 and SDA1
6006: OK
5021: OK
  230   Fri Apr 24 13:56:11 2020 Andrey StarodumovHDI test3 HDIs tested
Following HDIs tested from the box "to be understood":
5021, 3019, 5008. All are fine.
  234   Mon Apr 27 14:21:17 2020 Andrey StarodumovHDI test3 HDIs tested
3 remaining HDIs from the "to be understood" box were tested after flattening them over the weekend in the RH=99.9% box.
6024, 4034 are OK
1039 bad: no data from A1 and A2 DTB outputs, flat output
  2   Tue Aug 6 15:15:00 2019 Matej RoguljicGeneralActivities 19.6.-4.7.2019.
17.6.-21.6.

Andrei and Matej were at PSI working on the cap gluing setup.
Andrei cap-glued M2029 (Box9, T110).
Matej cap-glued M2331.
M2331 was tested before and after cap gluing.
M2233 was taken to Zagreb for irradiation; the cable of the module is in Box4, T040.

1.7.

HDIs 3019 and 3020 were tested and both failed; results are in the HDIresults
Performed characterization on one v4 chip; it will be irradiated in Zagreb and retested

2.7.

Silvan bonded M1520, so Andrei and Matej performed the full qualification on it (-20 and +10); it took 5 hours.

3.7.

Glued caps on 3 dummy modules with 3 different glues.
HDI 1317-3-008 with two component epoxy glue
HDI 1317-3-010 with SG-20 silicon glue
HDI 1317-3-004 with WS-200 silicon glue

4.7.

3 dummy modules taken to Zagreb for irradiation by Matej
M1508 was also taken
  9   Thu Sep 12 10:32:27 2019 Matej RoguljicGeneralActivities 10.9.-19.9.2019.
10.9.
Modules 1504, 1505, 1520 were brought to PSI after irradiation to 1.2 MGy in Zagreb. None of the 3 was working when put under test with pXar. 1504 and 1520 couldn't set Vana at all. 1505 could set Vana, but timing couldn't be found and the pixels weren't responding. Inspection under the microscope showed some "white-greenish" deposits on the metal pads of the HDIs, which are most likely shorting the ROC pads and causing the modules to be non-responsive.

11.9.
Removed the cap from 1504 to take a better look under the microscope and some photos. The good news is that the glue holds really well. The deposit looks like crystal growth. It reminded Danek of the humidity issues they had at CERN in 2014. Touching it with a needle showed that it is not completely solid, like "wet snow"; however, on some pads the deposit was more solid. Small transparent liquid deposits were also observed. Prepared 3 modules for cap gluing training: 1047, 10X1, 10X2. Only 1047 works (partially), and a Reception test was performed on it. Two gluing jigs were prepared in the assembly lab.

12.9.
Ran reception on 1509; it will go to ETH for irradiation (not really sure if it's ETH). Further tested modules 10X1 and 10X2. X2 should have hubIDs set to 17, 25, after which the ROCs are programmable; however, there is no readout. X1 should have hubIDs 31, 30, but only the TBM with hubID 30 works; the ROCs are programmable, but again there is no readout. Glued a protection cap to module 10X1. Roughly 4 minutes after the start of mixing, the glue starts to become fairly solid, which constrains the time available to apply it to a module.

13.9.
Glued caps on 1508 and 1047 using the jig further from the entrance door of the assembly lab. 1508 was glued first and had an asymmetrically placed cap (one side of the cap quite close to the wire bonds). The stage was adjusted so that the cap glued to 1047 was properly placed. An unused HDI, 8-011, was found; it was tested and found to be working until the HV test, where it started sparking at -1000V, similar to 8-010. There are burn marks visible under the microscope in the same position as the ones on 8-010.

16.9.
Investigated further the sparking issue with the HDIs. Ed Bartz took a look at the photos and identified that the damaged part of the TBM is the protection diode of pin 46.

17.9.
Equipped the coldbox with 4 DTBs which makes it possible to do qualification on 4 modules at the same time. Investigated phase2 ROC-HDI gluing jig.
  13   Thu Sep 19 14:20:07 2019 Matej RoguljicGeneralActivities 10.9.-19.9.2019.

Matej Roguljic wrote:
10.9.
Modules 1504, 1505, 1520 were brought to PSI after irradiation to 1.2 MGy in Zagreb. None of the 3 was working when put under test with pXar. 1504 and 1520 couldn't set Vana at all. 1505 could set Vana, but timing couldn't be found and the pixels weren't responding. Inspection under the microscope showed some "white-greenish" deposits on the metal pads of the HDIs, which are most likely shorting the ROC pads and causing the modules to be non-responsive.

11.9.
Removed the cap from 1504 to take a better look under the microscope and some photos. The good news is that the glue holds really well. The deposit looks like crystal growth. It reminded Danek of the humidity issues they had at CERN in 2014. Touching it with a needle showed that it is not completely solid, like "wet snow"; however, on some pads the deposit was more solid. Small transparent liquid deposits were also observed. Prepared 3 modules for cap gluing training: 1047, 10X1, 10X2. Only 1047 works (partially), and a Reception test was performed on it. Two gluing jigs were prepared in the assembly lab.

12.9.
Ran reception on 1509; it will go to ETH for irradiation (not really sure if it's ETH). Further tested modules 10X1 and 10X2. X2 should have hubIDs set to 17, 25, after which the ROCs are programmable; however, there is no readout. X1 should have hubIDs 31, 30, but only the TBM with hubID 30 works; the ROCs are programmable, but again there is no readout. Glued a protection cap to module 10X1. Roughly 4 minutes after the start of mixing, the glue starts to become fairly solid, which constrains the time available to apply it to a module.

13.9.
Glued caps on 1508 and 1047 using the jig further from the entrance door of the assembly lab. 1508 was glued first and had an asymmetrically placed cap (one side of the cap quite close to the wire bonds). The stage was adjusted so that the cap glued to 1047 was properly placed. An unused HDI, 8-011, was found; it was tested and found to be working until the HV test, where it started sparking at -1000V, similar to 8-010. There are burn marks visible under the microscope in the same position as the ones on 8-010.

16.9.
Investigated further the sparking issue with the HDIs. Ed Bartz took a look at the photos and identified that the damaged part of the TBM is the protection diode of pin 46.

17.9.
Equipped the coldbox with 4 DTBs which makes it possible to do qualification on 4 modules at the same time. Investigated phase2 ROC-HDI gluing jig.


18.9.
Decided on a final HDI irradiation test. We will have an HDI without components in a bag filled with nitrogen (N2 bag), an HDI with components in an N2 bag, a dummy module (HDI with a cap glued on it) in an N2 bag, a dummy module not in an N2 bag, and an HDI with components cleaned by Silvan not in an N2 bag. The idea is to expose them to irradiation in a dry environment to check whether there will be weird deposits on the metallic pads of the HDI. With the 2 samples not in an N2 bag we will see if the deposits appear again. To do so, we will inspect them under a microscope and take photos before irradiation, and inspect them again immediately after irradiation.

Furthermore, a possible explanation for the HDI sparking problem was devised, along with solutions should they be necessary. It is described in more detail in a separate log entry.
  15   Thu Sep 26 17:38:07 2019 Matej RoguljicGeneralActivities 25.-26.9.
It was planned to test a batch of HDIs on these 2 days, but the shipment was late. Dinko and Andrei were shown the HDI testing procedure. Matej worked further on the script used to test the HDIs. Dinko glued caps on 2 dummy modules and on M1522. New HDIs arrived on the 26th, and it looks like kapton will have to be put underneath them during testing to prevent sparking.
  18   Fri Sep 27 22:24:31 2019 Dinko FerencekGeneralActivities on 27. 9. 2019.
  27   Wed Oct 30 10:38:04 2019 Matej RoguljicGeneralActivities 28.-31.10.2019.
28.10. - Investigated a problem with the FullQualification setup, described in more detail in this note
29.10. - Further investigated the aforementioned problem and worked on preparing the jigs for Phase2 dummy module building.
30.10. - Confirmed that the reported issue is not present at the moment on the qualification system. Glued dummy sensor to four phase2 ROCs. Built a FC7 nanoCrate.
  38   Thu Nov 28 15:58:20 2019 Matej RoguljicGeneralActivities 26.-29.11.2019.
26.11.

Prepared the tools for RD53A digital module assembly in the assembly lab
Investigated whether the latest pXar commit solved the issue with PhOptimization (now called Ph); it now runs successfully.
Investigated the bump-bonding issue in which pXar reports bad bumps, though we suspected that was not the case. And indeed, putting a source on top of the chip reported to have bad bumps and taking data showed that there are no bad bumps of the kind reported by pXar. Our conclusion was that the BB test developed by PSI is not applicable to bumps bonded by Helsinki. BB2 seems to be appropriate.

27.11.

Assembled a Phase 2 digital module using good ROCs - 1A48, 1A47, 1A43, 1A42 - as ROCs 0, 1, 2 and 3, respectively.
Used the X-ray box to confirm our BB findings from yesterday, and the results agree that there are no defective bumps (of the kind reported by pXar) on M1534.

28.11.

Took a look at the assembled module, and it seems that one corner of the HDI was not properly glued. There are also solder marks on a couple of pads, so we decided not to wire-bond it.
Measured the alignment of the two assembled modules (one module was assembled during my last visit) using the microscope. The discrepancies in the distance between the HDI wire-bond pads and the ROC wire-bond pads are around 200 microns. If we take the length of the module to be 4 cm, this is less than half a degree of tilt.
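The half-degree figure can be checked directly:

```python
import math

# Tilt from a ~200 micron pad-position discrepancy over a 4 cm module length.
offset_m = 200e-6   # 200 microns
length_m = 0.04     # 4 cm
tilt_deg = math.degrees(math.atan(offset_m / length_m))
print(round(tilt_deg, 2))  # 0.29 degrees -- indeed less than half a degree
```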

Worked on presentation reporting these activities for the tracker week.
Attachment 1: 1-1.jpg
1-1.jpg
Attachment 2: 1-2.jpg
1-2.jpg
Attachment 3: 1-3.jpg
1-3.jpg
Attachment 4: 1-4.jpg
1-4.jpg
  42   Fri Nov 29 17:06:26 2019 Dinko FerencekGeneralDropbox folder on CERNBox set up
A dropbox folder for elComandante tar files was set up in my CERNBox and synchronized with the following local folder on the lab PC

/media/disk1/DATA/L1_DATA_Backup/CERNBox_Dropbox/

Note that the following link in the home folder

/home/l_tester/DATA

points to

/media/disk1/DATA/

so an alternative path is

/home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/

The CERNBox client was installed on the lab PC following instructions for Ubuntu at https://cernbox.cern.ch/cernbox/doc/linux.html
  46   Mon Jan 20 18:09:50 2020 Matej RoguljicGeneralActivities 20.-31.01.2020.
20.01.

Expanded the HDI testing setup to be able to mount wire-bonded HDIs to HDI handles. Tested 12 HDIs (v7.2), all of them passed the tests. HDIs were given to Silvan and are located in the storage box in the assembly lab. They will be used for module production.

21.01.

Tested 4 HDIs, version 7.1. All passed the electrical tests.
Tested 16 v7.2 HDIs. All but one passed the electrical tests.
Glued a protection cap on a pre-production module (M1535).

22.01.

Investigated problems with full qualification setup, narrowed it down to communication between PC and Keithley.

23.01.

Tested 23 HDIs. One failed the test, rest of them passed.

24.01.

Tested 15 HDIs. All of them passed the tests.
Investigated problems with threshold/trim tests in full qualification when going to -20. Suspecting wrong DACs used in the test.

27.01.

Continued investigating the problems with thresholds in the trimming procedures. The problem seems to be the DAC 'ctrlreg', which was set to 17. Setting it to 9 removed the problem for M1529.

28.01.

Confirmed that 'ctrlreg' also works at room temperature (+20 degrees). Did the reception test on modules 1529, 1536, 1537 and 1538. After this, caps were glued to modules 1537 and 1538. Did fulltest@-20, IV@-20, fulltest@+10 and IV@+10 for modules 1534, 1532, 1529 and 1536.

29.01.

Ran reception on modules 1539 and 1540. Ran the full qualification on modules 1536, 1537, 1538, 1540. Module 1539 turned out to have broken wire bonds on one TBM; reported this to Silvan and he re-bonded it. Assembled a second cap-gluing station.
  55   Fri Jan 24 18:22:00 2020 Dinko FerencekGeneralL1 replacement web page updated
This week several new links were added to the main L1 replacement web page located at the following URL
http://cms.web.psi.ch/L1Replacement/

The links point to other pages or information related to the replacement module qualification.
  113   Wed Mar 18 15:23:27 2020 Andrey StarodumovGeneralNew rules at PSI
From today, only two persons from the group are allowed to be present. We are working in shifts: Urs starts the full qualification in the morning, and in the afternoon I test HDIs, run Reception, glue caps, and switch off the cold box, chiller, etc. after the full test has finished.
Silvan usually builds 4 modules and 6-8 HDIs per day.
  212   Tue Apr 14 17:29:46 2020 Andrey StarodumovGeneralROC4 or ROC12 defects
Starting from M1650, almost every module has a cluster of dead pixels or broken bump bonds on
ROC4 or (more often) ROC12. The number of defects varies from 20 to 40.
It is almost excluded that such damage was made at PSI, since both of us, Silvan and I, connected the cable
to each module for the first time.
It would be interesting to check whether these bare modules all arrived together.
  222   Tue Apr 21 15:34:57 2020 Andrey StarodumovGeneralRetesting starts today
From today we will retest modules that were tested with pXar SW versions earlier than March 18.
There were a few changes before this date:
1) trimming VCal: 40->50
2) threshold at which trim bit test is done
3) improvements in PH optimization algorithm

No changes in test algorithm have been introduced since March 18.

All modules have been tested with CtrlReg=9; because of this, several modules failed at +10C.
From now on CtrlReg=17 will be used for the test.

The FT will be shorter: only one test at -20C, no T-cycling, and one test at +10C. Leakage current will be measured up to 200V.
  249   Mon May 4 15:28:14 2020 Andrey StarodumovGeneralM1660
M1660 has been taken from its gel-pak and cabled for retest.
This module was graded C only in the second FT at -20C; the first FT at -20C and the FT at +10C gave grade B. The massive trimming failure of pixels in ROC7 was not observed.
The module will be retested.
  280   Fri May 29 15:41:40 2020 Andrey StarodumovGeneralM1546
ROC10 of M1546 has 107 pixels with trimming problems:
in the VCal threshold scan after trimming, 81 pixels have a threshold underflow (i.e. <0) and 26 pixels are outside the 40-60 VCal window (around VCal 50), at +10C only.
The trim bit distribution looks reasonable.
It has been checked that, using the trimming parameters, the VCal threshold distribution is fine (checked at +20C). See plots attached:

VCal threshold distribution in p10_1 test:

Trim bits distribution in p10_1 test:

VCal threshold distribution taken with trim parameters at +20C:
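For reference, the per-pixel check described above (threshold underflow vs. the 40-60 VCal window around the VCal 50 target) can be sketched as follows. This is an illustrative helper, not the actual MoReWeb grading code; the function name is ours and the window limits are taken from the entry.

```python
# Illustrative sketch (not the MoReWeb grading code): classify trimmed VCal
# thresholds into underflow (<0), outside the 40-60 VCal window, or OK.
def classify_thresholds(thresholds, lo=40, hi=60):
    underflow = [t for t in thresholds if t < 0]
    outside = [t for t in thresholds if 0 <= t and not lo <= t <= hi]
    ok = [t for t in thresholds if lo <= t <= hi]
    return underflow, outside, ok
```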
  286   Fri Jun 5 16:22:19 2020 Andrey StarodumovGeneralCleaning L1_DATA directory
To be able to analyse Xray data and keep the Total Production overview in a proper state, we need to clean the L1_DATA directory.
Reasons:
1) if one analyses Xray test results with the -new flag, all tests previously deleted with the -d flag will be analysed again and the Total Production overview will have problems
2) if one analyses Xray test results with the -m flag, it is, first, very time consuming and, second, all tests of the module will again be analysed; the module will then have no information in the Total Production overview and hence will not be ranked.

Solution:
leave only the directories with the relevant tests: one from the cold box test and one from the Xray tests.

I also started to remove directories with the Reception test.

This should be completed in the week of June 8th.
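A sketch of the Reception-directory cleanup is below. The directory naming (M&lt;number&gt;_Reception_&lt;timestamp&gt;) is assumed from the usual pXar/MoReWeb output convention and may differ from the real layout; the idea is to list candidates first and delete only after review.

```python
import re
from pathlib import Path

# Hypothetical sketch of the cleanup: list Reception-test result directories
# under a data directory so they can be reviewed before deletion. The naming
# convention M<number>_Reception_<timestamp> is an assumption.
def reception_dirs(data_dir):
    """Return subdirectories that look like Reception test results."""
    pattern = re.compile(r"^M\d+_Reception")
    return sorted(p for p in Path(data_dir).iterdir()
                  if p.is_dir() and pattern.match(p.name))

# Dry run first; remove only after checking the list (e.g. with shutil.rmtree).
```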
  293   Sun Jul 5 13:20:55 2020 Dinko FerencekGeneralDisk cleanup on the lab PC (pc11366.psi.ch)
The home folder was cleaned up by deleting old pXar output .log and .root files. The full list is in the attached file. This freed around 11 GB of disk space.

In addition, the following MoReWeb output folders, which are no longer needed, were deleted:

/home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/M1549_FullQualification_2020-02-04_23h22m_1580854955/
/home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/M1595_FullQualification_2020-03-12_11h05m_1584007524/
/home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/M1598_FullQualification_2020-03-12_11h05m_1584007524/
/home/l_tester/L1_DATA/WebOutput/MoReWeb/FinalResults/REV001/R001/M1646_FullQualification_2020-04-02_08h41m_1585809682/

This freed around 805 MB of disk space.
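A cleanup like this can be done as a dry run first. The helper below is illustrative, not the script actually used: it collects pXar .log/.root output files under a directory and reports how much space deleting them would free.

```python
from pathlib import Path

# Illustrative dry-run helper (not the script actually used): find pXar output
# .log/.root files under a directory tree and sum their sizes.
def pxar_junk(root):
    """Yield pXar .log/.root output files below root."""
    for p in Path(root).rglob("pxar*"):
        if p.is_file() and p.suffix in (".log", ".root"):
            yield p

def freed_bytes(root):
    """Bytes that deleting all pXar junk files below root would free."""
    return sum(f.stat().st_size for f in pxar_junk(root))
```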
Attachment 1: pxar_junk_home.txt
/home/l_tester/L1_SW/pxar/data/M1546/pxar.log
/home/l_tester/L1_SW/pxar/data/M1546/pxar.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091303.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091540.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104546.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105022.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104419.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092743.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092653.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091430.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_110245.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091625.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091346.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_110245.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105151.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104457.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091427.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092536.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091346.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104457.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105424.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104430.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092828.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104857.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104546.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104414.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092641.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105424.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105151.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_110009.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091303.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092653.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104857.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092641.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104414.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091625.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104419.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105022.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105307.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092828.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_104430.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_110009.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091427.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200314_091540.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092536.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200312_105307.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_091430.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4_pxar_old/pxar_20200313_092743.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094441.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200314_124852.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092937.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093323.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093323.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093223.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_093122.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094344.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_093122.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_093804.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093438.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093132.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_092710.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093732.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092935.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094441.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200314_124852.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093026.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_092925.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093108.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092937.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092850.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_092925.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_093804.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092850.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093551.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_092935.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093732.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094229.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093223.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_092710.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094229.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093438.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200316_094344.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093132.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093026.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093551.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_procv4/pxar_20200315_093108.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104546.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104419.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104457.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104457.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104430.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104546.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104414.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104414.root
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104419.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d_old/pxar_20200312_104430.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d/pxar.log
/home/l_tester/L1_SW/pxar_20200318/data/tbm10d/pxar.root
/home/l_tester/L1_SW/pxar_March12/data/M2129/pxar.log
/home/l_tester/L1_SW/pxar_March12/data/M2129/pxar_20190620_135313.log
/home/l_tester/L1_SW/pxar_March12/data/M2129/pxar.root
/home/l_tester/L1_SW/pxar_March12/data/M2129/pxar_20190620_135313.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094949.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_090641.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114417.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092743.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_095304.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093222.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_092716.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105512.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105644.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_093025.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093318.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093130.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094708.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_092716.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114437.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_114627.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_120104.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114437.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_110712.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105508.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105644.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093130.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093050.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092608.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_095507.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_091944.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093318.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094245.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105619.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_091446.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094708.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093050.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094949.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105512.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_093025.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114244.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_091446.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_114627.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190627_174518.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_120104.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_092855.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190412_110712.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114417.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190416_114244.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092608.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092743.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092504.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092504.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092835.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_094245.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_092855.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_090641.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_091944.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105508.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_095304.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190627_174518.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_093222.log
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191031_095507.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20191030_092835.root
/home/l_tester/L1_SW/pxar_March12/data/M1510/pxar_20190424_105619.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141747.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190912_141452.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_150605.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_131808.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_120743.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141906.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141803.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_150605.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_131808.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141747.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190912_143845.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_120743.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_151618.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_121334.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_143416.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141803.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190912_143845.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_151618.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_141906.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190912_141452.root
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_121334.log
/home/l_tester/L1_SW/pxar_March12/data/M10X2/pxar_20190911_143416.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_091952.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_143413.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105728.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_143404.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_095641.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_091412.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151605.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151605.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121353.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121346.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112924.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090434.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090928.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121202.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_135631.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_091753.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090434.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_141644.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111926.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_141644.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090533.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105925.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_153953.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090358.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_091952.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090333.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_091651.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_110124.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_110124.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_091638.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121353.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_160836.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105925.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151211.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_143413.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_142023.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_135631.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105936.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_093908.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_093921.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105946.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_092345.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_112007.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151211.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090928.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121122.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105936.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_092345.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_164358.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151218.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_170345.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111918.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112757.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_100733.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121346.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112942.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121202.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111950.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_095641.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090333.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_103205.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090358.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090533.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090801.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090703.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_143404.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191030_090801.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_141557.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190805_174123.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_105946.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_160844.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_141557.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_100851.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111950.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_091412.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121307.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_093908.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111926.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_112007.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_092026.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_094809.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_090703.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_164358.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121307.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_153953.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_091547.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112924.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112757.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111958.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111958.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_121122.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_103205.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190805_174123.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190805_171853.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_111918.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_151218.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_091547.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_093921.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_091638.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_135431.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190807_160836.log
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191028_142023.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20191029_112942.root
/home/l_tester/L1_SW/pxar_March12/data/M1529/pxar_20190806_092026.log
... 873 more lines ...
  296   Fri Aug 14 13:47:26 2020 Andrey StarodumovGeneralnew PH optimisation test and Total Overview
The new PH optimisation is done at -20C as a FT, hence the results of this test are added to the "DB" file as a second m20_1 test.
If there is more than one result per test in the DB file, the total production overview is corrupted in such a way that for the affected
modules the number of defects is not calculated (due to the ambiguity) and hence these modules are not ranked according to the number of defects.
To correct this, one has to remove the second row with the same test from the DB file. In our case one needs to remove the "old" m20_1 test.
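The de-duplication step can be sketched as below. The row layout (module ID and test label, e.g. m20_1, in the first two whitespace-separated columns) is an assumption, since the actual DB file format is not shown in this entry.

```python
# Assumed row layout: "<module> <test> ...", e.g. "M1546 m20_1 ...". For each
# (module, test) pair keep only the last row seen, dropping the older duplicate
# that breaks the defect counting in the total production overview.
def drop_duplicate_tests(rows):
    kept = {}
    for row in rows:
        module, test = row.split()[:2]
        kept[(module, test)] = row  # a later row overwrites the earlier one
    return list(kept.values())
```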
While doing this I noticed that some new FTs ended with grade C while the previous ones were fine (graded B), or some subtests failed
while the grading remains B. Here is the list of such modules:
M1546 graded C
M1556 graded B but trimming completely failed (DTB_WRE1O5): TO RE-TEST
M1624 graded B but instead of 79 pixel defects it has 295 in 3 chips due to a trimming problem (DTB_WRE1O5): TO RE-TEST
M1600 graded B but instead of ~50 pixel defects it has about 300 due to trimming problems (DTB_WXC03A): to retest

In general the latest FT at -20C shows fewer defective pixels and bumps, but often the leakage current is slightly higher.
  34   Wed Nov 20 16:53:50 2019 Andrey Full testM1533 and M1534 FullQualification
The FullQualification of two modules, M1533 and M1534, has been done. The full test takes about 7 hrs.
The test included: FT@-10C, IV@-10C, 2 cycles from -10C up to +10C, FT@-10C, IV@-10C, FT@+10C and IV@+10C.
No issues observed.
To be understood:
1) why are the SCurve results (noise) empty?
2) why is the number of ROCs with defects always -1?
  43   Mon Dec 2 17:29:06 2019 Andrey StarodumovFull test4 modules FT
The FullQualification of four modules, M1526, M1529, M1521 and M1534, has been done. The full test takes about 5.5 hrs.
Test included: FT@-20C, IV@-20, 3Cycles from -20C up to +10C, FT@10C and IV@10C.
Only M1521 passed the test (Grade B). Several issues observed in other modules:
1) 2-6 ROCs with TrimBit test failure for many pixels
2) Trimming is bad for some ROCs
3) test at +10C often better than at -20C
4) some ROCs have issues with Thr and Gain distributions (out of specifications)

To be understood before mass production!
  44   Tue Dec 3 12:15:54 2019 Andrey StarodumovFull test4 modules FT

Andrey Starodumov wrote:
FullQualification of four modules M1526, M1529, M1521 and M1534 has been done. Full time of the test is about 5.5hrs.
Test included: FT@-20C, IV@-20, 3Cycles from -20C up to +10C, FT@10C and IV@10C.
Only M1521 passed the test (Grade B). Several issues observed in other modules:
1) 2-6 ROCs with TrimBit test failure for many pixels
2) Trimming is bad for some ROCs
3) test at +10C often better than at -20C
4) some ROCs have issues with Thr and Gain distributions (out of specifications)

To be understood before mass production!


Together with Urs, we ran the TrimBit test for M1526 at room temperature (without turning ON the coldbox),
and the results confirmed yesterday's observations of the TrimBit test failing for several ROCs.
Urs will take a look and try to understand the issue.

To illustrate the problem below two summary tables of M1526 at -20C and +10C are shown.
Attachment 1: M1526ATm20.png
M1526ATm20.png
Attachment 2: M1526ATp10.png
M1526ATp10.png
  58   Mon Feb 3 11:28:38 2020 Andrey StarodumovFull test4 modules FT: 1536, 1537, 1538, 1540
Jan 28: Reception test for 1536, 1537, 1538 OK
Then cap gluing
Jan 29
- Reception test for 1540 OK
- FT:
-- 1536: B/B/B (-20/-20/+10). Electrical Test is B due to mean noise >200e. IV: 3 uA for -20 and +10, slope >300
-- 1537: C/B/B. First -20 is C due to ROC10 with >200 TrimBit failures, noise >200e. IV: A
-- The TrimBit test fails but the TrimBit distribution is OK and the threshold reasonable (attachment)
==> Manually regraded to B
-- 1538: B/B/B (-20/-20/+10). Electrical Test is B due to mean noise >200e. IV: <0.2 uA for +10, slope ~1
-- 1540: B/B/B (-20/-20/+10). Electrical Test is B due to mean noise >200e. IV: <0.8 uA for +10, slope ~2.7
Attachment 1: 1537ROC10m20-TrThr.pdf
1537ROC10m20-TrThr.pdf
  59   Mon Feb 3 15:35:09 2020 Andrey StarodumovFull test4 modules FT: 1539, 1541, 1542, 1543
Jan 29
- Reception test for 1539 OK
- Cap gluing

Jan 30
- Reception test for 1541, 1542, 1543 OK
- Cap gluing
- FT:
-- 1539: C/B/B. First -20 is C due to an INCOMPLETE TEST: DTB stuck
==> Must be retested at -20C and then the files merged
-- 1541: C/C/C. Electrical Test is C due to TrimBit test failure in ROC13. Threshold is reasonable (see attachment)
==> Manually regraded to B
-- 1542: C/C/C. Electrical Test is C due to a RelGain of 0.200 instead of <0.1 (too wide a Gain distribution).
-- 1543: B/B/B (-20/-20/+10). Electrical Test is B due to mean noise >200e.
Attachment 1: 1542ROC13-TrThr.pdf
1542ROC13-TrThr.pdf
  72   Fri Feb 7 20:35:44 2020 Dinko FerencekFull testFT for 4 modules: 1545, 1547, 1548, 1549

1545: C/C/C (-20/-20/+10). Problematic ROC 14: mean noise >320e, many dead trimbits, wide trimmed Vcal threshold distribution. IV: 0.20 uA at +10 C, slope 1.09
1547: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.41 uA at +10 C, slope 1.08
1548: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.13 uA at +10 C, slope 1.10
1549: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.33 uA at +10 C, slope 1.12

Chiller operational procedures:
The chiller temperature was initially set to 4 C and the coldbox was able to reach -19.7 C, which is within the required 0.5 C margin, so the Fulltest could start. About one hour into the Fulltest, it was observed that the coldbox temperature had risen to -19.0 C. At that point the chiller temperature was lowered to 3.5 C and the coldbox was able to lower and keep the temperature in the -19.2 to -19.3 C range. Once the tests at -20 C were done, for the Fulltest and IV at +10 C the chiller temperature was initially raised to +11 C; after about one hour (midway through the Fulltest) it was lowered to +10 C because at one point the coolant temperature had risen to +13 C. The temperature history plot is attached.
Attachment 1: TemperatureCycle.pdf
  73   Sat Feb 8 20:48:11 2020 Dinko FerencekFull testFT for 4 modules: 1550, 1551, 1552, 1553

1550: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.45 uA at +10 C, slope 1.09
1551: C/C/C (-20/-20/+10). Mean noise >200e for some ROCs, dead trimbits in ROC 13, but trimmed Vcal threshold distribution and distribution of trim bits look OK. IV: 0.33 uA at +10 C, slope 1.09
==> Manually regraded to B
1552: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.23 uA at +10 C, slope 1.10
1553: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs, double-column readout problem in ROC 11 (see attached plot). IV: 0.41 uA at +10 C, slope 1.70

Chiller operational procedures:
Chiller temperature was set and kept at 3.5 C throughout the entire test. This was done after observing that during tests at +10 C, with the chiller left at +3.5 C, the Peltiers were mostly inactive and there was no excessive heating. This suggests that a coolant temperature of +3.5 C balances the ambient temperature and the heat produced by operating modules well enough to keep the coldbox at +10 C. The temperature history plot is attached.
Attachment 1: M1553_C11_AddressDecoding.pdf
Attachment 2: TemperatureCycle.pdf
  74   Sun Feb 9 10:52:17 2020 Dinko FerencekFull testFT for 4 modules: 1539, 1554, 1555, 1556

1539: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.17 uA at +10 C, slope 1.08
1554: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.44 uA at +10 C, slope 1.80
1555: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.99 uA at +10 C, slope 1.30
1556: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.87 uA at +10 C, slope 1.67
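A hedged guess at the "slope" quoted with each IV result: assuming it is the leakage-current ratio I(150 V)/I(100 V) commonly used in pixel-module grading (this log does not define it), it could be computed as:

```python
# Sketch of the IV "slope" figure of merit, under the ASSUMPTION that it
# is the current ratio I(150 V)/I(100 V); values close to 1 indicate a
# flat IV curve with no sign of breakdown. Not taken from the actual
# analysis code.

def iv_slope(iv_points, v_low=100.0, v_high=150.0):
    """iv_points: dict {voltage_V: current_uA}. Returns I(v_high)/I(v_low)."""
    return iv_points[v_high] / iv_points[v_low]
```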
  77   Sun Feb 9 19:34:55 2020 Dinko FerencekFull testFT for 4 modules: 1557, 1558, 1559, 1560

1557: C/C/C (-20/-20/+10). Mean noise >200e for some ROCs, dead trimbits, failed trimming and mean noise >300e in ROC 4. IV: 0.20 uA at +10 C, slope 1.08
1558: C/C/C (-20/-20/+10). Mean noise >200e for some ROCs, dead trimbits in ROCs 2 and 12, mean noise just above 300e for ROC 2, possibly C* module. IV: 0.27 uA at +10 C, slope 1.07
1559: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.23 uA at +10 C, slope 1.09
1560: B/B/B (-20/-20/+10). Electrical grade is B due to mean noise >200e for some ROCs. IV: 0.24 uA at +10 C, slope 1.09
  84   Fri Feb 14 09:48:27 2020 Dinko FerencekFull testFT for 12 modules: 1561-1576 (1563, 1567, 1572, 1575 excluded)
Feb. 11

1561: C/B/B (-20/-20/+10). Mean noise >200e for some ROCs, dead trimbits in ROC 6, but trimmed Vcal threshold distribution and distribution of trim bits look OK. IV: 0.26 uA at +10 C, slope 1.07
==> Manually regraded to B
1562: TEST INCOMPLETE: pXar crashed during 2nd Fulltest@-20 (m20_2). For more details, see here.
1564: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.78 uA at +10 C, slope 2.25

Feb. 12

1562: C/C/C (-20/-20/+10). TEST INCOMPLETE: SCurves test failed for all ROCs in all 3 Fulltests with the following error message
ERROR: <PixTestScurves.cc/scurves:L277> no scurve result histograms received?!
Trimming does not work in ROC 2.
1565: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.39 uA at +10 C, slope 1.07
1566: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.37 uA at +10 C, slope 1.07
1568: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.13 uA at +10 C, slope 1.15
1569: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.16 uA at +10 C, slope 1.09
1570: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.18 uA at +10 C, slope 1.08
1571: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.43 uA at +10 C, slope 1.09

Feb. 13

1573: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.25 uA at +10 C, slope 1.09
1574: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs, problems with bump bonding and trimming in ROCs 1 and 2. IV: 0.46 uA at +10 C, slope 2.14
1576: B/B/B (-20/-20/+10). Electrical grade B due to mean noise >200e for some ROCs. IV: 0.23 uA at +10 C, slope 1.09
  90   Wed Feb 26 10:34:55 2020 Urs LangeneggerFull testFulltest after Peltier replacement
On 2020/02/25, after Silvan had replaced the Peltiers, I did a Fulltest. Here is the summary:

Finished all tests. Summary of test durations:
Fulltest@-20 127 min 5 sec
IV_TB0@-20 4 min 23 sec
IV_TB1@-20 5 min 1 sec
IV_TB2@-20 5 min 1 sec
IV_TB3@-20 5 min 1 sec
Cycle 39 min 23 sec
Fulltest@-20 121 min 6 sec
IV_TB0@-20 4 min 23 sec
IV_TB1@-20 5 min 1 sec
IV_TB2@-20 5 min 1 sec
IV_TB3@-20 5 min 1 sec
Fulltest@10 115 min 17 sec
IV_TB0@10 4 min 42 sec
IV_TB1@10 5 min 5 sec
IV_TB2@10 5 min 5 sec
IV_TB3@10 5 min 5 sec
--------------------------------------------------
total 461 min 48 sec


All modules were graded 'B' (I think because of the mean noise being too high in a few chips per module)
  91   Thu Feb 27 09:47:51 2020 Urs LangeneggerFull testFulltest on 2020/02/26
On Wednesday, 2020/02/26 the following modules went through the full qualification procedure:
M1581
M1582
M1583
M1584

All received grade B (from a cursory glance due to mean noise).
  93   Fri Feb 28 17:33:16 2020 Urs LangeneggerFull testFulltests on 2020/02/28
M1589 B
M1590 C (gain issues?)
M1591 C (noise issues?)
M1592 B
  94   Mon Mar 2 17:39:36 2020 Urs LangeneggerFull testFulltests on 2020/03/02
Modules tested:
M1595 C
M1596 B
M1597 C
M1600 C

Andrey and I suspect that these grades are driven by a bad PH optimization.
  96   Wed Mar 4 11:32:42 2020 danek kotlinskiFull testchange the target trim threshold to vcal=50
After some discussion we decided to change the target trimming threshold from Vcal 40 to 50.
Many ROCs cannot be run with X-rays at 40, while all I have seen until now can be run at 50.
45 might be possible for some ROCs but will fail for others.
D.
  105   Thu Mar 12 17:21:59 2020 Matej RoguljicFull test Fulltests on 2020/03/12
Modules tested:
M1557 C (trim bits at -20)
M1558 C (gain at -20)
M1590 B (still graded C in MoreWeb because of the previous full qualification; this should be corrected)
M1600 B (still graded C in MoreWeb because of the previous full qualification; this should be corrected)
  107   Sat Mar 14 17:08:14 2020 Matej RoguljicFull testFulltests on 2020/03/13

Modules tested:
M1591 C (gain at -20)
M1595 B
M1597 B
M1598 B
  115   Wed Mar 18 17:46:07 2020 Andrey StarodumovFull testFT of M1601-M1604
M1601-M1604 passed full test
M1601: grade B
M1602-M1604: Grade C. The main reason is failed pixels during the trimbit test.
The reason is to be understood and the grades upgraded manually.
  117   Thu Mar 19 18:01:50 2020 Andrey StarodumovFull testFT of M1605-M1608
Only M1607 is graded B. All others are graded C due to the trimbit test.
M1606 should be looked at carefully and maybe retested, since the threshold after trimming has strange features
(although not at all temperatures) that may mean that some trimbits really do not work.
The other modules are to be re-analysed without the trimbit test results, or upgraded manually, since these results
are due to the test algorithm. After trimming, the threshold looks good for all failed ROCs.
  122   Fri Mar 20 18:32:34 2020 Andrey StarodumovFull testFT of M1609-M1612
M1609 and M1611 are graded C
M1610 and M1612 are graded B

C and most B grades are due to many trimbit failures. Interestingly, this time there are more failures at +10C than at -20C.
  131   Tue Mar 24 18:09:07 2020 Andrey StarodumovFull testFT of M1615, M1619, M1620, M1622
Test results have been analysed with modified code:
M1615: B
M1619: A
M1620: B
M1622: B

One pixel of M1615 still failed the mask test but it was not taken into account in the final grading???
I put it on the shelf for the Module doctor. To be decided what to do with this module.
  134   Wed Mar 25 14:17:51 2020 Andrey StarodumovFull testFT of M1609, M1613, M1614, M1618 on Mar 23
FT for these modules has been done on Mar 23
M1609: C
M1613: C
M1614: C
M1618: B

All C due to trim bit test failures
  137   Wed Mar 25 17:03:11 2020 Andrey StarodumovFull testFT of M1615, M1619, M1620, M1622

Andrey Starodumov wrote:
Test results have been analysed with modified code:
M1615: B
M1619: A
M1620: B
M1622: B

One pixel of M1615 still failed mask test but it was not taken in to account in the final grading???
I put it in the shelves for Module doctor. To be decided what to do with this module.


According to Wolfram, one channel of M1615 does not work. He noticed that the cable has corrosion (probably this cable had been attached to a module that was irradiated in Zagreb). After the Reception test this module is again graded C due to a mask test failure of one pixel in one ROC.

Wolfram proposed to grade this module as C*.

  140   Wed Mar 25 18:40:53 2020 Andrey StarodumovFull testFT of M1599, M1613, M1624, M1616
M1599: B due to leakage current at +10C (2-3 uA)
M1613: B due to a few pixels with bad trimmed threshold
M1624: B due to a few ROCs with mean noise>200electrons
M1626: A
  146   Thu Mar 26 17:45:15 2020 Andrey StarodumovFull testFT of M1545, M1557, M1627, M1628
Retested M1545: C->B (to be corrected on the MoreWeb summary page)
Retested M1557: C->C; in one ROC >160 pixels failed trimming. Module placed in the C* tray
1627: B
1628: B

All B grades are due to high (>200 electrons) mean noise.
  147   Fri Mar 27 10:54:04 2020 Urs LangeneggerFull testIssues with pc11366 on March 27
On March 27 I had a lot of trouble getting the full qualification up
and running.

The problems were (1) strange error messages from pxar core, (2)
problems with USB connections to the DTBs, (3) loads of (intermittent)
data transmission errors from the DTBs, and (4) complaints about missing
(system) libs.

In the end I did killall firefox and (out of superstition) pkill compiz.

Then it worked again.
  153   Fri Mar 27 18:28:34 2020 Andrey StarodumovFull test4 HDIs tested
HDIs 5020, 5018, 1044 and 1041 are OK
  154   Fri Mar 27 18:29:53 2020 Andrey StarodumovFull testFT of M1629, M1630, M1631, M1632
M1629: B due to mean noise at -20C
M1630: C due to ROC1 with all pixels failing PH calibration (Gain). Both FTs at -20C are A!
Should be understood and retested. Put in the C* tray.
M1631: B due to mean noise in all 3 FT
M1632: A
  156   Mon Mar 30 14:52:59 2020 Danek KotlinskiFull testFT of M1629, M1630, M1631, M1632
I have tested M1630.
I see no problem with ROC1. See the attached plot.
There is one dcol in ROC12 which shows the "pattern" problem seen in a few other ROCs.
I think this module is fine, should be B. Could be retested.
Attachment 1: m1630_roc1_ph_fits.png
  157   Mon Mar 30 15:40:52 2020 UrsFull testFT of M1629, M1630, M1631, M1632
M1630 is interesting because (I am using my terminology in the following) for the test at T=+10C ROC1 fails the PH optimization test and, as a consequence, the gain/pedestal test is also failed.

The PH optimization test is failed because the minimum pixel on which the test is based is a 'dead' pixel (according to the PixelAlive test), but unfortunately has hits in the initial PH map. As a result the phscale and phoffset for this ROC are not optimal and this is seen in the gain/pedestal fits.

Please find the plots attached from the T=+10 tests.
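The failure mode Urs describes can be sketched as below: the PH optimization picks the pixel with the lowest pulse height, and if that pixel is dead according to PixelAlive but still shows hits in the initial PH map, the resulting phscale/phoffset are not optimal. A guard would exclude PixelAlive-dead pixels first. Names are illustrative, not the actual pXar implementation:

```python
# Sketch of a guarded minimum-PH pixel selection: only pixels that are
# alive in the PixelAlive map are candidates, so a "dead" pixel with
# spurious hits in the PH map cannot drive the phscale/phoffset choice.
# Function and map layout are assumptions for illustration.

def pick_min_ph_pixel(ph_map, alive_map):
    """ph_map / alive_map: dicts keyed by (col, row).
    Return the live pixel with the smallest pulse height."""
    live = {pix: ph for pix, ph in ph_map.items() if alive_map.get(pix)}
    return min(live, key=live.get)
```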
Attachment 1: phval-curve_M1630_p10_C1.pdf
Attachment 2: phshot_vcal255.pdf
Attachment 3: pixelalive_C1.pdf
  162   Tue Mar 31 08:21:39 2020 Andrey StarodumovFull testFT of 1558, 1606, 1634, 1636 on Mar 30
M1558: B
M1606: C again due to the trimmed threshold of 189 pixels in ROC2 at -20C. In the second test at -20C and at +10C the number of failed pixels is 90+, hence grading B.
We could manually upgrade this module to B.
M1634: A
M1636: A
  163   Tue Mar 31 09:29:02 2020 Urs LangeneggerFull testexchanged adapter for DTB_WXC03A
I did not manage to get any r/o from the module connected to that adapter, even after exchanging the module.

So I exchanged the module adapter with the one from the blue box, and all issues with r/o were gone (with both modules).

Of course, the apparently flaky adapter from WXC03A now seems to be working again in the blue box.

Just fyi.
  164   Tue Mar 31 10:32:32 2020 Urs LangeneggerFull testM1637
Maybe M1637 has an issue: It got stuck twice at the same place with

[09:53:16.043] <TB0> INFO: PixTestReadback::CalibrateIa()
[09:53:16.043] <TB0> INFO: ----------------------------------------------------------------------
[09:53:17.924] <TB0> ERROR: <datapipe.cc/CheckEventID:L486> Channel 6 Event ID mismatch: local ID (22) != TBM ID (23)
[09:53:17.924] <TB0> ERROR: <hal.cc/daqAllEvents:L1701> Channels report mismatching event numbers: 23 22 22 22 22 22 22 22

After that, the DTB is unresponsive and elCommandante loses it. (The printout above is from the second testrun. I had restarted the complete fullqualification after realizing that DTB0 was 'missing' and checking manually that M1637 was properly connected).

I hope elCommandante manages this gracefully.
  167   Tue Mar 31 17:36:36 2020 Urs LangeneggerFull testexchanged adapter for DTB_WXC03A

Urs Langenegger wrote:
I did not manage to get any r/o from the module connected to that adapter, also after exchanging the module.

So I exchanged the module adapter with the one from the blue box, and all issues with r/o were gone (with both modules)

Of course, the apparently flaky adapter from WXC03A seems to be working again/now in the blue box.

Just fyi.


Today I had to reconnect modules to the module adapter in the blue box a few times.
It looks like the Molex connector is a bit damaged. One should be very careful when connecting a cable to this Molex connector.
  169   Tue Mar 31 18:23:18 2020 Andrey StarodumovFull testFT of M1637, M1638, M1639, M1640
M1637: C. Graded C due to the incomplete first test at -20C; Urs has reported the issues. The second -20C test after the T-cycle and the test at +10C are graded A.
Tomorrow I'll upgrade this module manually to A
M1638: A
M1639: B due to B at the first -20C test: ROC8 mean noise >200 electrons. The second -20C test and the +10C test are both graded A
M1640: B. All three FTs are B due to several ROCs with mean noise >200 electrons
  172   Wed Apr 1 17:10:16 2020 Andrey StarodumovFull testFT of M1641-M1644
M1641: B due to mean noise > 200 electrons for a few ROCs
M1642: B due to mean noise > 200 electrons for a few ROCs
M1643: B due to mean noise > 200 electrons for a few ROCs
M1644: B due to mean noise > 200 electrons for one ROC at -20C
  176   Thu Apr 2 22:50:33 2020 Andrey StarodumovFull testFT of M1645, M1646, M1647, M1648
All modules graded B due to mean noise > 200 electrons.
  178   Fri Apr 3 14:15:01 2020 Andrey StarodumovFull testFT of M1637, M1638, M1639, M1640

Andrey Starodumov wrote:
M1637: C. Graded C due to not completed first test at -20C. Urs has reported issues. The second -20C after T-Cycle and test at +10C are graded A.
Tomorrow I'll upgrade this module manually to A
M1638: A
M1639: B Due to B at first -20C test. ROC8 mean noise >200electrons. Second -20C and at +10C both are graded A
M1640: B All three FT are B due to several ROCs mean noise >200electrons


Following the regrading procedure, the first -20C test was manually upgraded to B. The final grade is A, since the manual upgrade was not taken into account; I do not know why. So the module will be graded A.
  180   Fri Apr 3 15:21:54 2020 Andrey StarodumovFull testFT of M1545, M1557, M1627, M1628

Andrey Starodumov wrote:
Retested M1545: C->B (to be correct on the MoreWeb summary page)
Retested 1557: C->C in one ROC >160 pixels failed to be trimmed Module placed in a tray C*
1627: B
1628: B

All B grades due to high (>200electons) mean noise.


Final grade in the MoreWeb summary page is corrected.
Module graded B.
  181   Fri Apr 3 15:33:10 2020 Andrey StarodumovFull testFT of M1645, M1646, M1647, M1648

Andrey Starodumov wrote:
All modules graded B due to mean noise > 200 electrons.

Correction:
in reality M1546 was tested and NOT M1646. M1646 failed Reception due to a non-working ROC and is graded C.
I hope we will find a way to properly correct the module name and rerun the MoreWeb analysis.
  182   Fri Apr 3 17:29:51 2020 Andrey StarodumovFull testFT of M1557, M1591, M1649, M1651
M1557: C due to 214 pixel threshold failure in ROC4 only at +10C (several previous FTs at +10C were graded B!)
M1591: B due to mean noise > 200electrons for several ROCs
M1649: C due to 270 pixel threshold failure in ROC11 only at +10C
M1651: B due to mean noise > 200electrons for several ROCs

For both modules the C grading is an artifact. We should decide how to proceed with such cases.

M1557 and M1649 will be placed for the moment in C* tray.
  185   Mon Apr 6 14:23:03 2020 Andrey StarodumovFull testFT of M1645, M1646, M1647, M1648

Andrey Starodumov wrote:

Andrey Starodumov wrote:
All modules graded B due to mean noise > 200 electrons.

Correction:
in reality M1546 has been tested and NOT 1646. M1646 failed Reception due to not working ROC and graded C.
I hope we we will find a way to correct properly the module name and rerun MoreWeb analysis.


Folder with test results has been renamed from M1646 to M1546:
mv M1646_FullQualification_2020-04-02_08h41m_1585809682 M1546_FullQualification_2020-04-02_08h41m_1585809682

Dinko fixed logfiles and .tar file.

The results of M1646 have been removed with python Controller.py -d (removing all rows related to M1646).
  190   Mon Apr 6 17:18:02 2020 Andrey StarodumovFull testFT of M1606, M1630, M1655, M1566
M1606: B due to mean noise >200e in several ROCs
M1639: C* due to the failure of many pixels of ROC1 at +10C, as before; to be rerun at +10C with CtrlReg=17
M1655: B due to mean noise >200e in several ROCs
M1656: B due to mean noise >200e in several ROCs
  193   Tue Apr 7 16:57:01 2020 Andrey StarodumovFull testFT of M1593, M1658, M1659, M1660
M1593: B due to Rel.gain width and mean noise of a few ROCs
M1658: B due to threshold and mean noise of ROC15
M1659: B due to threshold and mean noise of a few ROCs
M1660: C due to 172 pixels failing the trimmed Threshold on ROC7, only at the second -20C test; at the first -20C and at +10C the trimming threshold is OK for this chip

M1660 to the C* tray; retest with all other modules after production.
  200   Wed Apr 8 17:09:05 2020 Andrey StarodumovFull testFT of M1572, M1661, M1663, M1664
M1572: Grade B due to mean noise of ROC4 211 electrons
M1661: Grade B due to mean noise of a few ROCs > 200 electrons and in ROC12 44 pixels failed threshold cut
M1663: Grade B due to mean noise of a few ROCs > 200 electrons
M1664: Grade B due to mean noise of a few ROCs > 200 electrons and in ROC12 44 pixels failed threshold cut
  206   Thu Apr 9 17:24:49 2020 Andrey StarodumovFull testFT of M1654, M1665, M1666, M1667
M1654: Grade A
M1665: Grade B due to noisy pixels in ROC5. To be checked by module doctor!
M1666: Grade B due to mean noise of several chips > 200 electrons. Again on ROC12 there is a cluster of 31 dead bumps!
M1667: Grade B due to mean noise of several chips > 200 electrons. Again on ROC12 there is a cluster of 41 dead bumps!

M1665 goes to Module doctor!
  211   Tue Apr 14 17:19:44 2020 Andrey StarodumovFull testFT of M1668, M1669, M1670, M1672
M1668: Grade B due to mean noise >200e for several ROCs
M1669: Grade B due to ROC2 mean noise >200e
M1670: Grade B due to ROC1 mean noise >200e
M1672: Grade B due to mean noise >200e for several ROCs
  215   Wed Apr 15 17:26:46 2020 Andrey StarodumovFull testFT of M1623, M1657, M1673, M1674
M1623: Grade B due to rel gain width, in ROC4 74 pixels failed trimming (Threshold) and mean noise >200e
M1657: Grade B due to 70 dead pixels in ROC12 and mean noise >200e
M1673: Grade B due to mean noise >200e in a few ROCs
M1674: Grade B due to mean noise >200e in a few ROCs
  218   Thu Apr 16 17:31:16 2020 Andrey StarodumovFull testFT of M1662, M1675, M1676
M1662: Grade C due to failure of ROC4 in almost all tests: PixelAlive, PH calibration, etc.
Should be investigated and retested. At Reception, PixelAlive etc. were OK; only one double column showed problems
M1675: Grade B due to mean noise > 200e for several ROCs
M1676: Grade B due to mean noise > 200e for several ROCs
  219   Fri Apr 17 18:05:45 2020 Andrey StarodumovFull testFT of M1542, M1557, M1630, M1649 only at +10C
M1542 has grade C for relative gain width. It was tested with early versions of the test SW, with trim VCal=40 and PH optimization/calibration not yet optimized.
The other modules have grade C only at +10C. This time CtrlReg=17 is used instead of 9.
M1542: Grade B due to 61 pixels failing the Threshold criteria (trimming)
M1557: Grade B due to mean noise, and no longer, as in the FT, 214 pixels failing the Threshold criteria
M1630: Grade B due to mean noise, and no longer, as in the FT, 3883 pixels in ROC1 failing the Gain criteria
M1649: Grade B due to mean noise in only one ROC, and no longer, as in the FT, 270 pixels failing the Threshold criteria
  223   Tue Apr 21 17:39:48 2020 Andrey StarodumovFull testFT of M1542, M1554, M1555, M1653
M1542: Grade C due to massive trimming failures (>1000 pixels in total) at -20C in ROC11,14,15. There was no such problem in the previous test, when CtrlReg=9 was used; for the present test CtrlReg=17 was used
M1554: Grade C due to massive trimming failures (>1000 pixels in total) at -20C in ROC9,13. There was no such problem in the previous test, when CtrlReg=9 was used and the module was graded B; for the present test CtrlReg=17 was used
M1555: Grade B due to 75 pixels with trimming failures at -20C in ROC10. There was no such problem in the previous test, when CtrlReg=9 was used; for the present test CtrlReg=17 was used
M1653: Grade B due to >1% (~50) pixels with trimming failures at -20C in ROC5,12.
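The ">1% (~50) pixels" criterion above can be made concrete: a PSI46 ROC has 52 x 80 = 4160 pixels, so 1% is about 42 pixels. A minimal sketch of that check (function names are mine):

```python
# Sketch of a per-ROC defect-fraction check as quoted in this log:
# a ROC has 52 columns x 80 rows = 4160 pixels, so the ">1% of pixels"
# criterion corresponds to roughly 42 failed pixels.

PIXELS_PER_ROC = 52 * 80  # 4160 pixels in one ROC

def defect_fraction(n_failed, n_pixels=PIXELS_PER_ROC):
    """Fraction of failed pixels in one ROC."""
    return n_failed / n_pixels

def exceeds_one_percent(n_failed):
    """True if more than 1% of the ROC's pixels failed."""
    return defect_fraction(n_failed) > 0.01
```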
  225   Wed Apr 22 17:41:11 2020 Andrey StarodumovFull testFT of M1542, M1554, M1555, M1653

Andrey Starodumov wrote:

M1542: Grade C due to massive (>1000pixels in total) trimming failures at -20C in ROC11,14,15. There was no such problem at previous test when CtrlReg=9 was used, while for the present test CtrlReg=17 was used
M1554: Grade C due to massive (>1000pixels in total) trimming failures at -20C in ROC9,13. There was no such problem at previous test when CtrlReg=9 was used module was graded B, while for the present test CtrlReg=17 was used
M1555: Grade B due to 75 pixels had trimming failures at -20C in ROC10. There was no such problem at previous test when CtrlReg=9 was used, while for the present test CtrlReg=17 was used
M1653: Grade B due to >1% (~50) pixels had trimming failures at -20C in ROC5,12.


Repeat test with CtrlReg=9. ONLY 2 TESTs: -20C, IV@-20C (up to 205V), +10C, IV@+10C (up to 205V)

Warning: at +10C the total leakage current of all modules = 9.7 uA!? From yesterday's IV curves, each single module had a current below 0.5-1 uA
M1542: Grade B due to failure of 41 pixels in trimming of ROC6 at -20C. Grading at +10C is A!
M1554: Grade B due to mean noise >200e in ROC14 at both temperatures
M1555: Grade B due to mean noise >200e in 2 ROCs at both temperatures
M1653: Grade B due to failure of 45 pixels in trimming of ROC12 and mean noise >200e in ROC0 at both temperatures
  226   Thu Apr 23 13:34:52 2020 Andrey StarodumovFull testFT of M1542, M1554, M1555, M1653

Andrey Starodumov wrote:

Andrey Starodumov wrote:

M1542: Grade C due to massive (>1000pixels in total) trimming failures at -20C in ROC11,14,15. There was no such problem at previous test when CtrlReg=9 was used, while for the present test CtrlReg=17 was used
M1554: Grade C due to massive (>1000pixels in total) trimming failures at -20C in ROC9,13. There was no such problem at previous test when CtrlReg=9 was used module was graded B, while for the present test CtrlReg=17 was used
M1555: Grade B due to 75 pixels had trimming failures at -20C in ROC10. There was no such problem at previous test when CtrlReg=9 was used, while for the present test CtrlReg=17 was used
M1653: Grade B due to >1% (~50) pixels had trimming failures at -20C in ROC5,12.


Repeat test with CtrlReg=9. ONLY 2 TESTs: -20C, IV@-20C (upto 205V), +10C,IV@+10C (up to 205V)

Warning: at +10C the total leakage current of all modules = 9.7umA!? From yesterday the IV curves: each single module had the current less 0.5-1umA
M1542: Grade B due failure of 41 pixels in trimming of ROC6 at -20C. Grading at+10C is A!
M1554: Grade B due to mean noise >200e in ROC14 at both temperatures
M1555: Grade B due to mean noise >200e in 2 ROC at both temperatures
M1653: Grade B due failure of 45 pixels in trimming of ROC12 and mean noise >200e in ROC0 at both temperatures


According to the IV curves, the currents at 150V are: 0.27 uA, 0.44 uA, 0.49 uA, 0.16 uA.
  227   Thu Apr 23 13:37:29 2020 Andrey StarodumovFull testFT of M1556, M1557, M1559, M1560
M1556: Grade B due to several ROCs mean noise >200e
M1557: Grade B due to several ROCs mean noise >200e and trimming failed for 60 pixel in ROC4 at -20C
M1559: Grade A
M1560: Grade B due to several ROCs mean noise >200e
  228   Thu Apr 23 17:26:51 2020 Andrey StarodumovFull testFT of M1561, M1564, M1565, M1566
M1561: Grade B due to several ROCs mean noise >200e
M1564: Grade B due to two ROCs mean noise >200e
M1565: Grade B due to two ROCs mean noise >200e
M1566: Grade B due to two ROCs mean noise >200e
  231   Fri Apr 24 13:59:43 2020 Andrey StarodumovFull testFT of M1568, M1569, M1570, M1571
M1568: Grade B due to mean noise >200e for a few ROCs, and for ROC0 the RelGainWidth (=0.1) is twice as large as for the other ROCs
M1569: Grade B due to mean noise >200e for a few ROCs
M1570: Grade B due to mean noise >200e for a few ROCs
M1571: Grade C due to trimming failing for 190 pixels in ROC4 at +10C. This is not a real failure; the first time this module was tested the grade at +10C was B (while trimming was done for VCal=40)

M1571 goes to the C* tray. Solution: either repeat the current test or test only at +10C and merge later.
  232   Mon Apr 27 13:23:47 2020 Andrey StarodumovFull testFT of M1573, M1574, M1576, M1577
Test has been done on April 24
M1573: Grade B due to mean noise >200e for a few ROCs
M1574: Grade B due to mean noise >200e for a few ROCs and trimming failed for >100 pixels and RelGainWidth too wide for ROC0
M1576: Grade B due to mean noise >200e for a few ROCs
M1577: Grade B due to RelGainWidth too wide for ROC13 at +10C; at -20C graded A!
  233   Mon Apr 27 13:50:09 2020 Andrey StarodumovFull testFT of M1578, M1579, M1580, M1581
M1578: Grade B due to mean noise >200e for a few ROCs and 67 pixels failed trimming in ROC1 at -20C
M1579: Grade A
M1580: Grade B due to mean noise >200e for a few ROCs and 59/112 pixels failed trimming in ROC5/ROC8 at +10C
M1581: Grade B due to mean noise >200e and 120/120 pixels failed trimming in ROC8/ROC13 at both temperatures
  235   Tue Apr 28 18:06:41 2020 Andrey StarodumovFull testFT of M1582, M1583, M1584, M1585
Modules tested on Apr 27
M1582: Grade C due to 167 pixels failing trimming on ROC1 at +10C only. The previous test on Feb 26 at +10C was graded B!
M1583: Grade B due to mean noise >200e for a few ROCs
M1584: Grade B due to mean noise >200e for a few ROCs
M1585: Grade B due to mean noise >200e for a few ROCs

M1582 goes to C* tray. To be checked later.
  236   Tue Apr 28 18:11:45 2020 Andrey StarodumovFull testFT of M1586, M1587, M1588, M1589
M1586: Grade B due to mean noise >200e for a few ROCs
M1587: Grade B due to mean noise >200e for a few ROCs
M1588: Grade B due to mean noise >200e for a few ROCs
M1589: Grade B due to mean noise >200e for a few ROCs
  238   Wed Apr 29 14:08:42 2020 Andrey StarodumovFull testFT of M1536, M1537, M1538
M1536: Grade B due to mean noise >200e for ROC1
M1537: Grade B due to mean noise >200e for a few ROCs
M1538: Grade B due to mean noise >200e for a few ROCs and trimming failure for 70 pixels in ROC14 at -20C
  239   Wed Apr 29 18:11:36 2020 Andrey StarodumovFull testFT of M1590, M1592, M1596, M1600
Modules tested on April 28
M1590: Grade B due to mean noise >200e for a few ROCs
M1592: Grade B due to mean noise >200e for a few ROCs
M1596: Grade B due to mean noise >200e for a few ROCs
M1600: Grade B due to mean noise >200e for a few ROCs
  240   Thu Apr 30 15:16:58 2020 Andrey StarodumovFull testFT of M1540, M1541, M1543, M1547
Modules tested on April 29
M1540: Grade B due to many (>1000) pixels failing trimming, but only 70 are in the "C-zone", for ROC0 at -20C -> retest!!!
M1541: Grade B due to mean noise >200e for a few ROCs
M1543: Grade B due to mean noise >200e for ROC8 and 30+ damaged bumps in ROC14
M1547: Grade A

M1540 in C* tray for retest
  241   Thu Apr 30 15:25:36 2020 Andrey StarodumovFull testFT of M1548, M1549, M1550, M1551
M1548: Grade B due to mean noise >200e for ROC11
M1549: Grade B due to mean noise >200e for ROC2. In total 200+ pixels failed trimming in the module -> investigate???
M1550: Grade B due to mean noise >200e for ROC5
M1551: Grade B due to mean noise >200e for a few ROCs

M1549 in tray C* for investigation
  247   Mon May 4 14:13:39 2020 Andrey StarodumovFull testFT of M1552, M1553, M1595, M1597
FT on April 30th
M1552: Grade B due to mean noise >200e for ROC7,8
M1553: Grade B due to mean noise >200e for a few ROCs
M1595: Grade B due to mean noise >200e for a few ROCs at -20C, and the same plus trimming failures for ROC0 (82 pixels) and ROC15 (94 pixels)
M1597: Grade B due to mean noise >200e for a few ROCs
  248   Mon May 4 14:18:20 2020 Andrey StarodumovFull testFT of M1540, 1549, 1571, 1598
M1540: Grade A
M1549: Grade B due to mean noise >200e for ROC2 and 48 dead pixels in ROC5
M1571: Grade B due to mean noise >200e for many ROCs
M1598: Grade B due to mean noise >200e for a few ROCs
  250   Tue May 5 13:58:45 2020 Andrey StarodumovFull testFT of M1582, M1649, M1667
M1582: Grade C due to trimming failure in ROC1 for 189 pixels at +10C. This is the third time the module has been tested:
1) February 26 (trimming for VCal 40 and old PH optimization): Grade B, max 29 failed pixels and mean noise in a few ROCs
2) April 27: Grade C due to trimming failure in ROC1 for 167 pixels at +10C; at -20C still max 45 failed pixels and mean noise in a few ROCs
3) May 5: Grade C due to trimming failure in ROC1 for 189 pixels at +10C; at -20C trimming failure in ROC1 for 157 pixels
The module quality is getting worse.

M1649: Grade B due to mean noise >200e in ROC11
M1667: Grade B due to mean noise >200e in few ROCs

M1582 is in C* tray. To be investigated.
  251   Wed May 6 13:20:28 2020 Andrey StarodumovFull testFT of M1574, M1581, M1660, M1668
Modules tested on May 5th
M1574: Grade B due to mean noise >200e in ROC10 and trimming failures for 89 pixels in ROC0, the same as the first time on April 24 (then 104 pixels failed)
M1581: Grade B due to mean noise >200e in ROC8/13, with no trimming failures in ROC8/13 as there were on April 27 (120+ pixels failed in ROC8/13) -> Results improved!
M1660: Grade B due to mean noise >200e in a few ROCs, with no more trimming failure for 172 pixels in ROC7 as there was on April 7 -> Results improved!
M1668: Grade B due to mean noise >200e in a few ROCs; results are worse than on April 14: one more ROC with mean noise > 200e

Summary: for 2 modules the results improved; for the 2 others they are almost the same
  252   Wed May 6 16:24:21 2020 Andrey StarodumovFull testFT of M1580, M1595, M1606, M1659
M1580: Grade B due to mean noise >200e in ROC5/8 and trimming failures for 100+ pixels in the same ROCs at +10C; the previous result of April 27 was better
M1595: Grade B due to mean noise >200e in a few ROCs; the previous result of April 30 was much worse, with 80/90 pixels failing trimming in ROC0 and ROC15
M1606: Grade C due to 192 pixels failing trimming in ROC2 at +10C; the previous result of April 6 was much better, with grade B
M1659: Grade B due to mean noise >200e in a few ROCs; the previous result of April 7 was almost the same

M1606 to tray C* for further investigation
  263   Tue May 12 13:29:27 2020 Andrey StarodumovFull testFT of M1539, M1582, M1606
M1539: Grade B due to mean noise >200e in a few ROCs
M1582: Grade B due to mean noise >200e in a few ROCs; at -20C, 137 pixels in ROC1 failed trimming. For P5 one could use the older test results (trim parameters) of April 27, M20_1, when only 23 pixels in ROC1 failed trimming
M1606: Grade C due to 161 pixels failing trimming in ROC2, with a total of 169 defects in this ROC. For P5 one could use the older test results (trim parameters) of March 19, M20_2, when only 36 pixels in ROC2 failed trimming, or of April 6, when 40 pixels failed (of all temperatures, that test has the best performance).
  271   Fri May 22 17:09:37 2020 Andrey StarodumovFTs for ETHZModule list1
green: correct in Total production overview
black: to remove old entries/rows
red: many failures at one or both temperatures

M1536: M20_1 and p10_1 of 2020-04-29
M1537: m20_1 and p10_1 of 2020-04-29

M1538: 149 defects, retest at m20?
M1539: m20_1 and p10_1 of 2020-05-11
M1540: M20_1 and p10_1 of 2020-05-04
M1541: m20_1 and p10_1 of 2020-04-29
M1542: m20_1 and p10_1 of 2020-04-22
M1543: m20_1 and p10_1 of 2020-04-29
M1545: m20_1 and p10_1 of 2020-03-26
M1546: 167 defects, retest at p10?
M1547: m20_1 and p10_1 of 2020-04-29
M1548: m20_1 and p10_1 of 2020-04-30
M1549: 196 defects, retest at m20?
M1550: m20_1 and p10_1 of 2020-04-30
M1551: m20_1 and p10_1 of 2020-04-30
M1552: m20_1 and p10_1 of 2020-04-30
M1553: m20_1 and p10_1 of 2020-04-30
M1554: m20_1 and p10_1 of 2020-04-22
M1555: m20_1 and p10_1 of 2020-04-22
M1556: m20_1 and p10_1 of 2020-04-23

M1557: to be checked on May 27 at -20C: run VCal SCurves for the module trimmed to VCal=50 and with VthrComp lowered by 10, to check the 61 failed pixels on ROC4
....................
M1565: m20_1 and p10_1 of 2020-04-23
M1566: m20_1 and p10_1 of 2020-04-23
M1568: m20_1 and p10_1 of 2020-04-24
M1569: m20_1 and p10_1 of 2020-04-24
  274   Tue May 26 22:56:46 2020 Dinko FerencekFTs for ETHZModule list1
The following FullQualification tar files have been uploaded to CERNBox

rsync -avPSh --no-r --include="M1536_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1537_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1539_FullQualification_2020-05-11*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1540_FullQualification_2020-05-04*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1541_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1542_FullQualification_2020-04-22*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1543_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1545_FullQualification_2020-03-26*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1547_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1548_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1550_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1551_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1552_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1553_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1554_FullQualification_2020-04-22*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1565_FullQualification_2020-04-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1566_FullQualification_2020-04-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1568_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1569_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
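The per-module rsync commands above differ only in the module ID and the test date, so they can be generated from a list of (module, date) pairs. A minimal sketch in Python (the paths are taken from the commands above; the helper name `rsync_cmd` is ours):

```python
# Generate the per-module rsync upload commands used above.
# Each command syncs one FullQualification tarball to the CERNBox dropbox.
SRC = "/home/l_tester/L1_DATA/*"
DST = "/home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/"

def rsync_cmd(module, date):
    """Build the rsync command for one module's FullQualification tarball."""
    include = f"{module}_FullQualification_{date}*.tar"
    return (f'rsync -avPSh --no-r --include="{include}" '
            f'--exclude="*" {SRC} {DST}')

# Example: reproduce the first two uploads from the list above
for module, date in [("M1536", "2020-04-29"), ("M1537", "2020-04-29")]:
    print(rsync_cmd(module, date))
```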
  281   Fri May 29 15:56:27 2020 Andrey StarodumovFTs for ETHZModule list2
green: correct in Total production overview
black: to remove old entries/rows
red: many failures at one or both temperatures

M1538: m20_1 and p10_1 of 2020-04-29
M1546: m20_1 and p10_1 of 2020-04-02
M1549: m20_1 and p10_1 of 2020-05-04
M1557: m20_1 and p10_1 of 2020-04-23
M1570: m20_1 and p10_1 of 2020-04-24
M1571: m20_1 and p10_1 of 2020-05-04
M1572: m20_1 and p10_1 of 2020-04-08
M1573: m20_1 and p10_1 of 2020-04-24
M1574: m20_1 and p10_1 of 2020-05-05
M1576: m20_1 and p10_1 of 2020-04-24
M1577: m20_1 and p10_1 of 2020-04-24
M1578: m20_1 and p10_1 of 2020-04-27
M1579: m20_1 and p10_1 of 2020-04-27
M1580: m20_1 and p10_1 of 2020-04-27
M1581: m20_1 and p10_1 of 2020-05-05
M1582: m20_1 of 2020-04-27 and p10_1 of 2020-05-11
M1583: m20_1 and p10_1 of 2020-04-27
M1584: m20_1 and p10_1 of 2020-04-27
M1585: m20_1 and p10_1 of 2020-04-27
M1586: m20_1 and p10_1 of 2020-04-28
M1587: m20_1 and p10_1 of 2020-04-28
M1588: m20_1 and p10_1 of 2020-04-28
M1589: m20_1 and p10_1 of 2020-04-28
M1590: m20_1 and p10_1 of 2020-04-28
M1591: m20_1 and p10_1 of 2020-04-03
M1592: m20_1 and p10_1 of 2020-04-28
M1593: m20_1 and p10_1 of 2020-04-07 NOT shipped to ETHZ
M1595: m20_1 and p10_1 of 2020-05-06
  283   Thu Jun 4 11:32:46 2020 Dinko FerencekFTs for ETHZModule list2
The following FullQualification tar files have been uploaded to CERNBox

rsync -avPSh --no-r --include="M1538_FullQualification_2020-04-29*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1546_FullQualification_2020-04-02*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1549_FullQualification_2020-05-04*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1557_FullQualification_2020-04-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1570_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1571_FullQualification_2020-05-04*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1572_FullQualification_2020-04-08*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1573_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1574_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1576_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1577_FullQualification_2020-04-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1578_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1579_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1580_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1581_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1582_FullQualification_2020-05-11*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1583_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1584_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1585_FullQualification_2020-04-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1586_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1587_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1588_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1589_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1590_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1591_FullQualification_2020-04-03*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1592_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1593_FullQualification_2020-04-07*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1595_FullQualification_2020-05-06*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
  288   Mon Jun 15 10:36:45 2020 Andrey StarodumovFTs for ETHZModule list 3
M1596 2020-04-28
M1597 2020-04-30
M1598 2020-05-04
M1599 2020-03-25
M1600 2020-04-28
M1601-M1605: only single test
M1606 2020-05-11
M1607-1608: only single test
M1609 2020-03-23
M1610-1612: only single test
M1613 2020-03-23
M1614-1622: only single test
M1624-1629: only single test
M1631: only single test
M1641-1648: only single test
M1649 2020-05-05
M1651: only single test
M1653 2020-04-22
M1654-1658: only single test
M1659 2020-05-06
M1660 2020-05-05
M1661-1666: only single test
M1667 2020-05-05
M1668 2020-05-05
M1669-1676: only single test
  289   Tue Jun 16 00:45:48 2020 Dinko FerencekFTs for ETHZModule list 3
The following FullQualification tar files have been uploaded to CERNBox

rsync -avPSh --no-r --include="M1596_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1597_FullQualification_2020-04-30*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1598_FullQualification_2020-05-04*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1599_FullQualification_2020-03-25*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1600_FullQualification_2020-04-28*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1601_FullQualification_2020-03-18*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1602_FullQualification_2020-03-18*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1603_FullQualification_2020-03-18*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1604_FullQualification_2020-03-18*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1605_FullQualification_2020-03-19*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1606_FullQualification_2020-05-11*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1607_FullQualification_2020-03-19*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1608_FullQualification_2020-03-19*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1609_FullQualification_2020-03-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1610_FullQualification_2020-03-20*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1611_FullQualification_2020-03-20*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1612_FullQualification_2020-03-20*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1613_FullQualification_2020-03-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1614_FullQualification_2020-03-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1615_FullQualification_2020-03-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1618_FullQualification_2020-03-23*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1619_FullQualification_2020-03-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1620_FullQualification_2020-03-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1622_FullQualification_2020-03-24*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1623_FullQualification_2020-04-15*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1624_FullQualification_2020-03-25*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1626_FullQualification_2020-03-25*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1627_FullQualification_2020-03-26*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1628_FullQualification_2020-03-26*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1629_FullQualification_2020-03-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1631_FullQualification_2020-03-27*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1641_FullQualification_2020-04-01*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1642_FullQualification_2020-04-01*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1643_FullQualification_2020-04-01*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1644_FullQualification_2020-04-01*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1645_FullQualification_2020-04-02*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1647_FullQualification_2020-04-02*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1648_FullQualification_2020-04-02*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1649_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1651_FullQualification_2020-04-03*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1653_FullQualification_2020-04-22*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1654_FullQualification_2020-04-09*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1655_FullQualification_2020-04-06*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1656_FullQualification_2020-04-06*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1657_FullQualification_2020-04-15*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1658_FullQualification_2020-04-07*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1659_FullQualification_2020-05-06*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1660_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1661_FullQualification_2020-04-08*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1662_FullQualification_2020-04-16*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1663_FullQualification_2020-04-08*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1664_FullQualification_2020-04-08*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1665_FullQualification_2020-04-09*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1666_FullQualification_2020-04-09*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1667_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1668_FullQualification_2020-05-05*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1669_FullQualification_2020-04-14*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1670_FullQualification_2020-04-14*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1672_FullQualification_2020-04-14*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1673_FullQualification_2020-04-15*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1674_FullQualification_2020-04-15*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1675_FullQualification_2020-04-16*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
rsync -avPSh --no-r --include="M1676_FullQualification_2020-04-16*.tar" --exclude="*" /home/l_tester/L1_DATA/* /home/l_tester/DATA/L1_DATA_Backup/CERNBox_Dropbox/
  1   Tue Aug 6 14:30:00 2019 Dinko FerencekDocumentationJumo Imago 500 manuals
Manuals can be found at https://www.manualslib.com/products/Jumo-Imago-500-8786441.html
  6   Wed Aug 7 12:24:14 2019 Matej RoguljicDocumentationActivities 31.7.-9.8.2019.
31.7.

Matej tested HDIs 8030-8033 and 8010. 8010 had a faulty TBM which was replaced; however, during subsequent tests, the other TBM (the one which was working fine!) started sparking during the HV test. Burn marks are visible under the microscope in the top-right corner of the right TBM, between wire-bond pads. The other HDIs were fine and were glued to two v4 modules (1523 and 1529) and one v3 module (1526).

1.8.

Prepared the software and hardware for the full-qualification of multiple modules in parallel. Ran reception test on M1520 and M1522.

2.8.

Ran full qualification on M1520 and M1522. It consisted of qualification at -20, IV@-20, 5 cycles between -20 and +10, qualification at -20, IV@-20, qualification at +10 and IV@+10. This set of tests will from now on simply be called "full qualification". For some reason, the second qualification at -20 failed. M1522 shows significantly higher leakage current compared to previous tests.
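The "full qualification" sequence described above can be written down as an ordered recipe. A minimal sketch (the step labels are ours, not actual elComandante test identifiers):

```python
# The "full qualification" recipe as described in this entry: two cold
# test rounds bracketing a thermal-cycling step, then a warm round.
# Step names are illustrative labels, not elComandante identifiers.
FULL_QUALIFICATION = [
    "Fulltest @ -20C",
    "IV @ -20C",
    "5 thermal cycles between -20C and +10C",
    "Fulltest @ -20C",
    "IV @ -20C",
    "Fulltest @ +10C",
    "IV @ +10C",
]

for step in FULL_QUALIFICATION:
    print(step)
```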

5.8.

Boxes 2 and 9 have been filled with L1 module trays. Each tray can hold 4 modules and each box contains 13 trays. Box 9 was filled with humidity-absorbing balls. M1520 and M1522 were placed in box 9, which is the box that should be filled first. A reception test was run on M1523 and M1526. There were a lot of issues with M1523 in the address decoding test, affecting other tests as well. It was found that the third DAC, Vsh, was set to the wrong value by default when a module configuration folder was created with the mkConfig script: it was 30 and should be 8. This should be propagated to pXar later on.

6.8.

Started running reception tests with 3 modules (1523,1526,1529) in parallel, but with the corrected value of Vsh (8). Several issues were observed.

First, one of the DTBs had a connection timeout which appeared every time we tried the reception test. Changing the DTB didn't help. This was solved by plugging the USB cable on the PC side into a different USB slot.
Second, the pretest in the reception test included a timing test which we deemed unnecessary, so it was removed.
Third, the results for 1523 and 1529 didn't look correct. It was as if there was a problem with either timing or DAC values. Comparing the logfiles of the test and the pXar files, we found that the timing settings were correct. The issue was in the DAC settings taken by elComandante. What we forgot was that elComandante does NOT take DAC settings from the module folder, but rather from generic TBM folders like tbm10d. This is set in elComandante.config. The issue was that the tbm10d folder had wrong values for Vsh (30 instead of 8) and ctrlreg (0 instead of 17). After this correction the reception tests ran fine.

PixPhOptimization.cc was edited to correct the starting values of the parameters for the PhOptimization test. The old values were sometimes causing the algorithm to fail even though the chip was fine.

Started full qualifications of all 3 modules, but there were some issues with the coldbox: it was not cooling at the start of the full characterization. The warning on the coldbox interface read "Geber-stillstand". Danek managed to make it go away by stopping the program, even though no program was being run.

Finally, full qualification started around 15:00, finished at 22:40.

M1523 failed the first qualification at -20 but finished the other two qualifications. IV curves for that module were taken while M1529 was stuck and also connected to HV, meaning that the readings are actually the sum of the IVs of these two modules.
M1529 failed the first qualification at -20, a few seconds after the M1523 failure. One of them failed during GainPedestal, the other during PhOptimization. It also failed the other two qualifications (PhOptimization and GainPedestal).
M1526 had no issues in qualification; however, it was graded C. One chip was problematic in most of the tests, while 4 others had issues with TrimBits. This is a bit suspicious, so we'll investigate it further.

7.8.

Investigated problems with qualifications on testboards 0 and 1. We noticed that the failure happens during PhOptimization or GainPedestal. Also investigated why M1526 failed the trim bits test on several ROCs. The cause was identified as a memory leak (several GBs!) during a PhOptimization scan. A separate log entry was made for this.

8.8.

Started full qualification WITHOUT PhOptimization at 9:15 on modules 1523, 1526, 1529. Finished after 8 hours. 1529 had a reception test error during fulltest@+10; the other tests ran fine. Glued protection caps on 4 modules (1504, 1505, 1509, 1520) using a two-component epoxy adhesive.

9.8.

Running fulltest@-20 and +10 for modules 1504, 1505, 1520. The plan is to irradiate them in Zagreb and evaluate the changes induced by radiation.
  158   Mon Mar 30 16:46:59 2020 Andrey StarodumovDBSensors missing in Advacam spreadsheet
Dinko noticed that several sensors are missing in Advacam spreadsheet:
- 381785-03-3 used for M1586
- 381783-02-1 used for M1617
- 381783-03-3 used for M1618
- 350853-16-1 used for M1619

It means that information about the ROCs used for these modules is missing in the Module_Assembly spreadsheet.
  7   Wed Aug 7 18:01:06 2019 Matej RoguljicCold box testsPhOptimization problem
August 7 - we found out that the PhOptimization algorithm starts leaking memory if it fails to find proper values. When running a single setup, the test might go through. However, if multiple modules are tested in parallel and they all start leaking memory at the same time, the system will kill one testboard process (or two if necessary), causing the unlucky testboard(s) to get stuck (powered on, HV on, but no tests running) until they are reset. The current solution to this problem is to simply omit PhOptimization from the fulltest procedure. We left GainPedestal in, since it will use the default PH DAC values.
  31   Thu Oct 31 10:41:12 2019 Matej RoguljicCold box testsIssue with DTB_WWVASW
On 29th of October, an issue with DTB_WWVASW was noticed when trying to run FullQualification on 4 modules at the same time. On a tested, working module, setting Vana works, but taking tornado plots (setvthrcomp) doesn't, nor do other tests. Curiously enough, after switching the DTB with the one used for HDI testing (DTB_WRN13L) the problem was still there. This led us to believe that the problem was with the cables or the grounding. All cables and the adapter were tested and shown to be working fine. We then took a third DTB, put it in the same place where WWVASW and WRN13L had been, connected all the cables, and then the tests worked.

The conclusion of the tests is that DTB_WWVASW and DTB_WRN13L suffer from the same failure mode. It was not noticed before on WRN13L because it does not affect the HDI tests. This leaves us with 3 working DTBs for the FullQualification tests.
  33   Tue Nov 19 14:10:10 2019 Andrey Cold box testsnew modules M1533 and M1534
Yesterday Silvan wire-bonded two new modules, M1533 and M1534, which are both grade C due to high leakage current and a few ROCs with a large number of defective bump bonds.
Today I ran the Reception1st test with IV at +10C for both modules, and the above features were confirmed.
The IV curves are present on the MoreWeb webpage, but the leakage current value in the table under the curve is 0 A (?). This may be related to the fact that the current at -150V should be shown,
while the voltage scan stops at -130V, since one of the modules draws 98 uA at this voltage. If so, the MoreWeb script should be corrected: take the current at -150V, or, if the scan stops at a
lower voltage, take it at the last (highest) measured value.
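The suggested MoreWeb correction (report the current at -150V, or at the last measured voltage if the scan stopped earlier) could be sketched as follows. This is only a sketch in Python; MoreWeb's actual data structures differ and the function name is ours:

```python
def leakage_current_at(iv_points, v_target=-150.0):
    """Return the leakage current at v_target, or, if the voltage scan
    stopped before reaching it (e.g. due to compliance), the current at
    the last (highest absolute) measured voltage.

    iv_points: list of (voltage, current) pairs from the IV scan,
    with negative bias voltages.
    """
    if not iv_points:
        return None
    # Sort by absolute voltage so the deepest bias point comes last.
    pts = sorted(iv_points, key=lambda vc: abs(vc[0]))
    for v, i in pts:
        if abs(v) >= abs(v_target):
            return i          # scan reached the target voltage
    return pts[-1][1]         # scan stopped early: use the last point

# Example: scan stopped at -130 V because a module drew ~98 uA there
scan = [(-50, 1.2e-6), (-100, 2.5e-6), (-130, 98e-6)]
print(leakage_current_at(scan))  # falls back to the current at -130 V
```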
Tomorrow a FullQualification will be run on both modules.
  35   Wed Nov 27 08:39:24 2019 Matej RoguljicCold box testsBump bonding test investigation
The BB test on M1534 shows a lot of dead bumps on C10 and some on C1 and C0 as well. Putting a source on top of the module shows that the pixels for which the test reported dead bumps are still able to read hits from the source. Therefore, it looks like the bumps are working, but the test is reporting them as defective. It turns out that the bumps used by Helsinki are different from the bumps used at PSI, so a different test needs to be used. After trying all 4 BB tests in pXar, it was concluded that BB2 is the appropriate test to use.
  47   Mon Jan 20 21:33:20 2020 Dinko FerencekCold box testsLatest tests of PH optimization
We did some tests by running the latest version of pXar on modules M1521, M1529, M1534 and M1536, specifically trimming and PH optimization. Of the 4 modules tested, the PH optimization converged only for M1521. We plan to further investigate with Urs' help possible reasons for failed PH optimization.

We also noticed a change in the threshold distribution plot of the latest trim bit test compared to the previous version of the test (see attached plots).
Attachment 1: TrimBitTest_old.png
TrimBitTest_old.png
Attachment 2: TrimBitTest_new.png
TrimBitTest_new.png
  48   Tue Jan 21 12:19:08 2020 Dinko FerencekCold box testsLatest tests of PH optimization

Dinko Ferencek wrote:
We did some tests by running the latest version of pXar on modules M1521, M1529, M1534 and M1536, specifically trimming and PH optimization. Of the 4 modules tested, the PH optimization converged only for M1521. We plan to further investigate with Urs' help possible reasons for failed PH optimization.

We also noticed a change in the threshold distribution plot of the latest trim bit test compared to the previous version of the test (see attached plots).


The problem was traced down to a wrong value of the vcallow parameter in the PH optimization configuration. The value was 10 instead of 50 expected by the test. It turned out that the optimization converged for M1521 because this is a PROC600V3 module for which the configuration files were regenerated for another reason and vcallow was consequently set to the right value of 50. The other 3 modules are PROC600V4 modules and for them old configuration files with a wrong vcallow value were used.
  51   Thu Jan 23 14:54:45 2020 Dinko FerencekCold box testsKeithley exchanged
Yesterday, during an attempt to run the FullQualification, elComandante would lose control over the Keithley after performing the IV-curve measurements and would go into the subsequent Fulltest with HV off. This happened both between the two sets of tests at -20 C and between the tests at -20 C and +10 C. Today we exchanged the old Keithley for a new one, and the communication with the new Keithley is much smoother, without any error codes or warning sounds from the instrument.

In order to communicate properly with the new Keithley, it was necessary to set its communication mode to RS-232 and its BAUD rate to 57600 (https://youtu.be/-5RmguqC7xA). Everything else was left unchanged:

BITS = 8
PARITY = NONE
TERMINATOR = <CR+LF>
FLOW-CRTL = XON-XOFF
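As a reminder of what the terminator setting implies on the software side, here is a minimal sketch; the constant names and the helper are illustrative, not elComandante's actual keithleyClient API. TERMINATOR = <CR+LF> means every command written to the port must end in "\r\n":

```cpp
#include <cassert>
#include <string>

// RS-232 parameters matching the Keithley front-panel settings listed above
// (illustrative constants; parity NONE and XON-XOFF flow control are
// configured on the serial port itself).
constexpr int kBaudRate = 57600;  // BAUD = 57600
constexpr int kDataBits = 8;      // BITS = 8

// TERMINATOR = <CR+LF>: append "\r\n" to every command sent to the meter.
std::string withTerminator(const std::string& cmd) {
    return cmd + "\r\n";
}
```

For example, the standard SCPI identification query would be sent as withTerminator("*IDN?"), i.e. "*IDN?\r\n"; without the terminator the instrument never executes the command, which looks exactly like "losing control" over it.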
  56   Mon Jan 27 17:50:48 2020 Matej RoguljicCold box testsFailing modules at -20
On Friday, 24.1., it was noticed that modules were being graded 'C' in a similar fashion: a module would be graded A/B at +10 degrees and then graded C at -20. The reason for grade C was that the trimming procedure was not successful on at least one ROC of each module. The problem was further investigated on 27.1. Wolfram suspected that it lies in the zero-suppression mode, since the pixels with a failed threshold come in groups of four. The corresponding DAC 'ctrlreg' had been set, as recommended, to 17 during previous tests; after changing it to 9, the problems at -20 seem to be gone. This will be further investigated.
  57   Thu Jan 30 13:51:03 2020 Matej RoguljicCold box testsFulltest failure - psiAgente
During the first fulltest@-20 of the full qualification procedure for modules 1539, 1541, 1542 and 1543, an error message popped up: "psiAgente: self.Testboards[0] failed - the following boards are busy: TB1(DTB...:M1541), TB2(DTB...:M1542), TB3(DTB...:M1543)". This means that the first fulltest for module 1539 was not executed.

The second fulltest seems to be working fine (so far).

The cause of this problem is not clear.
  64   Tue Feb 4 19:52:22 2020 Dinko FerencekCold box testsBlue coldbox setup for reception tests commissioned
Today I continued where Matej left off with commissioning the blue coldbox setup for reception tests. Matej had already installed all the necessary software (pXar and elComandante); I just had to add the BB2 section in testParameters.dat. There was a problem in communication with the Keithley (Model 2400), which was fixed after changing the BAUD rate in elComandante/keithleyClient/keithleyInterface.py to 19200 (the value set in the Keithley's communication settings) and changing FLOW-CRTL in the Keithley's communication settings to XON-XOFF.

The reception test was successfully run for modules 1545, 1547, 1548, 1549, and 1550 after cap gluing.
  67   Thu Feb 6 00:45:25 2020 Dinko FerencekCold box testsLost at least one Peltier in the coldbox
During an overnight (Feb. 4-5) FullQualification run with modules 1545, 1547, 1548, and 1549, the coldbox lost the ability to maintain the temperature at -20 C. As can be seen from the attached temperature log, the problems already started during the temperature cycles, where the last 2 cycles were noticeably longer than the first 3; finally, during the second Fulltest at -20 C, the temperature started rising and stabilized around -10 C. Based on these observations, it looks like at least one Peltier stopped working.

Silvan took apart the coldbox and will install new Peltiers. Based on measurements Andrey did with the old Peltiers it indeed looks like one Peltier stopped working but the remaining 3 also appear to be at different stages of degradation.
Attachment 1: TemperatureCycle.pdf
TemperatureCycle.pdf
  70   Thu Feb 6 17:28:23 2020 Dinko FerencekCold box testsNew Peltiers installed
Today new Peltiers were installed and tested. Unfortunately, we had trouble reaching -20 C with this new set of Peltiers (they are of a different type than the ones that were taken out). After better insulation of the sample compartment and lowering the chiller temperature from 11 to 6 C, the coldbox was able to reach close to -19.8 C after about half an hour.


After lowering the chiller temperature to 5 C, the coldbox was able to reach -20 C and hold it for about 1 hour, at which point the test was stopped.


The plan is to order a new set of Peltiers better suited to our use case; in the meantime we will use the newly installed ones with the chiller temperature set lower.

IMPORTANT: Pipes inside the coldbox were left uninsulated. For now this is OK because of the low relative humidity in winter months but could lead to condensation in warmer part of the year when the relative humidity increases.
  79   Tue Feb 11 17:56:06 2020 Dinko FerencekCold box testsFulltest failure - psiAgente
During today's full qualification, the 2nd fulltest@-20 for TB1 (M1562) failed:
psiAgente: self.Testboards[1] failed - the following boards are busy: TB0(DTB_WRBSJ9:M1561), TB2(DTB_WRE1O5:M1564)

This looks similar to the report made on Jan. 30.

Looking at the pXar terminal window, I managed to catch the following output:
Thread 1 (Thread 0x7fe20f554b00 (LWP 7987)):
#0  0x00007fe20c4e90cb in __GI___waitpid (pid=17338, stat_loc=stat_loc
entry=0x7ffe23b71480, options=options
entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:29
#1  0x00007fe20c461fbb in do_system (line=<optimized out>) at ../sysdeps/posix/system.c:148
#2  0x00007fe20e998172 in TUnixSystem::Exec (shellcmd=<optimized out>, this=0x1fa2570) at /home/l_tester/root/core/unix/src/TUnixSystem.cxx:2119
#3  TUnixSystem::StackTrace (this=0x1fa2570) at /home/l_tester/root/core/unix/src/TUnixSystem.cxx:2413
#4  0x00007fe20e99aa43 in TUnixSystem::DispatchSignals (this=0x1fa2570, sig=kSigSegmentationViolation) at /home/l_tester/root/core/unix/src/TUnixSystem.cxx:3644
#5  <signal handler called>
#6  0x00007fe20d5d6794 in PixTestTrim::trimTest (this=this
entry=0xcfe82f0) at /home/l_tester/L1_SW/pxar/tests/PixTestTrim.cc:359
#7  0x00007fe20d5da500 in PixTestTrim::doTest (this=0xcfe82f0) at /home/l_tester/L1_SW/pxar/tests/PixTestTrim.cc:132
#8  0x000000000040a3d0 in main (argc=<optimized out>, argv=0x7ffe23b7b828) at /home/l_tester/L1_SW/pxar/main/pXar.cc:376
===========================================================


The lines below might hint at the cause of the crash.
You may get help by asking at the ROOT forum http://root.cern.ch/forum
Only if you are really convinced it is a bug in ROOT then please submit a
report at http://root.cern.ch/bugs Please post the ENTIRE stack trace
from above as an attachment in addition to anything else
that might help us fixing this issue.
===========================================================
#6  0x00007fe20d5d6794 in PixTestTrim::trimTest (this=this
entry=0xcfe82f0) at /home/l_tester/L1_SW/pxar/tests/PixTestTrim.cc:359
#7  0x00007fe20d5da500 in PixTestTrim::doTest (this=0xcfe82f0) at /home/l_tester/L1_SW/pxar/tests/PixTestTrim.cc:132
#8  0x000000000040a3d0 in main (argc=<optimized out>, argv=0x7ffe23b7b828) at /home/l_tester/L1_SW/pxar/main/pXar.cc:376
===========================================================

However, looking at the relevant section of the code in tests/PixTestTrim.cc
356  for (unsigned int iroc = 0; iroc < rocIds.size(); ++iroc) {
357    for (int ix = 0; ix < 52; ++ix) {
358      for (int iy = 0; iy < 80; ++iy) {
359        if (thr2[iroc]->GetBinContent(1+ix,1+iy) > initialCorrectionThresholdMax - 2) {
360          thr2[iroc]->SetBinContent(1+ix,1+iy,0);
361        }
362      }
363    }
364  }

there is nothing special there, just getting the histogram bin content. Could this be some random memory corruption issue?
  257   Mon May 11 13:19:51 2020 Andrey StarodumovCold box testsM1539
After several attempts, including reconnecting the cable, M1539 had no readout when connected to TB3. When connected to TB1, M1539 did not show any problems. M1606 worked properly with both TB1 and TB3.
For the FT test, the configuration is the following:
TB1: M1539
TB3: M1606
  217   Thu Apr 16 15:21:51 2020 Andrey StarodumovChange TBM Change TBMs on M1635, M1653, M1671
M1635: no data from ROC8-ROC11 => change TBM1
M1653: ROC12-ROC15 not programmable => change TBM0
M1671: no data from ROC12-ROC15 => change TBM0

Modules to be given to Silvan