VISTA IOT MEETING 2010-10-05
============================

Minutes by: V. D. Ivanov
========================

October 05, 2010 09:00 Chile, 15:00 Garching, 14:00 UK
----------------------------------------------------------

CONTENTS:
0. Presence at the meeting
1. VISTA status
2. Software
3. QC
4. Progress of the surveys
5. ZY band flux calibrations
6. Action items review
7. Other issues
8. Date of the next meeting

=============
=== 0. PRESENCE ========================================================
=============

Santiago: V.D. Ivanov (VDI)
Paranal: M. Rejkuba (MRe), S. Mieske (SMi), W. Hummel (WHu)
Garching: M. Hilker (MHi), A. Gabasch (AGa)
Phone: J. Emerson (JEm), M. Irwin (MIr), A. Yoldas (AYo), J. Lewis (JLe)

Preliminary Agenda
==================
1. VISTA general status and planned activities (TSz, VDI, MRe, SMi); pending visit to Chile, etc.
2. Status of various software: SADT/OT/P2PP/etc (mainly OT v.3.1)
3. QC issues, data reduction/product issues (MIr, etc.)
4. Progress of the surveys (MRe)
5. ZY flux calibration from the IRTF spectrophotometric standards; see http://irtfweb.ifa.hawaii.edu/~spex/IRTF_Spectral_Library/ for details on this library (VDI)
6. Action Items review (enclosed below is the AI section from the minutes of the last IOT)
7. Other issues
8. Date of the next meeting (end of Oct or Nov, 2010)

************************************************************

=================
=== 1. VISTA Status =====================================================
=================

MRe: - change of period - no separate queues for P85 and P86 (except temporarily, see below)
- telescope is at nominal operation level, i.e. the usual errors (M2 support, M1, preset problems, AG problems, technical CCDs going to standby) are all present
- all observations from Aug-Oct were reviewed; the frequency of errors was such that a single OB was restarted up to 10 times due to various problems; taken separately the errors are small, but together they lead to a significant time loss

VDI: - agreed, the cumulative overheads are big
- a source of additional overhead is the fact that we restart a failed OB at the beginning of the failed pawprint, not at the image where it failed; so regardless of the issue, we repeat on average 3 images (a pawprint is 6)

JEm: - this is by design, it was not a requirement; it could be fixed

MRe: - this was adopted to help with data reduction, i.e. to make sure we have a complete pawprint observed close in time
- it is a bad solution to modify the template trying to minimize the impact, it is better to fix the problem

VDI: - true, but only part of the problem is fixable; sometimes the catalogs used for the AO and AG reference stars are wrong, this is a fundamental limitation that will remain with us for a while, and a modification of the template can reduce the impact of this problem

SMi: - telescope oscillations occurred during the last few nights; this was investigated during the daytime by software and instrumentation, but they could not be reproduced; Serge G. thinks it is not mechanical (there is a damper), and he thinks it is a software issue, some open loop in the AG; tracking maps were done, but since the oscillations didn't repeat they didn't help either; one possibility is the wind, which was relatively high during those nights, but the problem occurred at various positions, so this is not conclusive either

VDI: - the AG is logged, it could be good to monitor it; try to see if there were trends with the amplitude of the AG corrections, etc.; check if they retry the same OBs (a minimal sketch of such a check is given below).
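For illustration, a minimal sketch of the kind of AG trend check discussed above, assuming a hypothetical CSV export of the AG log with columns timestamp, ob_id, ag_correction_arcsec, wind_speed_ms (the file name ag_log.csv and the column names are assumptions, not the actual log format):

    import csv
    from collections import defaultdict
    from statistics import mean, pstdev

    # Group the AG corrections by OB and compare their scatter with the
    # recorded wind speed; a clear wind/amplitude trend would support the
    # open-loop-in-the-AG hypothesis. The log format is hypothetical.
    per_ob = defaultdict(lambda: {"corr": [], "wind": []})
    with open("ag_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            rec = per_ob[row["ob_id"]]
            rec["corr"].append(float(row["ag_correction_arcsec"]))
            rec["wind"].append(float(row["wind_speed_ms"]))

    for ob_id, rec in sorted(per_ob.items()):
        rms = pstdev(rec["corr"]) if len(rec["corr"]) > 1 else 0.0
        print(f"{ob_id}: AG rms = {rms:.3f} arcsec, "
              f"mean wind = {mean(rec['wind']):.1f} m/s, "
              f"n = {len(rec['corr'])}")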
=============
=== 2. Software =====================================================
=============

MRe: - the new OT with twilight constraints was just released and installed; the installation encountered some problems

VDI: - what happened with the missing keywords?

MRe: - BOB complained that the Strehl ratio was missing; it has no meaning for VISTA; we delayed the installation over the weekend; the headers have three new keywords, only the twilight one is working.

=======
=== 3. QC =========================================================
=======

WHu: - spot checks on-going; still occasional OBs with high ellipticity, above 0.2; major software upgrade here in Garching - new version of the pipeline (ver. 1.0.2), one version after the Paranal pipeline version; next week we expect to install ver. 1.0.4
- question: what is the status of fast (via the Internet) data transfer of the VISTA calibrations?

SMi: - no update yet on whether this has happened

JEm: - the Paranal-Antofagasta fiber should be fine at the end of Mar or Apr 2011; the fiber link between Paranal and Antofagasta is not officially to be turned over to ESO yet, but as a test we can use it earlier

SMi: - what is the link's bandwidth?

JEm: - the line from Paranal to Antofagasta is 10 Gbit/s

SMi: - will it be able to transfer all Paranal data?

JEm: - this is the ultimate idea, but for now we can test and make it work with VISTA data first (an illustrative transfer-time estimate is given at the end of this exchange)

VDI: - we will make an AI for this test

JEm: - the data arriving every day in Garching may be a strain for the QC, it will be more immediate work than looking at the data once every week

VDI: - there is also a training side to this procedure - to train the Paranal DHAs in operating the link

SMi: - maybe we should just do a test for a week

VDI: - when can this test be done?

JEm: - by the end of Oct should be physically possible; need to talk to Fernando Comeron

VDI: - so probably Nov 2010; is this good enough for Wolfgang?

WHu: - it is fine; we are prepared to look at the data daily, which we do for the other instruments anyway; VISTA is the only exception, for logistical reasons
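For scale, a back-of-the-envelope estimate of the nightly transfer time over the new link; the ~300 GB/night VISTA data volume and the 50% effective throughput are illustrative assumptions, not measured figures:

    # Rough transfer-time estimate for the Paranal-Antofagasta 10 Gbit/s link.
    # The nightly data volume and the effective link efficiency are assumptions.
    link_gbit_s = 10.0        # nominal line rate (from the meeting)
    efficiency = 0.5          # assumed effective throughput (protocol overhead, sharing)
    night_volume_gb = 300.0   # assumed VISTA raw data volume per night, gigabytes

    volume_gbit = night_volume_gb * 8
    seconds = volume_gbit / (link_gbit_s * efficiency)
    print(f"~{seconds / 60:.0f} minutes per night")  # ~8 minutes under these assumptions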
MRe: - there was a report about ellipticities, it was pretty bad: ~1200 images from the VISTA start of operations to the end of Jul have ellipticity >0.2, but many of these (427) were classified as "A"

AYo: - nothing much to add to this summary of my report; most of the problematic OBs were from June and July, when the camera was shifted for some reason, and then shifted again

VDI: - the reason was to try to fix the ellipticity problem that appeared in Mar-Apr

AYo: - the conclusion is that the number of affected images should not be so high

SMi: - the main problem here is that many "bad" OBs were still classified "A"; but since a few days ago the Paranal QC script raises a flag if more than four detectors have Ell>0.2; before, only the average ellipticity was considered, which was masking the problem (a minimal sketch of such a per-detector check is given at the end of this section)

WHu: - what is the official limit?

VDI: - 4 bad detectors

MIr: - note that 3/4 of this problem occurred in May-Jun-Jul

JEm: - could it be that the problem was monitored with more attention during that period and was caught more often, but it is occurring all the time?

MIr: - no, the software has always been able to pick it up

MRe: - the Paranal scripts could pick it up, but they only reported the data for the central and the four corner detectors; the script was not counting the total number of affected detectors; also, the TIOs didn't have clear instructions how to grade the OBs with high ellipticity

MRe: - was there a correlation with the seeing? If ell>0.2, the seeing was bad, >2.0 arcsec

VDI: - back in Mar/Apr the correlation was the other way around, the high ellipticity was visible only when the seeing was good

SMi: - can the seeing in the report be given in pixels instead of arcsec?

VDI: - I also suspect it is in px; for >2 arcsec the stars are usually round

AYo: - no, it is in arcsec! Most likely the poor ellipticity is because of the poor guiding

VDI: - how do we know this?

AYo: - they are all elongated in the same direction

JEm: - this is not necessarily caused by the guiding

VDI: - is there a correlation with airmass?

MIr: - no

MRe: - can we get a list of OB IDs with a list of parameters such as airmass, when it was executed, etc.?

AYo: - I already have a database, I will send you the info

VDI: - please send it to everybody

VDI: - how many OBs do we have to reclassify?

MHi: - 427 images were classified "A", but are they in 427 different OBs or in a smaller number of OBs?

AYo: - I can't answer now

SMi: - if we have one pawprint that fails the ellipticity, should we fail the entire OB?

MRe: - yes

VDI: - yes

SMi: - I have doubts about this; for example, for the seeing we look at the average seeing; why shouldn't we do the same here?

VDI: - one pawprint of six is ~17% >> 10%; also, this causes uneven sky coverage, which could potentially compromise the uniformity of the surveys

SMi: - Wolfgang, do you reject OBs based on one failed paw or more?

WHu: - based on one bad (i.e. with bad ellipticity) paw

MIr: - has anyone asked the PIs?

SMi: - we need a common criterion between Paranal, Garching QC and CASU

MRe: - we haven't heard from the PIs, no feedback yet; the PIs were reminded; VMC's PI asked to repeat one OB because of the bad seeing (>10% above the constraints); I would wait with reclassifying the OBs until we see the progress reports that are due at the end of Nov 2010

MIr: - right, let's hear from the PIs first

VDI: - no reason to hurry, most of the affected OBs are probably not visible anymore if they were observed in May-June-July
- we have to create an AI to keep track, but to keep it on hold until Nov 2010; then decide whether to reclassify the OBs or not; also, to get the affected OB IDs and any other relevant info from CASU

SMi: - as long as the ellipticity problem is not reproduced, there is no way to find a solution

MIr: - when did the recent oscillations happen?

SMi: - 29/30.09.2010 and 30.09/1.10.2010

MIr: - we will look at these nights in about two weeks when we get the data

SMi: - note that this time the affected OBs were classified "C"

MRe: - there was even some downtime, as seen from the NR
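As referenced above, a minimal sketch of the per-detector ellipticity check now applied by the Paranal QC script, next to the older average-based check that masked the problem; the function names and the example values are illustrative assumptions, not the actual script:

    from statistics import mean

    ELL_LIMIT = 0.2        # per-detector ellipticity limit
    MAX_BAD_DETECTORS = 4  # official limit: flag if more than 4 detectors exceed it

    def flag_per_detector(ellipticities):
        """New-style check: count detectors (16 per VIRCAM pawprint) above the limit."""
        n_bad = sum(1 for e in ellipticities if e > ELL_LIMIT)
        return n_bad > MAX_BAD_DETECTORS

    def flag_average(ellipticities):
        """Old-style check: the average alone can hide several bad detectors."""
        return mean(ellipticities) > ELL_LIMIT

    # Example: 6 of 16 detectors are bad, yet the average stays below 0.2.
    ell = [0.25] * 6 + [0.10] * 10
    print(flag_per_detector(ell))  # True  - flagged by the new check
    print(flag_average(ell))       # False - masked by the average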
============================
======= 4. Progress of the surveys ===================================
============================

MRe: - we were hit by poor weather - we had more than 30 bad nights this semester; we have a very low completion rate for the VVV and there are about 10 nights' worth of observations remaining, in terms of time; VIKING, with a ~67% completion rate, is the next most affected survey, with about 10 nights remaining; on the other hand, UltraVISTA has only ~30 hr left and the VHS is ~95% completed. Bottom line: VIKING and VVV are the most critical cases

VDI: - we cannot do anything about the lost time, but we can still gain ~15 min for the VVV every night if we delay the observation of the first phot standard for the night and start observing for the VVV earlier - we can do this during twilight for the Ks band variability observations - 15 min is approximately the duration of one concatenation. This will gain up to ~10 hrs for the VVV in the next ~40 days until the end of Nov 2010 (15 min/night x 40 nights = 600 min = 10 hr)

MRe: - this is ~10% of the remaining time, it is noticeable

VDI: - we should create an AI to tell the TIOs to observe the first PHOT std after the VVV OBs have set

SMi: - it is not too bad; last night we got only VVV and VIKING

MHi: - was it a natural result of the ranking algorithm?

SMi: - yes

MRe: - not completely; I made a local queue for the P85 OBs and told the TIOs to run that first; otherwise there are some P86 OBs that take precedence.

======================================
======= 5. ZY band absolute flux calibration ===============================
======================================

VDI: - the idea is to use the IRTF flux calibrated library of NIR spectra covering 0.8-2.5 micron (and longer for some stars) to obtain absolute flux calibration for ZY and for the narrow band filters (a synthetic-photometry sketch is given at the end of this section); however, most stars are too bright to be observed without defocusing or without hardware windowing (this is a feature in IRACE, but it is not implemented for the VIRGO detectors)
- question to JEm - can we have windowing to observe the bright stars?

JEm: - not known, will ask, but it will probably be difficult to implement; we have a test detector to try

VDI: - this is only doable if the changes are to the software only, not if the hardware must be changed

MRe: - there is also the XSHOOTER library being done right now

VDI: - right, there is also a library done with SINFONI; all are flux calibrated, but their goal is to calibrate spectra, so most of the stars in all these libraries are too bright for calibrating imaging, except maybe for the narrow band filters.

JEm is leaving the meeting.
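As referenced above, an illustrative sketch of how a flux-calibrated spectrum can be turned into a synthetic Vega-system magnitude in a given filter; the file names, the simple photon-counting weighting, and the use of a Vega spectrum as reference are assumptions for illustration, not the actual calibration procedure:

    import numpy as np

    def synthetic_flux(wl_um, f_lambda, filt_wl_um, filt_trans):
        """Photon-weighted mean flux of a spectrum through a filter curve.

        wl_um, f_lambda: spectrum wavelength grid (micron) and F_lambda;
        filt_wl_um, filt_trans: filter wavelength grid and transmission.
        """
        trans = np.interp(wl_um, filt_wl_um, filt_trans, left=0.0, right=0.0)
        # Photon-counting convention: weight by lambda * T(lambda).
        num = np.trapz(f_lambda * trans * wl_um, wl_um)
        den = np.trapz(trans * wl_um, wl_um)
        return num / den

    # Hypothetical usage: IRTF library star vs. Vega through the VISTA Y filter.
    # "star.txt", "vega.txt" and "vista_Y.txt" are placeholder file names.
    wl_s, f_s = np.loadtxt("star.txt", unpack=True)
    wl_v, f_v = np.loadtxt("vega.txt", unpack=True)
    wl_f, t_f = np.loadtxt("vista_Y.txt", unpack=True)

    mag_vega = -2.5 * np.log10(synthetic_flux(wl_s, f_s, wl_f, t_f) /
                               synthetic_flux(wl_v, f_v, wl_f, t_f))
    print(f"Synthetic Y magnitude (Vega system): {mag_vega:.3f}")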
======================================================
======= 6. Action Items review ============
======================================================

OLD AIs:
--------

AI 2009-11: TSz should find out if the constraint sets could be carried with the OBs, to the OS and the FITS header of the VIRCAM files. Otherwise it is not easy to propagate these data via interfaces.
2009-07-29: TSz iterating with R. Schmutzer, but DFI (T. Bierwirth) will also need to be involved. For surveys, given the small range of programs, it is expected that there are not too many different OBs, and this might need to be followed in a different/manual way in the beginning - this could imply a major change for the tools that cannot be done at the last moment.
2010-03-23: the keywords have been implemented except for the moon; there is a ticket with details; some testing done by VDI, no errors found; keep ongoing, try to complete during my next shift, end of Mar - early Apr.
2010-04-23: VDI reports one formula is missing
2010-05-27: Corresponding PPRS is still pending. VIV asks to keep this AI open until the PPRS is closed.
2010-06-29: No changes with respect to the last meeting. VIV asks to keep this AI open/pending until the PPRS is closed.
2010-08-24: VIV not present at the meeting. It is not known if there was any progress. Therefore it was decided to keep this pending while waiting for final status from VIV.
2010-10-05: VDI: still pending, will force the issue with the SW during my next shift, from Oct 19 to 30, 2010
STATUS: PENDING

**

AI 2010-06: TSz+VDI to improve the TIO training, to prepare an operational manual
2010-05-27: Operations wiki has been updated. More training needed.
2010-06-29: Not much was done due to problems with IQ + intervention + work on IP. Keep the AI open.
2010-08-24: SMI comments that the training is an ongoing task. Therefore it seems that this could be closed. The status of the operational manual mentioned in the AI is not clear. Thus for the moment kept as PENDING. Comment from VIV and TSZ on the status required.
2010-10-05: to be kept open for now as a reminder
STATUS: PENDING

**

AI 2010-09: photometric zeropoints as calculated by the pipeline.
2010-05-27: Ongoing, see the corresponding discussion this meeting
2010-06-29: TSZ points out that this is not a well defined AI. MIR adds that the currently existing plots would already have shown the problems with zero points from pawprints to tiles. It is not clear whether this is about the zero points from the Garching pipeline. WHU states that the Garching pipeline has de-blending on. MIR: there is no significant increase in the scatter with respect to what one would expect from the random noise. It is agreed in the end to send one example data set, and then the comparison should be done directly between Garching and CASU.
2010-08-24: This is still pending. WHU should send some example to JLE/MIR in order to do the checks and clarify what the problem actually is about.
2010-10-05: the example data were sent to MIr and JLe; the Cambridge pipeline results for the individual detectors were as expected from "rms" noise considerations, though offset by 0.1 mag from the Garching results. JLe and WHu are attempting to identify the cause of this difference, which is most likely due to differences in the calibration frames used. JLe sent a new linearity curve to use; WHu reprocessed the data with the new linearity correction and there were no changes; JLe asked if he could get the calibration frames used from WHu. CASU use the "average" of all detector zero-points to monitor extinction/throughput.
STATUS: PENDING

**

AI 2010-12: VIV to put in the User Manual the persistence amplitude and slope. Add the persistence report to the instrument page.
2010-08-24: MRE reports that the persistence is mentioned in the latest version of the manual, but not the amplitude and slope. The persistence report still needs to be added to the instrument web page (link in the news?).
2010-10-05: still pending, to be done during my next shift, which is for VISTA, Oct 19-30, 2010.
STATUS: PENDING

**

AI 2010-13: SMI to verify with TSZ the flat field lamp stability and the adjustment of the correct voltage for H and Ks flats.
2010-10-05: the last two times the counts were fine, high but below the non-linearity
STATUS: CLOSED

**

AI 2010-14: SMI to raise the issue of degradation of the coating (based on the degradation of the photometric zero points). The next coating should be scheduled.
2010-10-05: Serge G. said that this is pending, some tests are still pending; M2 is much worse than M1; studying the experience of GEMINI; M2 hasn't been recoated for ~3 yrs; there is no way to measure the mirror reflectivity separately for M1 and M2 without a reflectivity measuring device; M2 seems to have a film of gray material, while M1 has only spots at the edges
MIr: I had a quick look, Z is affected most, almost linear in terms of mag/month! (An illustrative trend-fit sketch is given below.)
VDI: The AI will be kept open as a reminder and it is becoming urgent.
STATUS: PENDING
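For illustration, a minimal sketch of the kind of zero-point trend fit behind the mag/month statement above, assuming a hypothetical two-column text file zp_Z.txt with MJD and nightly Z-band zero point (the file name and format are assumptions):

    import numpy as np

    # Fit a straight line to nightly zero points vs. time to estimate the
    # coating-degradation rate in mag/month. File name/format are hypothetical.
    mjd, zp = np.loadtxt("zp_Z.txt", unpack=True)
    slope_per_day, intercept = np.polyfit(mjd, zp, 1)
    print(f"Z-band zero-point trend: {slope_per_day * 30.4:+.4f} mag/month")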
**

AI 2010-15: MRE to contact the survey PIs and remind them that the QC checks on the science data need to be done by the survey teams and that any issues should be reported to CASU and usd-help@eso.org.
2010-10-05: we have done as much as we can
STATUS: CLOSED

**

NEW AIs:
--------

AI 2010-16: for VDI, to update the PPRSs discussed at the meeting:
-- PPRS-037040: add to check if there were trends with the amplitude of the AG corrections vs. wind, etc.; check if they retry the same OBs.
STATUS:

AI 2010-17: testing the fiber link Paranal - Antofagasta with real data (VISTA calibrations):
- WHu to send an e-mail to JEm describing which data are needed
- JEm to coordinate with Fernando Comeron
STATUS: NEW

AI 2010-18: SMi to create a PSO explaining the ellipticity QC and the requirement to have no more than 4 detectors with ellipticity >0.2
STATUS: NEW

AI 2010-19: placeholder AI to keep track of the ellipticity: AYo to provide the requested info to MRe, VDI, etc. about the OB IDs, etc., and based on this and the PI reports in Nov 2010 to reclassify some OBs, if necessary.
STATUS: NEW

AI 2010-20: create a PSO for the VISTA TIOs: to start the night (after the HOCS) with VVV, and to postpone observing the first Phot std until the VVV is no longer observable. This mode is to be maintained until the end of Nov, when the VVV field is no longer visible.
2010-10-05: VDI: done
STATUS: CLOSED

AI 2010-21: for JEm, to find out for the next IOT if it is possible to implement hardware windowing of the detectors to observe bright stars for the purpose of absolute calibration
2010-10-06: Update from JEm:
-- The VIRGO detectors do not allow random access of a window region. One needs to access all the pixels in each row up to the start of the window region in order to access the window rows/columns.
-- However, it would be possible to read a window region if it is positioned close to the bottom of the detector, i.e. read all rows of the detector (the few rows at the bottom, close to the readout side) in the normal fashion until the end of the window, and then start the frame again and not bother with the rest of the frame. This would speed up the window rate depending on the number of rows of the window at the bottom of the detector (an illustrative speed-up estimate is given at the end of these minutes).
-- This would involve generating a new readout sequence in IRACE and also an acquisition process in the software. It would be worth trying this in the lab first.
STATUS: NEW

==================
======= 7. Other issues =================================================
==================

- clarification: the AI for the ellipticity is for SMi

=============================
======= 8. Date of the next meeting ======================================
=============================

- end of Nov: Nov 23, 2010 (because Nov 30 is the Phase 3 workshop in Garching organized by Magda Arnaboldi)
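Illustrative estimate of the windowed-readout speed-up described in JEm's update to AI 2010-21; the assumption that readout time scales linearly with the number of rows read, the example window sizes, and the full-frame readout time are illustrative figures, not measured values:

    # Rough speed-up from reading only a window of rows at the bottom of a
    # VIRGO detector (2048 rows); since rows are read sequentially from the
    # readout side, only the rows up to the end of the window must be read.
    # The full-frame readout time below is an assumed figure.
    N_ROWS_FULL = 2048   # VIRGO detector rows
    T_FULL_S = 1.0       # assumed full-frame readout time, seconds

    def window_readout_time(window_rows):
        """Readout time if only the bottom `window_rows` rows are read."""
        return T_FULL_S * window_rows / N_ROWS_FULL

    for rows in (64, 128, 256):
        t = window_readout_time(rows)
        print(f"{rows:4d}-row window: ~{t * 1e3:.0f} ms per frame "
              f"({N_ROWS_FULL / rows:.0f}x faster)")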