VISTA IOT 2010-11-24, 11-12:30 S-go / 15-16:30 Garching, Minutes
================================================================
(prepared by Valentin D. Ivanov)

Participants:
- Garching: Marina Rejkuba (MRe), Wolfgang Hummel (WHu), Michael Hilker (MHi)
- Santiago: Thomas Szeifert (TSz), Steffen Mieske (SMi)
- Paranal: Valentin D. Ivanov (VDI), Henri Boffin (HBo)
- Via phone: Jim Emerson (JEm), Jim Lewis (JLe), Mike Irwin (MIr)

************************************************************

===================================================================
1. VISTA general status and planned activities (TSz, VDI, MRe, SMi)
===================================================================

TSz: a few technical problems; the pending mirror coating was supposed to happen in May 2011, but an earlier date was requested and is still being considered, because it will be done in coordination with the replacement of one leg of the hexapod
SMi: what about the oscillation?
TSz: it hasn't happened since Oct 17, but it may re-occur at any time; there is still no definitive solution and it is not straightforward to determine the cause
MRe: it seems the problem was random/transient in October
TSz: one possibility was the cleaning of an electronic board that had accumulated a lot of dust over time
SMi: the last entry by Stefan Sandrock mentions a correlation with the pointing direction
VDI: this correlation is new, there were no obvious correlations before, but Stefan Sandrock has left and we cannot clarify the statement in PPRS-37040
SMi: when is Stefan Sandrock going to Garching?
TSz: this may be one of his last turnos

===================================================================
2. Optimization of the operations, sources of overheads (VDI, TSz, MRe, MHi, etc.)
===================================================================

TSz: let's summarize the overall progress of the survey - we are about 5 hr per week behind if you compare the time spent with the time actually useful, i.e. a night of 8 hrs is not really 8 hrs long, which is common, but unlike the other ESO telescopes we have no C class programs; we need 5 sec longer per offset - it was not like this until April, when we changed the tolerance of the positioning; I thought this was an action of the M2 unit, but no; in May-Jun we changed the parameters, forcing the system to wait until the offset is done, which generated ~15 min of extra offset time every night (a rough arithmetic check follows the list below); also, there is a lack of parallelism, i.e. the filter wheel changes could be done while the telescope is moving; this review prompted me to write a document with the following suggestions:
- add an extra 5 sec per offset
- reduce the number of standard stars
- announce it to the PIs when preparing the new IP
- optimize the wavefront sensor star selection to avoid forcing the operator to select stars manually; this was partially mitigated by extending the validity of the image analysis, but many reference stars are still fainter than 15 mag and it is not realistic to get a correction; one proposal is to brighten the mag limit for the WFS reference stars
- optimize the OneCal to make it good enough so we can use it for a longer time; there are two things still to be changed, the altitude and the rotation dependencies; they are larger than our model of the telescope presumes, so below 1 arcsec they are not good enough
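A minimal arithmetic sketch of how the quoted per-offset and per-night figures relate (the ~180 offsets per night below is simply what the two quoted numbers imply, not an independently measured count):

    # Illustrative consistency check of the overhead figures quoted above.
    extra_per_offset_s = 5        # the extra ~5 s now spent waiting per offset
    extra_per_night_s = 15 * 60   # the ~15 min of extra offset time seen per night

    implied_offsets = extra_per_night_s / extra_per_offset_s
    print(implied_offsets)        # ~180 offsets per night account for the ~15 min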
TSz: when is the next IP due?

NOTE ADDED A POSTERIORI: JEm - after reading the minutes - said that the filter wheel does (or did when we handed over to ESO) move in parallel with telescope movements

SMi: Dec 1, 2010
TSz: there were other suggestions too, and we have to do this now
VDI: between the standards and the reduced number of twilights we gain 40-45 min; if we tell the TIOs not to try to complete OBs that would be classified "C" because of the degraded weather, we get more than 1 hr per night
TSz: this is not enough, we need ~2 hrs; we need another hour for science, which we can get from parallelizing or from minimizing the number of aborts due to technical problems
SMi: why is the scheduling so aggressive?
(JEm joins the meeting)
MRe: the scheduling was based on preliminary overhead estimates which turned out to be too optimistic
TSz: where we can and should improve is to make the observing sequences more reliable; most often they fail because the WFS stars are not good enough, hence the change of the WFS magnitude limit to ~14.8 or 14.9 mag
MHi: if we do this we will have problems with the deep fields, and maybe for some VHS and VIKING stripes
MRe: Ultra-VISTA has one field and they have been told explicitly to make sure they have good WFS stars; it is harder to check the reference stars for VIKING and it is almost impossible for the VHS
VDI: it is one day of work, even for the VHS, to check the reference stars against 2MASS
TSz: alternatively, we charge them 7 min extra for a manual acquisition in case of faint WFS reference stars
MHi: can we have in the ETRM a condition to charge extra time if the reference stars are too faint?
TSz: furthermore, we often spend ~1/2 hr just to understand the problem (which in the end turns out to have been caused by a bad reference star, not by a technical failure); Jim had suggested to increase the limit to 15.3; I remember successful corrections being achieved on a 15.0 mag star, but never on fainter ones
JEm: how much of the AO problem would be due to faint stars? It is not too much of a problem to implement in SADT a check against 2MASS of what the magnitude is
TSz: if we change to an extrapolated magnitude (to R-band) we will need another mag limit
JEm: the limit will be on the calculated band of the WF sensor (which is approximately I-band), so it is homogenized
TSz: all this, including the tests, must be done by tomorrow with a deadline on Tuesday; it is not realistic
JEm: could it be that the mirror recoating contributes to the problem?
TSz: sure, it contributes, this is why we need the recoating sooner; but we also need to get an engineer quickly to replace hexapod leg No 1 (I think) and this appears to be a very complicated logistical problem; Serge Guinaut is trying to find an earlier window; for now we can put in a compromise value, i.e. 15 mag; it is very time consuming to acquire with a WFS star at the faint end, including the time we spend investigating the failures
MHi: the new SADT version allows setting the limit independently for each catalog; the limit is not taken from the Instrument Package anymore; we calculated the R mag from 2MASS during my last trip to Paranal
JEm: we implemented different limits for different catalogs in the SADT config file
VDI: why do we have to calculate the limit on the fly for the R band?
JEm: actually, the CCD sensitivity is closer to I-band
MIr: an R or I band generated from 2MASS should be more reliable than any of the optical plate catalogs
JEm: the issue is deciding how to deal with the limits
MHi: we coded with VDI the algorithm for generating the R band from 2MASS
TSz: remember, there is no time for testing
MHi: we already studied the mag limits for the different catalogs, it is in the report
MIr: it is better to compare with SDSS, it is more reliable, and then set the limit; I can test it for you - we have the SDSS installed here
JEm: I showed you the paper
MIr: we tested it
JEm: even if we know the numbers, there is no way to implement the SADT changes by next week
TSz: I have a turno on Dec 11-18, I can do some testing, comparing the magnitudes with the real flux on the WF sensors
JEm: we set all limits on GSC2 but apparently it wasn't too good; 2MASS is probably the best solution in the long term
TSz: this could be done, but what are the timelines? we have an internal deadline on Tue; when does USD have to deliver?
MRe: a week before Christmas
TSz: so in reality mid-Jan 2011?
MIr: we need a week to provide the equations to JEm
JEm: I am reluctant to do it fast
TSz: indeed, it is too risky
MHi: at the moment the users are allowed to use 2MASS for some difficult fields with J-band as the reference magnitude, but this is done with the same limits as for GSC2; we can tell them not to do it because we don't have the correct limits
JEm: neither UCAC2 nor GSC give mags in the actual sensitivity range of the WFS; presumably in the IP we will have different limits for the different catalogs

CLARIFICATION ADDED BY JEm A POSTERIORI: I think what to do is clear later, but the current SADT allows setting a different limit for 2MASS J from GSC-2 R - that was exactly why I suggested the changes and made them (any catalogue can have its own limit, and SADT can be set to read these, or to read from the IP). Of course this allows the user to potentially do foolish things, and in due course all the right limits should go back to being read from the IP.

VDI: we keep ignoring one more catalog - DENIS; they have I band
TSz: we need a server where it could be installed
VDI: there is a server - DENIS is on CDS
JEm: it is not so simple to add a catalog; but for the next semester we can do it
TSz: we should keep track of the proposed changes, but for now we should just implement different mag limits
JEm: it might be wiser to generate the I band from 2MASS, which has better astrometry
VDI: the advantage of DENIS is that the I band is already there, and it covers half of the sky homogeneously
MRe: let's summarize - for now we implement different mag limits in the SADT config file
VDI: is it realistic to implement the extra overhead for faint WFS reference stars?
MRe: it is a question
VDI: let's check with Stephane Marteau then
TSz: I am not sure if it is worth doing and spending a lot of time studying how long the extra overhead should be
VDI: there is an easy way - let's look at the difference between the two peaks in the VIDEO execution time histogram
MRe: the difference between the peaks is 5-7 min
VDI: so, we charge an extra 7 min for the faint reference stars
MRe: for the OBs that have too faint WFS stars the PIs should be advised to change the pointing in both RA and Dec
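To illustrate the kind of calculation being discussed, here is a minimal sketch of screening WFS reference stars by an I-band magnitude estimated from 2MASS (the linear colour term and its coefficient, as well as the 15.0 mag cut, are assumptions for illustration only, not the relations MHi and VDI derived or the limit finally adopted):

    # Illustrative sketch: screen candidate WFS reference stars using an
    # approximate I-band magnitude estimated from 2MASS J and Ks.
    # The colour-term coefficient and the 15.0 mag cut are placeholder values.

    def estimate_i_from_2mass(j_mag, ks_mag, colour_coeff=1.5):
        """Very rough I-band estimate from 2MASS J and Ks via a linear colour term."""
        return j_mag + colour_coeff * (j_mag - ks_mag)

    def bright_enough_for_wfs(j_mag, ks_mag, limit=15.0):
        """True if the estimated I magnitude is brighter than the assumed WFS limit."""
        return estimate_i_from_2mass(j_mag, ks_mag) <= limit

    # Example: a star with J = 13.8 and Ks = 13.1 has an estimated I of ~14.85,
    # so it would pass a 15.0 mag cut.
    print(bright_enough_for_wfs(13.8, 13.1))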
TSz: we also have to discuss the standard stars; the change from 5 to 3 jitters didn't allow us to measure the IQ well; VDI proposed to stay only with UKIRT stars and to omit the 2MASS TSFs
JEm: do we need standards?
MIr: the reason for the TSFs was to ensure that we can get good color equations, etc; this is done and not needed anymore
TSz: with the NB filter we don't need to do standards at all
MRe: Ultra-VISTA will be calibrating against themselves; they may not need the NB standards anymore
TSz: so we stop observing standards in the NB filter immediately; the remaining question is jitter3 or jitter5, it could save 3-5 min
MIr: are we talking about one standard at the end, and one at the beginning?
TSz: and one in the middle
MIr: this is really a question for the VVV
VDI: for the VVV the full JHKs coverage is done, the remaining OBs are for ZY and for Ks variability monitoring, so if we still do a UKIRT standard in ZY when the VVV is observed, we should be fine; the variability OBs can be calibrated against the JHKs coverage or against earlier variability epochs
TSz: can we summarize?
MIr: we skip the 2MASS TSFs and we only observe a UKIRT standard when the VVV ZY is done
VDI: the UKIRT standard at the beginning is also useful for the sky brightness
TSz: this could be useful for ZYJ only; but it is confusing, can we summarize again?
VDI: here is a summary of the night:
- twilight flats in two filters (more if there is a backlog of pending flats); this gives us ~10 min extra per night
- HOCS
- UKIRT standard in ZYJ (always, so the TIOs can decide what to observe next based on the information about the sky brightness); if no UKIRT standards are available then observe a TSF
- science observations
- if there is a gap at the end: do more UKIRT standards in all filters; if no UKIRT standards are available then observe a TSF
TSz: for the IP that is due next week we don't need more input

===================================================================
3. Status of various software: SADT/OT/P2PP/etc
===================================================================

MHi: about the bug in the SADT...
JEm: which bug?
MHi: the one that requires a restart of SADT when searching for guide and AO stars
JEm: I will look at it. What about the clbl parameter of UCAC3?
MHi: this kills 90% of the available stars. It will be described in my report on the catalog tests.
JEm: I will look at this as well.
MRe: note that the SADT is part of the IP, so the deadline is Dec 1!
JEm: I will try to do it next week
VDI: I am on Paranal until Dec 3, if something is ready, please send it earlier to start testing
TSz: it is too soon, but my test period is Dec 11-17

===================================================================
4. QC issues, data reduction/product issues (MIr, etc.)
===================================================================

WHu: I have a question whether reducing the number of twilight flats from 3 to 2 will give us enough flats
JLe: we use only about 1/2 of the flats anyway
VDI: can you please elaborate? maybe we can use another 10 min per night
TSz: this happens too early to be useful for science, it is still bright
MIr: to clarify what Jim said, in practice we select the best ~1/2 of the flats taken to make our master flatfields for science processing
VDI: OK
MRe: no changes for P2PP
WHu: JLe was here and we discussed issues with the pipeline, 80% of the tickets were solved, a new version is working fine; a few weeks ago we had an electronic board problem and the science data were affected by electronic noise; the flux range of the twilight flats is still too narrow, the detectors with high response get too high counts, the detectors with lower response get too few counts, usually a few detectors are affected
TSz: the flux is the same but the gain conversion is different
JLe: it is also a saturation issue; it is a matter of setting up the pipeline to filter the files properly
SMi: the main problem is the H-band
WHu: the problem is that there are many images from the high-gain detectors with values above the 12000 ADU limit
SMi: the easiest solution is to reduce the target count level by 20-30% so the high-gain detectors are OK
WHu: do you take the flats automatically?
SMi: yes, and the target level is a configurable parameter
JLe: or we can do flats in the morning twilight
TSz: this needs a permission, it is a telescope safety issue
JLe: if the upper limit is 12000 ADU, then none of the detectors is saturated
VDI: should we just raise the 12000 ADU limit then?
TSz: the data are OK, it is just that the limit is too conservative; also the DIT values are in integer seconds, which introduces an extra error

===================================================================
5. Progress of the surveys, fraction of repeated OBs (MRe)
===================================================================

MRe: we need to communicate the change of the classification rules to the PIs
TSz: a document has been sent to the Paranal director; it tells the PIs that this is done "in the interest of completing the surveys"; we need to send a letter to the PIs about the status, changes, etc. - send it also to JEm as VISTA PI, and announce the change to the applicants for open time; we should not hide anything, say that there is no contingency time, all users are "sitting in the same boat"
MRe: one more comment for JLe and you - we provide to CASU the list of executed OBs and their grades; one issue is the aborted OBs - if an OB was aborted and restarted from, for example, pawprint 3, and completed successfully, then CASU reduces the data and they are fine, but the grade comes from the first execution, so the PIs in the end get inconsistent information; this is just to inform everybody; we have also received reports from the PIs - the satisfaction varies from survey to survey; one comment from a few surveys is that the J-band is shallower than expected, shallower than the ETC prediction; it may be due to the bright sky at the beginning of the night or because of the mirror degradation; next week we will have a workshop in Garching with the PIs and team members
===================================================================
6. ZY flux calibration from the IRTF spectrophotometric standards; see http://irtfweb.ifa.hawaii.edu/~spex/IRTF_Spectral_Library/ for details on this library (VDI)
===================================================================

VDI: let's defer this item to the next meeting; the numbers are too low, probably I have an error
JLe: you can compare with our values on CASU's web page
VDI: mine are lower than those

===================================================================
7. Action Items review (enclosed below is the AI section from the minutes of the last IOT)
===================================================================

See below.

===================================================================
8. Other issues
===================================================================

SMi: EVALSO can be used to transfer VIRCAM calibrations for QC even before the commissioning and acceptance; the calibrations uncompressed will be 30 GB, compressed slightly less
TSz: this needs to be coordinated with Cecilia
SMi: it will take 5-6 hrs to transfer the calibrations
WHu: VISTA calibrations will be transferred with the lowest priority, after the calibrations for all other instruments
TSz: from my side I have given the green light
SMi: I was afraid this would lead to a backlog
TSz: I was sure everybody concerned had OK-ed it, including Cecilia, so it is going ahead
JEm: are we talking about the fiber?
TSz: to my knowledge this is still a microwave link; I haven't seen a real schedule for the fiber
VDI: does anybody know what the EVALSO-light mentioned in the document distributed earlier is?
JEm: I don't know; we can just tell EVALSO that the activities can start; I will send a message to Andy Wright to inquire about the details

===================================================================
9. Date of the next meeting (proposed Jan 2, 2011)
===================================================================

MIr: there is a VMC meeting around this time, next week would be better
VDI: fine, let's do it on Thu, Jan 27, 2011, at the same time (11:00 Chile, 15:00 Garching)

************************************************************

======================================================
======= 6. Action Items review ============
======================================================

OLD AIs:
--------

AI 2009-11: TSz should find out if the constraint sets could be carried with the OBs, to the OS and to the FITS header of the VIRCAM files. Otherwise it is not easy to propagate these data through the interfaces.
2009-07-29: TSz iterating with R. Schmutzer, but DFI (T. Bierwirth) will also need to be involved. For surveys, given the small range of programs, it is expected that there are not too many different OBs, and this might need to be followed in a different/manual way in the beginning - this could imply a major change for the tools that cannot be done at the last moment.
2010-03-23: the keywords have been implemented except for the moon; there is a ticket with details; some testing done by VDI, no errors found; keep ongoing, try to complete during my next shift end of Mar - early Apr.
2010-04-23: VDI reports one formula is missing.
2010-05-27: Corresponding PPRS is still pending. VDI asks to keep this AI open until the PPRS is closed.
2010-06-29: No changes with respect to the last meeting. VDI asks to keep this AI open/pending until the PPRS is closed.
2010-08-24: VDI not present at the meeting. It is not known if there was any progress.
Therefore it was decided to keep this pending while waiting for the final status from VDI.
2010-10-05: VDI: still pending, will force the issue with the SW during my next shift from Oct 19 to 30, 2010
STATUS: PENDING

**

AI 2010-06: TSz+VDI to improve the TIO training, to prepare an operational manual
2010-05-27: Operations wiki has been updated. More training needed.
2010-06-29: Not much was done due to problems with IQ + intervention + work on IP. Keep the AI open.
2010-08-24: SMI comments that the training is an ongoing task. Therefore it seems that this could be closed. The status of the operational manual mentioned in the AI is not clear. Thus for the moment kept as PENDING. Comment from VDI and TSZ on the status required.
2010-10-05: to be kept open for now as a reminder
STATUS: PENDING

**

AI 2010-09: photometric zeropoints as calculated by the pipeline.
2010-05-27: Ongoing, see the corresponding discussion at this meeting.
2010-06-29: TSZ points out that this is not a well defined AI. MIR adds that the currently existing plots would already have shown the problems with zero points from pawprints to tiles. It is not clear whether this is about the zero points from the Garching pipeline. WHU states that the Garching pipeline has de-blending on. MIR: there is no significant increase in the scatter with respect to what one would expect from the random noise. It is agreed in the end to send one example data set and then the comparison should be done directly between Garching and CASU.
2010-08-24: This is still pending. WHU should send some example to JLE/MIR in order to do the checks and clarify what the problem actually is about.
2010-10-05: the example data were sent to MIr and JLe; the Cambridge pipeline results for the individual detectors were as expected from "rms" noise considerations, though offset by 0.1 mag from the Garching results. JLe and WHu are attempting to identify the cause of this difference, which is most likely due to differences in the calibration frames used. JLe sent a new linearity curve to use; WHu reprocessed the data with the new linearity correction and there were no changes; JLe asked if he could get the calibration frames used from WHu. CASU use the "average" of all detector zero-points to monitor extinction/throughput.
STATUS: PENDING

**

AI 2010-12: VDI to put the persistence amplitude and slope in the User Manual. Add the persistence report to the instrument page.
2010-08-24: MRE reports that the persistence is mentioned in the latest version of the manual, but not the amplitude and slope. The persistence report still needs to be added to the instrument web page (link in the news?).
2010-10-05: still pending, to be done during my next shift, which is for VISTA, Oct 19-30, 2010.
STATUS: PENDING

**

AI 2010-14: SMI to raise the issue of the degradation of the coating (based on the degradation of the photometric zero points). The next coating should be scheduled.
2010-10-05: Serge G. said that this is pending, some tests are still pending, M2 is much worse than M1; studying the experience of GEMINI; M2 hasn't been recoated for ~3 yrs; there is no way to measure the mirror reflectivity separately for M1 and M2 without a reflectivity measuring device; M2 seems to have a film of gray material, while M1 has only spots at the edges
MIr: I had a quick look, Z is affected most, almost linear in terms of mag/month!!
VDI: the AI will be kept open as a reminder and it is becoming urgent.
STATUS: PENDING

**

AI 2010-16: for VDI, to update the PPRSs discussed at the meeting:
-- PPRS-037040: add a check whether there were trends of the amplitude of the AG corrections vs. wind, etc.; check if they retry the same OBs.
STATUS: DONE

**

AI 2010-17: Testing the fiber link Paranal - Antofagasta with real data (VISTA calibrations):
- WHu to send an e-mail to JEm describing which data are needed
- JEm to coordinate with Fernando Comeron
2010-11-24:
- TSz - wants to stay out of the EVALSO business; this is not an IOT business
- SMi - we should put pressure on implementing it
- TSz - we are powerless to do anything
- JLe - I will speak with Fernando Comeron about it, but we should close it.
STATUS: CLOSED

**

AI 2010-19: Placeholder AI to keep track of the ellipticity: AYo to provide the requested info to MRe, VDI, etc. about the OB IDs, etc. and, based on this and the PI reports in Nov 2010, to reclassify some OBs, if necessary.
2010-11-24:
JLe - I sent the info to Marina
MRe - the number of OBs is extremely large and we need feedback from the PIs about which of them need to be repeated (VVV in particular); it is better to cancel them and have the PIs later ask for more time to complete the surveys
VDI - caution with the wording, the PIs were guaranteed a certain amount of time, we cannot re-write it
MRe: the report was delivered by AYo and the PIs were informed that they will need to report on the quality of the data and request to repeat some areas in case they were done too far out of specs. The additional time to do these repetitions should be taken into account in the scheduling in case we are talking about a large number of OBs. Strictly speaking, this would not be a direct repetition, but rather some sort of compensation time.
STATUS: CLOSED

**

AI 2010-21: for JEm to find out for the next IOT if it is possible to implement hardware windowing of the detectors to observe bright stars for the purpose of absolute calibration
2010-10-06: Update from JEm:
-- The VIRGO detectors do not allow random access to a window region. One needs to access all the pixels in each row up to the start of the window region in order to access the window rows/columns.
-- However, it would be possible to read a window region if it is positioned close to the bottom of the detector, i.e. read all rows of the detector (the few rows at the bottom close to the readout side) in the normal fashion until the end of the window, and then start the frame again and not bother about the rest of the frame. This would speed up the window rate depending on the number of rows of the window at the bottom of the detector.
-- This would involve generating a new readout sequence in IRACE and also an acquisition process in the software. It would be worth trying this in the lab first.
2010-11-24: VDI - info obtained from JEm, solved
STATUS: CLOSED
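For illustration, a rough sketch of the speed-up one might expect from a bottom-of-detector window under the scheme described above (the 2048-row full frame matches the VIRGO format, but the 128-row window and the fixed per-row read time are assumed numbers, not measurements):

    # Rough sketch of the windowed-readout gain described in AI 2010-21.
    # Only the rows up to the end of a window at the bottom of the detector
    # are read; the rest of the frame is skipped by restarting the frame.

    FULL_ROWS = 2048      # rows in a full VIRGO frame
    WINDOW_ROWS = 128     # assumed window height at the bottom of the detector
    ROW_TIME = 1.0        # arbitrary time per row; it cancels in the ratio

    full_frame_time = FULL_ROWS * ROW_TIME
    window_time = WINDOW_ROWS * ROW_TIME   # read only the bottom rows

    print(full_frame_time / window_time)   # ~16x faster frame rate for these numbers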