Maintenance window Wednesday 2017-05-03 -- finished
2017-05-03
Monthly maintenance window begins at 0900 hours on the first Wednesday of the month. (That is today.)
This time we will:
- Upgrade Slurm, Linux kernel and other system software on Bianca, Fysast1, Irma, Milou, and Rackham.
- Upgrade Linux kernel and other system software on Castor and Grus.
- Physically move one of Bianca's OpenStack server machines from one chassis to another.
Bianca and Grus will be unavailable while we service them.
We will restart all login nodes of Fysast1, Irma, Milou and Rackham, probably only once.
Slurm jobs on Fysast1, Irma, Milou and Rackham will continue to run, but Slurm commands will be unavailable at times during the day.
Slurm queues on Bianca will be stopped and, most of the day, logins to Bianca will not be possible.
We plan to keep you informed about our progress with the maintenance with updates here.
Update at 1210 hours
Parts of Bianca and Castor have been updated.
We have run into some unexpected problems with the new Slurm version. The first machine we are testing it on is Irma, so Slurm is currently unavailable there. We are sorry about that.
Update at 1605 hours
We are now giving up on the new Slurm version and going back to the old one.
Update at 1730 hours
We have changed back to yesterday's Slurm version.
Some login nodes have not yet been restarted; they will be soon.
Service of Bianca continues tomorrow. Milou-f will be restarted this evening or tomorrow.
Update Thursday at 0845 hours
We will soon restart the login node of Fysast1.
Maintenance of Bianca continues today; we are trying to improve the compute nodes of the project clusters.
Irma, Rackham, and the UPPNEX part of Milou are back in production. Compute nodes will upgrade themselves automatically, so the waiting time in Slurm queues will be longer than normal today.
Update Thursday at 1545 hours
We have lost the connection to some of Fysast1's compute nodes and are working to restore it.
Maintenance on Bianca has finished and we will soon allow new logins.
Update Thursday at 1600 hours
Bianca is back in production.
Update Friday at 0920 hours
Most of Fysast1's compute nodes are now available. We will probably close the maintenance window soon.
Update Friday at 1135 hours
The connection to compute nodes of Fysast1 is fully recovered. We have now finished maintenance.
Next maintenance day is June 7th.
Old System News
-
milou2 rebooted August 28
milou2 rebooted on Monday 2017-08-28 at 19:51
-
milou2 rebooted August 19
milou2 rebooted on Saturday 2017-08-19.
-
Intelmpi
Intelmpi performance issues
-
Issues with X11 on milou (X11Forwarding) -- SOLVED
We have observed, and several users have reported, issues with running X11 applications on Milou. We are investigating.
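In the meantime, a quick way to check whether X11 forwarding works for your session is to log in with forwarding enabled and start a small X11 program. The host name and test program below are only examples, not a fix:

    # Log in with X11 forwarding enabled (try -Y instead of -X if you get authentication errors)
    ssh -X your_username@milou.uppmax.uu.se
    # On the login node, start a small X11 application; a window should open on your local screen
    xclock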
-
milou2 and milou-b rebooted
The login nodes milou2.uppmax.uu.se and milou-b.uppmax.uu.se were rebooted at 15:00 today (29th of May) due to issues with the kernel NFS module.
-
Cooling stop at 17.00 hours the 23rd of May -- CANCELLED
-
Issues with certain project volumes for milou/pica 20170515 and onwards.
Some project volumes on pica are very heavily loaded and slow or next to unusable for interactive use. We are doing what we can to resolve this but cannot promise a set time for when things will behave normally again.
UPDATE: We have had some continuing issues because some nodes did not notice when the storage recovered. We are working on this, but it may have caused disturbances such as failed jobs or missing output.
-
Support may be slow May 11th and 12th due to conference
The UPPMAX system group hosts the spring 'SONC' conference, where administrators from all SNIC centers meet to discuss how to improve our centers. With many UPPMAX administrators out of the office during the conference (Thursday 11th and Friday 12th), support will likely be less responsive.
-
slurm disturbance on milou 2017-05-10
Due to a misconfiguration active on some nodes around 12 AM today, some jobs launched on Milou could not start.
If your jobs were affected, they will likely show up as completed, although with a very short run time (a few seconds).
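To check whether any of your jobs were hit, an sacct query along these lines (adjust the time window to your own case) should help; jobs reported as COMPLETED with an Elapsed time of only a few seconds are the likely victims:

    # List your jobs from the affected period with state, run time and exit code
    sacct -u $USER -S 2017-05-10 -E 2017-05-11 --format=JobID,JobName,State,Elapsed,ExitCode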
-
Disturbances in Slurm today Tuesday -- finished
-
Maintenance window Wednesday 2017-05-03 -- finished
-
Slurm problems on Rackham -- fixed
-
Intel license server not responding --fixed
We have received reports that the Intel license server is not responding. We are investigating. This might manifest itself as hangs or freezes during compilation.
-
Problem "Invalid account or account/partition..." --solved
We have identified a problem with the Slurm account database. If you were just added to a project, or created a new one, you might get the following message when submitting jobs: "Invalid account or account/partition...". It primarily affects Rackham and Milou.
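While we work on this, you can check whether your project has reached the Slurm account database by listing your Slurm associations; the project name in the comment is only a placeholder:

    # Show which accounts (projects), clusters and partitions your user is associated with
    sacctmgr show associations user=$USER format=cluster,account,partition
    # If your project appears in the list, submitting with "sbatch -A your_project ..." should work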
-
Problem with Slurm on Milou -- fixed
-
Interrupts in Slurm service on Rackham -- fixed
-
Bianca's storage system Castor has problems -- fixed
-
Resetting your password from the homepage is not working --fixed
Resetting your password from this page is currently not working. If you need to reset your password, please contact support@uppmax.uu.se.
Update 2017-04-18: This issue should now be fixed.
-
Funk-accounts and new certificates
Some of the shared funk-accounts used on Irma and Milou might stop working due to the IP-address change.
-
Maintenance window Wednesday 2017-04-05 -- finished
-
Smog will be decommissioned on Wednesday 5th of April
Smog will be decommissioned on Wednesday 5th of April. As previously mentioned, the SNIC Cloud Team is currently working on bringing up a new cloud to replace Smog and join the other two regions in the SNIC Science Cloud project.
For questions, please contact support@cloud.snic.se (and not the UPPMAX support queues).
-
Rackham2, one of Rackham's login nodes, got into problems -- now fixed
-
Maintenance window for Bianca Wednesday 2017-03-22 -- finished
-
Problem with file permissions in certain projects
-
Poor performance using Intel MPI on Rackham
We have identified performance issues when using Intel MPI on Rackham. In some cases you see a 10x slowdown (or worse) with Intel MPI compared to Open MPI. We are investigating this issue and hope to have it solved soon. For now, please use Open MPI; see the sketch below.
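As a temporary workaround, a minimal job script built on Open MPI could look like the following sketch. The module names and project ID are placeholders, so please check "module avail" on Rackham for the exact versions installed there:

    #!/bin/bash
    #SBATCH -A snic2017-x-xx     # placeholder project ID
    #SBATCH -p node
    #SBATCH -N 2
    #SBATCH -t 01:00:00
    # Load a GCC + Open MPI toolchain instead of Intel MPI (module names are assumptions)
    module load gcc openmpi
    # mpirun picks up the Slurm allocation and launches the MPI ranks
    mpirun ./my_mpi_program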
-
Fixed: "Project p123456 may not run jobs on this cluster (rackham)"
An issue exists on Rackham affecting projects with names of the form "p123456". These projects are not allowed to run because their monthly core allocation is incorrectly set to 0 hours. We are investigating why this happens.
Update 2017-03-10: The issue should now be fixed.
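If you want to verify that your project now has a non-zero monthly core-hour allocation, the projinfo tool on the login nodes can show it (the project ID below is the same placeholder form as above):

    # Show allocation and usage for a project
    projinfo p123456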
-
Rackham is now open for all users
All active Tintin projects (except Tintin-Fysast1; please see below) have been migrated to Rackham. All UPPMAX users should now have access to Rackham.
-
Rackham will soon be open for all users
Many Tintin users have missed that Rackham will replace Tintin. We are currently migrating all projects from Tintin to Rackham, and when this is done, all users will get access to Rackham. We will announce this by email and on our homepage.
-
Maintenance window Wednesday 2017-03-01 -- finished
-
Today we decommission Tintin
The 1st of March 2017 is the day we decommission Tintin. It is replaced by the Rackham cluster, and all projects on Tintin will be moved there.
-
Creation of new UPPMAX user accounts will be delayed
-
Delayed approval of Account Requests -- fixed
We have identified a problem with the UPPMAX Account Request system which unfortunately causes some delay before you can log in to UPPMAX. We hope to complete the registrations this week. You do not have to resubmit your account requests.
-
Problem sending in support tickets using support@uppmax.uu.se -- fixed
There is currently a problem sending in support tickets to support@uppmax.uu.se. We are investigating and hope to have it fixed soon.
-
Rackham is now available
We are happy to announce that UPPMAX's cluster Rackham is now available!
-
Downtime due to power outage
Milou, Tintin, and Fysast-1 are back in production. Bianca is back in test production. Still working on Smog.
-
Milou2 now back again
The degraded RAID is now fixed.
-
Milou-f rebooted Tuesday afternoon
Lustre file system problem
-
Milou1 rebooted (Tuesday 14:00)
It was totally unresponsive. Lustre file system problem (the file system will be decommissioned tomorrow).
-
Milou1 rebooted -- now with a limited number of inodes on /scratch (/tmp)
There is now a quota on the number of files in /scratch (/tmp): the maximum is 100000 per user.
If you need more, you have to use a compute node.
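A rough way to see how close you are to this limit is to count the files you own under /scratch on the node; this is only a sketch and may take a while if the directory holds many files:

    # Count the number of files owned by you under /scratch on the current node
    find /scratch -user "$USER" 2>/dev/null | wc -l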
-
Gulo (including glob directory) decommissioned January 18
-
Milou2 down for reinstallation at 13:50 (now waiting for spare parts)
Milou2 has not worked well for a while. We will give it a fresh restart.
-
Milou1 rebooted Thursday at 11:00
Milou1 was rebooted Thursday at 11:00 due to problems with the Lustre file system.
-
Fysast1 down Wednesday
Fysast1 went down Wednesday before lunch.
One power supply broke and the fuses for half the cluster were blown.
-
Milou1 rebooted Wednesday at 11:00
Milou1 was rebooted Wednesday at 11:00 due to problems with the Lustre file system.
-
Maintenance window on Irma Wednesday 2017-01-11 -- FINISHED
-
Maintenance window on Mosler/topolino Wednesday Jan 11 -- FINISHED
We have a maintenance window coming up on January 11 from 9:00. Due to physical work, we need to shut down the system during the maintenance window this time so jobs won't run.
We will also likely be required to rebuild virtual nodes and will probably lose information about queued jobs.
Update 21:10: Maintenance is now finished and the system should be available again.
-
Poor performance on Milou and Tintin
-
Maintenance window Wednesday 2017-01-04 -- FINISHED
-
Milou2 rebooted Friday morning
Milou2 was rebooted at 06:01 due to a problem with the Lustre file system.