Running the ThinLinc 4.14.0 server on a SLES12 VM I managed to get tl-mount-localdrives to hang during session startup. This issue seems very racy and temperamental as I've had a hard time reproducing it. Nonetheless, out of sheer luck, I managed to reproduce it while running inside of pdb and managed to obtain the following stack trace:
> (Pdb) bt
> -> "__main__", mod_spec)
> -> exec(code, run_globals)
> -> pdb.main()
> -> pdb._runscript(mainpyfile)
> -> self.run(statement)
> -> exec(cmd, globals, locals)
> -> if 82 - 82: Iii1i
> -> oo00 = OoOo ( "127.0.0.1" , i1iIIi1I1iiI )
> -> self . mountcl . null ( )
> -> self . make_call ( mount_const . MOUNTPROC3_NULL )
> -> Iio0 , i1i = self . pipe . listen ( oOoo0 )
> -> self._pending[xid].wait(timeout)
> -> self._filled.wait(timeout)
> -> signaled = self._cond.wait(timeout)
> > /usr/lib64/python3.4/threading.py(290)wait()
> -> waiter.acquire()
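Reading the bottom frames: the caller blocks in `self._pending[xid].wait(timeout)`, which bottoms out in `Condition.wait()` at a bare `waiter.acquire()` (threading.py line 290). In CPython, `Condition.wait()` only calls `waiter.acquire()` with no arguments when `timeout` is `None`, which suggests no effective timeout reached the bottom of the chain, so the thread blocks forever waiting for an RPC reply that never arrives. A minimal sketch of that pattern (class and attribute names here are illustrative assumptions, not ThinLinc's actual code):

```python
import threading

class PendingReply:
    """Holds one outstanding RPC reply, keyed by XID in the caller."""

    def __init__(self):
        self._cond = threading.Condition()
        self._filled = False
        self._value = None

    def fill(self, value):
        # Called by the reply-reader thread when the response arrives.
        with self._cond:
            self._value = value
            self._filled = True
            self._cond.notify_all()

    def wait(self, timeout=None):
        # With timeout=None this blocks indefinitely if fill() is never
        # called -- the hang visible in the backtrace above.
        with self._cond:
            self._cond.wait_for(lambda: self._filled, timeout)
            return self._value if self._filled else None

reply = PendingReply()
# Nobody calls fill(): a finite timeout returns None instead of hanging.
print(reply.wait(timeout=0.1))
```

With a finite timeout the caller gets `None` back and can retry or fail the mount; with `timeout=None` and a lost reply, it hangs exactly as in the trace.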
The issue was observed with a fresh ThinLinc install on an otherwise clean system. Looking at the client and xinit logs, nothing looks out of the ordinary.

I see the same issue; I have observed it on both TL 4.14 (client and server) and TL 4.15 (client and server).
I am connecting to a number of systems, including the IU Research Desktop installation, the Cendio HPC installation on prem (tl.hpc.cendio.se), the Cendio general ThinLinc system (tl.cendio.se), as well as a system in the Oracle cloud and one that I maintain myself in AWS.
Only the tl.cendio.se system shows the issue with not being able to map storage. My client is Ubuntu 22.04.3 LTS.
Happy to provide more info as needed.
I got this issue again on SLES 12, and I saw that I had a bad/broken symlink to thindrives/ in my home directory after closing down the hung session. Unfortunately, I did not inspect the symlink in more detail; I only noticed that it looked weird when running 'ls'.
The faulty symlink was probably set up before the client hung, which might be a clue to where in the mounting process this bug happens.
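For the next reproduction, the symlink's target can be captured even when it is dangling. A demonstration with a throwaway dangling link (the `/tmp/thindrives-demo` path is made up for illustration; run the same commands against `~/thindrives` when the hang occurs):

```shell
# Create a deliberately dangling symlink to demonstrate the commands.
ln -s /nonexistent/target /tmp/thindrives-demo

ls -l /tmp/thindrives-demo       # shows the raw link target
readlink /tmp/thindrives-demo    # prints /nonexistent/target
# test -e follows the link, so it fails for a dangling symlink:
test -e /tmp/thindrives-demo || echo "dangling symlink"

rm /tmp/thindrives-demo
```

Recording the `readlink` output before closing the hung session would show what the mount code thought it was linking to.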