ssh fatal: buffer_get: trying to get more bytes than in buffer

The issue

You are using ssh to log in to a server with ssh key authentication and the connection is closed. On the server you are logging into, the syslog shows messages like :-

Oct 17 11:30:02 myserver sshd[27687]: [ID 800047 auth.crit] fatal: buffer_get: trying to get more bytes than in buffer

The fix

Check your authorized_keys file on the remote server with ssh-keygen -l :-

ssh-keygen -l -f ~/.ssh/authorized_keys
buffer_get: trying to get more bytes than in buffer

The above shows there is at least one key in the file that is in the wrong format, usually because it has been split over several lines rather than being one long line. (Note it could be any key in the file, not necessarily the one you are connecting with.) Once you have fixed the key, confirm with ssh-keygen that all is well; it should return an MD5 fingerprint for the file.

ssh-keygen -l -f authorized_keys
md5 1024 5d:35:7e:ad:3d:e6:70:6d:6f:1d:76:1a:46:ee:c1:c9 authorized_keys
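
If it is not obvious which key is broken, one quick check (a sketch, assuming plain ssh-rsa or ssh-dss entries with no options prefix) is to compare the number of lines in the file with the number of lines that start with a key type; any extra lines are likely fragments of a wrapped key :-

# a mismatch between these two counts suggests a key has been wrapped onto extra lines
wc -l < ~/.ssh/authorized_keys
grep -c '^ssh-' ~/.ssh/authorized_keys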

Now retry your ssh access
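
Retrying with verbose output makes it easy to see the key being offered and accepted (the user and host names here are just placeholders) :-

# -v prints the authentication steps so you can see which identity is used
ssh -v user@myserver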

Discover which control LDOM a guest LDOM is on

If you are on a Solaris LDOM and you want to find out what the Control LDOM is :-

virtinfo -a
Domain role: LDoms guest I/O
Domain name: myserver-p1
Domain UUID: 06a4456da-76e0-4aa9-a0ef-ebc64ed0aada
Control domain: mycontrolldom
Chassis serial#: 1223BEZ6RRE

So the above shows this is a guest LDOM called myserver-p1 and its Control LDOM is mycontrolldom

You can also use virtinfo -c to return just the Control domain.
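
If you want the Control domain name on its own in a script, one option (a simple sketch that parses the virtinfo -a output shown above) is :-

# print only the name of the control domain from the full virtinfo output
virtinfo -a | awk -F': ' '/^Control domain:/ {print $2}'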

Remove a legacy_run from SMF on Solaris 10

First remove the scripts from /etc/init.d and /etc/rc3.d etc. The legacy entry will still show up in svcs :-

root@mydb # svcs -a | grep -i oracle
legacy_run Feb_28 lrc:/etc/rc3_d/S99Oracle_Listener
root@mydb  #
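
Removing the RC script itself might look something like this (the paths and script names here are assumed from the svcs output above; check what they are actually called on your system) :-

# remove the rc3.d start script and, if no other run level uses it, its init.d source
rm /etc/rc3.d/S99Oracle_Listener
rm /etc/init.d/Oracle_Listener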

Now we want to remove it from SMF. Note that once you have removed the RC scripts, rebooting the server also means SMF won’t pick the entry up again, but if you don’t want to reboot you can do the following :-

root@mydb # svccfg -s smf/legacy_run
svc:/smf/legacy_run> listpg *
rc2_d_S20sysetup framework NONPERSISTENT
rc2_d_S70uucp framework NONPERSISTENT
rc2_d_S72autoinstall framework NONPERSISTENT
rc2_d_S73cachefs_daemon framework NONPERSISTENT
rc2_d_S89PRESERVE framework NONPERSISTENT
rc2_d_S95lwact framework NONPERSISTENT
rc2_d_S98deallocate framework NONPERSISTENT
rc3_d_S16boot_server framework NONPERSISTENT
rc3_d_S52imq framework NONPERSISTENT
rc3_d_S84appserv framework NONPERSISTENT
rc3_d_S85dsmcsched framework NONPERSISTENT
rc3_d_S99Oracle_Listener framework NONPERSISTENT
svc:/smf/legacy_run> delpg rc3_d_S99Oracle_Listener
svc:/smf/legacy_run> exit
root@mydb #

root@mydb # svcs -a | grep -i oracle
root@mydb #
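
The same property group can also be deleted non-interactively (assuming the same legacy entry name as above) :-

# delete the legacy_run property group in one command, then confirm it has gone
svccfg -s svc:/smf/legacy_run delpg rc3_d_S99Oracle_Listener
svcs -a | grep -i oracle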

Solaris FTP chroot on Netapp mounted filesystem

When setting up Solaris chroot FTP where the user’s home directory is on an NFS-mounted Netapp filesystem, you may encounter an error when running ftpconfig :-

myhost# ftpconfig -d /input/jblogs

Updating directory /input/jblogs
ftpconfig: Error: Creation of devices in /input/jblogs/dev failed

Being able to run mknod to create devices (which ftpconfig does) requires the Netapp volume to be exported with setuid enabled.

Even though the mount command on Solaris appeared to show that setuid was set on the mount, the volume on the Netapp server had not been exported with setuid.
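
You can check what the client side reports (a quick sketch; /input is assumed to be the mount point here), but as noted above it may still show setuid even when the export on the filer does not allow it :-

# show the mount options the Solaris client reports for the NFS mount
mount | grep /input
nfsstat -m /input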

Once the export has been changed on the Netapp you don’t have to unmount and remount the filesystem.

For security you should turn setuid off again once you have finished running ftpconfig.