- ready-to-use and comfortable ZFS storage appliance for iSCSI/FC, NFS and SMB
- Active Directory support with snapshots as Windows "Previous Versions"
- user-friendly web GUI that includes all functions for a sophisticated NAS or SAN appliance
- commercial use allowed
- no capacity limit
- free download for end users
- increased GUI performance / background agents
- bugfixes, updates and access to bugfix releases
- extensions such as comfortable ACL handling, disk and realtime monitoring, or remote replication
- appliance diskmap, security and tuning (Pro complete)
- redistribution/bundling/setup on customer demand optional; please request a quotation
Napp-it ToGo VM Download
Last LTS: OVA template with OmniOS 151030, for ESXi 6.7 and newer
Previous stable: OVA template with OmniOS 151032, for ESXi 6.x
Last stable: OVA template with OmniOS 151034, for ESXi 6.7 and newer
Napp-In-One (All-In-One = ESXi + virtualized ZFS SAN/NAS in one server)
Please update OmniOS via pkg update and napp-it in menu About > Update
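The OmniOS part of that update can be sketched with the standard IPS tooling (the napp-it update itself is done in the web GUI, menu About > Update):

```shell
# Update all OmniOS packages to the newest release of the active publisher.
# pkg normally creates a new boot environment, so the old one stays
# available as a rollback target.
pkg update -v

# Show boot environments; if a new one was created, activate it by rebooting
beadm list
init 6
```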
Download option 2: napp-it ToGo VM / ESXi OVA templates
Please read the attached readme.txt
ESXi 6: Menu Create/Register VM > Deploy a virtual machine
Set up napp-In-One with our preconfigured ZFS appliance
- Verify that your mainboard, BIOS and CPU support vt-d (best: a mainboard with an Intel server chipset and a Xeon)
- Set all onboard SATA ports to AHCI and enable vt-d in the BIOS settings
- Disable Active State Power Management in the BIOS settings (it can cause problems on some SuperMicro boards)
- Insert a second SAS controller like an LSI 9207 (best) or 9211, or an IBM M1015 flashed to IT firmware
- Add a boot disk to an onboard SATA port (best a 40+ GB SSD)
- Optionally use an external SATA enclosure like the
http://www.raidon.com.tw/RAIDON2013/enweb/en product web/en intank/en iR2420-2s-s2.html
that allows hot mirror/clone/backup of boot disks. As an option, you can
use Clonezilla to clone boot disks.
Tip: Use the base VM for storage only and avoid a complex setup. Save the configured appliance as a template. On problems, you can simply re-import your storage VM and be up again within minutes. Use VMs on ZFS for all services that need a special setup.
- Install ESXi to your first SATA boot disk (a USB stick is an option here, but I prefer combined installations on an SSD via SATA)
- Connect to your ESXi box from a Windows machine via browser: https://ip-of-your-box
- Install the vSphere client on Windows (you can download it via browser from your ESXi server) and connect to your ESXi box via vSphere
- Enable pass-through within ESXi for your SAS controller
- Import the downloaded napp-it OVA template
- Boot up your VM, enter a root password, and run ifconfig to get the IP
- Manage your appliance remotely via any web browser (http://serverip:81);
set up a fixed IP and prefer the vmxnet3 vnic (I had stability problems with e1000 on ESXi).
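Setting a fixed IP on OmniOS can be sketched with ipadm; the interface name vmxnet3s0 and all addresses below are assumptions for illustration (check your actual link name with dladm show-link):

```shell
# Show available links, then create an IP interface on the vmxnet3 vnic
dladm show-link
ipadm create-if vmxnet3s0

# Assign a static IPv4 address (example address, adjust to your network)
ipadm create-addr -T static -a 192.168.1.100/24 vmxnet3s0/v4

# Persistent default route and a DNS server (example gateway)
route -p add default 192.168.1.1
echo "nameserver 192.168.1.1" >> /etc/resolv.conf
```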
- Share NAS storage (use SMB for Windows-compatible file sharing)
- Share SAN storage (use NFS); share this dataset also via SMB for easy access (snapshots, clone, backup)
- In the ESXi settings, add shared NFS storage and connect the NFS SAN share
- Create new VMs on this NFS datastore
If you reboot ESXi, be aware of some delay until these VMs are booted (ESXi must wait until the storage VM is up), but they connect and come up automatically with NFS when you enable autostart for these VMs.
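The sharing steps above can be sketched as follows; the pool name tank, the dataset names and the IP address are assumptions for illustration:

```shell
# On the storage VM: create datasets and set the ZFS share properties
zfs create tank/nfs
zfs set sharenfs=on tank/nfs          # SAN storage for ESXi via NFS
zfs set sharesmb=on tank/nfs          # same dataset via SMB for easy access
zfs create tank/smb
zfs set sharesmb=name=data tank/smb   # NAS storage for Windows clients

# On the ESXi host (ESXi shell): mount the NFS share as a datastore
esxcli storage nfs add -H 192.168.1.100 -s /tank/nfs -v nfs-datastore
```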
Optimal ZFS Pool layout for ESXi datastores
- With several VMs you get a lot of concurrent small reads and
writes. For good performance with such a workload, you need good I/O
values. Best is to build a pool from mirrored vdevs (2-way mirrors, or
3-way mirrors for extra security/performance). Avoid Raid-Z configs: they
may have good sequential performance, but their I/O is the same as a single
disk (all heads must be positioned on every read/write).
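A pool of mirrored vdevs can be sketched like this; pool and disk names are examples, not from the original:

```shell
# Pool of two mirrored vdevs: I/O scales with the number of vdevs,
# so this gives roughly twice the IOPS of a single mirror
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Grow capacity and IOPS later by adding another mirror vdev
zpool add tank mirror c1t4d0 c1t5d0
zpool status tank
```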
- When ESXi writes data to an NFS datastore, it requests sync writes
for security reasons. The default setting of ZFS is to honor this and
do sync writes only. This is a very secure default but can
lower performance dramatically compared to normal writes. Sometimes
regular writes are 100x faster than sync writes, where each single write
must be done and committed immediately before the next one can occur (very
heavy I/O with small data, bad for every file system).
You now have two options:
1. Ignore sync write demands (= disable the sync property on your NFS-shared dataset), with the effect of data loss on power loss.
2. Add an extra ZIL device (Slog) to log all sync writes. They can then be written to disk sequentially at full speed like normal writes.
If you add a ZIL, you must use one with high write performance and low latency.
Usually SSDs are bad at this. Best are DRAM-based ZIL drives like a ZeusRAM or a DDRdrive; sadly, they are really expensive. But a good SSD like an Intel S3700 helps a lot.
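The two options can be sketched as follows; dataset and device names are examples:

```shell
# Option 1: ignore sync write demands (fast, but writes in flight
# are lost on power loss)
zfs set sync=disabled tank/nfs

# Option 2: keep sync writes and add a dedicated Slog device instead
zfs set sync=standard tank/nfs
zpool add tank log c2t0d0                  # single Slog device
zpool add tank log mirror c2t0d0 c2t1d0    # or a mirrored Slog for extra safety
```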
- Add pools built from ZFS Raid-Z1-3 vdevs for backup or if you need an SMB filer.
- Best: use a mirror or Raid-Z of fast enterprise SSDs like the Intel S3610 or S3700 for VMs without an Slog, as they are fast even under continuous load and offer powerloss protection. The Intel 730 is a cheaper option for SoHo/lab use.
Read these manuals:
how to set up napp-it
how to set up napp-In-One
Manuals for Oracle Solaris 11 Express
After download you can optionally update
- OmniOS to the newest release, see http://omnios.omniti.com/wiki.php/
- napp-it to the newest release (napp-it menu About > Update)
For OmniOS/OI/Solaris 11 Express: download the Oracle manuals for Solaris 11 Express,
google them,
or check http://archive.today/snZaS
(the Solaris 11 Express downloads are working; the links refer to the new Solaris 11).
Oracle Solaris 11 Express and its free forks OmniOS/OI are nearly identical apart from encryption.