Napp-it free is a ready-to-use and comfortable ZFS storage server with a free web GUI
that includes all functions for an advanced NAS or SAN.
There is no commercial usage restriction and no storage limit. In addition there are free add-ons such as
AFP, AMP, Baikal, Proftp, Mediatomb, Owncloud, PHPVirtualbox, Pydio or Serviio.
If you want to use napp-it Pro with updates to the latest bug fixes and paid extensions like comfortable
ACL management, disk and realtime monitoring or remote replication, request an evaluation key.
Async highspeed/ network replication (Solarish and Linux)
- Async Replication between appliances (near realtime) with remote appliance management and monitoring
- Based on ZFS send/ receive and snapshots
- After an initial full transfer, only modified datablocks are transferred
- High speed transport via (buffered on OmniOS/OI) netcat
(unencrypted transfer, intended for secure LANs)
- Replication always pulls data. You only need a key on the target server, not on the sources
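The pull-style transport above can be sketched as a plain zfs send piped through netcat. This is a dry-run illustration, not napp-it's actual internals: the dataset names (tank/data, backup/data), the host (target-host) and the port (50001) are all assumptions, and the script only prints the commands instead of running them.

```shell
#!/bin/sh
# Dry-run sketch of the pull-style netcat transport.
# All names (tank/data, backup/data, target-host, port 50001) are examples.
SNAP="tank/data@1234_repli_zfs_server2_nr_1"   # snapshot created by the job
PORT=50001                                     # netcat transfer port (shown in menu Jobs)

# Target side (pull): open a listener first and receive the stream.
RECV_CMD="nc -l -p $PORT | zfs receive -F backup/data"

# Source side: started remotely by the target via the port-81 web service.
SEND_CMD="zfs send $SNAP | nc target-host $PORT"

echo "target: $RECV_CMD"
echo "source: $SEND_CMD"
```

Note that netcat option syntax differs between implementations (OpenBSD nc expects `nc -l 50001` without `-p`), and the stream is unencrypted, which is why the text above restricts this transport to secure LANs or VPN links.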
How to setup
- You need a licence key on a target server only (you can request evaluation keys)
- Register the key: copy/paste the whole key-line into menu extension-register, example:
replicate h:server2 - 20.06.2012::VcqmhqsmVsdcnetqsmVsTTDVsK
- Group your appliances with menu extension - appliance group. Click on ++ add to add members to the group
- Create a replication job with menu Jobs - replicate - create replication job
- Start the job manually or timer based
- After the initial transfer (this can take some time), all following transfers copy only modified blocks
- You can setup transfers down to every minute (near realtime)
- If one of your servers is on an unsecure network like the Internet: build a secure VPN tunnel between the appliances
- If you use a firewall with deep inspection: this may block netcat; set a firewall rule to allow port 81 and the replication ports
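On the Linux side, such a firewall exception could look like the iptables sketch below. This is a hedged example only: port 50001 stands in for the job's actual replication port (shown in menu Jobs), and on Solarish you would use ipfilter rules instead.

```shell
# Hedged example for a Linux appliance; port 50001 is illustrative.
iptables -A INPUT -p tcp --dport 81 -j ACCEPT      # napp-it web GUI / remote control
iptables -A INPUT -p tcp --dport 50001 -j ACCEPT   # netcat replication port (example)
```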
Use it for
- Highspeed inhouse replication on secure networks
- External replication over VPN links with fixed IPs and a common DNS server (or manual host entries)
How replication works
- On the first run, it creates a source snap jobid.._nr_1 and transfers the complete dataset over a netcat highspeed connection.
When the transfer is completed successfully, a target snap jobid.._nr_1 is created.
- The next replication run is incremental and based on this snap-pair. A source snap jobid.._nr_2 with the modified datablocks is created
and transferred. When the transfer is completed successfully, a target snap jobid.._nr_2 is created.
- And so on. Only modified datablocks are transferred, providing near realtime syncs when run every few minutes.
- If a replication fails for whatever reason, the source snap number is higher than the target snap number. This does not matter. The source snap is recreated on the next run.
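Underneath, this snap-pair scheme maps onto plain zfs send: a full stream on the first run, then incremental streams anchored at the last common snapshot. The dataset name and job id below are placeholders, and the commands are only printed, not executed.

```shell
#!/bin/sh
# Sketch of the snap-pair logic; tank/data and the job id are placeholders.
JOB="1234_repli_zfs_server2"

# Run 1: full stream of the first source snap.
RUN1="zfs send tank/data@${JOB}_nr_1"

# Run 2 and later: incremental stream from the last common pair (nr_1)
# to the newly created source snap (nr_2); only modified blocks are sent.
RUN2="zfs send -i tank/data@${JOB}_nr_1 tank/data@${JOB}_nr_2"

echo "$RUN1"
echo "$RUN2"
```

This is also why a failed run is harmless: as long as the snap-pair with the highest common number survives on both sides, the next incremental stream can always be rebuilt from it.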
In case of problems
- Check if basic communication/ remote control via webserver on port 81 is working.
delete/ rebuild the group or click on ZFS or snaps beside a hostname in menu extension - appliance group
- Check if you have a snap-pair with the same highest jobid_nr_n number on source and target
(if you have deleted it, you must restart with an initial sync). Never delete the most recent snapshot on the target.
Later snapshots on the source can be safely deleted, but do not delete the source snapshot that matches the highest snapshot on the target.
- Check if you have enough space on source and target (check also reservations and quota settings)
- If the receiver and sender start but no data is transferred, check
- for network or routing problems
- if a firewall blocks the netcat transfer port (the port is shown in menu Jobs)
- for pool/ filesystem problems; try a reboot, check system and fault logs
- If you need to restart a replication and you have enough space, rename the old target dataset and delete it after a successful new replication.
- Use menu jobs - replicate - monitor on both sides to monitor transfers
- If you delete a replication job, you may need to delete remaining snaps of this job manually.
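Several of the checks above can be run from a shell on either appliance. The sketch below only prints the commands it would use; the hostname, pool and job id are placeholders, not real napp-it values.

```shell
#!/bin/sh
# Hedged troubleshooting helpers; server2, tank/data and the job id are examples.
HOST="server2"
JOB="1234_repli_zfs_server2"

# 1. Is the napp-it webserver (remote control) reachable on port 81?
PORT_CHECK="nc -z -w 5 $HOST 81"

# 2. Newest replication snap of this job on the local side; compare its
#    number with the other appliance to confirm a matching snap-pair.
SNAP_CHECK="zfs list -t snapshot -o name -s creation | grep ${JOB} | tail -1"

# 3. Free space, reservation and quota that could block a receive.
SPACE_CHECK="zfs get available,reservation,quota tank/data"

printf '%s\n' "$PORT_CHECK" "$SNAP_CHECK" "$SPACE_CHECK"
```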