inotify over nfs

Martin DeMello

I had a happily-running inotify daemon monitoring file creation and
deletion that broke when we moved to a multiple machine + NFS setup.
Apparently inotify doesn't work over NFS, but it *does* if you have
the watcher on the same machine as the process creating or deleting
the files, which is actually good enough for me.

So here's my current idea - I just wanted to run it by the list since
I know people here have done related stuff (Ara, I'm looking at you
:)):

* Have an inotify process per machine that watches for file creation
and deletion, and sends the messages to a Drb server on a single box.
* Have the Drb server field messages and put them into an in-memory queue
* Have a separate thread in the server program that wakes up every 15s
or so and drains the queue if needed
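
Concretely, the server side I'm picturing is something like this (untested
sketch - EventSink, the push method and the port are just placeholder names):

require 'drb'
require 'thread'

QUEUE = Queue.new

# front object the per-machine watchers call over DRb
class EventSink
  def push(host, path, action)
    QUEUE << [host, path, action]
  end
end

DRb.start_service('druby://0.0.0.0:7777', EventSink.new)

# drain thread: wake up every 15s and process whatever has accumulated
Thread.new do
  loop do
    sleep 15
    until QUEUE.empty?
      host, path, action = QUEUE.shift
      puts "#{host}: #{action} #{path}"   # real handling would go here
    end
  end
end

DRb.thread.join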

Any potential pitfalls? Any better way of doing this?

martin
 
ara.t.howard

> I had a happily-running inotify daemon monitoring file creation and deletion
> that broke when we moved to a multiple machine + NFS setup. Apparently
> inotify doesn't work over NFS, but it *does* if you have the watcher on the
> same machine as the process creating or deleting the files, which is
> actually good enough for me.

how is this working for you? i think when i looked into inotify i found that
messages would get sent twice, or not at all, or out of order, or something like
that... i forget the exact issues. have you seen any? btw, could i/we have
a look at your code?

> So here's my current idea - I just wanted to run it by the list since
> I know people here have done related stuff (Ara, I'm looking at you
> :)):
>
> * Have an inotify process per machine that watches for file creation and
> deletion, and sends the messages to a Drb server on a single box.
> * Have the Drb server field messages and put them into an in-memory queue
> * Have a separate thread in the server program that wakes up every 15s
> or so and drains the queue if needed
>
> Any potential pitfalls? Any better way of doing this?

does running the inotify process on the nfs server itself catch all events?
seems like it must?

so your idea is basically to run a watcher on every node that could create
files and coalesce events? interesting. seems like you could have some
issues if a node wasn't accounted for, i.e. it's a bit fragile.

another issue might crop up with silly names - when one node has a file open
and another deletes it, you get those .nfs12345 files - but i'm not sure if
the act of monitoring via inotify, or stat-ing from all those remote machines,
might affect each other... silly names provide a consistent view of the file
system, so that if

node b: opens a file

node a: rm's the file

node b: fstat on the open file handle (this guy needs a silly name to exist)

so you might make sure that the act of monitoring is not going to create events
to monitor ;-) i don't think it will, but nfs is weird...
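
fwiw, if you do end up watching nfs-backed directories you probably want to
skip the silly-rename files explicitly - something tiny like this (the helper
name is made up):

# .nfsXXXX files are nfs silly-renames, not real user files
def silly_rename?(name)
  name =~ /\A\.nfs/
end

# in the event loop:  next if silly_rename?(ev.name)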

so, the other idea is using dirwatch, which is what we use in our near-real-time
satellite ingest processing system in exactly this way: it watches an nfs
directory and triggers events. the events themselves are simply jobs
submitted to a queue (ruby queue, rq) which itself works on nfs, and all nodes
pull jobs from it. i use lockfile and/or posixlock to provide nfs-safe mutual
exclusion, and the whole system requires zero networking except for nfs. this
makes it really easy to get by sysads in today's security environment, plus
the whole thing is userland so i really don't need sysad help at all. a
__big__ perk is that nfs, if mounted hard, simply hangs processes if it goes
away, so we can reboot a cluster and all nfs related stuff - dirwatch, rq, and
jobs - just hangs, even for an extended reboot followed by a 12 hr fsck. have
you looked at dirwatch? what kind of events are you triggering? do they need
to be distributed events or local to the node doing the monitoring?
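
the lockfile part is just this kind of thing (from memory, so check the gem
docs - the lock path here is made up):

require 'rubygems'
require 'lockfile'

# lockfile creates and removes a separate .lock file, which is nfs-safe
# (flock over nfs is not), so every node can share the same lock path
Lockfile.new('/nfs/shared/queue/queue.lock', :retries => nil) do
  # exactly one process across the whole cluster runs this at a time,
  # e.g. pulling the next job off the queue
end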

sorry if this message is a bit all over the place - i'm trying to read to my
kid at the same time!

kind regards.


-a
 
ara.t.howard

> I had a happily-running inotify daemon monitoring file creation and
> deletion that broke when we moved to a multiple machine + NFS setup.
> Apparently inotify doesn't work over NFS, but it *does* if you have
> the watcher on the same machine as the process creating or deleting
> the files, which is actually good enough for me.
>
> So here's my current idea - I just wanted to run it by the list since
> I know people here have done related stuff (Ara, I'm looking at you
> :)):
>
> * Have an inotify process per machine that watches for file creation
> and deletion, and sends the messages to a Drb server on a single box.
> * Have the Drb server field messages and put them into an in-memory queue
> * Have a separate thread in the server program that wakes up every 15s
> or so and drains the queue if needed
>
> Any potential pitfalls? Any better way of doing this?
>
> martin

one other quick thought - postgresql has an async notification mechanism that
might be useful in your arch.
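
e.g. the listening side is roughly this (sketch with the pg gem - channel and
db names are made up):

require 'rubygems'
require 'pg'

conn = PG.connect(:dbname => 'webapp_production')
conn.exec("LISTEN file_events")

# blocks until some other session runs: NOTIFY file_events
loop do
  conn.wait_for_notify do |channel, pid|
    puts "got #{channel} from backend #{pid}"
  end
end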

cheers.

-a
 
Martin DeMello

> how is this working for you? i think when i looked into inotify i found that
> messages would get sent twice, or not at all, or out of order, or something like
> that... i forget the exact issues. have you seen any? btw, could i/we have
> a look at your code?

I've had a few problems with inotify, but I usually assume it's
something I'm doing wrong (seeing the number of projects that rely on
it). Usually when I try to hammer at it by, say, touching a thousand
files in a loop, it gets them all. Out-of-order isn't really an issue
for me; it's just not-at-all that's problematic.
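
For reference, the hammering is nothing fancier than this (the directory path
is made up):

require 'fileutils'

# crude stress test: create a burst of files under a watched directory
# and check the watcher reports every one of them
dir = '/home/testuser/stress'
FileUtils.mkdir_p(dir)
1000.times { |i| FileUtils.touch(File.join(dir, "file-#{i}")) }
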
> does running the inotify process on the nfs server itself catch all events?
> seems like it must?

Not tried that, since our systems guy said it would cause problems
when we moved to a distributed storage model.

> so your idea is basically to run a watcher on every node that could create
> files and coalesce events? interesting. seems like you could have some
> issues if a node wasn't accounted for, i.e. it's a bit fragile.

That's actually less of a problem, since we can start up the watcher
when a node is brought online, and have a cron job for the whole quis
custodiet ipsos custodes thing.
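
The cron side would be something small along these lines (pidfile and paths
are placeholders - the watcher doesn't actually write a pidfile yet):

# run from cron every few minutes: restart watcher.rb if it has died
pidfile = '/var/run/watcher.pid'
alive = File.exist?(pidfile) &&
        begin
          Process.kill(0, File.read(pidfile).to_i)
          true
        rescue Errno::ESRCH, Errno::EPERM
          false
        end
system('/usr/local/bin/watcher.rb &') unless alive
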
> another issue might crop up with silly names - when one node has a file open
> and another deletes it, you get those .nfs12345 files - but i'm not sure if
> the act of monitoring via inotify, or stat-ing from all those remote machines,
> might affect each other... silly names provide a consistent view of the file
> system, so that if
>
> node b: opens a file
>
> node a: rm's the file
>
> node b: fstat on the open file handle (this guy needs a silly name to exist)
>
> so you might make sure that the act of monitoring is not going to create events
> to monitor ;-) i don't think it will, but nfs is weird...

Now that's something I'd never have thought of. Will need to
experiment with it.

> so, the other idea is using dirwatch, which is what we use in our near-real-time
> satellite ingest processing system in exactly this way: it watches an nfs
> directory and triggers events. the events themselves are simply jobs [...]
> jobs just hang, even for an extended reboot followed by a 12 hr fsck. have
> you looked at dirwatch? what kind of events are you triggering? do they need
> to be distributed events or local to the node doing the monitoring?

I did look at dirwatch, since this is not a wheel I was keen on
reinventing (it's caused me as much grief as the rest of the app put
together, I think :)). However, I was worried about the fact that it's
polling - our use case is lots of files, with relatively few events.

Here's what we're doing exactly - we have a database-backed webapp
that mirrors an NFS-mounted /home, so that each user's files are
visible on his web page. There is also a cluster of application
servers that allow the user to run apps remotely. What inotify is
needed for is the case when files are created directly through those
apps and saved - those need to be added to the database. Since new
files can only be created by the applications or uploaded through the
web interface, it sounds reasonably safe to have a watcher on each app
node that communicates back to the webserver node with inotify events.

I don't think there'll be problems with files being created and
deleted simultaneously from different nodes, since our current
application architecture doesn't let this happen anyway, and there are
checks in place so that trying to add or delete the same file twice
won't create any issues.

martin
 
Martin DeMello

> how is this working for you? i think when i looked into inotify i found that
> messages would get sent twice, or not at all, or out of order, or something like
> that... i forget the exact issues. have you seen any? btw, could i/we have
> a look at your code?

Code inlined (not sure how the usenet and forum sides handle actual
attachments) - one watcher.rb per node and one process-inotify.rb
sitting on the server.

martin

--- watcher.rb ---
require 'inotify'
require 'find'
require 'drb'

# connect to the DRb listener on the webserver node
DRb.start_service
$www = DRbObject.new(nil, 'druby://www:7777')

# human-readable event names
EVENTS = {
  Inotify::CREATE => "created",
  Inotify::DELETE => "deleted"
}

# map inotify event masks to the methods invoked on the server side
ACTIONS = {
  Inotify::CREATE     => :from_inotify,
  Inotify::MOVED_TO   => :from_inotify,
  Inotify::MOVED_FROM => :delete_from_inotify,
  Inotify::DELETE     => :delete_from_inotify
}

HOME_WATCHES = {}
INOTIFY_WATCHES = {}

HOME = Inotify.new
INOTIFY = Inotify.new

# watch a single user's home directory
# (MOVED_FROM is included so the delete_from_inotify mapping above can fire)
def watch(user)
  dir = File.join('/home', user)
  begin
    wd = INOTIFY.add_watch(dir, Inotify::CREATE | Inotify::DELETE |
                                Inotify::MOVED_TO | Inotify::MOVED_FROM)
    INOTIFY_WATCHES[wd] = user
  rescue Exception => e
    puts "Skipping #{dir}: #{e}"
  end
end

# watch the home directory itself, so newly created users get picked up
HOME.add_watch("/home", Inotify::CREATE | Inotify::ISDIR)

Thread.new do
  HOME.each_event {|ev|
    p ["HOME", ev]
    if ev.mask == (Inotify::CREATE | Inotify::ISDIR)
      newusr = ev.name
      watch(newusr)
    end
  }
end

# watch all the existing user subdirectories
Dir['/home/*'].each {|dir|
  user = File.basename(dir)
  watch(user)
}

t = Thread.new do
  INOTIFY.each_event {|ev|
    # skip hidden files (this also covers the .nfsXXXX silly-rename files)
    unless ev.name =~ /^\./
      user = INOTIFY_WATCHES[ev.wd]
      str = ev.name
      action = ACTIONS[ev.mask]
      if action
        begin
          $www.incoming('resource', user, action, str)
        rescue Exception => e
          puts "!! exception: #{e}"
        end
      end
    end
  }
end

t.join
-----------------------------------

--- process-inotify.rb ---
#!/usr/bin/ruby

require 'rubygems'
require 'active_record'
require 'config/environment'
require 'app/models/user'
require 'app/models/resource'
require 'drb'
require 'inotify'
require 'find'
require 'thread'
require 'logger'
require 'yaml'

# queue incoming events from the per-node watchers

$incoming = Queue.new

# the front object the watchers talk to over DRb
class InotifyListener
  def incoming(*args)
    $incoming << args
  end
end

DRb.start_service("druby://:7777", InotifyListener.new)

LOG = Logger.new('log/inotify.log')

def log(msg)
  LOG.info(msg)
end

log "opening db connection..."
db_config = YAML::load(File.open("config/database.yml"))
ActiveRecord::Base.establish_connection(db_config['production'])
log "done!"

# drain the queue in a background thread
t = Thread.new do
  loop do
    if $incoming.empty?
      sleep 1
      next
    else
      begin
        signal = $incoming.shift
        log signal.inspect
        recv, username, action, file = signal
        user = User.find_by_login(username)
        unless user
          log "Unrecognized user #{username}: #{[recv, action, file].inspect}"
          next
        end
        if action
          begin
            Resource.send(action.to_sym, user, file)
          rescue Exception => e
            log "!! exception: #{e}"
          end
        end
      rescue Exception => e1
        log "!! exception: #{e1}"
      end
    end
  end
end

DRb.thread.join
------------------------------
 
