/*************************************************************************
 * (C) 2011 AITNET ltd - Sofia/Bulgaria -
 *  by Michael Pounov
 *
 * $Author: misho $
 * $Id: hooks.c,v 1.26 2014/01/28 16:58:33 misho Exp $
 *
 **************************************************************************
The ELWIX and AITNET software is distributed under the following
terms:

All of the documentation and software included in the ELWIX and AITNET
Releases is copyrighted by ELWIX - Sofia/Bulgaria

Copyright 2004 - 2014 by Michael Pounov.  All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software
   must display the following acknowledgement:
	This product includes software developed by Michael Pounov
	ELWIX - Embedded LightWeight unIX and its contributors.
4. Neither the name of AITNET nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY AITNET AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED.
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
*/
#include "global.h"
#include "hooks.h"


/*
 * sched_hook_init() - Default INIT hook
 *
 * @root = root task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_init(void *root, void *arg __unused)
{
	sched_root_task_t *r = root;

	if (!r)
		return (void*) -1;

#ifndef KQ_DISABLE
	r->root_kq = kqueue();
	if (r->root_kq == -1) {
		LOGERR;
		return (void*) -1;
	}
#else
	r->root_kq ^= r->root_kq;
	FD_ZERO(&r->root_fds[0]);
	FD_ZERO(&r->root_fds[1]);
#endif

	return NULL;
}

/*
 * sched_hook_fini() - Default FINI hook
 *
 * @root = root task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_fini(void *root, void *arg __unused)
{
	sched_root_task_t *r = root;

	if (!r)
		return (void*) -1;

#ifndef KQ_DISABLE
	if (r->root_kq > 2) {
		close(r->root_kq);
		r->root_kq = 0;
	}
#else
	FD_ZERO(&r->root_fds[1]);
	FD_ZERO(&r->root_fds[0]);
	r->root_kq ^= r->root_kq;
#endif

	return NULL;
}

/*
 * sched_hook_cancel() - Default CANCEL hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_cancel(void *task, void *arg __unused)
{
	sched_task_t *t = task;
#ifndef KQ_DISABLE
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };
#else
	sched_root_task_t *r = NULL;
	register int i;
#endif
#ifdef AIO_SUPPORT
	struct aiocb *acb;
#ifdef EVFILT_LIO
	register int i = 0;
	struct aiocb **acbs;
#endif	/* EVFILT_LIO */
#endif	/* AIO_SUPPORT */

	if (!t || !TASK_ROOT(t))
		return (void*) -1;
#ifdef KQ_DISABLE
	r = TASK_ROOT(t);
#endif

	switch (TASK_TYPE(t)) {
		case taskREAD:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_DELETE, 0, 0, (intptr_t) TASK_FD(t));
#else
			EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_DELETE, 0, 0, (void*) TASK_FD(t));
#endif
#else
			FD_CLR(TASK_FD(t), &r->root_fds[0]);

			/* optimize select */
			for (i = r->root_kq - 1; i > 2; i--)
				if (FD_ISSET(i, &r->root_fds[0]) || FD_ISSET(i, &r->root_fds[1]))
					break;
			if (i > 2)
				r->root_kq = i + 1;
#endif
			break;
		case taskWRITE:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_DELETE, 0, 0, (intptr_t) TASK_FD(t));
#else
			EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_DELETE, 0, 0, (void*) TASK_FD(t));
#endif
#else
			FD_CLR(TASK_FD(t), &r->root_fds[1]);

			/* optimize select */
			for (i = r->root_kq - 1; i > 2; i--)
				if (FD_ISSET(i, &r->root_fds[0]) || FD_ISSET(i, &r->root_fds[1]))
					break;
			if (i > 2)
				r->root_kq = i + 1;
#endif
			break;
		case taskALARM:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], (uintptr_t) TASK_DATA(t), EVFILT_TIMER, EV_DELETE, 0, 0, (intptr_t) TASK_DATA(t));
#else
			EV_SET(&chg[0], (uintptr_t) TASK_DATA(t), EVFILT_TIMER, EV_DELETE, 0, 0, (void*) TASK_DATA(t));
#endif
#endif
			break;
		case taskNODE:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_FD(t), EVFILT_VNODE, EV_DELETE, 0, 0, (intptr_t) TASK_FD(t));
#else
			EV_SET(&chg[0], TASK_FD(t), EVFILT_VNODE, EV_DELETE, 0, 0, (void*) TASK_FD(t));
#endif
#endif
			break;
		case taskPROC:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_PROC, EV_DELETE, 0, 0, (intptr_t) TASK_VAL(t));
#else
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_PROC, EV_DELETE, 0, 0, (void*) TASK_VAL(t));
#endif
#endif
			break;
		case taskSIGNAL:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_SIGNAL, EV_DELETE, 0, 0, (intptr_t) TASK_VAL(t));
#else
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_SIGNAL, EV_DELETE, 0, 0, (void*) TASK_VAL(t));
#endif
			/* restore signal */
			signal(TASK_VAL(t), SIG_DFL);
#endif
			break;
#ifdef AIO_SUPPORT
		case taskAIO:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_AIO, EV_DELETE, 0, 0, (intptr_t) TASK_VAL(t));
#else
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_AIO, EV_DELETE, 0, 0, (void*) TASK_VAL(t));
#endif
			acb = (struct aiocb*) TASK_VAL(t);
			if (acb) {
				if (aio_cancel(acb->aio_fildes, acb) == AIO_CANCELED)
					aio_return(acb);
				free(acb);
				TASK_VAL(t) = 0;
			}
#endif
			break;
#ifdef EVFILT_LIO
		case taskLIO:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_LIO, EV_DELETE, 0, 0, (intptr_t) TASK_VAL(t));
#else
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_LIO, EV_DELETE, 0, 0, (void*) TASK_VAL(t));
#endif
			acbs = (struct aiocb**) TASK_VAL(t);
			if (acbs) {
				for (i = 0; i < TASK_DATLEN(t); i++) {
					if (aio_cancel(acbs[i]->aio_fildes, acbs[i]) == AIO_CANCELED)
						aio_return(acbs[i]);
					free(acbs[i]);
				}
				free(acbs);
				TASK_VAL(t) = 0;
			}
#endif
			break;
#endif	/* EVFILT_LIO */
#endif	/* AIO_SUPPORT */
#ifdef EVFILT_USER
		case taskUSER:
#ifndef KQ_DISABLE
#ifdef __NetBSD__
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_USER, EV_DELETE, 0, 0, (intptr_t) TASK_VAL(t));
#else
			EV_SET(&chg[0], TASK_VAL(t), EVFILT_USER, EV_DELETE, 0, 0, (void*) TASK_VAL(t));
#endif
#endif
			break;
#endif	/* EVFILT_USER */
		case taskTHREAD:
#ifdef HAVE_LIBPTHREAD
			pthread_cancel((pthread_t) TASK_VAL(t));
#endif
			return NULL;
#if defined(HAVE_TIMER_CREATE) && defined(HAVE_TIMER_SETTIME)
		case taskRTC:
			timer_delete((timer_t) TASK_FLAG(t));
			schedCancel((sched_task_t*) TASK_RET(t));
			return NULL;
#endif	/* HAVE_TIMER_CREATE */
		default:
			return NULL;
	}

#ifndef KQ_DISABLE
	kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout);
#endif
	return NULL;
}

#ifdef HAVE_LIBPTHREAD
/*
 * sched_hook_thread() - Default THREAD hook
 *
 * @task = current task
 * @arg = pthread attributes
 * return: <0 errors and 0 ok
 */
void *
sched_hook_thread(void *task, void *arg)
{
	sched_task_t *t = task;
	pthread_t tid;
	sigset_t s, o;

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

	sigfillset(&s);
	pthread_sigmask(SIG_BLOCK, &s, &o);
	if ((errno = pthread_create(&tid,
			(pthread_attr_t*) arg,
			(void *(*)(void*)) _sched_threadWrapper, t))) {
		LOGERR;
		pthread_sigmask(SIG_SETMASK, &o, NULL);
		return (void*) -1;
	} else
		TASK_VAL(t) = (u_long) tid;

	if (!TASK_ISLOCKED(t))
		TASK_LOCK(t);

	pthread_sigmask(SIG_SETMASK, &o, NULL);
	return NULL;
}
#endif

/*
 * sched_hook_read() - Default READ hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_read(void *task, void *arg __unused)
{
	sched_task_t *t = task;
#ifndef KQ_DISABLE
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };
#else
	sched_root_task_t *r = NULL;
#endif

	if (!t || !TASK_ROOT(t))
		return (void*) -1;
#ifdef KQ_DISABLE
	r = TASK_ROOT(t);
#endif

#ifndef KQ_DISABLE
#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_ADD | EV_CLEAR, 0, 0, (intptr_t) TASK_FD(t));
#else
	EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_ADD | EV_CLEAR, 0, 0, (void*) TASK_FD(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#else
	FD_SET(TASK_FD(t), &r->root_fds[0]);
	if (TASK_FD(t) >= r->root_kq)
		r->root_kq = TASK_FD(t) + 1;
#endif

	return NULL;
}

/*
 * sched_hook_write() - Default WRITE hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_write(void *task, void *arg __unused)
{
	sched_task_t *t = task;
#ifndef KQ_DISABLE
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };
#else
	sched_root_task_t *r = NULL;
#endif

	if (!t || !TASK_ROOT(t))
		return (void*) -1;
#ifdef KQ_DISABLE
	r = TASK_ROOT(t);
#endif

#ifndef KQ_DISABLE
#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_ADD | EV_CLEAR, 0, 0, (intptr_t) TASK_FD(t));
#else
	EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_ADD | EV_CLEAR, 0, 0, (void*) TASK_FD(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if
 (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#else
	FD_SET(TASK_FD(t), &r->root_fds[1]);
	if (TASK_FD(t) >= r->root_kq)
		r->root_kq = TASK_FD(t) + 1;
#endif

	return NULL;
}

/*
 * sched_hook_alarm() - Default ALARM hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_alarm(void *task, void *arg __unused)
{
#ifndef KQ_DISABLE
	sched_task_t *t = task;
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

#ifdef __NetBSD__
	EV_SET(&chg[0], (uintptr_t) TASK_DATA(t), EVFILT_TIMER, EV_ADD | EV_CLEAR, 0,
			t->task_val.ts.tv_sec * 1000 + t->task_val.ts.tv_nsec / 1000000,
			(intptr_t) TASK_DATA(t));
#else
	EV_SET(&chg[0], (uintptr_t) TASK_DATA(t), EVFILT_TIMER, EV_ADD | EV_CLEAR, 0,
			t->task_val.ts.tv_sec * 1000 + t->task_val.ts.tv_nsec / 1000000,
			(void*) TASK_DATA(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#endif
	return NULL;
}

/*
 * sched_hook_node() - Default NODE hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_node(void *task, void *arg __unused)
{
#ifndef KQ_DISABLE
	sched_task_t *t = task;
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_FD(t), EVFILT_VNODE, EV_ADD | EV_CLEAR,
			NOTE_DELETE | NOTE_WRITE | NOTE_EXTEND | NOTE_ATTRIB |
			NOTE_LINK | NOTE_RENAME | NOTE_REVOKE, 0, (intptr_t) TASK_FD(t));
#else
	EV_SET(&chg[0], TASK_FD(t), EVFILT_VNODE, EV_ADD | EV_CLEAR,
			NOTE_DELETE | NOTE_WRITE | NOTE_EXTEND | NOTE_ATTRIB |
			NOTE_LINK | NOTE_RENAME | NOTE_REVOKE, 0, (void*) TASK_FD(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if
 (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#endif
	return NULL;
}

/*
 * sched_hook_proc() - Default PROC hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_proc(void *task, void *arg __unused)
{
#ifndef KQ_DISABLE
	sched_task_t *t = task;
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_PROC, EV_ADD | EV_CLEAR,
			NOTE_EXIT | NOTE_FORK | NOTE_EXEC | NOTE_TRACK, 0, (intptr_t) TASK_VAL(t));
#else
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_PROC, EV_ADD | EV_CLEAR,
			NOTE_EXIT | NOTE_FORK | NOTE_EXEC | NOTE_TRACK, 0, (void*) TASK_VAL(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#endif
	return NULL;
}

/*
 * sched_hook_signal() - Default SIGNAL hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
void *
sched_hook_signal(void *task, void *arg __unused)
{
#ifndef KQ_DISABLE
	sched_task_t *t = task;
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

	/* ignore signal */
	signal(TASK_VAL(t), SIG_IGN);

#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_SIGNAL, EV_ADD | EV_CLEAR, 0, 0, (intptr_t) TASK_VAL(t));
#else
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_SIGNAL, EV_ADD | EV_CLEAR, 0, 0, (void*) TASK_VAL(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#else
#if 0
	sched_task_t *t = task;
	struct sigaction sa;

	memset(&sa, 0, sizeof sa);
	sigemptyset(&sa.sa_mask);
	sa.sa_handler = _sched_sigHandler;
	sa.sa_flags =
 SA_RESETHAND | SA_RESTART;

	if (sigaction(TASK_VAL(t), &sa, NULL) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#endif	/* 0 */
#endif
	return NULL;
}

/*
 * sched_hook_user() - Default USER hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
#ifdef EVFILT_USER
void *
sched_hook_user(void *task, void *arg __unused)
{
#ifndef KQ_DISABLE
	sched_task_t *t = task;
	struct kevent chg[1];
	struct timespec timeout = { 0, 0 };

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

#ifdef __NetBSD__
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_USER, EV_ADD | EV_CLEAR, TASK_DATLEN(t), 0, (intptr_t) TASK_VAL(t));
#else
	EV_SET(&chg[0], TASK_VAL(t), EVFILT_USER, EV_ADD | EV_CLEAR, TASK_DATLEN(t), 0, (void*) TASK_VAL(t));
#endif
	if (kevent(TASK_ROOT(t)->root_kq, chg, 1, NULL, 0, &timeout) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	}
#endif
	return NULL;
}
#endif

/*
 * sched_hook_fetch() - Default FETCH hook
 *
 * @root = root task
 * @arg = unused
 * return: NULL error or !=NULL fetched task
 */
void *
sched_hook_fetch(void *root, void *arg __unused)
{
	sched_root_task_t *r = root;
	sched_task_t *task, *tmp;
	struct timespec now, m, mtmp;
#ifndef KQ_DISABLE
	struct kevent evt[1], res[KQ_EVENTS];
	struct timespec *timeout;
#else
	struct timeval *timeout, tv;
	fd_set rfd, wfd, xfd;
#endif
	register int i, flg;
	int en;
#ifdef AIO_SUPPORT
	int len, fd;
	struct aiocb *acb;
#ifdef EVFILT_LIO
	int l;
	register int j;
	off_t off;
	struct aiocb **acbs;
	struct iovec *iv;
#endif	/* EVFILT_LIO */
#endif	/* AIO_SUPPORT */

	if (!r)
		return NULL;

	/* get new task by queue priority */
	while ((task = TAILQ_FIRST(&r->root_event))) {
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_lock(&r->root_mtx[taskEVENT]);
#endif
		TAILQ_REMOVE(&r->root_event, task, task_node);
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_unlock(&r->root_mtx[taskEVENT]);
#endif
		task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
		TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
		return task;
	}
	while ((task = TAILQ_FIRST(&r->root_ready))) {
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
		TAILQ_REMOVE(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
		task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
		TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
		pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
		return task;
	}

#ifdef TIMER_WITHOUT_SORT
	clock_gettime(CLOCK_MONOTONIC, &now);

	sched_timespecclear(&r->root_wait);
	TAILQ_FOREACH(task, &r->root_timer, task_node) {
		if (!sched_timespecisset(&r->root_wait))
			r->root_wait = TASK_TS(task);
		else if (sched_timespeccmp(&TASK_TS(task), &r->root_wait, -) < 0)
			r->root_wait = TASK_TS(task);
	}

	if (TAILQ_FIRST(&r->root_timer)) {
		m = r->root_wait;
		sched_timespecsub(&m, &now, &mtmp);
		r->root_wait = mtmp;
	} else {
		/* set wait INFTIM */
		sched_timespecinf(&r->root_wait);
	}
#else	/* !TIMER_WITHOUT_SORT */
	if (!TAILQ_FIRST(&r->root_task) && (task = TAILQ_FIRST(&r->root_timer))) {
		clock_gettime(CLOCK_MONOTONIC, &now);

		m = TASK_TS(task);
		sched_timespecsub(&m, &now, &mtmp);
		r->root_wait = mtmp;
	} else {
		/* set wait INFTIM */
		sched_timespecinf(&r->root_wait);
	}
#endif	/* TIMER_WITHOUT_SORT */
	/* if any regular task is present, set NOWAIT */
	if (TAILQ_FIRST(&r->root_task))
		sched_timespecclear(&r->root_wait);

	if (r->root_wait.tv_sec != -1 && r->root_wait.tv_nsec != -1) {
#ifndef KQ_DISABLE
		timeout = &r->root_wait;
#else
		sched_timespec2val(&r->root_wait, &tv);
		timeout = &tv;
#endif	/* KQ_DISABLE */
	} else if (sched_timespecisinf(&r->root_poll))
		timeout = NULL;
	else {
#ifndef KQ_DISABLE
		timeout = &r->root_poll;
#else
		sched_timespec2val(&r->root_poll, &tv);
		timeout = &tv;
#endif	/* KQ_DISABLE */
	}

#ifndef KQ_DISABLE
	if ((en = kevent(r->root_kq, NULL, 0, res, KQ_EVENTS, timeout)) == -1) {
#else
	rfd = xfd = r->root_fds[0];
	wfd = r->root_fds[1];
	if ((en = select(r->root_kq, &rfd, &wfd, &xfd, timeout)) == -1) {
#endif	/* KQ_DISABLE */
		if (r->root_hooks.hook_exec.exception) {
			if (r->root_hooks.hook_exec.exception(r, NULL))
				return NULL;
		} else if (errno != EINTR)
			LOGERR;
		goto skip_event;
	}

	/* kevent dispatcher */
	now.tv_sec = now.tv_nsec = 0;
	/* Go and catch the cat into pipes ... */
#ifndef KQ_DISABLE
	for (i = 0; i < en; i++) {
		memcpy(evt, &res[i], sizeof evt);
		evt->flags = EV_DELETE;
		/* Put read/write task to ready queue */
		switch (res[i].filter) {
			case EVFILT_READ:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_read, task_node, tmp) {
					if (TASK_FD(task) != ((intptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove read handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREAD]);
#endif
					TAILQ_REMOVE(&r->root_read, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREAD]);
#endif
					if (r->root_hooks.hook_exec.exception && res[i].flags & EV_EOF) {
						if (r->root_hooks.hook_exec.exception(r, (void*) EV_EOF)) {
							task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
							TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
						} else {
							task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
							TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
						}
					} else {
						task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
						TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					}
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
			case EVFILT_WRITE:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_write, task_node, tmp) {
					if (TASK_FD(task) != ((intptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove write handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskWRITE]);
#endif
					TAILQ_REMOVE(&r->root_write, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskWRITE]);
#endif
					if (r->root_hooks.hook_exec.exception && res[i].flags & EV_EOF) {
						if (r->root_hooks.hook_exec.exception(r, (void*) EV_EOF)) {
							task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
							TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
						} else {
							task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
							TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
							pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
						}
					} else {
						task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
						TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					}
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
			case EVFILT_TIMER:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_alarm, task_node, tmp) {
					if ((uintptr_t) TASK_DATA(task) != ((uintptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove alarm handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskALARM]);
#endif
					TAILQ_REMOVE(&r->root_alarm, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskALARM]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
			case EVFILT_VNODE:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_node, task_node, tmp) {
					if (TASK_FD(task) != ((intptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) =
 res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove node handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskNODE]);
#endif
					TAILQ_REMOVE(&r->root_node, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskNODE]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
			case EVFILT_PROC:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_proc, task_node, tmp) {
					if (TASK_VAL(task) != ((uintptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove proc handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskPROC]);
#endif
					TAILQ_REMOVE(&r->root_proc, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskPROC]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
			case EVFILT_SIGNAL:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_signal, task_node, tmp) {
					if (TASK_VAL(task) != ((uintptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove signal handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskSIGNAL]);
#endif
					TAILQ_REMOVE(&r->root_signal, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskSIGNAL]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
#ifdef AIO_SUPPORT
			case EVFILT_AIO:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_aio, task_node, tmp) {
					acb = (struct aiocb*) TASK_VAL(task);
					if (acb != ((struct aiocb*) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove user handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskAIO]);
#endif
					TAILQ_REMOVE(&r->root_aio, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskAIO]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					fd = acb->aio_fildes;
					if ((len = aio_return(acb)) != -1) {
						if (lseek(fd, acb->aio_offset + len, SEEK_CUR) == -1)
							LOGERR;
					} else
						LOGERR;
					free(acb);
					TASK_DATLEN(task) = (u_long) len;
					TASK_FD(task) = fd;
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
#ifdef EVFILT_LIO
			case EVFILT_LIO:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_lio, task_node, tmp) {
					acbs = (struct aiocb**) TASK_VAL(task);
					if (acbs != ((struct aiocb**) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove user handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskLIO]);
#endif
					TAILQ_REMOVE(&r->root_lio, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskLIO]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					iv = (struct iovec*) TASK_DATA(task);
					fd = acbs[0]->aio_fildes;
					off = acbs[0]->aio_offset;
					/* iterate with j: the outer event index i must not be clobbered */
					for (j = len = 0; j < TASK_DATLEN(task); len += l, j++) {
						if ((iv[j].iov_len = aio_return(acbs[j])) == -1)
							l = 0;
						else
							l = iv[j].iov_len;
						free(acbs[j]);
					}
					free(acbs);
					TASK_DATLEN(task) = (u_long) len;
					TASK_FD(task) = fd;

					if (lseek(fd, off + len, SEEK_CUR) == -1)
						LOGERR;
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
#endif	/* EVFILT_LIO */
#endif	/* AIO_SUPPORT */
#ifdef EVFILT_USER
			case EVFILT_USER:
				flg = 0;
				TAILQ_FOREACH_SAFE(task, &r->root_user, task_node, tmp) {
					if (TASK_VAL(task) != ((uintptr_t) res[i].udata))
						continue;
					else {
						flg++;
						TASK_RET(task) = res[i].data;
						TASK_FLAG(task) = (u_long) res[i].fflags;
					}
					/* remove user handle */
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskUSER]);
#endif
					TAILQ_REMOVE(&r->root_user, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskUSER]);
#endif
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
				/* if match at least 2, don't remove resource of event */
				if (flg > 1)
					evt->flags ^= evt->flags;
				break;
#endif	/* EVFILT_USER */
		}

		if (kevent(r->root_kq, evt, 1, NULL, 0, &now) == -1) {
			if (r->root_hooks.hook_exec.exception) {
				if (r->root_hooks.hook_exec.exception(r, NULL))
					return NULL;
			} else
				LOGERR;
		}
	}
#else	/* end of kevent dispatcher */
	for (i = 0; i < r->root_kq; i++) {
		if (FD_ISSET(i, &rfd) || FD_ISSET(i, &xfd)) {
			flg = 0;
			TAILQ_FOREACH_SAFE(task, &r->root_read, task_node, tmp) {
				if (TASK_FD(task) != i)
					continue;
				else {
					flg++;
					TASK_FLAG(task) = ioctl(TASK_FD(task), FIONREAD, &TASK_RET(task));
				}
				/* remove read handle */
#ifdef HAVE_LIBPTHREAD
				pthread_mutex_lock(&r->root_mtx[taskREAD]);
#endif
				TAILQ_REMOVE(&r->root_read, task, task_node);
#ifdef HAVE_LIBPTHREAD
				pthread_mutex_unlock(&r->root_mtx[taskREAD]);
#endif
				if (r->root_hooks.hook_exec.exception) {
					if (r->root_hooks.hook_exec.exception(r, NULL)) {
						task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
						TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
					} else {
						task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
						TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					}
				} else {
					task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
			}
			/* if match equal to 1, remove resource */
			if (flg == 1)
				FD_CLR(i, &r->root_fds[0]);
		}

		if (FD_ISSET(i, &wfd)) {
			flg = 0;
			TAILQ_FOREACH_SAFE(task, &r->root_write, task_node, tmp) {
				if (TASK_FD(task) != i)
					continue;
				else {
					flg++;
					TASK_FLAG(task) = ioctl(TASK_FD(task), FIONWRITE, &TASK_RET(task));
				}
				/* remove write handle */
#ifdef HAVE_LIBPTHREAD
				pthread_mutex_lock(&r->root_mtx[taskWRITE]);
#endif
				TAILQ_REMOVE(&r->root_write, task, task_node);
#ifdef HAVE_LIBPTHREAD
				pthread_mutex_unlock(&r->root_mtx[taskWRITE]);
#endif
				if (r->root_hooks.hook_exec.exception) {
					if (r->root_hooks.hook_exec.exception(r, NULL)) {
						task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
						TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
					} else {
						task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
						TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
						pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
					}
				} else {
					task->task_type =
 taskREADY;
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
					pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
				}
			}
			/* if match equal to 1, remove resource */
			if (flg == 1)
				FD_CLR(i, &r->root_fds[1]);
		}
	}

	/* optimize select */
	for (i = r->root_kq - 1; i > 2; i--)
		if (FD_ISSET(i, &r->root_fds[0]) || FD_ISSET(i, &r->root_fds[1]))
			break;
	if (i > 2)
		r->root_kq = i + 1;
#endif	/* KQ_DISABLE */

skip_event:
	/* timer update & put in ready queue */
	clock_gettime(CLOCK_MONOTONIC, &now);

	TAILQ_FOREACH_SAFE(task, &r->root_timer, task_node, tmp)
		if (sched_timespeccmp(&now, &TASK_TS(task), -) >= 0) {
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_lock(&r->root_mtx[taskTIMER]);
#endif
			TAILQ_REMOVE(&r->root_timer, task, task_node);
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_unlock(&r->root_mtx[taskTIMER]);
#endif
			task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
			TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
		}

	/* put a regular-priority task on the ready queue if there is no ready task
	 * or the regular task has reached its maximum miss count */
	if ((task = TAILQ_FIRST(&r->root_task))) {
		if (!TAILQ_FIRST(&r->root_ready) || r->root_miss >= TASK_VAL(task)) {
			r->root_miss ^= r->root_miss;
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_lock(&r->root_mtx[taskTASK]);
#endif
			TAILQ_REMOVE(&r->root_task, task, task_node);
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_unlock(&r->root_mtx[taskTASK]);
#endif
			task->task_type = taskREADY;
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
			TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
			pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
		} else
			r->root_miss++;
	} else
		r->root_miss ^= r->root_miss;

	/* OK, let's get a ready task !!! */
	task = TAILQ_FIRST(&r->root_ready);
	if (!(task))
		return NULL;

#ifdef HAVE_LIBPTHREAD
	pthread_mutex_lock(&r->root_mtx[taskREADY]);
#endif
	TAILQ_REMOVE(&r->root_ready, task, task_node);
#ifdef HAVE_LIBPTHREAD
	pthread_mutex_unlock(&r->root_mtx[taskREADY]);
#endif
	task->task_type = taskUNUSE;
#ifdef HAVE_LIBPTHREAD
	pthread_mutex_lock(&r->root_mtx[taskUNUSE]);
#endif
	TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
#ifdef HAVE_LIBPTHREAD
	pthread_mutex_unlock(&r->root_mtx[taskUNUSE]);
#endif
	return task;
}

/*
 * sched_hook_exception() - Default EXCEPTION hook
 *
 * @root = root task
 * @arg = custom handling: arg == EV_EOF or another value; default: arg == NULL logs errno
 * return: <0 errors and 0 ok
 */
void *
sched_hook_exception(void *root, void *arg)
{
	sched_root_task_t *r = root;

	if (!r)
		return NULL;

	/* custom exception handling ... */
	if (arg) {
		if (arg == (void*) EV_EOF)
			return NULL;
		return (void*) -1;	/* raise scheduler error!!! */
	}

	/* if an error hook exists */
	if (r->root_hooks.hook_root.error)
		return (r->root_hooks.hook_root.error(root, (void*) ((intptr_t) errno)));

	/* default case! */
	LOGERR;
	return NULL;
}

/*
 * sched_hook_condition() - Default CONDITION hook
 *
 * @root = root task
 * @arg = killState from schedRun()
 * return: NULL kill scheduler loop or !=NULL ok
 */
void *
sched_hook_condition(void *root, void *arg)
{
	sched_root_task_t *r = root;

	if (!r)
		return NULL;

	return (void*) (r->root_cond - *(intptr_t*) arg);
}

/*
 * sched_hook_rtc() - Default RTC hook
 *
 * @task = current task
 * @arg = unused
 * return: <0 errors and 0 ok
 */
#if defined(HAVE_TIMER_CREATE) && defined(HAVE_TIMER_SETTIME)
void *
sched_hook_rtc(void *task, void *arg __unused)
{
	sched_task_t *sigt = NULL, *t = task;
	struct itimerspec its;
	struct sigevent evt;
	timer_t tmr;

	if (!t || !TASK_ROOT(t))
		return (void*) -1;

	memset(&evt, 0, sizeof evt);
	evt.sigev_notify = SIGEV_SIGNAL;
	evt.sigev_signo = (intptr_t) TASK_DATA(t) + SIGRTMIN;
	evt.sigev_value.sival_ptr = TASK_DATA(t);

	if (timer_create(CLOCK_MONOTONIC, &evt, &tmr) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		return (void*) -1;
	} else
		TASK_FLAG(t) = (u_long) tmr;

	if (!(sigt = schedSignal(TASK_ROOT(t), _sched_rtcWrapper, TASK_ARG(t), evt.sigev_signo,
			t, (size_t) tmr))) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		timer_delete(tmr);
		return (void*) -1;
	} else
		TASK_RET(t) = (uintptr_t) sigt;

	memset(&its, 0, sizeof its);
	its.it_value.tv_sec = t->task_val.ts.tv_sec;
	its.it_value.tv_nsec = t->task_val.ts.tv_nsec;

	if (timer_settime(tmr, TIMER_RELTIME, &its, NULL) == -1) {
		if (TASK_ROOT(t)->root_hooks.hook_exec.exception)
			TASK_ROOT(t)->root_hooks.hook_exec.exception(TASK_ROOT(t), NULL);
		else
			LOGERR;
		schedCancel(sigt);
		timer_delete(tmr);
		return (void*) -1;
	}

	return NULL;
}
#endif	/* HAVE_TIMER_CREATE */