Sunday, September 17, 2006

Monty Python's The Meaning of Life (人生七部曲)

[webnote]
[movie] 
 
Chinese Title: 人生七部曲
English Title: Monty Python's The Meaning of Life
Resource Type: HDTVRip
Release Year: 1983
Director: Terry Jones
Cast: Graham Chapman
     John Cleese
     Terry Gilliam
Region: UK
Language: English
Release Group: TLF
Synopsis:
Reposted from TLF
【Original Title】Monty Python's The Meaning of Life
【Chinese Title】人生七部曲
【Year】1983
【MPAA Rating】R
【IMDB Link】http://us.imdb.com/Title?0085959
【IMDB Rating】7.3/10
【Country】UK
【Genre】Comedy
【Director】Terry Jones
【Cast】Graham Chapman
John Cleese
Terry Gilliam
【Runtime】107 min
【Language】English
 
【Synopsis】
This is a comedy in seven chapters that laughs its way through the whole of the human condition, covering everything from human nature and sexuality to religion. It leans heavily on satire, and its manic humor is laugh-out-loud funny. Monty Python is Britain's most famous comedy troupe, with legions of fans of both their films and their television work; this film is one of their crowning achievements and won a special jury prize at the Cannes Film Festival.

The Best Free Project Manager

[webnote]
[software] 
 
Open Workbench is a free Open Source project manager that is so feature-rich and so powerful that it should at least be considered before any decision is made to purchase a commercial project management package. It's a product that takes time to get your head around. If you have been using Microsoft Project or another task-based manager, you'll have to re-orient your thinking, because Open Workbench is resource-driven, not task-driven. "An Open Workbench plan is built up from estimates for the tasks of work. Estimates are tied to the resource assigned to the tasks. Duration is then driven by the number of hours each resource will work per week to cover the total number of hours required for the tasks. Open Workbench is best suited for groups that estimate total work effort based on the estimates for all the tasks associated with a project, and then create a staffing plan and schedule for those estimates." In other words, a task estimated at 60 hours and assigned to a resource who can give it 20 hours per week is scheduled across three weeks. Once you come to terms with this, you will still have to grapple with learning how to use this powerful product. Here is a partial feature list:
 
Define projects and create associated work breakdown structures with activities, phases, tasks and milestones
Create dependencies as finish-start, start-start, finish-finish or start-finish
Create subprojects and link them to master projects
Create and manage inter-project dependencies
Manage advanced task properties such as fixed duration, dependency lag, imposed start/end dates and charge codes
Schedule tasks manually or automatically using Auto Schedule
Automatically schedule tasks forwards or backwards
Schedule across linked master and subprojects
Schedule to general or individualized calendars
Define resources as people, equipment, materials or expense
Assign resources to tasks
Configure resources on tasks with uniform, fixed, contour, front or back loading
Track status, percent complete and estimates to complete
View Gantt charts (both detail and roll-up), PERT charts and the critical path
Conduct earned value analysis
Define, compare and reset project baseline settings
Can read Microsoft Project files
Open Workbench is the real thing, not some amateurish, half-baked effort. Like Microsoft Project, it is best suited to large-scale projects that can justify the considerable time it takes to learn the product. Those with smaller projects may want to consider some of the simpler (and less powerful) alternatives such as GanttPV [2] or ToDoList [3]. Freeware (registration required), Windows 2000 and later, 9.03MB.
 

Digital Camera: Top 10 cameras on Flickr

[webnote]
 
 

 
      Flagrantdisregard.com has a running list of the top digital cameras being used on Flickr. The list goes by model and brand. For instance, for the week of September 11th, the top ten are as follows:
 
      1. Canon EOS DIGITAL REBEL XT
      2. NIKON D50
      3. Canon EOS 350D DIGITAL
      4. Canon EOS 20D
      5. NIKON D70
      6. NIKON D70s
      7. Canon PowerShot S2 IS
      8. Canon EOS 30D
      9. Sony CYBERSHOT
      10. Canon EOS DIGITAL REBEL
 
      This list is generated not from camera sales, surveys or other market data, but from the EXIF information found on the actual photos uploaded to Flickr. It's a good list in that the cameras at the top have earned a vote of confidence from users, judging by actual ownership and usage.
 
      Of course, the data may be skewed, since some users might unknowingly be stripping EXIF data from their photos before uploading to Flickr (say, if the photos were resized using an editor that didn't save the EXIF along with the resized image). Also, notice that Sony cameras are marked only as "CYBERSHOT" and not by exact model; those photos were likely taken with camera phones.
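
      If you want to check whether a photo still carries its camera information, a command line tool like exiftool can print the relevant EXIF tags. A quick, untested sketch (assumes exiftool is installed; photo.jpg is a placeholder file name):

      exiftool -Make -Model photo.jpg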
 
      Further, you will notice that the D-SLRs dominate the list. Perhaps this is due to the sheer volume that photo enthusiasts (amateur photographers and professionals) upload to their sites, which sometimes serve as their portfolios. They're also the ones likely to have Pro accounts, which give them unlimited upload capacity. Casual users, on the other hand, might not upload so frequently, and would probably have free, limited accounts.
 
      Still, it's a good representation of the actual market share of the various brands, with the top ten ranked as follows:
 
      1. Canon
      2. Nikon
      3. Sony
      4. Olympus
      5. FujiFilm
      6. Kodak
      7. Panasonic
      8. Casio
      9. Nokia
      10. Sony Ericsson
 
      Notice that cellphone manufacturers Nokia and Sony Ericsson are included in the list.
 
 

Saturday, September 16, 2006

Geek to Live: Wget local copies of your online research (del.icio.us, digg or Google Notebook)

[webnote]
 
 
 
by Gina Trapani
 
You've been diligently bookmarking and clipping web pages using an online service like del.icio.us, Google Notebook or digg. Sure, storing your data on the web is great for from-any-online-computer access, but in an age of cheap, enormous hard drives and powerful desktop search, why not replicate the data you keep on the web to your computer? That way you'll have a copy of every web page as it appeared when you bookmarked it, and a searchable archive of your research even when you're offline.
 
Using my favorite command line tool, wget, you can download the contents of a page of del.icio.us links, diggs or a public Google Notebook automatically and efficiently to your hard drive.
Wget 101
 
Wget newbies, take a gander at my first tutorial on wget. There you'll get some background on how wget works, where to download it, and the format of a wget command.
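
As a refresher, a wget command is just a set of options followed by the URL to fetch. A minimal one-level recursive grab might look like this sketch (the URL is a placeholder):

    wget -r --level=1 -k -p http://example.com/somepage.html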
 
Seasoned wgetters, come with me.
Archive del.icio.us bookmarks
 
Say you've got a presentation due about the current state of software and you've been collecting research on the topic in your del.icio.us bookmarks' "software" tag. Download all the documents linked from the http://del.icio.us/ginatrapani/software page using the following command (WITHOUT line breaks):
 
wget -H -r --level=1 -k -p -erobots=off -np -N 
--exclude-domains=del.icio.us,doubleclick.net 
http://del.icio.us/ginatrapani/software
 
 
How to run this script: Replace http://del.icio.us/ginatrapani/software with your del.icio.us username and desired tag. Create a new directory called "del.icio.us archive" and from that directory at the command line, run your edited version of the script. (Even better, copy and paste the command into a text file, tweak it to your own needs, and save it as a script - .bat for Windows users, and .sh for Mac users. Then run the script instead of typing out that long thing every time.) When the command has completed, you'll have directories set up named after each domain in the del.icio.us links, with the files stored within them.
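
For instance, a minimal delicious-archive.sh wrapper might look something like this sketch (the folder, username and tag are placeholders; the trailing backslashes let the shell treat it as one long command):

    #!/bin/sh
    # archive one del.icio.us tag into ~/delicious-archive
    cd ~/delicious-archive
    wget -H -r --level=1 -k -p -erobots=off -np -N \
        --exclude-domains=del.icio.us,doubleclick.net \
        http://del.icio.us/yourusername/yourtag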
 
The breakdown: This command tells wget to fetch all the documents linked from http://del.icio.us/ginatrapani/software:
 
    * -H: Span hosts, meaning follow links from del.icio.us out to other sites
    * -r: Recursively
    * --level=1: Only 1 level in, so as not to also grab all the documents those pages link to
    * -k: With links converted to point to the local copies of pages
    * -p: Get all the images and other auxiliary files needed to completely construct the pages
    * -erobots=off: Ignore robots files and just download
    * -np: Don't go up to the parent directory (or all of ginatrapani's bookmarks)
    * -N: Only download files NEWER than what's already been downloaded
    * --exclude-domains=del.icio.us,doubleclick.net: Exclude links to other del.icio.us pages and to the ad server at doubleclick.net, because you don't want to download ads.
 
If that's too much for you to swallow, simply run the command pointed at your own del.icio.us bookmarks. Trust me, it works.
 
Alternatively, instead of limiting the download to one tag, get all your del.icio.us bookmarks using the following command (omit the line breaks):
 
wget -H -r --level=1 -k -p -erobots=off -np -N
--exclude-directories=ginatrapani 
--exclude-domains=del.icio.us, doubleclick.net  http://del.icio.us/ginatrapani
 
 
The only difference between this command and the last is that it includes an "--exclude-directories=ginatrapani" directive, which keeps wget from downloading every tag folder unnecessarily.
Archive someone's diggs
 
Say you want to archive all the stories Kevin Rose diggs. The wget command would look something like this (without the line breaks):
 
wget -H -r --level=1 -k -p -erobots=off -np -N 
--exclude-domains=digg.com,doubleclick.net,doubleclick.com,fastclick.net,fmpub.net,tacoda.net,adbrite.com,sitemeter.com
 http://digg.com/users/kevinrose/dugg
 
 
Similar to the command above, this one excludes more ad servers (so you don't fill your hard drive with banner ad images) and is pointed at kevinrose's dugg page.
Archive a public Google Notebook
 
Google Notebook's a great way to clip sections of web pages and make notes about them online, and you can make those notebooks public. Say you've got a public Google Notebook of aviation quotes you've found all over the web that you want to archive locally for when you're offline. Point wget at that notebook and tell it to save the page to aviationquotes-notebook.html with this command. (Omit the line breaks.)
 
wget  -k -p -erobots=off -np -N -nd -O aviationquotes-notebook.html
http://www.google.com/notebook/public/18344006957932515597/BDSKUIgoQ9K_Emdkh
 
 
Tips and tricks for archiving the web locally
 
Use Google Desktop or Mac OS X's Spotlight to search the contents of your downloaded bookmarks and web clips. Serious researchers on a Mac could import the downloaded documents into DevonThink as well.
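
On a Mac you can also query the archive straight from the command line with Spotlight's mdfind; a quick sketch (the folder name is a placeholder):

    mdfind -onlyin ~/delicious-archive "software"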
 
Make downloaded pages expire after a set amount of time. If you only want to read the stuff Kevin's dugg in the last two weeks, clean up your download folder using the hard drive janitor, which will delete old files.
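
One possible way to automate that cleanup on OS X or Linux is find; this sketch deletes files older than two weeks from a placeholder download folder (run it with -print instead of -delete first to see what would go):

    find ~/digg-archive -type f -mtime +14 -delete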
 
Schedule automatic runs of wget downloads using Windows Task Scheduler or cron on OS X and Linux.
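
For example, a crontab entry like this one (the script path is a placeholder) would run your archive script nightly at 1 AM:

    0 1 * * * /home/you/bin/delicious-archive.sh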
 
Got a trusted wget recipe that you use all the time? Or a question about any of the ones presented here? Hit us up in the comments.
 
Gina Trapani, the editor of Lifehacker, thinks distributed personal data is the killer app. Her semi-weekly feature, Geek to Live, appears every Wednesday and Friday on Lifehacker. Subscribe to the Geek to Live feed to get new installments in your newsreader.
 
 
 
phantomdata says:
 
Very nice Gina! I've always been an avid fan of wget. It's saved me any number of times when a particular site disappeared just as I was about to reference it in a report. I always keep a wget mirror of sites I reference for just that reason. However, I had never thought of using it to grab delicious tags. What a wonderful idea!
 
You could take it further if you've got a UNIX box, and set it up as a cron job to run nightly in order to ensure that you'll always have your favorite sites. When I was on dial-up I would have a cron job fire up the connection and download all my daily reads for me around 1 AM, so I'd always have a local mirror to read in the mornings.
09/13/06 01:02 PM
CorranRogue9 says:
 
@ phantomdata:
 
Good idea setting it up to run nightly, but if the site goes away, then it will overwrite your current data with the blank site. Then you *won't* have it. I'd suggest backing up the data just after writing the essay you needed the websites archived for.
09/13/06 01:34 PM
digdug says:
 
If you use Firefox, you could also give the Slogger extension a try. It lets you archive a complete copy of a webpage with one click, or even automatically.
09/13/06 02:26 PM
David Burch says:
 
Gina,
 
Couldn't you also pass a user name and password, either in the URL or as command-line arguments, to download private Google Notebooks?
09/13/06 02:54 PM
Bassam says:
 
Great Article Gina! I'll definitely be trying this out.
 
Any ideas on how to use wget to archive private Google Notebooks? I keep most of my notebooks private, and I'd love to be able to download them.
09/13/06 02:56 PM
cebailey says:
 
Good stuff. I think this might strike a perfect balance for me in terms of keeping online bookmarks but also having an archive.
 
What would be fantastic for this is a way to somehow tie the del.icio.us notes and tags to the downloaded pages, maybe via spotlight metadata. Anyone have any thoughts on this? It might be possible using the API, or the export to HTML feature...
09/13/06 04:01 PM
Gina Trapani, Lifehacker Editor says:
 
@David: You'd probably have to pass your Google cookies along with the wget command to authenticate and see private notebooks. I haven't given this a try, but do wget --help to see the cookie options.
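
Something like this might work, though it's an untested sketch (cookies.txt is whatever cookie file your browser exports; the notebook URL is a placeholder):

    wget --load-cookies cookies.txt -O mynotebook.html YOUR_NOTEBOOK_URL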
 
And yes, Slogger and Scrapbook are both Firefox extensions, not command line tools (so not schedulable), that do this as well, with a much friendlier GUI.
09/13/06 04:17 PM
kjohn says:
 
Good one Gina! But I can still only get one page's worth of my del.icio.us bookmarks.
 
One workaround is to use a URL of the form http://del.icio.us/username?setcount=100 (you'll still need multiple passes if you have more than 100 bookmarks).
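
Combined with Gina's command, that workaround might look like this untested sketch (username is a placeholder; quote the URL so the shell doesn't trip on the ?):

    wget -H -r --level=1 -k -p -erobots=off -np -N --exclude-domains=del.icio.us,doubleclick.net "http://del.icio.us/yourusername?setcount=100"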
09/13/06 08:08 PM
cebailey says:
 
Gina:
 
There's an errant space in your example command for getting ALL del.icio.us links. You have:
 
--exclude-domains=del.icio.us, doubleclick.net
 
should probably be:
 
--exclude-domains=del.icio.us,doubleclick.net
09/14/06 12:17 AM
Sander says:
 
I just found the following command; it's the most concise way I've found in two days to back up del.icio.us bookmarks.
 
wget http://del... --http-user=YOURUSERNAME --http-passwd=YOURPASSWORD --no-check-certificate
 
 
It gets all your bookmarks because it accesses the API, not just your first 100 like some of the other examples. The only problem I have is that the results are in XML format; if anyone has an automated way of transforming that into an unordered HTML list (or could adjust the command), that would be great.
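
One rough, untested way to do that transformation with sed, assuming each <post .../> element in the API's XML sits on its own line with href before description:

    echo "<ul>" > bookmarks.html
    sed -n 's/.*<post href="\([^"]*\)" description="\([^"]*\)".*/<li><a href="\1">\2<\/a><\/li>/p' all.xml >> bookmarks.html
    echo "</ul>" >> bookmarks.html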
09/14/06 04:09 AM
babette says:
 
Been a fan of wget for a long time and converted a few people. I have a few family members who are terrified of the command line. For them I recommend Deep Vacuum, sort of a wget with a GUI for OS X. Anybody else used this?
09/14/06 12:3

Friday, September 15, 2006

Fantastic Four (神奇四侠)

[webnote]
[movie] 
 
Chinese Title: 神奇四侠
English Title: Fantastic Four
Resource Type: HDTVRip
Release Year: 2005
Director: Tim Story
Cast: Michael Chiklis
     Ioan Gruffudd
     Jessica Alba
     Chris Evans
     Kerry Washington
Region: USA
Language: English
Release Group: TLF
Synopsis:
Reposted from TLF
 
Director: Tim Story
Cast: Michael Chiklis
   Ioan Gruffudd
   Jessica Alba
   Chris Evans
   Kerry Washington
Genre: Action / Sci-Fi / Comedy
Rating: PG-13 (intense action sequences)
Distributor: 20th Century Fox
Release Date: July 8, 2005
Official Site: http://www.fantasticfourmovie.com/
 
◇ Story: The Fantastic Four are born in space

Inventor, astronaut and scientist Dr. Reed Richards is nobody's fool. His lifelong dream is about to come true: he will journey deep into the mysteries of outer space, into the heart of a cosmic storm, on a mission meant to unlock the secrets of the human genetic code. Just then, government funding is drastically cut and the trip looks set to fall through, until his old college classmate and longtime rival, now the billionaire Victor, agrees to bankroll it.

Traveling with Reed are his best friend Ben Grimm; Sue Storm, who leads Victor's genetic research group and is Reed's ex-girlfriend; and Sue's hot-blooded younger brother Johnny. The voyage seems to go smoothly until Reed discovers he has miscalculated the speed of the oncoming storm. In that instant, exotic rays and the radiation of the cosmic magnetic storm irrevocably alter their fate.

The four assume they are done for, yet find themselves seemingly unscathed. Back on Earth, however, the power of the blast gradually reveals itself. Never mind side effects: with their DNA rapidly rewritten, the four not only undergo physical changes but are each endowed with a distinctive superpower. Reed becomes Mr. Fantastic, Sue the Invisible Woman, Johnny the Human Torch and Ben the Thing. The Fantastic Four are officially born, and the challenge before them is to use their powers on a seemingly impossible mission: taking down Dr. Doom, who dreams of destroying the Earth...
 
--------------------------------------------------------------------------------
◇ Highlights: adapted from Marvel's longest-running comic

Spider-Man, Batman, X-Men, Daredevil... who knows where Marvel elder Stan Lee got so many good ideas. In any case, this summer Fantastic Four, another title from his stable, makes its screen debut. Don't assume Fantastic Four is merely riding the wave: its stablemates may indeed have set off the comic-adaptation trend, but Fantastic Four, with 44 years the longest history of them all, has always carried the banner of the most famous screwball superhero family. Reportedly, the reason it took so long to reach the screen is that the technology capable of bringing it to life only arrived in the last few years.

Being superheroes, they inevitably do their bit for national, world and planetary peace like their stablemates, but the Fantastic Four differ greatly from other superheroes. Billed as the first superhero family, this team doesn't go in for fighting solo, and unlike other heroes they are not so bashful as to hide themselves away or fret all day over a double identity. What's to fear in having superpowers? This is no era for hiding your light under a bushel! The four even turn their powers into a public spectacle and become idols. Their zaniness is what fans love most about these four young people: with sudden superpowers almost too great to control, the four, who mostly run on mood, regularly cause a sensation in public. Rejecting the solemn helmets and masks of the Spider-Men and Batmen of the world, they do exactly as they please. No wonder every teenager wants to be this kind of superhero!
 
--------------------------------------------------------------------------------
◇ Commentary: an easy, breezy summer

Revenge of the Sith's vendetta, War of the Worlds' doomsday mood, Batman Begins' childhood shadows... next to the summer's other blockbusters, Fantastic Four is an outright carnival. It not only overturns the sorrow and gloom of comic-book adaptations but throws in a curious dose of celebrity-lifestyle documentary. The four heroes spend their days roughhousing and seem anything but professional; even while saving the world they often can't quite control their powers, repeatedly "stripping to transform" in public, causing mayhem, drawing crowds of onlookers and becoming the talk of the town.

The announcement of director Tim Story left plenty of comic fans skeptical: the man behind Barbershop and Taxi clearly hasn't much experience directing action. For Fantastic Four, though, his comic touch may be exactly what the film needs; what would a screwball superhero team be without belly laughs? As for the action scenes, leave those for the massive production crew to worry about!

"A watered-down X-Men," "a live-action Incredibles": the media's pile of nicknames rather short-changes Fantastic Four, the most venerable of them all; back when it was sweeping the world, that crowd of superheroes was nowhere to be seen. And although it bills itself as a tense sci-fi picture, expecting many jaw-dropping effects shots isn't realistic. Fortunately the four keep staging "strip shows" around town while saving people, and the cast includes Jessica Alba, who bewitched audiences in the hit series Dark Angel and won over legions of young filmgoers earlier this year as the stripper in Sin City, alongside heartthrob Chris Evans. In this sweltering summer, a little cooling off is no bad thing!
 

CODE
HDSOURCE........1080i ts
DIVX RELEASE....09.14.06
SUPPLIER........TEAM TLF
RIPPER..........VawaV
IMDB RATE.......6.0
FRAME RATE......23.976fps
VIDEO CODEC.....XVID 1.20 1456kbps
AUDIO CODEC.....AC3 384kbps
ASPECT RATIO....1.85:1
RESOLUTION......704 x 384
RUNTIME.........106 min
GENRE...........Action / Adventure / Fantasy / Sci-Fi
LANGUAGE........English
SUBTITLE........N/A
MOVIE SIZE......CD1: 49 x 15MB, CD2: 49 x 15MB

Install Notes: Unrar all the files and enjoy this nice movie! Be sure to install the latest version of ffdshow; VobSub is necessary if you want to watch the subtitles.

iMDB LiNK: http://www.imdb.com/title/tt0120667/
